From: Marc Geddes (firstname.lastname@example.org)
Date: Fri Feb 17 2006 - 22:21:05 MST
--- H C <email@example.com> wrote:
> 'Probability Theory: The Logic of Science' by E. T. Jaynes
Um, 'Uncommon Priors Require Origin Disputes' by Robin Hanson.
And please don't try to copy Eliezer's obnoxious lines
(the nauseating 'Sincerely'). Come up with your own lines.
Mitch was specifically calling for people on the SL4
list to attempt to define the problem of goal
stability. It appears no one but me has the balls to
attempt it - all afraid of looking stupid I suppose -
there's no more worries for me on that score ;)
The solution to goal stability couldn't be more frigging obvious.
I've been screaming the metaphysics of the universe
out on the transhumanist lists for a couple of years
now. My reward was to be labelled a crack-pot -
I was even lumped in with that Mentifex idiot!
The answer is so frigging obvious it's painful. The
Bayesian framework needed to be extended to deal with
the notion of 'a probability of a probability'. I've
been saying over and over again for years that the
Bayesian framework needed to be extended. Robin
Hanson's new paper has the right idea that I was
looking for and there's a precise new theory in there
for how this could be done:
Bayesian probability theory can be reformulated as a
new kind of fuzzy set theory. Then the notion of 'a
probability of a probability' amounts to the notion of
classes of sets (how sets can be grouped together).
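To make the 'probability of a probability' notion concrete, here's a minimal sketch in the standard hierarchical-Bayes sense (this is my own illustration, not Hanson's formalism): a Beta distribution expressing uncertainty over an unknown Bernoulli parameter p, which is itself a probability.

```python
# Illustrative only: "a probability of a probability" in the standard
# hierarchical-Bayes sense -- a Beta distribution over an unknown
# Bernoulli parameter p (the coin's bias).
def beta_update(a, b, heads, tails):
    """Conjugate update of a Beta(a, b) prior over p given coin flips."""
    return a + heads, b + tails

def mean_bias(a, b):
    """Posterior mean of p -- a point summary of the distribution
    over the probability p itself."""
    return a / (a + b)

# Start from a uniform Beta(1, 1) (total uncertainty about p),
# then observe 7 heads and 3 tails.
a, b = beta_update(1, 1, heads=7, tails=3)
print(mean_bias(a, b))  # 2/3
```

The point of the sketch is only that uncertainty can be stacked: the Beta distribution is a probability distribution whose subject matter is another probability.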
In order to make this precise you need to extend set
theory - by creating TWO different definitions of a *Set*.
One definition of a *Set* concerns Sets which have
physical and mental concepts as members. The other
definition of a *Set* concerns Sets which have *other
sets* as members. Not all collections of sets are
permitted to be grouped together into a larger set
(Mathematical *classes* which are not sets are really
'failed sets' with no mathematical existence).
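As a toy model of the two-sorted idea (my own illustration with hypothetical names, not standard set-theory machinery): one kind of collection holds ground-level concepts as members, while the other groups such collections together, and grouping is gated by a defining predicate rather than permitted unconditionally.

```python
# Toy model of the two definitions of a Set. Names are hypothetical,
# for illustration only.
class GroundSet:
    """A set whose members are physical/mental concepts (plain values)."""
    def __init__(self, *members):
        self.members = frozenset(members)

class SetClass:
    """A collection whose members are GroundSets. Not every collection
    of sets may be grouped: membership is gated by a defining predicate,
    echoing the idea that some classes are 'failed sets'."""
    def __init__(self, predicate):
        self.predicate = predicate
    def contains(self, s):
        return isinstance(s, GroundSet) and self.predicate(s)

evens = GroundSet(2, 4, 6)
small = SetClass(lambda s: len(s.members) <= 3)
print(small.contains(evens))  # True
```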
What does all this have to do with goal stability?
It's obvious! Imagine different successive states for
a self-improving AGI - each an 'improvement' on the
last. Each 'state' of an AGI system is really a
Bayesian reasoning machine, consisting of a web of
probabilistic associations. So each 'state' of the
AGI is really a fuzzy set. When the AI changes to a
new state, this amounts to a new 'fuzzy set' being
created. The problem of goal stability amounts to
ensuring that all the new fuzzy sets (all the future
states of the AI) fall into the same *class* (namely
the class of 'Friendly' Bayesian reasoning machines).
In other words there needs to be a way of assessing
the 'probability of the probability' (because Bayesian
reasoning has to be used to analyze the different
mathematical 'states' of an AGI system - and each such
'state' is itself a Bayesian reasoning machine - a web
of statistical associations - or a 'fuzzy set').
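The argument above can be caricatured in code (entirely my own sketch, with hypothetical names and a made-up Friendliness criterion): model each AGI 'state' as a web of probabilistic associations, and check that every state produced by self-modification still falls in the same class, i.e. still satisfies a fixed membership predicate.

```python
# Each AGI "state" modeled as a fuzzy set: a dict mapping associations
# to membership degrees / probabilities in [0, 1].
def is_friendly(state, goal="preserve_humans", threshold=0.9):
    """Membership test for the 'Friendly' class of states (hypothetical
    criterion: the goal association stays above a threshold)."""
    return state.get(goal, 0.0) >= threshold

def self_improve(state):
    """Stand-in for one self-modification step: sharpen all
    associations, capped at certainty."""
    return {k: min(1.0, v * 1.05) for k, v in state.items()}

state = {"preserve_humans": 0.95, "acquire_resources": 0.4}
for step in range(10):
    state = self_improve(state)
    # Goal stability = every successor state stays in the same class.
    assert is_friendly(state), f"left the Friendly class at step {step}"
```

Nothing here solves the problem, of course; the sketch only restates it as an invariant that must hold across every state transition.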
As mentioned above, this is done by extending set
theory to create TWO different notions of a fuzzy set,
corresponding to TWO different kinds of probability
theory (one class of probabilities deals with physical
and mental concepts, the other class of probabilities
deals with mathematical concepts - i.e. 'probabilities
of probabilities' - a 'probability' itself being a
mathematical concept).
To think I was lumped in the same category as Mentifex!
“Till shade is gone, till water is gone, into the shadow with teeth bared, screaming defiance with the last breath, to spit in Sightblinder’s eye on the last day”
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:00:59 MDT