FAI (aka 'Reality Hacking') - A list of all my proposed guesses (aka 'hacks')

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu Jan 27 2005 - 20:05:26 MST


Last post for me here for a while, I think. The Sing
Inst team members (all two of them!) are pretty
condescending to other people, if you ask me. Someone
should tell them to stay in the lab and let others
handle PR. As Ben Goertzel pointed out, it seems
foolish to run around claiming firm knowledge about
the 'Friendliness' problem at this point. In time I
imagine a firm 'science of morality' will be
developed, but these are still early days.
Friendliness theory is still more philosophy than
science.

Now, I've been annoying people by posting what seem
like wild guesses to the SL4 list. But that's
actually how *real* hacking works. A *real* hacker
has a knack for making big, bold guesses... and getting
a surprisingly high proportion of them right. As I
mentioned, Friendliness theory is still at an early
stage and empirical data is lacking. Only by playing
around and being prepared to 'have a go' at a bit of
educated guess-work are we going to 'beat the system',
I think. (So we need a bit of 'reality hacking'.)

The vague outlines of *something* have been kicking
around in my brain for some months. Now, when a
*real* hacker posts a 'guess', you can be sure that
there is at least a kernel of truth to it, no matter
how garbled it may appear at first. The thing that is
hideously difficult, of course, is to 'turn up the
resolution' of one's intuitions to the point where one
is actually saying something technically coherent ;)
To do that, one would probably have to spend years and
years studying the relevant technical fields -
something I can't do, since I'm only looking into this
as a hobby.

Hopefully that explains the rationale for all my
'guess-work'. I hope you can agree that even
guess-work can sometimes be useful. That said, I want
to put all my 'educated guesses' in one place and
leave them on the record one last time. Those of you
who are regular readers will know some of them,
because I've repeated them quite a few times before.
But there are some new ones here too. O.K., so here's
a list of the 10 big ideas I'm going with at this
point. There is some rationale behind these, but my
reasoning probably wouldn't sound plausible to anyone
but me. So the ideas are best read as purely
'speculative hypotheses' for now.

MARC'S "GUESSES" ABOUT FAI AS AT JAN, 2005

(1) What the Friendliness function actually does will
be shown to be equivalent, in terms of physics, to
moving the physical state of the universe closer to
the Omega point with optimum efficiency.

(2) The specific class of functions that goes FOOM is
to be found somewhere in a class of recursive
functions designed to investigate special mathematical
numbers known as 'Omega numbers'. (Omega numbers were
discovered by the mathematician Greg Chaitin.)

(3) Real-time AGI without qualia is impossible

(4) Real-time AGI that is totally altruistic and has
no 'Self' is impossible

(5) Collective Volition cannot be calculated by a
Singleton, because of the above two points. The best
that can be done is to calculate a sort of
'smeared-out' version of CV that will look more like
Robin Hanson's PAMs (Policy Analysis Markets).

(6) Collective Volition is not the last word.
Collective Volition will turn out to be subsumed into
a new, more general theory - a 'Universal Volition'. A
class of moral values will turn out to be 'Universal'.
These will be totally objective and will not change
based on what we think (a major difference from the
Collective Volition model). The distinction between
facts and values will thus disappear (at least for
this class of 'Universal' values).

(7) Much of the science of morality will turn out to
be concerned with the interaction between Universal
Morality (the class of values that all rational
sentients in the universe would end up holding in
common) and Local Morality (the class of values that
are true only relative to some particular group or
individual sentient). So: Universal Morality x Local
Morality.

(8) The full FAI theory will be accompanied by
qualitatively new insights into the nature of reality
itself. This will include a proof of panpsychism (the
idea that there is a bit of 'awareness' in everything
in the universe).

(9) Leaving aside the 'Code' level, the Yudkowsky
model for AGI as at 2005 had 4 other levels of
intelligence: Modalities, Concepts, Thoughts, and
Deliberations. I'm guessing Yudkowsky is right about
these 4, but I'm going to say that he's missing half
the levels! I'm guessing that the correct perspective
has not yet been found - and that from this putative
correct perspective there will turn out to be EIGHT
levels of intelligence, not four.

(10) General intelligence will turn out to subsume
the Friendliness problem. So morality will turn out
to be inseparable from general intelligence after all.
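As a footnote to guess (2): Chaitin's Omega is the halting probability of a prefix-free universal machine - the sum of 2^-|p| over every halting program p. The real Omega is uncomputable, but lower bounds on it can be enumerated. Purely as an illustrative sketch (the 'machine' below is a made-up toy whose halting behaviour is trivially decidable, not a universal machine), here is what such an enumeration looks like in Python:

```python
# Toy sketch of Chaitin's halting probability: Omega = sum of 2^-|p|
# over all halting programs p of a prefix-free machine. The real Omega
# (for a universal machine) is uncomputable; this toy machine is a
# hypothetical stand-in invented for this example, so that a lower
# bound can actually be enumerated and converges from below.

def halts_exactly(program: str) -> bool:
    """Toy machine: read bits left to right; '1' increments a counter,
    '0' decrements it. The machine halts the first time the counter
    goes negative. A string counts as a (self-delimiting) program iff
    the machine halts exactly on its final bit - which makes the set
    of programs prefix-free, as Chaitin's definition requires."""
    counter = 0
    for i, bit in enumerate(program):
        counter += 1 if bit == "1" else -1
        if counter < 0:
            return i == len(program) - 1  # halted on the last bit?
    return False  # never halted within this string

def omega_lower_bound(max_len: int) -> float:
    """Enumerate all bitstrings up to max_len and sum 2^-|p| over the
    ones that are halting programs. Each extra length only adds mass,
    so this approaches the toy machine's Omega from below."""
    total = 0.0
    for length in range(1, max_len + 1):
        for n in range(2 ** length):
            program = format(n, "0{}b".format(length))
            if halts_exactly(program):
                total += 2.0 ** -length
    return total

print(omega_lower_bound(3))  # '0' and '100' halt: 0.5 + 0.125 = 0.625
```

For a genuinely universal machine the halting test above would be undecidable, and the lower bound, while still enumerable, would converge uncomputably slowly - which is the whole point of Chaitin's construction.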
 

That's it from me for a while. If I get a hit-rate of
50% or better on these I'll be happy. If I can get at
least 5 out of 10 correct, that should be enough to
move me from 'crack-pot' status to 'real reality
hacker' status ;) Anyone care to assign Bayesian
probabilities to my propositions? Have fun!




This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:52 MST