RE: The Geddes conjecture

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Mon Jul 18 2005 - 00:19:50 MDT


--- Ben Goertzel <ben@goertzel.org> wrote:

>
> So your new argument is that
>
> 1) intelligence requires some kind of non-scientific
> ability to model and
> project the behavior of other minds (presumably
> using some kind of heuristic
> internal simulation process, or else quantum
> resonance, or whatever...)
>
> 2) unfriendliness interferes with 1
>
> But even if 1 is true (and I don't see why modeling
> other minds needs to be
> so mysterious and unscientific), 2 doesn't follow at
> all....
>
> Some very unfriendly humans are very good at
> modeling other human minds
> internally, unfortunately...
>
> ben
>

No, no no! *Geddes pulls his hair out*

You've completely misinterpreted me again (probably
because I'm banging out these posts on the fly without
much thought).

I'm not suggesting anything non-scientific. What I'm
saying is that there's a distinction between
*prediction* and *projection*, the implications of
which may not have been fully appreciated by AI
researchers yet.

Prediction: Calculation of deterministic outcomes
based on fixed initial conditions.

Projection: Calculation of a *range* of possible
outcomes based on changing conditions.

For instance, imagine what *you* (Ben) do when you're
thinking about your future. Clearly, calculating your
future involves *projection*, because the choices you
make are going to affect the outcomes you experience.
You cannot treat yourself as a purely deterministic
object, since you cannot have perfect knowledge of
your own brain state. On the other hand, when you are
thinking about external objects, you can in principle
make *predictions*, since you could know the initial
conditions fully (to the limits allowed by physical
law, at least).

So *projection* is a higher-level form of prediction.
Now... when trying to calculate the future, an AI
would have to do more than simple *prediction*. The AI
has to take into account *the effects of its own
actions*. So the AI cannot treat itself as a purely
deterministic object, since the AI cannot fully
predict its own actions (for the same reason you - Ben
- can't).

So... the upshot is this: when we are trying to
predict the future, the effects of our own choices can
become intermingled with the effects of objects in the
external world.
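
To make the distinction concrete, here's a minimal toy sketch in
Python (my own illustration, with a made-up update rule - not anything
from a real AI system): prediction rolls a fixed initial state forward
deterministically to a single outcome, while projection has to
enumerate a range of outcomes, one for each choice the agent itself
might make.

def step(state, action=0):
    # Hypothetical deterministic update rule for some external object.
    return 2 * state + action

def predict(initial_state, horizon):
    # Prediction: one deterministic trajectory from fixed initial conditions.
    state = initial_state
    for _ in range(horizon):
        state = step(state)
    return state

def project(initial_state, own_choices, horizon):
    # Projection: a *range* of possible outcomes, one per choice the
    # agent itself might make, since it can't predict its own action
    # in advance.
    outcomes = set()
    for action in own_choices:
        state = initial_state
        for _ in range(horizon):
            state = step(state, action)
        outcomes.add(state)
    return outcomes

print(predict(1, 3))                      # a single outcome: 8
print(sorted(project(1, [-1, 0, 1], 3)))  # a range of outcomes: [1, 8, 15]

The point is just structural: predict() returns one number, while
project() can only return a set, because the agent's own choice is an
unknown it cannot eliminate.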

Now... suppose unfriendly choices are the ones that
result in greater unpredictability when making future
projections. Then unfriendly utilities might indeed
limit intelligence.

To see how this could be so, remember that a
self-improving AI is recursive. Its top-level system
includes as a utility - UTILITY ITSELF. In other
words, the AI is seeking utilities which help it
achieve other utilities. Or... to put it another way...
the process of goal-seeking is valued as a goal in
itself. The top-level system also includes as a
utility - PROJECTION, the ability to calculate
possible outcomes involving CHOICES THE AI ITSELF
MAKES.

So look:

1st recursion:
Utility = Utility + Projection

2nd recursion:
Utility = (Utility + Projection) + Projection

3rd recursion:
Utility = ((Utility + Projection) + Projection) + Projection

And so on. So the projection of future outcomes comes
to dominate the utility as the AI self-improves, and
further, the ability to project is intimately tied to
the initial utility.
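
A quick back-of-the-envelope version of that dominance claim in Python
(purely illustrative numbers of my own, not a model of any particular
AI): if each recursion adds a fixed projection term to the utility,
then after n recursions the projection component's share of the total
is n / (n + initial), which climbs toward 1.

initial_utility = 10.0   # assumed weight of the AI's initial utility
projection = 1.0         # assumed weight added for projection per recursion

total = initial_utility
for n in range(1, 11):
    total += projection                # Utility <- Utility + Projection
    share = (n * projection) / total   # fraction of utility due to projection
    print(f"recursion {n}: projection share = {share:.2f}")

By recursion 10 the projection term already accounts for half the
utility, and in the limit it swamps the initial term - which is the
sense in which projection comes to dominate.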

Now do you see what I'm getting at?

---
THE BRAIN is wider than the sky,  
  For, put them side by side,  
The one the other will include  
  With ease, and you beside. 
-Emily Dickinson
'The brain is wider than the sky'
http://www.bartleby.com/113/1126.html
---
Please visit my web-site:
Mathematics, Mind and Matter
http://www.riemannai.org/
---