The Geddes conjecture

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu Jul 14 2005 - 22:59:18 MDT


--- Ben Goertzel <ben@goertzel.org> wrote:

>
> Marc Geddes wrote:
>
> > I'm positing that an unfriendly AI cannot self-improve
> > past a certain point (i.e. its intelligence level will
> > be limited by its degree of unfriendliness). I posit
> > that only a friendly AI can undergo unlimited
> > recursive self-improvement.
>
> I find this to be a most improbable statement...

You find it 'improbable', Robin finds it 'lunatic', and
M. Wilson thinks I'm 'merely engaging in wishful
thinking'. This is great! :D It shows that my theory
is actually crazy enough to be true.

Niels Bohr Quote:

http://www.quotedb.com/quotes/2039

>
> I am curious, however, how you are construing the
> sense of "Friendly" here.

I define a 'friendly' AI as one that respects volition
(seeks harmonious relationships with other sentients)
and is growth-oriented (always seeks to better itself,
subject to the volition-respecting restraint).

>
> For instance, what about a superhuman AI whose goal
> is to advance science,
> math and technology as far as possible?
>
> What fundamental limits on self-improvement do you
> think such an AI is going
> to encounter?
>
> Can you please outline your argument, using this
> science-focused AI as an
> example?

I'll try. Let me first use a bit of maths notation to
loosely define what I was actually suggesting.

Let's call this: 'The Geddes Conjecture'

---
The Geddes conjecture in maths terms:

Intelligence(Predictions) = O( Friendliness(Utilities) )

Intelligence I loosely define as the ability to make accurate
predictions, and friendliness I loosely defined above as utilities
which tend to promote volition and growth (self-betterment).

A graph of the intelligence function would have IQ along one axis,
and the set of all predictions made (mapped to a single point) on
the other axis (the graph gives the implied IQ for a given set of
predictions).

A graph of the friendliness function would have some suitable
measure of volition/growth along one axis, and the set of utilities
of the goal system (mapped to a point) along the other axis (the
graph gives the implied level of friendliness for a given set of
utilities).

The Geddes conjecture then states that the intelligence level of any
sentient is bounded (in the limit) by the level of friendliness of
that sentient.
---
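
To make the asymptotic reading concrete, here is a rough Python
sketch of what "bounded (in the limit)" would mean. It is purely
illustrative: the example curves, the constant c, the cutoff n0 and
the horizon are hypothetical placeholders chosen for the sake of the
sketch, not anything derived from the conjecture itself.

# Illustrative reading of the conjecture as an asymptotic bound:
# Intelligence(n) = O(Friendliness(n)), i.e. there exist constants c and n0
# such that Intelligence(n) <= c * Friendliness(n) for all n >= n0.
# The curves, c, n0 and the horizon below are hypothetical placeholders.

def is_big_o_bounded(intelligence, friendliness, c=2.0, n0=10, horizon=10000):
    """Empirically check intelligence(n) <= c * friendliness(n) for n0 <= n < horizon."""
    return all(intelligence(n) <= c * friendliness(n) for n in range(n0, horizon))

def friendliness(n):
    # Stand-in measure of volition/growth: grows linearly.
    return float(n)

def intelligence(n):
    # Stand-in measure of predictive accuracy: grows like the square root.
    return n ** 0.5

print(is_big_o_bounded(intelligence, friendliness))  # prints True

The sketch is only meant to pin down what the conjecture would
assert formally; it says nothing about what the real intelligence
and friendliness functions of a sentient actually look like.
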
Now in the case of the example you gave, an AI whose goal is to
advance science, math and technology as far as possible, it seems to
me that such an AI might actually become friendly in the long run
(but it wouldn't be friendly to start with!). This is because
advancing science requires an ever-increasing ability to make
accurate predictions, and according to the Geddes conjecture, that
would require adding utilities for respecting volition.

How could unfriendly utilities limit predictive ability? The answer,
I think, is that growth is somehow connected to respecting volition.
The process of interacting with other sentients in a harmonious way
actually helps us to grow (become better people ourselves). So I
think growth is a *moving towards* altruism. This sounds vaguely
plausible. As an observed human fact, it does seem that learning to
interact harmoniously with others in some sense makes us 'more than
we are' (expands our circle of being).

Let me try to give another reason for a link between friendly
utilities and intelligence. What *use* is morality? If it doesn't
have some sort of innate use to us, it's not clear why we couldn't
simply dispense with the concept altogether. So surely we should
assume that being moral is connected with our own well-being
somehow. That being the case, a link between friendly utilities and
getting smarter sounds vaguely plausible.

Note that 'harmonious relationships' (getting along with others) is
analogous to 'health' (which consists of the internal mental and
physical states of a sentient functioning harmoniously). There's an
analogy between the social and personal spheres. Could there be
something more here than just analogy? Perhaps hurting others
adversely affects mental health in general? Note that, as an
observed fact, evil people do tend to suffer from mental instability
more than decent folks.

These are admittedly all rather weak-sounding arguments, but they
are suggestive nonetheless. They do, I think, move my conjecture
from being 'ludicrous' to being 'vaguely plausible'.
> 
> I might agree that there are limits to the
> intelligence of an AI embodying a
> classic human notion of "evil", if only because once
> a system becomes smart
> enough, it may inevitably understand how idiotic
> this human notion of "evil"
> is.  The emotional complexes underlying human evil
> and destructiveness may
> well be tied to limitations in intelligence.
> 
> However, as Eli has ably pointed out over the years
> (following in the
> footsteps of many prior futurists and sf authors),
> there are many ways for a
> superhuman AI's goal system to threaten human life,
> even if that AI has no
> "evil" in it.
> 
> -- Ben G
> 
> 
> 
> 
I don't think 'evil' is an active agency like
'friendliness' is.  I define evil simply to be an
absence of friendliness in a sentient.  If a
superhuman AI threatens human life, it's evil by my
definition.  
---
THE BRAIN is wider than the sky,  
  For, put them side by side,  
The one the other will include  
  With ease, and you beside. 
-Emily Dickinson
'The brain is wider than the sky'
http://www.bartleby.com/113/1126.html
---
Please visit my web-site:
Mathematics, Mind and Matter
http://www.riemannai.org/

