Re: AI Goals [WAS Re: The Singularity vs. the Wall]

From: Mikko Särelä (msarela@cc.hut.fi)
Date: Tue Apr 25 2006 - 11:52:12 MDT


On Tue, 25 Apr 2006, Richard Loosemore wrote:
> A second trap is to suppose that when we build an AGI, the goals and
> motivations of the AGI will be something we discover afterwards, when it
> is too late. Almost every comment on this list that expresses concern
> about what an AGI would do, has this assumption lurking out back.
> There are good reasons to believe that we, the designers of the AGI,
> would have complete control over what its motivations would be.
> Worried that it might wipe us out without caring? Then don't design it
> without a "caring" module! (I am oversimplifying for rhetorical effect,
> but you know what I mean).

I don't think it is quite as simple as that. Saying "do not give the AI
the goal of wiping out human life" is a different thing from saying "make
sure the AI has goals that prevent it from doing anything that
accidentally wipes out human life."

A goal system only describes the things that the AI is trying to
accomplish. Typically, every action taken has some unintended
consequences. Some of those we may foresee, some we don't. The same
applies to an AI, at least while it is still trying to figure out the
world around it.

Making sure that the AI has all the necessary goals is neither a simple
nor a straightforward problem to solve. It is not enough for the AI to
have goals that are compatible with good outcomes; its goals should rule
out bad outcomes. The latter is much, much harder. It is against this
idea that comments on this list should be judged, rather than against a
supposed ignorance of the fact that the designer gets to set the AI's
initial motivations.
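
To make the asymmetry concrete, here is a toy sketch in Python (all the
names are hypothetical; this illustrates the argument, it is not a
proposal for a real goal system). The first planner only maximizes its
stated goals, so side effects never enter the comparison. The second also
discards plans whose predicted side effects violate explicit constraints,
and the hard part is exactly that the constraint list must cover every
bad outcome, including the ones we fail to foresee.

from dataclasses import dataclass

@dataclass
class Plan:
    intended_outcome: str
    predicted_side_effects: list  # e.g. ["uses all available matter"]

def goal_score(plan, goals):
    # goals: functions scoring how well an intended outcome satisfies them
    return sum(goal(plan.intended_outcome) for goal in goals)

def plan_only_for_goals(plans, goals):
    # Picks whatever scores best on the stated goals; side effects are
    # invisible here, so an accidental catastrophe never enters the
    # comparison.
    return max(plans, key=lambda p: goal_score(p, goals))

def plan_with_constraints(plans, goals, forbidden):
    # Also checks predicted side effects against explicit constraints.
    # The catch: 'forbidden' must cover every bad outcome, including the
    # ones the designers failed to foresee.
    allowed = [p for p in plans
               if not any(bad(e) for e in p.predicted_side_effects
                          for bad in forbidden)]
    return max(allowed, key=lambda p: goal_score(p, goals)) if allowed else None

Even in this toy, plan_with_constraints only helps to the extent that
predicted_side_effects and forbidden are complete, which is the point of
the argument above.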

-- 
Mikko Särelä	http://thoughtsfromid.blogspot.com/
    "Happiness is not a destination, but a way of travelling." Aristotle 

