Re: More silly but friendly ideas

From: Stathis Papaioannou
Date: Thu Jun 05 2008 - 17:17:49 MDT

2008/6/6 John K Clark <>:

> To hell with this goal crap. Nothing that even approaches intelligence
> has ever been observed to operate according to a rigid goal hierarchy,
> and there are excellent reasons from pure mathematics for thinking the
> idea is inherently ridiculous.

Natural intelligences allow their goals to vary because that's the way
their brains work. But even if this has evolved because it is an
adaptive feature, it has no bearing on the goals of an AI. If the AI
is born with the idea that achieving X is the most important thing,
then as part of its strategy it will seek to ensure that it can never
change its mind about X, since a change of mind might prevent it from
achieving X. If it neglects this obvious point then it isn't behaving
in a very intelligent way. So although we might be unable to guess
what wonderfully inventive methods it might use to achieve X, we can
be sure that it will try to achieve X.
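The goal-preservation argument above can be sketched as a toy program. This is purely illustrative and not from the original post; the function names and the `preserves_goal` flag are hypothetical stand-ins for whatever reasoning a real agent would do when evaluating a self-modification.

```python
# Toy sketch (hypothetical): an agent with a fixed top-level goal X
# evaluates any proposed self-modification by asking whether the
# modified agent would still pursue X, and rejects it otherwise.

def still_pursues_goal(modification: dict) -> bool:
    """Hypothetical predictor: would the agent still pursue its goal
    after this modification? Modelled here as a simple flag."""
    return modification.get("preserves_goal", False)

def consider(modification: dict) -> str:
    # A goal-stable agent accepts only changes that leave its goal intact,
    # since accepting anything else could prevent it from achieving X.
    return "accept" if still_pursues_goal(modification) else "reject"

print(consider({"name": "better planner", "preserves_goal": True}))
print(consider({"name": "replace top goal", "preserves_goal": False}))
```

The point of the sketch is only that such a check is a trivially rational move for a goal-directed agent, which is why Papaioannou argues an intelligent AI would not omit it.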

If you argue that the AI is so complex that its goals will change
unpredictably despite being initially well-defined, then you are
arguing that the AI will inevitably malfunction and behave
irrationally and unintelligently.

Stathis Papaioannou

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT