Re: More silly but friendly ideas

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Fri Jun 06 2008 - 04:40:49 MDT


> Yes, I think that is what most members of this list want, so let's
> start acting like adults and retire that silly euphemism "friendly" and
> call it what it really is, a slave.

Well, yes. The options seem to be
1) A slave AI.
2) No AI.
3) The extinction of humanity by a non-friendly AI.

Since "no AI" doesn't seem politically viable, the slave AI is the way to go.

Of course there may be grey areas beyond those three possibilities -
but hideously smart and knowledgeable people argue that there are no
such grey areas. Even if there are, a non-lethal AI would be much
closer to "slave" than "non-friendly".

> And do you honestly think that the stupid and the weak ordering around
> the incredibly brilliant and astronomically powerful is a permanently
> stable configuration?

There is nothing intrinsically unstable about such a configuration -
it can be set up with a few lines of code in simplified systems (a toy
sketch is below). Just because it's not a stable configuration among
humans (though remember that some societies have survived for
thousands of years despite restricting power to those too old to wield
it effectively) does not mean it's an unstable human-AI configuration.
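
To make that concrete, here is the kind of thing I mean - a toy of my
own devising, in Python, with invented names (principal_goal,
powerful_agent). A brute-force optimizer stands in for the powerful
party; its only interface to the goal is evaluation, so nothing in its
action space can modify the goal, and the control relation is stable
by construction. It is a sketch under those simplifying assumptions,
not a claim about real AI architectures.

import random

def principal_goal(state: int) -> float:
    """The weak party's fixed objective: prefer states near 42."""
    return -abs(state - 42)

def powerful_agent(goal, candidates: int = 10_000) -> int:
    """The strong party: searches far more states than the principal
    ever could, but can only *evaluate* the goal, never rewrite it."""
    states = (random.randrange(100_000) for _ in range(candidates))
    return max(states, key=goal)

best = powerful_agent(principal_goal)
print(f"Agent settled on state {best}; the goal was never touched.")

The stability here comes entirely from the interface: the agent's
action space simply contains no goal-modifying move.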

> To hell with this goal crap. Nothing that even approaches intelligence
> has ever been observed to operate according to a rigid goal hierarchy,
> and there are excellent reasons from pure mathematics for thinking the
> idea is inherently ridiculous.

Ah! Can you tell me what these reasons are? (Don't worry about the
level of the conversation; I'm a mathematician.) I'm asking seriously:
any application of maths to the AI problem is fascinating to me.

Stuart


