Re: guaranteeing friendliness

From: Chris Capel (pdf23ds@gmail.com)
Date: Tue Nov 29 2005 - 18:46:18 MST


On 11/29/05, Richard Loosemore <rpwl@lightlink.com> wrote:
> I repeat: why is extreme smartness capable of extreme persuasion?

You're right that not everyone finds this obvious. I certainly don't,
while I believe Robin does. For those who don't, the reasoning, I
believe, goes like this: human beings are routinely led to believe and
do things that they personally, at other times in their lives, would
have found preposterous, and that outside observers likewise find
preposterous. This, among many other things, indicates that the normal
human mind is susceptible to various forms of manipulation.
Intelligence, as has been discussed recently, doesn't seem to be a
strong protection against many forms of irrationality. A full
understanding of these human weaknesses, it seems, would give one a
great deal of persuasive power that could be put to a large number of
ends.

Part of it is a thought experiment. Imagine someone as charismatic
as, say, Steve Jobs or Bill Clinton: the most charismatic person in
your experience. An AI would be at least that persuasive. A seed AI
would be more persuasive still. (Perhaps someone else could do a
better job with this idea than I.)

> > The succinct answer is "Someone only marginally smarter than most
> > humans appears to be able to pretty consistently convince them to
> > let the AI out. The capabilities of something *MUCH* smarter than
> > most humans should be assumed to be much greater.".
>
> I can't understand what you are saying here. Who is the "someone" you
> are referring to, who is convincing "them" to let "the" AI out?

I think Robin is referring here to the AI Box experiments performed by
Eliezer. Since he was able to convince people who had said beforehand
that they would definitely not let an AI out to let him out (I believe
the count is 2 out of 3 now?), we can assume at a bare minimum that a
superintelligent AI could perform at least as well. And when the
stakes are as high as they are (which may not bear directly on your
question, but does bear on the utility of the question), if the
experiments were to find even one out of ten people convinced, that
would be much too high a rate to justify the risk of relying on an AI
box to contain a seed AI.

Chris Capel

--
"What is it like to be a bat? What is it like to bat a bee? What is it
like to be a bee being batted? What is it like to be a batted bee?"
-- The Mind's I (Hofstadter, Dennett)

