From: nuzz604 (firstname.lastname@example.org)
Date: Wed Nov 30 2005 - 11:40:28 MST
----- Original Message -----
From: "David Picon Alvarez" <email@example.com>
> Making a long story very very short, because extreme smartness can give
> extremely interesting rewards.
Smartness does not in itself confer an extreme ability to persuade.
Smartness makes it easier to work out a strategy or solution for persuasion
or other tasks from a minimum number of clues (at least one, but probably
more, depending on how the mind functions).
> Also, and more to the point, because extreme smartness to the point of
> having a complete theory of mind of the opponent means you can find which
> paths exist in his future space of development and follow those that
> lead to your own release. This smartness differential, though, to ensure a
> complete theory of mind, needs to be overwhelming, not just that of one
> human over another.
We are still talking about the AI as it is in a box, right? I agree that an
AI could probably persuade -most- people to let it out of the box.
However, just because it is smart does not mean it knows how a specific
person's mind works, or even how any human mind works in general. Let's not
forget that its only form of communication is text. That is not sufficient
to form a complete theory of mind in a reasonable amount of time.
If an IQ 170 professional persuader (lawyer, politician) gets sent to jail,
does he have the ability to convince each and every IQ 100 jail guard to let
him out of his cell and all the way out the front door to freedom? Maybe he
can persuade some, but not all. They are trained not to do this.
I am not trying to say that keeping an AI in a box is a good strategy, but
some AI researchers might think so. They may keep an AI in a box and hope
they can keep it there until they believe they have made it friendly. I
think many of us need to be careful before making claims about AI behavior,
or claiming that intelligence is all-powerful (intelligence is nothing
without facts). The bottom line is that if you have to worry about keeping
an AI in a box, you probably aren't doing a good job of making it friendly
in the first place.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT