Re: Building a friendly AI from a "just do what I tell you" AI

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Mon Nov 19 2007 - 00:11:03 MST


On Mon, Nov 19, 2007 at 05:54:31PM +1100, Stathis Papaioannou wrote:
[the difference between working on a problem and thinking about how
to solve a problem]
> An AI should be smart enough to understand what this means too.
> Therefore, an obedient AI which is asked to work on a problem
> should understand what that means, and do it.

Other people have already said this to you, but I'll say it
differently: you're projecting human thought patterns onto the AI.
Just because the AI is smart doesn't mean that it thinks to check
before dropping a piano on a busy street. That's from an example I
saw somewhere; if you're moving a piano out of a 6th story
apartment, think about every bit of cognition that has to go right
for you to look down and check for the presence of people rather
than just dropping the thing. This has nothing to do with
intelligence; it has to do with a complicated set of mental
structures that are used to check the validity of sub-goals. An AI
need not have any of these, so you say "Get the piano out of the
apartment" and it kills 3 people. That doesn't mean it's not smart,
it means it doesn't think like you do.
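To make the point concrete, here's a toy sketch (all names and the
planner logic are invented for illustration; no real AI system works
this way) showing how a goal decomposer and the machinery that vets
sub-goals are separate components. A planner can be perfectly
effective at satisfying the goal as literally stated while having no
check that rejects the dangerous step:

```python
# Toy illustration: decomposing a goal into sub-goals is a separate
# capability from checking those sub-goals for unwanted side effects.
# All names here are hypothetical, made up for this example.

def naive_plan(goal):
    """Decompose a goal purely by effectiveness at the stated goal."""
    if goal == "get piano out of apartment":
        # Pushing it out the window satisfies the goal as literally
        # stated, and nothing in this planner rejects it.
        return ["open window", "push piano out of window"]
    return [goal]

def checked_plan(goal, validators):
    """Same decomposition, but every sub-goal must pass every check."""
    return [step for step in naive_plan(goal)
            if all(ok(step) for ok in validators)]

def no_falling_objects(step):
    # One of the many checks a human runs without even noticing.
    return step != "push piano out of window"

print(naive_plan("get piano out of apartment"))
# ['open window', 'push piano out of window']
print(checked_plan("get piano out of apartment", [no_falling_objects]))
# ['open window']
```

The smarts live entirely in `naive_plan`; the safety lives entirely in
the validators. Leave the validators out and the planner is no less
intelligent, it just drops the piano.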

All your posts on this topic seem to assume the AI thinks like you
do. You need to revisit that.

-Robin



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT