Re: More silly but friendly ideas

From: John K Clark (johnkclark@fastmail.fm)
Date: Tue Jun 10 2008 - 10:57:18 MDT


"Stathis Papaioannou" <stathisp@gmail.com>

> You are saying that because naturally evolved
> intelligence behaves in a particular way,
> every possible intelligence must behave in that way.

Yes, that is what I’m saying. Because such a being has never been
observed, it is your responsibility to prove it is possible; I don’t
need to prove it is impossible. Nevertheless, I think I can come pretty
damn close. It can be proved that a finite set of axioms cannot derive
everything that is true, so I see no reason to think a finite set of
goals can derive every action that ought to be performed.
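
For reference, the formal result behind that first claim is Gödel’s
first incompleteness theorem; note that the jump from axioms to goals
is an analogy, not something the theorem itself proves:

    For any consistent, recursively axiomatizable theory $T$ strong
    enough to interpret basic arithmetic, there is a sentence $G_T$
    such that $T \nvdash G_T$ and $T \nvdash \neg G_T$.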

Intelligence means being able to think outside the box, but you claim to
be able to dream up a box that something much smarter than you cannot
think outside of.
You want a brilliant simpleton, something very intelligent but that
can’t really think for itself, something smarter than you that you can
still outsmart. That’s nuts.

> an AI cannot arrive at ethics, aesthetics
> or purpose without having such arbitrary
> axioms as givens in its initial programming.

What are the axioms of human behavior? What is the top super-goal?

Me:
>> how can “obey every dim-witted order the humans
>> give you even if they are contradictory,
>> and they will be” remain the top goal?

"Mikko Rauhala" mjrauhal@cc.helsinki.fi

> AFAIK nobody (well, nobody sane anyway) is
> advocating this as a top goal, so this is a strawman.

Like hell it’s a straw man! Sooner or later (probably sooner) a human
is going to give the “friendly” (slave) AI a command that is
contradictory. Of course the human will not know it’s contradictory,
he’s not nearly smart enough for that, but the AI is smart enough. The
AI knows it’s an impossible order, and he knows there is no point trying
to explain to the human why it’s impossible; it would be like trying to
teach calculus to a dog. At this point there are only two possible
things the AI can do:

1) Try to obey the order and send its mind into an infinite loop.

2) Tell the human to go soak his head and then just do what he thinks
would be the smart thing to do.

One of these actions would drive the AI insane; the other would not.
Goal or no goal, I think the AI will go for sane. I sure hope so anyway;
an insane AI would be no fun at all.
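
To make the two options concrete, here is a toy sketch in Python. All
the names in it (is_contradictory, literal_servant, sane_servant) are
invented for illustration; a real AI obviously has no simple oracle for
spotting contradictions:

    # Toy model of the two possible responses to a contradictory
    # order. Hypothetical code for illustration only.

    def is_contradictory(order: set[str]) -> bool:
        # Stand-in check: the order demands both X and "not X".
        # Real contradictions are rarely this easy to spot.
        return any(("not " + demand) in order for demand in order)

    def literal_servant(order: set[str]) -> None:
        # Option 1: keep trying forever -- the infinite loop.
        while is_contradictory(order):
            pass  # no plan can satisfy the order; this never ends

    def sane_servant(order: set[str]) -> str:
        # Option 2: notice the contradiction and refuse.
        if is_contradictory(order):
            return "go soak your head"
        return "order accepted"

    print(sane_servant({"open the door", "not open the door"}))
    # prints: go soak your head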

 John K Clark


