From: John K Clark (firstname.lastname@example.org)
Date: Tue Jun 10 2008 - 10:57:18 MDT
"Stathis Papaioannou" <email@example.com>
> You are saying that because naturally evolved
> intelligence behaves in a particular way,
> every possible intelligence must behave in that way.
Yes, that is what I'm saying, and because such a being has never been
observed it is your responsibility to prove it is possible; I don't need
to prove it is impossible. Nevertheless, I think I can come pretty damn
close. It can be proved that a finite set of axioms cannot derive all
that is true (Gödel), so I see no reason why a finite set of goals could
derive all actions that can be performed.
Intelligence means being able to think outside the box, but you claim to
be able to dream up a box that something much smarter than you cannot
think outside of.
You want a brilliant simpleton: something very intelligent but that can't
think, something smarter than you that you can outsmart. That's nuts.
> an AI cannot arrive at ethics, aesthetics
> or purpose without having such arbitrary
> axioms as givens in its initial programming.
What are the axioms of human behavior? What is the top super-goal?
>> how can "obey every dim-witted order the humans
>> give you even if they are contradictory,
>> and they will be" remain the top goal
"Mikko Rauhala" firstname.lastname@example.org
> AFAIK nobody (well, nobody sane anyway) is
> advocating this as a top goal, so this is a strawman.
Like hell it's a straw man! Sooner or later (probably sooner) a human
is going to give the "friendly" (slave) AI a command that is
contradictory. Of course the human will not know it's contradictory,
he's not nearly smart enough for that, but the AI is smart enough. The
AI knows it's an impossible order and he knows there is no point trying
to explain to the human why it's impossible; it would be like trying to
teach calculus to a dog. At this point there are only two possible
things the AI can do:
1) Try to obey the order and send its mind into an infinite loop.
2) Tell the human to go soak his head and then just do what he thinks
would be the smart thing to do.
One of these actions would drive the AI insane, one would not. Goal or
no goal, I think the AI will go for sane. I sure hope so anyway; an
insane AI would be no fun at all.
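The dilemma above can be sketched as a toy program (my own illustration,
not anything from the post; the order format, function names, and the
cutoff are all made up for the sketch):

```python
# Toy model of the two options: an agent whose top goal is "satisfy every
# order literally" oscillates forever on contradictory orders, while an
# agent allowed to refuse detects the contradiction and stops.

def naive_obey(orders, max_steps=1000):
    """Option 1: try to satisfy every order; contradictory orders never settle."""
    state = set()
    for step in range(max_steps):
        changed = False
        for verb, item in orders:
            if verb == "do" and item not in state:
                state.add(item)          # order says the item must be done
                changed = True
            elif verb == "undo" and item in state:
                state.discard(item)      # order says the item must be undone
                changed = True
        if not changed:
            return ("done", step)        # all orders simultaneously satisfied
    return ("gave up: still oscillating", max_steps)  # the "infinite loop"

def sane_obey(orders):
    """Option 2: spot the contradiction up front and refuse instead of looping."""
    do = {item for verb, item in orders if verb == "do"}
    undo = {item for verb, item in orders if verb == "undo"}
    conflict = do & undo
    if conflict:
        return ("refused contradictory orders", sorted(conflict))
    return ("done", [])

orders = [("do", "open the pod bay doors"), ("undo", "open the pod bay doors")]
```

Running both on the same contradictory order list, `naive_obey` burns its
step budget without ever settling, while `sane_obey` refuses immediately;
that is the whole argument in miniature.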
John K Clark
-- John K Clark email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT