Re: AI debate at San Jose State U.

From: Woody Long (ironanchorpress@earthlink.net)
Date: Fri Oct 21 2005 - 13:39:23 MDT


> [Original Message]
> From: Richard Loosemore <rpwl@lightlink.com>
> To: <sl4@sl4.org>
> Date: 10/21/2005 2:30:37 PM
> Subject: Re: AI debate at San Jose State U.
>
> Is it really the case that to be humanoid, an intelligence must get
> pissed off at losing, feel selfishness, etc.?
>
> The answer is a resounding NO! This is one of those cases where we all
> need to take psychology more seriously: from the psych point of view,
> the motivational/emotional system is somewhat orthogonal to the
> cognitive part .... which means that you could have the same
> intelligence, and yet be free to design it with all sorts of different
> choices for the motivational/emotional apparatus.
>
> To be sure, if you wanted to make a thinking system that was *very*
> human-like you would have to put in all the same mot/emot mechanisms
> that we have. But when you think about it a bit more, you find yourself
> asking *whose* mot/emot system you are going to emulate? Hannibal
> Lecter's? Mahatma Gandhi's? There is an enormous variation just among
> individual instances of human beings. Arguably you can have people who
> are utterly placid, selfless and who have never felt a violent emotion
> in their lives. And others with a violence system that is just
> downright missing (not just controlled and suppressed, but not there).

I agree with this theory of SAI relativity. It holds for both humanoid,
"fully human intelligent" SAI and goal-based, human-equivalent SAI, as
differentiated in a prior post. These goals can also be toxic, or missing
altogether, which leads to unfriendly, toxic results.

> The idea of "self-interest" is, I agree, slightly more subtle. Self
> interest might not be just an ad-hoc motivational drive like the others,
> it might be THE basic drive, without which the system would just sit
> there and vegetate.

This reminds me of Sony's current research project, the Playground
Experiment, where they are testing their "Intelligent Adaptive Curiosity"
engine. This engine of curiosity-driven self-interest sounds very much
like the basic drive you describe.
http://playground.csl.sony.fr/en/page3.xml
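
If I follow their papers, the engine rewards "learning progress": the
robot prefers activities where its predictions are improving fastest. A
minimal Python sketch of that idea (names, numbers, and structure are my
own illustrative assumptions, not Sony's code):

    import random

    class CuriosityDrive:
        """Toy model of a learning-progress ("curiosity") drive."""

        def __init__(self, actions):
            # Recent prediction errors per activity, seeded high.
            self.errors = {a: [1.0] for a in actions}

        def learning_progress(self, action):
            # Progress = how much prediction error dropped recently.
            recent = self.errors[action][-10:]
            return recent[0] - recent[-1]

        def choose_action(self):
            # Mostly pick the activity where learning is fastest,
            # with a little random exploration mixed in.
            if random.random() < 0.1:
                return random.choice(list(self.errors))
            return max(self.errors, key=self.learning_progress)

        def record(self, action, prediction_error):
            self.errors[action].append(prediction_error)

So the "self-interest" here is just the pull toward whatever the robot is
currently getting better at - which is why it reads like a basic drive.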

> I will now jump forward and say what I believe are the main conclusions
> that we would come to if we did analyse the issues in more depth: we
> would conclude that *if* we try to build a roughly humanoid AGI *but* we
> give it a mot/emot system of the right sort (basically, empathic towards
> other creatures), we will discover that its Friendliness will be far,
> far more guaranteeable than if we dismiss the humanoid design as bad and
> try to build some kind of "normative" AI system.
>
> After all, we agree that Friendliness is important, right? So should we
> not pursue the avenue I have suggested, if there is a possibility that
> we would arrive at the spectacular, but counterintuitive, conclusion
> that giving an AGI the right sort of motivational system would be the
> best possible guarantee of getting a Friendly system?

The Japanese robot makers are clearly in the business of making fully
human intelligent humanoid robots. The purpose sections of Sony's robot
patents make this clear: they state that their purpose is to make as
human-like a robot as they can, to increase its entertainment value. So
the field of humanoid SAI research and development is a well-funded and
undeniable reality. And they believe that by using the principle of
harmony they can make these robots absolutely friendly, even though they
are self-aware and driven by human-like self-interest.

Examples of a current, friendly intelligent system exhibiting
self-awareness, self-interest, and self-destruction --
 
1. Self-destruction - the Sony Aibo dog is able to execute a biting
behavior, which with sharp teeth or large force could harm people or
their pets. However, Sony built it so that when the jaw meets a certain
amount of light resistance, it self-destroys this behavior and the jaw
goes slack. If a hobbyist tries to get inside and tinker with this
behavior, it self-destroys the whole system, rendering it inoperable.
This implements what they call in Japan the principle of harmony. It
could also be considered an implementation of Asimov's First Law of
Robotics. (A rough code sketch of this interlock follows after point 2.)
 
2. Self-awareness and self-interest - The Sony Aibo dog has tactile
sensors, so it can tell when it is being "stroked" or "hit". If it is
being stroked, "it" - the dog self - forms a positive self-interest, or
affection, for the person stroking it, and develops a self-interest in
being near that person; so when it sees the person, based on ITS
so-formed self-interest alone, it goes up to the person. Conversely, when
it is "hit" it forms a negative self-interest, or aversion, toward the
person, and develops a self-interest in avoiding him; so based on this
self-interest alone, it moves away from the person. (This too is sketched
below.)
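
Here is a rough Python sketch of the safety interlock described in point
1. It is my own illustration, not Sony firmware; the threshold, sensor
reading, and tamper hook are all assumptions:

    class BiteBehavior:
        """Bite action that permanently disables itself on resistance."""

        RESISTANCE_LIMIT = 0.2  # assumed value for "light resistance"

        def __init__(self):
            self.disabled = False

        def step(self, jaw_resistance):
            if self.disabled:
                return "jaw slack"
            if jaw_resistance > self.RESISTANCE_LIMIT:
                self.disabled = True  # behavior self-destroys, for good
                return "jaw slack"
            return "biting"

    class Robot:
        """Whole unit goes inoperable if the interlock is tampered with."""

        def __init__(self):
            self.bite = BiteBehavior()
            self.operable = True

        def on_tamper_detected(self):
            # Tinkering inside the safety behavior bricks the system.
            self.operable = False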
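
And point 2 in the same style - a toy valence-learning loop, again with
assumed names and numbers rather than anything from Sony:

    class AiboAffect:
        """Per-person "self interest" scores drive approach/avoidance."""

        def __init__(self):
            self.interest = {}  # person id -> learned valence

        def on_touch(self, person, kind):
            # Stroking raises the score; hitting lowers it.
            delta = 1.0 if kind == "stroke" else -1.0
            self.interest[person] = self.interest.get(person, 0.0) + delta

        def on_sees(self, person):
            valence = self.interest.get(person, 0.0)
            if valence > 0:
                return "approach"  # affection: go up to the person
            if valence < 0:
                return "avoid"     # aversion: move away
            return "ignore"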

In the same way, future humanoid SAI will be safe-built and friendly. So,
as a singularitarian, I support such friendly humanoid SAI, which I
believe will someday evolve into a super-intelligent humanoid
technological singularity. My guess is that Sony will complete the
creation of their authentic, friendly humanoid SAI in 10 to 15 years.

Ken Woody Long
artificial-lifeforms-lab.blogspot.com


