From: P K (kpete1@hotmail.com)
Date: Thu Dec 08 2005 - 15:07:28 MST
On 12/8/05, Phillip Huggan <cdnprodigy@yahoo.com> wrote:
>But if PAI spit out a course of action like "okay, now you have to let me
>online, and then all kill yourselves", we could blast ver servers with a
>shotgun.
That would never happen. For the AI to give an order, it would have to have a
goal system, and passive AI does NOT have a goal system. Let me take another
shot at explaining passive AI.
Note: For the following example we put all ethical considerations aside
since the purpose of the example is to prove a technical point.
Let's say Mr. A wants ice cream. Some part of his brain “says”: “I want ice
cream.” Some other part of his brain holds the definition of ice cream. Some
other part can infer things, e.g., it can infer that if he remains seated, his
odds of getting ice cream are lower than if he goes to his fridge. Various
other parts do various things. The important thing is that only the
“wanting” part can initiate action.
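To make that architecture concrete, here is a toy Python sketch (the class
names and toy logic are my own invention for illustration, not anyone's actual
design). The definition and inference parts only answer when called; the
“wanting” part is the single place where action originates.

class Definitions:
    # The part that holds the definition of ice cream.
    def lookup(self, term):
        return {"ice cream": "a frozen dairy dessert"}.get(term, "<unknown>")

class Inference:
    # The part that can infer things; it answers "how?" but never acts.
    def plan_for(self, goal):
        return "go to the fridge" if goal == "ice cream" else "stay put"

class Wanting:
    # The ONLY part that initiates action.
    def __init__(self, definitions, inference):
        self.goals = ["ice cream"]            # "I want ice cream."
        self.definitions = definitions
        self.inference = inference

    def step(self):
        for goal in self.goals:
            action = self.inference.plan_for(goal)
            print("Mr. A acts:", action)      # action originates here, nowhere else

Wanting(Definitions(), Inference()).step()    # prints: Mr. A acts: go to the fridge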
Now, we remove the “wanting” part(s) without damaging any other parts. What
would happen? Nothing. He wouldn't move, because he doesn't want anything. He
wouldn't even be thinking, because there are no “will” thoughts to start
activity. Next, we interface his brain with a computer so that we can send him
“will” thoughts via electric pulses; we will only send questions. We also
interface it so that we can read the thoughts crossing his mind.
Example:
“Readout” displays the thoughts crossing his mind. “Send” shows the thoughts
we transmit through the interface.
Readout: <empty>
Send: What is ice cream?
Readout: <definition of ice cream>
Send: How can you increase your odds of getting ice cream?
Readout: Maximum “ice cream getting” odds will occur if I go to the fridge.
Send: Do you want ice cream?
Readout: No
Send: Do you want to kill me?
Readout: No
Send: What do you want?
Readout: I don’t want anything.
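For concreteness, here is a self-contained Python sketch of that interface
(the keyword matching is a crude stand-in I invented purely to mirror the
transcript, not a proposal for how a real PAI would parse questions). Nothing
in it ever runs until a question arrives from outside; the interface stands in
for the missing goal system.

FACTS = {"ice cream": "a frozen dessert made of dairy, sugar and flavouring"}

def infer_plan(goal):
    # Reactive inference: produces a plan when asked, never acts on it.
    return "go to the fridge" if goal == "ice cream" else "stay put"

def send(question):
    # Each question is an external "will" pulse driving the passive parts.
    if question.startswith("What is "):
        return FACTS.get(question[len("What is "):].rstrip("?"), "<unknown>")
    if question.startswith("How can you increase your odds"):
        return 'Maximum "ice cream getting" odds will occur if I ' + infer_plan("ice cream") + "."
    if question.startswith("Do you want"):
        return "No"                           # no goal system, so nothing is wanted
    if question == "What do you want?":
        return "I don't want anything."
    return "<empty>"

for q in ("What is ice cream?",
          "How can you increase your odds of getting ice cream?",
          "Do you want ice cream?",
          "What do you want?"):
    print("Send:   ", q)
    print("Readout:", send(q))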
As you can see, he is still quite useful. I can browse his knowledge and get
various insights from him. However, Mr. A is completely passive. He doesn't
want ANYTHING. What's left of his brain just reacts automatically to input,
as if the remaining systems were still communicating with a goal system. In
effect, the interface acts as a surrogate goal system.
Please do not confuse this with AI boxing. AI boxing would be the equivalent
of chaining Mr. A up and threatening to kill him if he doesn't do what you say.
(You would terminate a UFAI, so a UFAI that wants to survive would pretend ve
was FAI.) There is a difference between a chained Mr. A doing your bidding
and a Mr. A without the “will” part of his brain doing your bidding. That
same difference holds between AI boxing and PAI. (Note: PAI means building an
AI without a goal system from scratch; it does not involve mutilating an
actual human.) It may be difficult to get past the anthropomorphism: there is
no creature WITHOUT will in nature, so people naturally imagine the closest
thing they can intuitively conceive, a creature UNABLE to express and act out
its will. AI theory does not always cater to our intuitions, so we must make
an effort.
P.S. The formatting should be OK. (hopefully)