From: Phil Goetz (philgoetz@yahoo.com)
Date: Fri Aug 26 2005 - 06:52:30 MDT
> There are six possibilities. Within its box, the AGI would either be
> less intelligent than its creators, roughly equivalent, or more
> intelligent. At the same time (and separately), it is either
> friendly or unfriendly.
Digression:
All of these statements are crude approximations of reality.
I haven't seen "friendly/unfriendly" defined clearly enough to
convince me that the arguments against unfriendly AI aren't simply
Luddite arguments against ANY AI, ever. What the argument against
AI-boxing seems to say is that no one should ever be allowed to
make an AI under any circumstances.
The "more/less intelligent" distinction is particularly problematic for AI.
My computer is already more intelligent than me in some ways.
> Consider the case of an AGI that is not as smart as its creators.
> Whether it is friendly or unfriendly is irrelevant - you cannot
> guarantee that the same will apply once it is released. Once more
> computational power is available to it, it will be capable of more
> complex reasoning and its morals will change accordingly - an
> unfriendly child may grow up into a friendly adult, and any moral
> rules may break down or exceptions be discovered when analysed
> in more detail.
> Thus, until the AI is released from the box it will be largely
> impossible to guarantee whether it is friendly or not.
> Given the above, I don't see that "boxing" serves any purpose. You
> could refute my logic by gradually moving from one stage to the next,
What alternative are you offering? We're going to pursue AI.
Provide a better, safer approach than boxing.
> Since you cannot be sure which will happen, I would never box an AGI -
> and if I could not be certain right from the beginning that it will
> be friendly, I wouldn't build it in the first place.
I'm fairly confident that no AI can be built that you can guarantee
will be friendly. Even if "friendly" could be defined, which it can't.
Let's get real and stop talking about "friendly" and "unfriendly"
when what we really mean is "free" and "slave". You can't guarantee
friendliness; you can't even define it. You should talk instead
about whether you can guarantee that an AI will obey you.
- Phil
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT