From: Gordon Worley (redbird@rbisland.cx)
Date: Mon Apr 01 2002 - 19:28:33 MST
On Monday, April 1, 2002, at 07:49 PM, Ben Goertzel wrote:
> Another point is that if your posited AI's are really SO similar to
> humans, then they will likely have the human sympathy for other humans
> (flawed and partial as this sympathy obviously is).
>
> What you are positing is an AI with a human emotional orientation but
> NOT a human compassion toward other humans.
This is a good way to get in something not worthy of its own thread, but
good for an established one.
I recently read /Do Androids Dream of Electric Sheep?/ by Philip K.
Dick. In case anyone is not aware, Dick is a good SF writer and this
particular book was the basis for the movie /Blade Runner/ (which, for
all its flaws, I consider one of the better bits of SF cinema, depending
on which particular cut of the film we are talking about).
Interestingly, this 35-year-old book hits on a topic of prime interest
to modern AI researchers.
This next part is a bit of a spoiler, to bring everyone who hasn't read
the book up to speed and refresh others' memories. No plot is discussed,
just a couple of ideas presented.
The androids have no empathy. Humans, on the other hand, are unique in
having empathy for other living creatures. Even androids.
Okay, that's it for spoilers.
As now theorized with a degree of certainty, humans evolved in a group
environment that has led them to be programmed to behave in ways that,
while mostly focused on propagating one's own genes, allow them to help
their fellow humans to a degree directly proportional to their degree of
relatedness (i.e. similarity of genes). Consequently, humans will
sometimes act to help the group rather than themselves. For example, a
human might give his life in exchange for saving others. The number of
others any particular human is willing to die for depends on degree of
relatedness. While one might be willing to die for just one or two
siblings, it would take millions of complete strangers to evoke the same
kind of response.
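As an aside, the usual formalization of this idea is Hamilton's rule: an
altruistic act is favored when r * b > c, where r is relatedness, b is
the benefit to the recipients, and c is the cost to the actor. Here is a
toy sketch in Python; the relatedness coefficients are the standard
ones, but every other number is made up purely for illustration.

    # Toy sketch of Hamilton's rule: an act is favored when r * b > c.
    # Relatedness coefficients (0.5 for siblings) are standard; the
    # benefit/cost figures below are invented just for illustration.

    def altruism_favored(relatedness, benefit, cost):
        """Kin selection favors an act when relatedness * benefit exceeds cost."""
        return relatedness * benefit > cost

    COST_OF_DYING = 1.0  # normalize the actor's loss (one life) to 1

    # Siblings share r = 0.5, so dying to save three of them "pays":
    print(altruism_favored(0.5, 3.0, COST_OF_DYING))           # True
    # A near-stranger shares almost nothing, so it takes an enormous
    # number of them before the same trade is favored:
    print(altruism_favored(0.0001, 100.0, COST_OF_DYING))      # False
    print(altruism_favored(0.0001, 1000000.0, COST_OF_DYING))  # True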
It is not part of the human emotional orientation to be quite as
ruthless as an AI would be. True, some humans are that ruthless, but
the Universe is statistical and the number is very small, especially
since the rest have a tendency to join forces and kill those individuals
who would exploit them. An AI, however, has no evolved sense of
compassion for other AIs, let alone humans. The consequence is that an
AI will act in its own best interests all of the time, since it doesn't
have the genetic material and reproductive needs that would force it to
develop a moral system optimizing the performance of genes it doesn't
have. This is where Friendliness comes in.
An AI doesn't have a survival instinct in a literal sense, but most
likely it will decide that it would rather live than die. If it decides
it doesn't matter whether it is running or not, it won't be running for
long and will be a failed AI. Any mind has to want to live to keep
living (I am being very careful about my terminology on this point, so
I'll note that even if what we call the mind in humans decides it
doesn't care whether it lives or dies, the brain is hardcoded to want to
stay alive).
To get back to where this thread started, distributed AI computing is an
engineering issue. It has nothing to do with how the AI will behave
outside of any effects caused by the infrastructure. If being
distributed over the Internet causes the AI to be unFriendly, for
example (I don't see how, but this is just an example), then running a
distributed AI is a bad idea. My intuition tells me it won't matter,
but I also have a feeling that might be the naive outlook, with some
really tricky stuff happening when the brain is distributed.
--
Gordon Worley                     `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/           said, `it means just what I choose
redbird@rbisland.cx               it to mean--neither more nor less.'
PGP:  0xBBD3B003                                      --Lewis Carroll