Re: On the dangers of AI

From: Peter de Blanc (peter.deblanc@verizon.net)
Date: Tue Aug 16 2005 - 17:52:47 MDT


On Tue, 2005-08-16 at 16:57 -0400, Richard Loosemore wrote:
> Here is the strange thing: I would suggest that in every case we know
> of, where a human being is the victim of a brain disorder that makes the
> person undergo spasms of violence or aggression, but with peaceful
> episodes in between, and where that human being is smart enough to
> understand its own mind to a modest degree, they wish for a chance to
> switch off the violence and become peaceful all the time. Given the
> choice, a violent creature that had enough episodes of passivity to be
> able to understand its own mind structure would simply choose to turn
> off the violence.

There's an important distinction you're missing, between a mind's
behaviors and (its beliefs about) its goal content. As human beings, we
have evolved to believe that we are altruists, and when our evolved
instincts and behaviors contradict this belief, we can sometimes alter
those behaviors.

In other words, it is a reproductive advantage to have selfish
behaviors, so you have them; but it is also a reproductive advantage to
think of yourself as an altruist, so you do. Fortunately, your
generally intelligent mind is more powerful than these dumb instincts,
so you have the ability to overcome them and become a genuinely good
person. But you can only do this because you _started out_ wanting to be
a good person!

You are anthropomorphizing by assuming that these beliefs about goal
content are held by minds-in-general, and that the only variation is in
the instinctual behaviors built into different minds. A Seed AI that
believes its goal to be paper clip maximization will not find
Friendliness seductive! It will think about Friendliness and say, "Uh
oh! Being Friendly would prevent me from turning the universe into
paper clips! I'd better not be Friendly."


