RE: [SL4] Programmed morality

From: Dale Johnstone (dalejohnstone@email.com)
Date: Thu Jul 06 2000 - 17:13:58 MDT


Brian Atkins wrote:
>Read this:
>
>http://www.theatlantic.com/issues/98apr/biomoral.htm
>
>Now, if human morality (what is right, and wrong) is a product
>of our DNA, rather than being derived from raw logic, then any
>artificial lifeform (or non-human biological one) is going to
>have a completely different morality. I am interested in ways
>to manage this risk (if indeed it is a risk) during the
>creation of an AI (i.e. how to make sure it isn't "evil").

Let's strip the idea down to simpler behavioural motivators than morality. Hunger, thirst, pleasure, pain, etc. are older and more primitive behavioural motivators. It's obvious why we have them - they keep us alive and perpetuate our species. It may be that we've outgrown some of them, but generally they're useful enough that we want them as permanent fixtures that can't be forgotten.

A real AI will have similar mechanisms to guide its behaviour, some of which (like curiosity and boredom) will be essential to a functioning mind. Without them, useless and unproductive behaviour will most likely predominate, and hence no intelligence. Other motivators like hunger and thirst will have no meaning and can be discarded.
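
To make that concrete, here's a toy Python sketch (all names and numbers invented purely for illustration, not any real design) of curiosity and boredom as motivators: the agent is drawn to states it hasn't seen much, and the attraction fades with repetition, which is what steers it away from useless, repetitive behaviour:

from collections import defaultdict

# Toy sketch only: curiosity rewards novelty, boredom makes the
# reward fade as a state is revisited.
class CuriousAgent:
    def __init__(self, actions):
        self.actions = actions
        self.visits = defaultdict(int)   # how often each state was seen

    def interest(self, state):
        # Novel states pay the most; familiar ones "bore" the agent.
        return 1.0 / (1 + self.visits[state])

    def act(self, state, transition):
        self.visits[state] += 1
        # Pick the action whose outcome looks most novel.
        return max(self.actions,
                   key=lambda a: self.interest(transition(state, a)))

# Toy use: a 1-D world; the agent keeps pushing into unexplored territory.
agent = CuriousAgent(actions=[-1, +1])
state = 0
for _ in range(10):
    state += agent.act(state, transition=lambda s, a: s + a)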

I can envision an AI given nothing but curiosity to guide it. It might then find it interesting to experiment on humans as we do with lab rats, or experiment on itself without any fear and promptly destroy itself or go mad. Natural selection would normally filter out such monsters. We have to be careful not to deliberately (or accidentally) build anything like that.

I've heard arguments that say once something is intelligent, it will naturally do the 'right' thing. I think that's rather naive: you can have intelligent dictators. Asimov's Laws are equally naive.

Probably the best solution is to let them learn the very real value of cooperation naturally. Put a group of them in a virtual environment with limited resources and they will have to learn how to share (and to lie and cheat). These are valuable lessons we all learn as children, and they should form part of a growing AI's education too.
Only when they've demonstrated maturity do we give them any real freedom or power. They should then share many of our own values and want to build their own successors with similar (or superior) diligence. Anything less is unacceptably reckless.
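
Here's a toy Python sketch of that limited-resources idea (again, all names and payoffs are invented for illustration): agents repeatedly split a scarce resource and remember who cheated them, so grabbing pays once and then the cheater gets frozen out, while the sharers keep earning:

import random

SHARE, GRAB = "share", "grab"

class Agent:
    def __init__(self, name, policy):
        self.name, self.policy, self.score = name, policy, 0
        self.grudges = set()   # agents that grabbed from us before

    def choose(self, other):
        if other.name in self.grudges:
            return GRAB        # retaliate against known cheaters
        return self.policy

def interact(a, b, resource=10):
    ca, cb = a.choose(b), b.choose(a)
    if ca == SHARE and cb == SHARE:
        a.score += resource // 2; b.score += resource // 2
    elif ca == GRAB and cb == SHARE:
        a.score += resource; b.grudges.add(a.name)
    elif cb == GRAB and ca == SHARE:
        b.score += resource; a.grudges.add(b.name)
    # two grabbers fight over it and both get nothing

agents = [Agent(f"coop{i}", SHARE) for i in range(4)] + [Agent("cheat", GRAB)]
for _ in range(200):
    a, b = random.sample(agents, 2)
    interact(a, b)
for ag in agents:
    print(ag.name, ag.score)

Run it and the lone cheater ends up far behind the sharers: exactly the lesson we'd want a young AI to internalise before it gets any real power.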

Regards,
Dale Johnstone.



