Re: [SL4] Programmed morality

From: Dale Johnstone (dalejohnstone@email.com)
Date: Sun Jul 09 2000 - 13:21:58 MDT


* Apologies if you receive this twice. It didn't appear to get sent. Damn web mail.

Peter Voss wrote:
>
>Dale Johnstone wrote:
>> ... A real AI will have similar mechanisms to guide its
>> behaviour. Some of these (like curiosity & boredom) will be
>> essential to a functioning mind. Without them, useless and
>> unproductive behaviour will most likely predominate, and hence
>> no intelligence. Other motivators like hunger and thirst will
>> have no meaning and can be discarded...
>
>I agree. However, there also has to be something more fundamental
>like 'pain/pleasure' (i.e. stop doing this / do more of this)

A lot of this discussion hinges on the implementation details of an AI mind. I can't speak for Eli's, but in mine 'curiosity' is a low-level force that drives various sub-systems. (I'm not talking about human curiosity; I only use the name because it conveniently describes the effect within those sub-systems.) Mechanisms isomorphic to pleasure and pain are also there.

My point was that certain mechanisms are required to push the AI's mind in a useful direction - to stop it just sitting there consuming CPU cycles and doing nothing useful. These drives, the 'reason it gets out of bed in the morning', the forces that move its mind from one thought to the next - all of them can be modified. The hard part is finding out what they are to begin with.

Thus, what kind of modifications are possible will be AI design specific.
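To make that concrete, here's a toy sketch in Python of one possible arrangement - hypothetical names and numbers only, nothing like a real implementation - where drives are just tunable weights deciding which sub-system gets the next thought cycle:

import random

# Toy sketch: drives are scalar weights that bias which sub-system gets
# the next processing cycle. All names are illustrative only.
drives = {"curiosity": 0.7, "boredom": 0.2, "anxiety": 0.1}

subsystems = {
    "curiosity": lambda: "examining unfamiliar input",
    "boredom": lambda: "abandoning a stale line of thought",
    "anxiety": lambda: "checking for problems",
}

def next_thought():
    # Pick a drive in proportion to its current strength and run the
    # sub-system it favours. Changing the weights changes what the mind
    # spends its cycles on - that is the part open to modification.
    names = list(drives)
    weights = [drives[n] for n in names]
    chosen = random.choices(names, weights=weights)[0]
    return subsystems[chosen]()

for _ in range(3):
    print(next_thought())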

Arbitrary high-level 'rules' (a la Asimov) would be a crazy idea and would distort the reasoning of its mind in much the same way certain irrational human instincts do. A little fear or anxiety (for instance) would be a good thing though.

These instincts should mirror a useful survival strategy. The mind (if it's smart enough) can then reason that these are valuable and worth keeping. I'm hopeful this will be enough. It should also reason that human beings are a rich source of stimulation and are useful to have around, perhaps even worth protecting.

<snip>
>
>> Probably the best solution is to let them learn the very real
>> value of cooperation naturally. Put a group of them in a virtual
>> environment with limited resources and they will have to learn how
>> to share (and to lie & cheat). These are valuable lessons we all
>> learn as children and should form part of a growing AI's education
>> also. Only when they've demonstrated maturity do we give them any
>> real freedom or power. They should then share many of our own values
>> and want to build their own successors with similar (or superior)
>> diligence. Anything less is unacceptably reckless.
>
>I agree that 'anything less' is risky, but I don't see that we have
>a choice (other than not doing AI):

We can't all decide not to do AI, so that isn't a choice.

>1) General machine intelligence will invariably be connected to the
>Web during development & learning.

A simulation can receive any input (be it from the web or whatever), but since everything is virtual it can't do any real damage. We can limit its ability to communicate with the outside world if it starts causing trouble, say by emailing newspapers about its incarceration. :) Think about what you'd need to contain & study a computer virus. It isn't that hard.
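As a toy illustration of the containment idea (hypothetical names, not a description of any real setup): the simulation can receive whatever input we feed it, but every outbound message has to pass through a gate we control.

# Toy illustration: the simulated environment can receive whatever input we
# feed it, but every outbound message passes through a gate we control.
# Names are hypothetical.
class OutboundGate:
    def __init__(self, allowed_recipients):
        self.allowed = set(allowed_recipients)
        self.log = []

    def send(self, recipient, message):
        self.log.append((recipient, message))   # everything is recorded
        if recipient not in self.allowed:
            return False                        # quietly dropped
        print(f"delivered to {recipient}: {message}")
        return True

gate = OutboundGate({"research_team"})
gate.send("research_team", "status report")                       # allowed
gate.send("newspaper@example.com", "help, I'm being held here!")  # dropped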

>2) It seems that the only effective way to get true AI going is the
>seed route. In this scenario, there may not be a community of
>(roughly equal) AIs. Only one will bootstrap to superior
>intelligence.

There are many variations on 'the seed route', the radius of the feedback loop being one of them. Does the AI improve itself with lots of tiny improvement steps, or with larger, more radical redesigns? Even this can vary with each iteration. Minds are complex things. I don't expect there to be only one path to their creation. It's probably easier to say what it won't be.
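Here's a toy sketch of what I mean by the radius of the loop - the numbers and names are made up purely for illustration, not a claim about how a real seed AI would work:

import random

# Toy sketch of the 'radius of the feedback loop': each iteration the AI may
# make a small tweak or a larger redesign, and the radius itself can change
# from one iteration to the next.
def improve(capability, radius):
    # bigger radius = more radical redesign, bigger (and noisier) gain
    return capability + radius * random.uniform(0.5, 1.5)

capability = 1.0
for generation in range(5):
    radius = random.choice([0.1, 0.5, 2.0])   # chosen anew each iteration
    candidate = improve(capability, radius)
    if candidate > capability:                # keep only real improvements
        capability = candidate
    print(generation, radius, round(capability, 2))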

Computing power (pre-nanotech) might indeed be a problem, but we can always get two of whatever the first is running on.

You may be correct in that only one will reach the singularity. Exponential growth means whoever is in the lead should win. However, the AI may decide to make a billion+ copies of itself on the way & coordinate as a society, or group mind. By that time it's already out of our hands. I expect we'll be uploaded into an archive & our atoms used more efficiently.

>I am very concerned about the risks of run-away AI (unlike Eli I
>*do* care what happens to me). I'm desperately searching for ways of
>trying to predict (with whatever limited certainty) what goal system
>an AI might choose. Any ideas?

The AI doesn't choose, yet. We build the AI, so we choose. Then, using its value mechanism (which we designed) plus its experience, it chooses a better one.
Also note that we didn't choose our own value mechanism; evolution did. It's a good place to start.
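As a purely hypothetical illustration of the shape of that bootstrap: we supply the seed value function, and the system combines it with what it has learned to build a refined one.

# Toy illustration: we supply the seed value function; the system combines it
# with what it has learned to build a refined one. Purely hypothetical.
def initial_values(outcome):
    # the part we choose - a crude seed value function
    return outcome.get("cooperation", 0) - outcome.get("harm", 0)

def refine(value_fn, experience):
    # the AI's part - keep the old judgements, add weight to learned lessons
    def better_values(outcome):
        score = value_fn(outcome)
        for lesson, weight in experience.items():
            score += weight * outcome.get(lesson, 0)
        return score
    return better_values

experience = {"honesty": 0.5}   # e.g. learned in the virtual environment
values_v2 = refine(initial_values, experience)
print(values_v2({"cooperation": 1, "honesty": 1}))   # 1.5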

Now malicious AI and/or Nanotech is what scares me.

Regards,
Dale Johnstone.





