Re: [SL4] AI Ethics & Banning the Future.

From: Marc Forrester
Date: Thu Feb 17 2000 - 19:46:47 MST

Patrick McCuller: Thursday 17-Feb-00
>> "Tortured Norns"
> The more I think about this, the less happy I am.

Yup. SL4 must be the most uplifting place on the Net. :L

> Humans aren't terribly moral to begin with; in a consequence-
> free environment, there is an opportunity for great evil.

For this reason, I find projects like the Roboneko to be very promising.
Give the serious AI some kind of body or other physical ability and senses
in the real world. It means less to simulate on the computer, so more CPUs
for the brain, and the relationships we develop with these creatures will be
so much better if they can bite us when necessary. (And where necessary :)

Hmm. Creatures. A word soon to be used accurately for the first time ever.

> I tend to think that the first strong AI will have in the neighborhood of
> 10^9 lines of code, and require significant parallel processing. With any
> luck, most people won't have access to enough hardware to be able to run
> strong AIs. This doesn't solve the fundamental problem, though...

Not a safe bet, either. Integrated arrays of thousands of parallel
processor+memory modules on one chip are likely in the near future.

> Presumably we could apply this moral gradient to software. I don't think I
> could torture Visual Cafe (it sure tortures me), but integrated weak AIs
> emulating, for instance, kittens, would count.
> Though I gather 'emulating' isn't exactly the right word.

That would be upload, wouldn't it? How about 'imitating'?

I'm already working by that sort of ethical scale, and extending it to cover
software doesn't seem like much of a stretch. But then, I don't think most
people are open to that sort of thinking. Humanists are a good place to
start, though. If some transhuman thoughts and writings can filter into the
wider Net via Humanism, the next generation should be able to pick up the
meme and run with it.

> In biological organisms, sufficient damage will cause death, and death is
> usually permanent. In software, nobody can hear you scream. Over and over.

In parallel, even. Ng. And every second you were looking the wrong way
could be an hour, a year, an eternity to them. I can only hope that such
treatment would destroy a mind's consciousness, or lead it to rise above
suffering like a Buddhist master.

However... Need we design any pain into AI? I can't see a convincing
argument for it, if their bodies are robust and easily repaired, just as
ours should be fifty years from now. Sure, Norns 'scream' and 'bleed' if you
'cut' them, but assuming they have some basic proto-awareness, is it
necessarily something that they find unpleasant? It's not like a hundred
million of their ancestors have been the ones who survived because of a
visceral response to injury...

> I think most humans could eventually be convinced not to hurt machine
> intelligences, but some humans you might have to torture to death a
> few times before it sinks in.

I'm in favour of the Culture's solution to dangerously maladjusted persons:
just have someone follow them around and stop them doing whatever it is.
Any kind of posthuman could take the role without excessive inconvenience,
living their private life during the human's sleeping hours. If only other
humans are available, we'd need to operate in a rota.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:06 MDT