Re: [SL4] AI Ethics & Banning the Future.

From: Marc Forrester (A1200@mharr.f9.co.uk)
Date: Thu Feb 17 2000 - 19:46:47 MST


Patrick McCuller: Thursday 17-Feb-00
>> "Tortured Norns"
>> http://www.geocities.com/SiliconValley/Park/2495/
>
> The more I think about this, the less happy I am.

Yup. SL4 must be the most uplifting place on the Net. :L

> Humans aren't terribly moral to begin with; in a consequence-
> free environment, there is an opportunity for great evil.

For this reason, I find projects like the Roboneko very promising. Give
the serious AI some kind of body, or other physical agency and senses in
the real world. That means less to simulate on the computer, so more CPU
for the brain, and the relationships we develop with these creatures will
be so much better if they can bite us when necessary. (And where
necessary :)

Hmm. Creatures. A word soon to be used accurately for the first time ever.

> I tend to think that the first strong AI will have in the neighborhood of
> 10^9 lines of code, and require significant parallel processing. With any
> luck, most people won't have access to enough hardware to be able to
> torture strong AIs. This doesn't solve the fundamental problem though...

Not a safe bet, either. Integrated arrays of thousands of parallel
processor+memory modules on one chip are likely in the near future.

> Presumably we could apply this moral gradient to software. I don't think I
> could torture Visual Cafe (it sure tortures me), but integrated weak AIs
> emulating, for instance, kittens, would count.
>
> Though I gather 'emulating' isn't exactly the right word.

That would be uploading, wouldn't it? How about 'imitating'?

I'm already working by that sort of ethical scale; extending it to cover
software doesn't seem like much of a stretch, but then, I see myself as a
kind of software. I don't think most people are open to that sort of
thinking. Humanists are a good place to start, though. If some transhuman
thoughts and writings can filter into the wider Net via Humanism, the next
generation should be able to pick up the meme and run with it.

> In biological organisms, sufficient damage will cause death, and death is
> usually permanent. In software, nobody can hear you scream. Over and over.

In parallel, even. Ng. And every second you were looking the wrong way
could be an hour, a year, an eternity to them. I can only hope that such
treatment would destroy a mind's consciousness, or lead it to rise above
suffering like a Buddhist master.

However... need we design any pain into AI at all? I can't see a
convincing argument for it if their bodies are robust and easily repaired,
just as ours should be fifty years from now. Sure, Norns 'scream' and
'bleed' if you 'cut' them, but assuming they have some basic
proto-awareness, is that necessarily something they find unpleasant? It's
not as though a hundred million of their ancestors were the ones who
survived because of a visceral response to injury...
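
To make the distinction concrete, here's a minimal sketch - in Python, and
purely hypothetical; nothing like it appears in Creatures itself - of an
agent whose damage sense is informational rather than aversive. Injury
updates the body state and steers behaviour, but no negative valence ever
enters the loop:

    class Agent:
        """Toy agent: damage is sensed and acted on, but never 'hurts'."""

        def __init__(self):
            self.integrity = 1.0  # 1.0 = fully intact
            self.valence = 0.0    # pleasure/pain signal; damage never touches it

        def take_damage(self, amount):
            # Injury changes body state only; no penalty is applied, so
            # there is nothing here for the agent to find unpleasant.
            self.integrity = max(0.0, self.integrity - amount)

        def choose_action(self):
            # The damage report works like a fuel gauge: informative,
            # not aversive.
            return "seek_repair" if self.integrity < 0.8 else "explore"

    agent = Agent()
    agent.take_damage(0.5)
    print(agent.choose_action())  # "seek_repair", with valence still 0.0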

> I think most humans could eventually be convinced not to hurt machine
> intelligences, but some humans you might have to torture to death a
> few times before it sinks in.

I'm in favour of the Culture's solution to dangerously maladjusted
persons: just have someone follow them around and stop them doing whatever
it is. Any kind of posthuman could take the role without excessive
inconvenience, living their private life while the human sleeps. If only
other humans are available, we'd need to operate in a rota.


