[SL4] AI Ethics & Banning the Future.

From: Marc Forrester (A1200@mharr.f9.co.uk)
Date: Wed Feb 16 2000 - 15:09:03 MST

Patrick McCuller: Wednesday 16-Feb-00
> Ever since the public reaction to Dolly, something's been itching at me:
> the legality of AI. The legal and moral implications of machine intelligence
> need entire volumes to consider, but what concerns me from a developmental
> perspective is the possibility of an outright ban on strong AI research.

Regarding the moral implications of machine intelligence,
what do we think they are? Seems to me that we need to get
them very clear in our minds before we start messing about,
because there's no way we can rely on our sense of empathy
here - look how we as a species react to, say, Furbies and
lab rats. Ass about face? I think so.

OTOH, the Hacker subset of humanity tends to vivisect
Furbies and treat rats with respect and often affection,
so maybe this isn't a disaster waiting to happen.. ?

Hypothesis: You have a piece of software, running on a
massively parallel machine utterly unlike anything cast
in silicon today, but software nonetheless, an ephemeral
construct of flickering patterns of numbers.

It seems to have the intelligence of an advanced animal,
able to communicate through simple language and solve
puzzles for reward.

What are the implications of:

Stopping the program?
Deleting the program?
Pausing the program?
Making copies?
Rewinding its memory?
Creating its world?
Editing its mind?

Well.. At least we really -are- playing god this time.
Is that a bad thing? I move that it depends whether
you're intelligent enough to treat play seriously.

> I know this sounds a little silly,

"I'm sorry about this, I know it's a bit silly." [HAL 9000 :]

> When the (direct) possibility of strong AI becomes obvious,
> the religious right may go absolutely bonkers. I think they
> will see AI as an enormous invasion of turf.
> Any thoughts?

Well.. I don't think it's just the Christpolitik bureau,
they take advantage of the FUD to grab power wherever they can,
but they're not creating it, people are genuinely getting scared
about the future. Individually, most of the people I've met
seem to be in denial, but as societies, they seem to be aware,
because everyone's trying to put the brakes on. Food and drug
administrations are given the power to dictate what people can
do with their own biochemistry, human fertility authorities
think they own your DNA, encryption software is classed as
firearms and everyone wants the Internet shut down and
replaced with an interactive Disney Channel.

Fortunately, there's no central intelligence driving all this,
and so it amounts more to a mesh of annoying tripwires than a
brick wall. If the Internet -is- ever closed down, we'll just
build a new one. You can only halt progress by burning all
the books and killing all the smart people, and look what
happens to that kind of society in the long run.

So: Yes, I think there will be massive popular resistance
to machine intelligence, I think the world's lawmakers and
institutions will create all kinds of difficulties and
inconveniences, but what I don't think is that it'll have
any real effect. Loopholes = Legislation squared, and all
authorities have their perceived areas of jurisdiction,
outside which you can do entirely as you will.

And there's a whole universe outside to play in.

Maybe more.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:06 MDT