RE: On Our Duty to Not Be Responsible for Artificial Minds

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Aug 11 2005 - 08:26:41 MDT


> I assign full responsibility to the AI researcher for all consequences,
> intended or unintended. An AI researcher has a responsibility to
> choose an AI
> design with predictable consequences. If the AI researcher
> negligently uses
> an AI design the AI researcher can't predict, the AI researcher
> is still fully
> responsible for all actual consequences.

But Eli, science and technology have never been predictable before...

One of the beautiful and terrible things about science is that innovations
have a tendency to lead in unforeseen directions.

Geez, did Maxwell and Faraday predict the consequences of their work on
electromagnetism? Did Tesla predict the consequences of his invention of AC
power? (Just to pull out a few basically arbitrary examples...)

It seems unreasonable to expect the consequences of AI work to be so much
more predictable than the consequences of all other sorts of scientific
work. No?

To make my point a little clearer: Making an AI whose behavior when locked
in a box is predictable is one problem, and a very hard one, perhaps
infeasible (if, as I expect, complex and hard-to-predict dynamics are an
indispensable part of intelligence). But making an AI whose impact on the
world when richly interacting with said world is predictable, is an even
harder problem, which seems to require a more accurate and comprehensive
model of the universe than is reasonable to assume (at least,
pre-Singularity).

-- Ben G



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:00 MST