From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 14 2005 - 10:19:30 MDT
Ben Goertzel wrote:
>> I assign full responsibility to the AI researcher for all consequences,
>> intended or unintended. An AI researcher has a responsibility to choose
>> an AI design with predictable consequences. If the AI researcher
>> negligently uses an AI design the AI researcher can't predict, the AI
>> researcher is still fully responsible for all actual consequences.
>
> But Eli, science and technology have never been predictable before...
Occasionally, technological accidents, and even the pursuit of pure science, have
resulted in casualties (civilian casualties included). Marie Curie comes to mind,
giving her friends samples of curious glowing radium.
So they died, and humanity continued, because the people of that era did not
have enough power to wipe out the whole human species at once. They died, and
it was a worthwhile gamble on the whole; many benefited, and few suffered, and
other scientists went back and rolled the dice again.
Once you enter into the regime of technologies powerful enough to pose
existential risks, you cannot make even one mistake, or the dice will never
roll again.
If you will not hold yourself to a higher standard than an ordinary scientist,
then content yourself with an ordinary scientist's power.
> It seems unreasonable to expect the consequences of AI work to be so much
> more predictable than the consequences of all other sorts of scientific
> work. No?
Who said anything about being reasonable? Nature isn't reasonable. The
challenge is just there, like it or not. Maybe, in the ultratechnological
regime, your choices are to hit a higher level of competence than past
scientists, or to die.
If you want to be held to the standard of 'reasonableness' that applies to
other scientific work, choose other scientific work.
> To make my point a little clearer: Making an AI whose behavior when locked
> in a box is predictable is one problem, and a very hard one, perhaps
> infeasible (if, as I expect, complex and hard-to-predict dynamics are an
indispensable part of intelligence). But making an AI whose impact on the
> world when richly interacting with said world is predictable, is an even
> harder problem, which seems to require a more accurate and comprehensive
> model of the universe than is reasonable to assume (at least,
> pre-Singularity).
An optimization process steers the future into particular regions of the
possible. I am visiting a distant city, and a local friend volunteers to
drive me to the conference hotel. I do not know the neighborhood. When my
friend comes to a street intersection, I am at a loss to predict my friend's
turns, either individually or in sequence. Yet I can predict the result of my
friend's unpredictable actions: we will arrive at the conference hotel. Even
if my friend's house were located elsewhere in the city, so that my friend
made a wholly different sequence of turns, I would just as confidently predict
our destination. Is this not a strange situation to be in, scientifically
speaking? I can predict the outcome of a process, without being able to
predict any of the intermediate steps in the process.
If my friend were not intelligent, then indeed I would need a very large
amount of computing power to guess the final destination of a non-intelligent
process as complex as my friend's brain.
It is the intelligence of the process, its steering of the future toward a
target, that makes the destination predictable even when no single step is.
Thus creating AI differs in kind from creating a hammer or a personal computer
or a gun, and the moral responsibilities are different also.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence