From: Eliezer Yudkowsky (firstname.lastname@example.org)
Date: Sun May 30 2004 - 01:39:50 MDT
Which is better for the human species?
One enormously powerful, recursively self-improving runaway optimization
process trying to tile the solar system with paperclips?
Or three enormously powerful, recursively self-improving runaway
optimization processes, respectively trying to tile the solar system with
paperclips, tiny pictures of smiling humans, and reward-button circuitry?
Maybe they'd fight to the finish, but more likely they'd compromise with
each other in a way that simultaneously maximized paperclips, tiny pictures
of smiling humans, and reward-button circuitry.
To get *any benefit* out of an optimization process, you have to solve the
technical problems of Friendly AI to locate the tiny "beneficial" subspace
of the space of optimization processes. It doesn't matter whether you do
it using one optimization process, or some grandly Rube Goldberg scheme to
make it unnecessarily conditional on an elaborate network of interacting
optimization processes, threatening each other into not doing certain
things - except, of course, that the latter is unnecessarily dangerous and
less likely to work. It's just as hard either way, whichever irrelevant
label, "many" or "one", you apply to your working materials.
Or perhaps it would help to have three AI projects competing to achieve the
threshold of runaway recursive self-improvement, so that whichever one
takes the fewest safety precautions finishes first.
I am at a loss to understand what is gained, except a way of
anthropomorphizing the issue and avoiding confrontation with that darned
hard part of the problem.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence