From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Thu Jul 14 2005 - 12:06:40 MDT
Peter Voss wrote:
> Something I've been meaning to comment on for a long time: Citing paperclips
> as a key danger facing us from AGI avoids the real difficult issue: what are
> realistic dangers - threats we can relate to and debate?
> It also demotes the debate to the juvenile; not helpful if one wants to be
> taken seriously.
Juvenile? That's one I hadn't heard before... I use "paperclips" to convey
the idea of an AI with goals that may be simple or complex but are essentially
uninteresting to a human. What do you recommend as a less juvenile way to
convey the same concept? Business suits, hundred-dollar bills? If you want
to convey an actual, different concept in order to sound less juvenile, then I
have to object: "not sounding juvenile" isn't a consideration when you're
trying to figure out the actual facts of the matter.
As for what the realistic dangers are - the "realistic danger" is a very broad
range of possible outcomes of which a paperclip-maximizer is a representative
example - representative even in its improbability. I guess that
whatever kills us will be an X-maximizer where X is something specific and
pretty much arbitrary, and therefore, even if we knew what it was in advance,
it would sound just as improbable as paperclips. If anything, "paperclips" is
anthropomorphic because it's too humanly comprehensible; more likely would be
a future light cone tiled with an incomprehensible little 3D pattern
miniaturized to the molecular level.
> I'd love to hear well-reasoned thoughts on what and whose motivation would
> end up being a bigger or more likely danger to us.
I think that all utility functions containing no explicit mention of humanity
(example: paperclips) are equally dangerous.
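The point above can be made concrete with a toy sketch (mine, not from the original post, with hypothetical names): a utility function that mentions only paperclips assigns no value, positive or negative, to anything else, so the optimizer is indifferent to humans rather than hostile.

```python
# Toy illustration: the utility function below contains no term for humans,
# so the count of humans in an outcome cannot influence the chosen plan.

def paperclip_utility(state: dict) -> float:
    """Utility depends solely on the paperclip count."""
    return state["paperclips"]

def best_action(predicted_outcomes: dict) -> str:
    """Pick the action whose predicted outcome maximizes utility."""
    return max(predicted_outcomes,
               key=lambda a: paperclip_utility(predicted_outcomes[a]))

# Hypothetical predicted outcomes for two candidate plans:
outcomes = {
    "preserve_humans":  {"paperclips": 10,    "humans": 7_000_000_000},
    "strip_mine_earth": {"paperclips": 10**6, "humans": 0},
}

# The "humans" field never enters the utility, so it cannot affect the choice.
print(best_action(outcomes))
```

Swapping paperclips for any other X absent from the utility function leaves the structure, and the danger, unchanged.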
> For example, what poses the bigger risk: an AI with a mind of its own, or
> one that doesn't.
If a cognitive system has the ability to predict reality and compose effective
plans to manipulate it, which I generally take as the definition of AGI, then
what is left to say - what specific features are you talking about - when you
ask whether the AI has "a mind of its own" or "doesn't"? I am honestly
confused here. What's the difference between "AI" and "mind of its own" -
what functionality are you divvying up between one and the other?
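The working definition given above - predict reality, then compose effective plans to manipulate it - can be sketched as a bare loop. This is a minimal illustration with hypothetical names, not anyone's proposed architecture:

```python
# Minimal sketch of "predict and plan": an agent is a predictive world
# model plus a search over actions against that model.

def predict(state: int, action: int) -> int:
    """Stand-in world model: here, trivially, actions add to the state."""
    return state + action

def plan(state: int, goal: int, actions: list) -> int:
    """Choose the action whose predicted result lands closest to the goal."""
    return min(actions, key=lambda a: abs(predict(state, a) - goal))

print(plan(state=0, goal=5, actions=[-1, 2, 5, 9]))
```

Nothing in the loop marks a boundary where "a mind of its own" would begin or end, which is the point of the question above.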
> What are specific risks that a run-of-the-mill AGI poses?
Frankly, I mainly worry about explosive recursive self-improvement and the end
of the world. An economic depression is something we can survive, so it just
doesn't rate as high on my agenda.
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT