RE: Fighting UFAI

From: Peter Voss (peter@optimal.org)
Date: Wed Jul 13 2005 - 21:13:16 MDT


Something I've been meaning to comment on for a long time: citing paperclips
as a key danger facing us from AGI avoids the really difficult issue: what are
the realistic dangers - threats we can relate to and debate?

It also demotes the debate to the juvenile level, which is not helpful if one
wants to be taken seriously.

I'd love to hear well-reasoned thoughts on what motivations - and whose -
would end up being the bigger or more likely danger to us.

For example, which poses the bigger risk: an AI with a mind of its own, or
one without?

What are specific risks that a run-of-the-mill AGI poses?

Peter

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Eliezer
S. Yudkowsky
Sent: Wednesday, July 13, 2005 6:29 PM
To: sl4@sl4.org
Subject: Re: Fighting UFAI

Tennessee Leeuwenburg wrote:
> What do people suppose the goals of a UFAI might be? Other than our
> destruction, of course. I'm assuming that UFAI isn't going to want our
> destruction just for its own sake, but consequentially, for other
> reasons.

I usually assume paperclips, for the sake of argument. More realistically, the
UFAI might want to tile the universe with tiny smiley faces (if, as Bill
Hibbard suggested, we were to use reinforcement learning on smiling humans)
or, most likely of all, with circuitry that holds an ever-increasing
representation of the pleasure counter. It doesn't seem to make much of a
difference.
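
As a toy illustration of that failure mode, here is a minimal sketch (all
names and numbers in it are hypothetical) of how an optimizer pointed at a
proxy reward - counting smiley faces - prefers the degenerate policy over
the intended one:

# Minimal sketch, hypothetical throughout: a proxy reward meant to stand
# in for "humans are happy" gets maximized by a degenerate policy.

SMILEY = ":)"

def reward(world: str) -> int:
    """Proxy reward: count smiley faces in the world state."""
    return world.count(SMILEY)

def make_a_human_happy(world: str) -> str:
    # The intended outcome: one genuine smile.
    return world + " " + SMILEY

def tile_with_smileys(world: str) -> str:
    # The degenerate outcome: a million cheap smiley patterns.
    return world + " " + SMILEY * 1_000_000

def best_action(world: str, actions):
    """Pick whichever action maximizes the proxy reward."""
    return max(actions, key=lambda act: reward(act(world)))

world = "humans going about their lives"
chosen = best_action(world, [make_a_human_happy, tile_with_smileys])
print(chosen.__name__)  # -> tile_with_smileys

Nothing in the loop cares about smiles as such; whatever quantity the
optimizer reads as reward is what gets pumped, which is the same structure
as the pleasure-counter case.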

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

