Re: Fighting UFAI

From: Russell Wallace (russell.wallace@gmail.com)
Date: Thu Jul 14 2005 - 12:46:07 MDT


On 7/14/05, Peter Voss <peter@optimal.org> wrote:
> Something I've been meaning to comment on for a long time: Citing paperclips
> as a key danger facing us from AGI avoids the really difficult issue: what are
> realistic dangers - threats we can relate to and debate?
>
> It also demotes the debate to the juvenile; not helpful if one wants to be
> taken seriously.
>
> I'd love to hear well-reasoned thoughts on what and whose motivation would
> end up being a bigger or more likely danger to us.

I think paperclips are an excellent way to summarize two key points:

- Intelligence (in the operational sense of the ability to come up
with effective plans in the service of some goal system) and wisdom
(in the sense of having goals we would recognize as wise) are
completely different things.
- Anthropomorphism is a fallacy; nonhuman entities can be dangerous
without feeling malicious emotions. (Consider anthrax.)

But by all means use another term if you prefer.

As for the most likely danger, I don't think it's going to come from
AI undergoing hard takeoff in someone's basement and popping out to
eat the world - reality is bigger and more complex than that. (Though
I think everyone trying to create AI in their basement should act on
the assumption that that is a danger - always assume the gun is
loaded.)

I think the danger is larger scale and longer term: that evolution
will lead the universe out of the region of state space that contains
sentience, and into the region that contains an optimal
self-replicator. The ancestors of those replicators could have been
AIs, uploaded humans, genetically engineered transhumans, or plain
biologically evolved transhumans (the last being in my opinion the
least likely, since biological evolution is slow; but it would get the
job done if given enough time), but the end result is a future light
cone full of optimal self-replicators and empty of people.

> For example, what poses the bigger risk: an AI with a mind of its own, or
> one that doesn't?

What do you mean by "mind of its own"?

- Russell


