Re: Fighting UFAI

From: Eliezer S. Yudkowsky
Date: Thu Jul 14 2005 - 13:17:00 MDT

Russell Wallace wrote:
> I think paperclips are an excellent way to summarize two key points:
> - Intelligence (in the operational sense of ability to come up with
> effective plans in the service of some goal system) and wisdom (in the
> sense of having goals we would recognize as wise) are completely
> different things.

I disagree that they are completely different. Human-ish wisdom requires
human-ish intelligence. You can't have a moral debate without the ability to
debate. The fallacy lies in the converse presumption: that the ability to
optimize a star system requires moral complexity, that is, that intelligence
implies what humans would call wisdom in the domain of morality. As far as I
can tell, you can have a simple, constant utility function whose evaluation
requires no difficult computation, let alone intelligent debate.
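
As a purely illustrative sketch (the names and toy world model below are
hypothetical, not taken from any actual design): the goal itself can be a
one-line count, while everything deserving the name "intelligence" lives in
the planning machinery that serves it.

# Illustrative only: the goal is computationally trivial even though the
# planner serving it could be arbitrarily sophisticated.

def utility(world_state):
    # A simple, constant utility function: no moral reasoning, no debate,
    # just a count of paperclips in the modeled world state.
    return world_state.count("paperclip")

def choose_plan(candidate_plans, predict_outcome):
    # All the hard work is here: predicting outcomes and searching over
    # plans. None of it is needed to evaluate the goal itself.
    return max(candidate_plans, key=lambda plan: utility(predict_outcome(plan)))

# Toy usage with a hypothetical outcome model:
plans = ["build factories", "write poetry"]
outcomes = {"build factories": ["paperclip"] * 1000, "write poetry": []}
print(choose_plan(plans, lambda plan: outcomes[plan]))  # -> "build factories"

Nothing about the search over plans gets easier or harder because the
utility function happens to be morally trivial.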

> I think the danger is larger scale and longer term: that evolution
> will lead the universe out of the region of state space that contains
> sentience, and into the region that contains an optimal
> self-replicator. The ancestors could have been AIs, uploaded humans,
> genetically engineered transhumans or plain biologically evolved
> transhumans (the last being in my opinion the least likely, since
> biological evolution is slow; but it would get the job done if given
> enough time), but the end result is a future light cone full of
> optimal self-replicators and empty of people.

See the past SL4 discussion with the subject line "Darwinian dynamics unlikely
to apply to superintelligence".

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence
