Re: Two draft papers: AI and existential risk; heuristics and biases

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Thu Jun 08 2006 - 00:36:13 MDT


On Wed, Jun 07, 2006 at 12:24:55PM -0500, Bill Hibbard wrote:
> If you think RL (reinforcement learning) can succeed at
> intelligence but must fail at friendliness, and want to
> demonstrate it with a specific example, then use a scenario in
> which:
>
> 1. The SI recognizes humans and their emotions as accurately as
> any human, and continually relearns that recognition as humans
> evolve (for example, to become SIs themselves).
>
> 2. The SI values people after death at the maximally unhappy
> value, in order to avoid motivating the SI to kill unhappy
> people.
>
> 3. The SI combines the happiness of many people in a way (such
> as by averaging) that does not motivate it simply to increase
> (or decrease) the number of people.
>
> 4. The SI weights unhappiness more strongly than happiness, so
> that it focuses its efforts on helping unhappy people (see the
> sketch after this list).
>
> 5. The SI develops models of all humans and what produces
> long-term happiness in each of them.
>
> 6. The SI develops models of the interactions among humans and
> how these interactions affect the happiness of each.
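
For concreteness, here is a minimal Python sketch of the
aggregation points 2-4 describe. Everything in it is an assumption
for illustration, not from Hibbard's text: the [-1, 1] happiness
scale, the 2x unhappiness weight, and all of the names.

    from dataclasses import dataclass

    MAX_UNHAPPY = -1.0        # assumed scale: happiness in [-1.0, 1.0]
    UNHAPPINESS_WEIGHT = 2.0  # assumed; "stronger" with no number given

    @dataclass
    class Person:
        happiness: float   # measured happiness in [-1.0, 1.0]
        dead: bool = False

    def aggregate_happiness(people):
        """Combine per-person happiness into a single SI reward."""
        weighted = []
        for p in people:
            # Point 2: the dead are pinned at maximal unhappiness, so
            # killing an unhappy person never improves the aggregate.
            h = MAX_UNHAPPY if p.dead else p.happiness
            # Point 4: unhappiness is weighted more strongly, steering
            # the SI's effort toward unhappy people.
            weighted.append(h * UNHAPPINESS_WEIGHT if h < 0 else h)
        # Point 3: averaging, not summing, so merely changing the
        # number of people does not change the optimum.
        return sum(weighted) / len(weighted)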

Have you read The Metamorphosis of Prime Intellect?

The scenario above immediately and obviously falls to the classic
wireheading failure: "I've figured out where humans' pleasure
centers are; I'll just leave them on."

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/

