Question about CEV

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Oct 29 2007 - 11:34:42 MDT


Suppose that we have AI and I wish for super powers. Which of the following
(if any) are inconsistent with CEV? (http://www.intelligence.org/upload/CEV.html)

1. The AI transplants my brain into a robotic body, but due to technical
limitations I am only able to leap over medium-sized buildings in a single
bound?

2. The AI transplants my brain into a bottle and simulates a world where I can
jump over the Burj Dubai?

3. The AI predicts (because it is smarter than me and can model my brain and
the world with great accuracy) that in 1000 years I will be bored with a world
where I can have everything I want, and denies my request?

4. The AI moves some neurons around so that I am perfectly happy staring at
the wall for the rest of my life?

5. The AI, knowing that after I die I won't care one way or the other, kills
me? (Because my death would make others unhappy, it solves this problem by
simultaneously wiping out the human race).

About 4, CEV says (I think) that a friendly AI should give us what we would
want if we were smarter (thought faster, knew more) and were more the person
we want to be. Clearly I do not want to be like that now, but after the
operation I would have no regrets, would I?

About 5, CEV seems to assume that we *want* to be smarter and *want* not to
die. But it also (rightly) warns against encoding any moral or ethical rules.
The reason we want these things is that such preferences are favored by
evolution. If we do not encode the rules favored by natural selection into
CEV, does that leave anything?

-- Matt Mahoney, matmahoney@yahoo.com


