From: Nick Hay (nickjhay@gmail.com)
Date: Mon Oct 29 2007 - 14:14:50 MDT
None of the above. An implementation of CEV is not a genie. It does
not take your wish for superpowers and either reject it or grant it in
some unusual sense.
The CEV writes a program, which may or may not be an AI, then runs it.
The program it outputs depends on what our extrapolations coherently
agree on (I don't understand the details here, e.g. how we get a
single program as output). That program may simply and quietly delete
the CEV, if humanity doesn't coherently want anything at all. If
anything goes wrong in the extrapolation, the CEV does nothing.
What the program output by the CEV would do is a different question,
and not as easy to answer.
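To make that structure concrete, here is a rough Python sketch of the
control flow I have in mind. Every name in it (extrapolate_volition,
find_coherence, write_program) is a placeholder I made up for
illustration; nothing here comes from the CEV document itself.

# Purely illustrative: all functions below are made-up placeholders,
# not anything specified by CEV.

class ExtrapolationError(Exception):
    """Something went wrong while extrapolating a volition."""

def extrapolate_volition(person):
    # Placeholder for "knew more, thought faster, were more the people
    # we wished we were, had grown up farther together".
    raise ExtrapolationError(person)

def find_coherence(volitions):
    # Placeholder: return only what the extrapolated volitions agree on,
    # or None if they agree on nothing.
    return None

def write_program(coherent_volition):
    # Placeholder: construct the single output program (which may or
    # may not be an AI).
    return lambda: None

def run_cev(humans):
    try:
        volitions = [extrapolate_volition(h) for h in humans]
    except ExtrapolationError:
        return  # anything goes wrong in the extrapolation: do nothing

    coherent = find_coherence(volitions)
    if coherent is None:
        return  # no coherent wish: quietly shut down rather than act

    program = write_program(coherent)
    program()  # run the output; what it then does is a separate question

The only point of the sketch is that any wish-granting lives inside the
output program, not in the CEV process itself.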
The poetic terms:
"our coherent extrapolated volition is our wish if we knew more,
thought faster, were more the people we wished we were, had grown up
farther together; where the extrapolation converges rather than
diverges, where our wishes cohere rather than interfere; extrapolated
as we wish that extrapolated, interpreted as we wish that interpreted"
are used to work out what program to write, not how to grant wishes.
(More precisely, they are used to give an inaccurate, intuitive feeling
for what will eventually be a well-defined process operating on
approximate models of humans.) I would guess the output program would
not grant wishes in general, because I guess that wouldn't be helpful.
Certainly it is not required to.
-- Nick
On 10/29/07, Matt Mahoney <matmahoney@yahoo.com> wrote:
> Suppose that we have AI and I wish for super powers. Which of the following
> (if any) are inconsistent with CEV? ( http://www.intelligence.org/upload/CEV.html
> )
>
> 1. The AI transplants my brain into a robotic body, but due to technical
> limitations I am only able to leap over medium-sized buildings in a single
> bound?
>
> 2. The AI transplants my brain into a bottle and simulates a world where I can
> jump over the Burj Dubai?
>
> 3. The AI predicts (because it is smarter than me and can model my brain and
> the world with great accuracy) that in 1000 years I will be bored with a world
> where I can have everything I want, and denies my request?
>
> 4. The AI moves some neurons around so that I am perfectly happy staring at
> the wall for the rest of my life?
>
> 5. The AI, knowing that after I die I won't care one way or the other, kills
> me? (Because my death would make others unhappy, it solves this problem by
> simultaneously wiping out the human race).
>
> About 4, CEV says (I think) that a friendly AI should give us what we would
> want if we were smarter (thought faster, knew more) and were more the person
> we want to be. Clearly I do not want to be like that now, but after the
> operation I would have no regrets, would I?
>
> About 5, CEV seems to assume that we *want* to be smarter and *want* not to
> die. But it also (rightly) warns against encoding any moral or ethical rules.
> The reason we want these things is that these rules are favored by
> evolution. If we do not encode rules favored by natural selection into CEV,
> does that leave anything?
>
>
>
> -- Matt Mahoney, matmahoney@yahoo.com
>