Re: Collective Volition: Wanting vs Doing.

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Mon Jun 14 2004 - 09:27:23 MDT


Samantha Atkins wrote:
>
> On Jun 13, 2004, at 2:58 PM, Eliezer Yudkowsky wrote:
>>
>> Okay... how do you know this? Also, where do I get the information?
>> Like, the judgment criterion for "wise decisions" or "good of
>> humanity". Please note that I mean that as a serious question, not a
>> rhetorical one. You're getting the information from somewhere, and it
>> exists in your brain; there must be a way for me to suck it out of
>> your skull.
>
> Hmm. Is the claim that the information is not necessarily present a
> claim to knowledge, or a claim of ignorance as to whether such
> information can be expected to be present? I would think it is the
> latter. Let me rephrase. Why do you believe that the information inside
> human skulls, and what can be extrapolated from it, is sufficient to
> make wise decisions for the well-being of humankind?
>
> It may simply be the best we have.

Suppose you tell me that you don't know which way is "up", but you know
it's not exactly the direction of gravity; gravity obviously has something
to do with it, though you're not sure exactly what. It may be that
the exact specific direction of "up" is not something I can read out of
your skull, the same way I can't read your volition off an LED display on
the back of your neck. But to find out what you *mean* by "up", I need to
look inside your head. And you must already know something about this "up"
of yours - maybe knowledge of the algorithm used to compute it, rather than
knowledge of the algorithm's output - or you wouldn't know that "up" was
*not* the exact direction of gravity.

By the time you're finished telling me how to look inside your skull and
read the cues used to find the algorithm that an AI runs to compute your
"up", you will, once again, have a volition-extrapolating dynamic.

Morality is not *allowed* to be a mystery; you are in immediate possession
of all information needed to untangle the mysteriousness, since your mind
contains everything that generates the perception of mystery. The same
holds true of consciousness; I have not yet finished untangling that, but I
know it is not allowed to be a mystery.

If I had not already untangled morality, I could not think of any
algorithm Eliezer-2001 could have given an FAI to determine the answer
except "give the answer I would give if I knew more, thought faster" (more
drastic transformations not being necessary in this case). CFAI does in
fact specify something of this sort, albeit not clearly.
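
For concreteness, a toy rendering of that schema. The PersonModel class
and the question below are hypothetical stand-ins of my own invention, not
CFAI's actual specification:

    class PersonModel:
        """Toy stand-in for whatever gets read out of a skull."""

        def __init__(self, facts, reflection_depth):
            self.facts = set(facts)                    # what the person knows
            self.reflection_depth = reflection_depth   # how long they can think

        def knowing_more(self, new_facts):
            # "if I knew more": extend the factual base
            return PersonModel(self.facts | set(new_facts), self.reflection_depth)

        def thinking_faster(self, extra_depth):
            # "thought faster": grant more cycles of reflection
            return PersonModel(self.facts, self.reflection_depth + extra_depth)

        def answer(self, question):
            return question(self.facts, self.reflection_depth)

    def extrapolated_answer(model, question, new_facts, extra_depth):
        # The FAI never queries the current person; it queries the
        # extrapolated person.  (More drastic transformations omitted,
        # as above.)
        return model.knowing_more(new_facts).thinking_faster(extra_depth).answer(question)

    me_2001 = PersonModel(facts={"morality is confusing"}, reflection_depth=1)
    verdict = extrapolated_answer(
        me_2001,
        question=lambda facts, depth: f"answered with {len(facts)} facts at depth {depth}",
        new_facts={"what minds are made of"},
        extra_depth=10,
    )

Note that the content of the answer appears nowhere in the code; only the
pointer to "whatever the improved model would say" does - which is the
sense in which the schema could be specified without morality having been
untangled first.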

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

