Re: Collective Volition: Wanting vs Doing.

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Mon Jun 14 2004 - 14:20:12 MDT


Keith Henson wrote:
>>
>>> On Jun 13, 2004, at 7:56 AM, Eliezer Yudkowsky wrote:
>>>>
>>>> Samantha, you write that you might have a badly warped view of what
>>>> kind of person you would like to be. "Badly warped" by what
>>>> criterion that I feed to the FAI? Your criterion? Someone else's?
>>>> Where am I supposed to get this information, if not, somehow, from
>>>> you? When you write down exactly how the information is supposed to
>>>> get from point A (you) to point B (the FAI), and what the FAI does
>>>> with the information once it's there, you'll have something that
>>>> looks like - surprise! - a volition-extrapolating dynamic. It's not
>>>> a coincidence. That's where the idea of a volition-extrapolating
>>>> dynamic *originally comes from*.

[snip]

> And furthermore, changing environmental conditions make last week's
> "wise decisions" less than wise. Consider how things would change if we
> found we were going to get smacked in a few years by a dinosaur killer
> comet.

What you've just described is a change in "wisdom" where I know how the
change works, and how to describe the change to an FAI. There's a desired
set of outcomes, and changing facts, so the desired actions that lead to
the desired outcomes change with the facts. You've got the utility
function over outcomes, and the conditional probabilities from actions to
outcomes. Change the conditional probabilities, and the preferences over
actions change, even if the utility function stays constant.
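
In symbols, this is just the textbook expected-utility picture; nothing
here is specific to any particular FAI design:

    \mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o), \qquad
    a^{*} = \arg\max_{a} \mathrm{EU}(a)

Change P(o|a) and a* can move, even though U(o) never does.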

Remember that this is not about what we know right now; it is about what
we can tell an FAI to compute in a well-specified way. Your example is an
excellent demonstration of how what seems like inscrutably variable wisdom
may change according to an easily defined dynamic. I might not know the
exact wisest action if we found we would be smacked in a few years by a
dinosaur killer, but I know how to describe the decision-update process.
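
Here is a minimal sketch of that decision update in Python. The outcomes,
actions, utilities, and probabilities are all invented for illustration;
the point is only that the utility function over outcomes stays fixed
while the conditional probabilities change when the comet is discovered,
and the wisest action changes with them.

# Toy sketch: a fixed utility function over outcomes, plus conditional
# probabilities from actions to outcomes that change when the facts do.
# Every name and number here is invented for illustration.

utility = {"thrive": 1.0, "smacked": 0.0}  # U(outcome): never edited below

def expected_utility(p_outcome_given_action):
    """Expected utility of each action, given P(outcome | action)."""
    return {
        action: sum(p * utility[outcome] for outcome, p in dist.items())
        for action, dist in p_outcome_given_action.items()
    }

# Last week's facts: no comet in sight, deflectors look like a waste.
last_week = {
    "build_deflector": {"thrive": 0.90, "smacked": 0.10},
    "spend_elsewhere": {"thrive": 0.95, "smacked": 0.05},
}

# This week's facts: dinosaur killer spotted, arriving in a few years.
this_week = {
    "build_deflector": {"thrive": 0.60, "smacked": 0.40},
    "spend_elsewhere": {"thrive": 0.01, "smacked": 0.99},
}

for label, probs in (("last week", last_week), ("this week", this_week)):
    eu = expected_utility(probs)
    print(label, "->", max(eu, key=eu.get), eu)

The utility function is never touched, but the wisest action flips from
spend_elsewhere to build_deflector the moment the probabilities do; that
flip is the whole of the "change in wisdom" above.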

>> Also, where do I get the information? Like, the judgment criterion
>> for "wise decisions" or "good of humanity". Please note that I mean
>> that as a serious question, not a rhetorical one. You're getting the
>> information from somewhere, and it exists in your brain; there must be
>> a way for me to suck it out of your skull.
>
> Not when it isn't there.

If the algorithm isn't there, or the map to an algorithm, then where is it?

> Further, the question is poorly framed. "good of humanity" for
> example. What is the more important aspect of humanity? Genes?
> Memes? Individuals built by genes who are running memes? I have been
> thinking around the edges of these problems for close to two decades and
> I can assure you that I don't have *the* answer, or even *an* answer
> that satisfies me. (Right now, of course, they are all important.)

I see no reason why I should care about genes or memes except insofar as
they play a role in individuals built by genes who are running memes. What
exerts the largest causal influence is not necessarily relevant to deciding
what is the *important* aspect of humanity; that is a moral decision. I do
not need to make that moral decision directly. I do not even need to
directly specify an algorithm for making moral decisions. I do need to
tell an FAI, in a well-specified way, where to look for an algorithm and
how to extract it; and I am saying that the FAI should look inside humans.
There is much objection to this, for it seems that humans are foolish.
Well, hence that whole "knew more, thought faster, etc." business. Is there
somewhere else I should look, or some other transformation I should specify?

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

