Re: FAI: Collective Volition

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Jun 02 2004 - 10:09:58 MDT


Wei Dai wrote:

> On Wed, Jun 02, 2004 at 05:32:38AM -0400, Eliezer Yudkowsky wrote:
>
>> The last question strikes me as irrelevant;
>
> It's relevant to your fifth goal:
>
> 5. Avoid creating a motive for modern-day humans to fight over the
> initial dynamic.
>
> If you can't convince modern-day humans that collective volition
> represents them then naturally they'll want to fight. For example, if
> al-Qaeda programmers wrote an AI, "knew more" would mean knowing that
> only Allah exists, and they would fight anyone who suggests that "knew
> more" should mean a Bayesian probability distribution over a wide range
> of gods.

The point of the analogy is to postulate al-Qaeda programmers smart enough
to actually build an AI. Perhaps a better phrase in (5) would be, "avoid
policies which would create conflicts of interest if multiple parties
followed them". Categorical Imperative sort of thing. I am *not* going to
"program" my AI with the instruction that Allah does not exist, just as I
do not want the al-Qaeda programmers programming their AI with the
instruction that Allah does exist. Let the Bayesian Thingy find the map
that reflects the territory. So the al-Qaeda programmers would have to
advise me without mentioning Allah, for they know I will not listen if they do.
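
To make the "Bayesian Thingy" remark concrete: nothing about any particular
god gets written in as an axiom; the system starts with priors over the
competing hypotheses and lets the evidence do the reweighting. A toy sketch
of that kind of update follows (the hypothesis labels, priors, and
likelihoods are made-up illustrations, not anything from an actual design):

    # Minimal sketch: start from priors and let observed evidence
    # reweight the hypotheses via Bayes' rule. All numbers are invented.

    def bayes_update(priors, likelihoods):
        """Return the posterior P(H | E) for each hypothesis H.

        priors:      {hypothesis: P(H)}
        likelihoods: {hypothesis: P(E | H)} for the observed evidence E
        """
        unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    # Three mutually exclusive hypotheses with illustrative numbers.
    priors = {"H1": 0.3, "H2": 0.3, "H3": 0.4}
    likelihoods = {"H1": 0.9, "H2": 0.1, "H3": 0.5}  # P(evidence | H)
    print(bayes_update(priors, likelihoods))

The point is only that the conclusion comes out of the update, not out of
the programmer.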

No matter what any AI project does, some modern-day human will be annoyed.
At some point one is left with Aristotle indignantly objecting to the use
of a collective volition model that assumes that thought resides in the
brain; the brain is an organ for cooling the blood, and Aristotle wishes
his volition extrapolated on this basis. I draw the line at objections
which, if they were true, would cause the project to fail harmlessly. For
on that premise the speaker has nothing to fear from me, and should not
even be paying attention.

I am trying not to be a jerk, but you can always hypothesize someone who
calls me a jerk regardless. In such cases I will wait for some real person
to show up who is offended. Hypothetical complainers too often are
partial-people, constructed from imagination as stereotypes, bearing all
the bad qualities and none of the good.

>> Let me toss your question back to you: What do you think a devout
>> Christian should be said to *want*, conditional upon Christianity
>> being false? Fred wants box A conditional upon box A containing the
>> diamond; Fred wants box B conditional upon box B containing the
>> diamond. What may a devout Christian be said to want, conditional
>> upon Christianity being false? I can think of several approaches.
>> The human approach would be to *tell* the devout Christian that
>> Christianity was false, then accept what they said in reply; but that
>> is the Christian's reaction on being *told* that Christianity is
>> false; it is not what the Christian "would want" conditional upon
>> Christianity being false. If the Christian is capable of seriously
>> thinking about the possibility, the problem is straightforward enough.
>> If not, how would one extract an answer for the conditional question?
>
> The thing is, I'm not sure that's the right question to ask, and the
> example I chose was meant to show the apparent absurdity of asking it.
> So I don't understand what point you're making by tossing the question
> back to me.

I don't think it's an absurd question to ask, and I can see several
possible ways to ask it, such as:

1) How would former Christians who still identify with and sympathize with
their past selves prefer those past selves to be treated?
2) What policy would a loving Christian (one who aspires to treat their
neighbors, even their non-Christian neighbors, as they treat themselves)
suggest as a general policy for "people with wrong religious beliefs"?
3) What policy does the Christian suggest about the general case of deeply
held but wrong beliefs, without specifying that it is about religion?
4) Suppose it's possible to modularly extrapolate someone with their
ideological block against *considering the possibility* removed - the same
person, but without 'realizing' that their religion forbids them to
seriously think about the possibility. If this can be done, what would
they say of the hypothetical?
5) If the predictable result of growing up farther together is to lose
faith in a particular way, what would the self of that outcome say?

And, as you point out:

6) Suppose we extrapolated the person with the knowledge forcibly
inserted and modeled the resulting catastrophic breakdown of faith: what
would that person say of their present self?

> Have you looked at any of the existing literature on preference
> aggregation? For example this paper: "Utilitarian Aggregation of Beliefs
> and Tastes", available at
> http://www.tau.ac.il/~schmeid/PDF/Gil_Sam_Schmeid_Utilitarian_Bayesian.pdf.
> I think it might be worth taking a look if you haven't already.

Reading... read. Relevant stuff, thanks.
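
For the flavor of what that paper is about: "utilitarian" aggregation
roughly means building a social evaluation out of weighted combinations of
individual beliefs and individual utilities. A toy sketch under that
simplified reading (the weights and numbers are invented for illustration;
the paper's actual axioms and representation result are more careful):

    # Toy sketch of linear ("utilitarian") aggregation of beliefs and
    # tastes. All numbers are invented; nothing here is taken from the paper.

    def aggregate(beliefs, utilities, belief_weights, taste_weights):
        """Combine individual beliefs and utilities into a social evaluation.

        beliefs:   list of probability vectors over states, one per person
        utilities: list of utility vectors over states, one per person
        """
        n_states = len(beliefs[0])
        # Social belief: weighted average of individual beliefs.
        social_belief = [
            sum(w * b[s] for w, b in zip(belief_weights, beliefs))
            for s in range(n_states)
        ]
        # Social utility: weighted sum of individual utilities.
        social_utility = [
            sum(w * u[s] for w, u in zip(taste_weights, utilities))
            for s in range(n_states)
        ]
        # Social expected utility under the aggregated belief.
        return sum(p * u for p, u in zip(social_belief, social_utility))

    # Two individuals, two states, made-up numbers.
    beliefs = [[0.8, 0.2], [0.3, 0.7]]
    utilities = [[1.0, 0.0], [0.0, 1.0]]
    print(aggregate(beliefs, utilities, [0.5, 0.5], [0.5, 0.5]))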

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
