Re: supergoal stability

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat May 04 2002 - 19:41:25 MDT


Wei Dai wrote:
>
> On Sat, May 04, 2002 at 04:28:08AM -0400, Eliezer S. Yudkowsky wrote:
> > "Personal philosophy" is here not used in the sense of "My own personal
> > philosophy, which is just mine and nobody else's" but rather in the sense of
> > describing "that portion of philosophy which you, personally, have managed
> > to acquire."
>
> Humanity has created many different systems of philosophy which are
> incompatible with each other. It seems to me that the CFAI approach, if it
> works at all, should work with any philosophy that is self-consistent.
> Therefore the question of which philosophy you subscribe to seems a
> relevant one.

That there are many different incompatible systems of philosophy does not
imply that all, or even any, of those philosophies are self-consistent.

> > That's an interesting question. I would expect/hope resource conflicts
> > along these lines to be rare.
>
> I disagree. There are 6 billion minds on Earth and presumably there will
> be at least that many post-Singularity. All it would take for resource
> conflicts to occur is for one mind to want to use all the matter in the
> solar system to build a giant optical telescope, and another to want to
> use it all to simulate an alternate universe. Why would you expect that 6
> billion different sets of supergoals can all be achieved without
> significant resource conflicts?

Operative word: "Significant". Does it really matter who gets to keep the
original model of R2D2 from Star Wars?

Of course, the expectation that conflicts will be insignificant is highly
optimistic; on the other hand, mana plus Minimum Living Space seems like a
straightforward way of settling any conflict, not just the insignificant
ones.
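
Here's a throwaway sketch of the mana idea quoted just below: everyone
starts from the same endowment, and each conserved resource goes to the
highest sealed bid its bidder can actually cover. The names, numbers, and
the one-shot auction itself are purely illustrative, not anything
specified in CFAI:

    from dataclasses import dataclass, field

    @dataclass
    class Bidder:
        name: str
        mana: float = 1.0          # equal endowment for every sentient
        holdings: list = field(default_factory=list)

    def allocate(resources, bids, bidders):
        """bids[resource][bidder_name] -> offered mana (sealed bids)."""
        for resource in resources:
            offers = bids.get(resource, {})
            # only offers the bidder can actually afford count
            valid = {n: m for n, m in offers.items() if m <= bidders[n].mana}
            if not valid:
                continue           # unclaimed resources stay in the commons
            winner = max(valid, key=valid.get)
            bidders[winner].mana -= valid[winner]
            bidders[winner].holdings.append(resource)

    bidders = {n: Bidder(n) for n in ("Wei", "Damien", "Eliezer")}
    allocate(
        ["matter for a giant telescope", "matter for a simulated universe"],
        {"matter for a giant telescope": {"Wei": 0.6, "Eliezer": 0.4},
         "matter for a simulated universe": {"Damien": 0.5}},
        bidders,
    )
    print({n: (round(b.mana, 2), b.holdings) for n, b in bidders.items()})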

> > One take is that after the Singularity all
> > sentient beings would get a quantity of "mana" and that mana could be used
> > to bid on whichever universal resources are conserved, after which all
> > conserved resources would be private property. But that's just a guess.
>
> I guess that means you don't subscribe to communism, which says private
> property is bad, or democracy, which says that as long as a majority
> agrees, a tax can be imposed on everyone's property to serve some common
> purpose. So do you believe that private property is good in itself, or is
> it good because it leads to good consequences?

I believe that private property is a correct answer in this case because it
seems to be the bedrock for determining whether a solution like communism or
democracy is fair. Suppose that I believe in private property, you believe
in democracy, and Damien Broderick believes in communism. Assuming we're
all still that human at this point, there's no reason why you and 2 billion
people can't choose to live in a democracy, while Broderick and 2 billion
others choose to live in a communism, and I and 2 billion others choose to
live in a private-property system. It seems simpler for conserved resources
to go 1/3 to each of the three systems than that nobody gets any conserved
resources (the communist solution; incidentally, how the heck does anyone
ever make use of anything under post-Singularity communism?) or that the
disposition of the conserved resources would be decided by majority vote.
Of course, this just drops the question back a level and asks what criteria
determine "simpler". I would say that private property is the solution that
resonates most strongly with the Friendliness principles of "If a question
is arbitrary, let people pick their own answers" and "A symmetrical solution
is better than an asymmetrical one; at the acausal level, symmetry is
mandatory". It's possible to view communism and democracy as voluntary
special cases of this rule. If you can figure out a good way to ground
communism and democracy symmetrically on the acausal level, you might be
able to make a case that they are more basic, or more consistent with other
moral principles, than the private-property/volition model.
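
As a toy illustration of that symmetric split, with made-up round numbers
and nothing here intended as an actual proposal: conserved resources are
divided per capita, and each system is just a voluntary pool of the shares
of the people who opted into it.

    TOTAL_CONSERVED = 6.0e9          # arbitrary units of conserved resources
    POPULATION = 6_000_000_000
    per_capita = TOTAL_CONSERVED / POPULATION

    # each camp is a voluntary pool of its members' equal per-capita shares
    camps = {"private property": 2_000_000_000,
             "democracy": 2_000_000_000,
             "communism": 2_000_000_000}

    pools = {name: members * per_capita for name, members in camps.items()}
    print(pools)    # each system ends up controlling 1/3 of the total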

Actually, things are much more complicated than this, but sending off a
partial explanation is probably better than waiting to try and write it up
completely.

> Don't you think it would be a good idea, before porting your philosophy
> into an AI, to examine your philosophy in detail and make sure that is
> really what you want to port?

Why, yes, what a good idea. No offense, Wei Dai, and I'm speaking strictly
in my personal capacity as just one more poster on SL4, but gimme a break!
If it took until now for that thought to filter into my head, I should have
been locked up as a menace to the human species in 1998.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


