Re: The Future of Human Evolution

From: Marc Geddes
Date: Thu Sep 30 2004 - 04:19:03 MDT

 --- Eliezer Yudkowsky <> wrote:

> The problem word is "objective". There's a very deep problem here,
> a place where the mind processes the world in such a way as to
> create the appearance of an impossible question. Sort of like
> similar questions asked by dualists through the ages: How could
> mere matter give rise to thought? How could mere matter give rise
> to intentionality? How could 'is' ever imply 'ought'? Considering
> the amount of philosophical argument that has gone into this sort
> of thing, I hope it is not too implausible when I say that my
> preferred method for actually untangling the confusion is a bit
> difficult to explain. But if you're willing to leave the confusion
> tangled and ask after my moral output, then my answer starts out
> with the me that exists in this moment, and then asks what changes
> to myself I would make if I had that power. "Human is what we are.
> Humaneness is renormalized humanity, that which, being human, we
> wish we were." Etc.
>
> I don't want to say that you can *safely* ignore the philosophical
> confusion. That sort of thing is never safe. I do allege that the
> philosophical confusion is just that, a confusion, and after it
> gets resolved everything is all right again. The apparent lack of
> any possible objective justification doesn't mean that life is
> meaningless, it means that you're looking at the question in a
> confused way. When the confusion goes away you'll get back most of
> the common sense you started with, only this time you'll know why
> you're keeping it.
>
> The root mistake of the TMOL FAQ was in attempting to use
> clever-seeming logic to manipulate a quantity, "objective
> morality", which I confessedly did not understand at the time I
> wrote the FAQ. It isn't possible to reason over mysterious
> quantities and get a good answer, or even a well-formed answer;
> you have to demystify the quantity first. Nor is it possible to
> construct an AI to accomplish an end for which you do not possess
> a well-specified abstract description.
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

I'm still not convinced that you have resolved the confusion,
although you do a good job of pretending that you have ;)

Even if you can't derive 'ought' from 'is', this doesn't mean that
there isn't an objective morality.

There seems to be a confusion about 'levels of organization' here.
Skepticism about moral objectivity seems to be a consequence of
scientists being overly reductionist. Complex systems can give rise
to new 'emergent' properties at higher levels of organization, and
it is a mistake to demand that everything be reduced to the physics
level. Morality looks like an 'emergent' property.

For instance, the concept of 'wetness' is an emergent property which
cannot be *explained* in terms of the individual behaviour of
hydrogen and oxygen atoms. The *explanation* of 'wetness' does NOT
NEED to be *derived* from physics. 'Wetness' can of course be
reduced to physics in one narrow sense: the sense of a *causal
description*. But a causal description of something is NOT the same
thing as an actual *understanding* of that something.

The fact that you can't derive an 'ought' from an 'is' does not
disprove objective morality, since morality is an emergent property
which does not need to be *derived* from physics.

I'm still not convinced that 'Collective Volition' is
the last word in Friendliness theory.


"Live Free or Die, Death is not the Worst of Evils."
                                                    - Gen. John Stark

"The Universe...or nothing!"

Please visit my web-sites.

Sci-Fi/Fantasy and Philosophy :
Mathematics, Mind and Matter :


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT