Re: "Objective" Morality

From: Marcello Mathias Herreshoff (
Date: Tue Aug 09 2005 - 12:25:58 MDT

On Tue, Aug 09, 2005 at 06:55:40PM +1200, Marc Geddes wrote:
> --- Tennessee Leeuwenburg <>
> wrote:
> > That's incorrect : Objective != Universal.
> >
> > Universal means that the same morality applies
> > universally (i.e. for
> > everyone) whereas Objective means that for any one
> > person their
> > morality is an objective fact.
> >
> To clarify, 'Universal Morality' is what I really
> always meant. I think there's a set of moral
> principles applicable to all sentients in the
> multiverse at all times. This background set of
> principles goes beyond mere 'Volition' (what sentients
> want).

Yes, I realize now that I did gloss over a few of the differences between
Objective and Universal Morality, but my argument makes both of these
concepts meaningless anyway.

As Tennessee Leeuwenburg seems to define them, objective morality is the
claim by some person that their personal morality is an objective fact,
whereas universal morality claims that there is some morality out there which
applies universally.

If you meant something else by either of these terms, feel free to clarify.

How does one find out whether a morality is "applying universally"? I
already showed that no experiment will do the trick. Thus that claim is
meaningless too, as there is no way to find out which morality is universal.

> The reason I keep banging on and on about this on list
> to the point where I've annoyed people almost to
> getting banned is because I'm certain I'm right.
> I haven't been able to produce a completely coherent
> proof yet. That's the only little problem ;)
> Seriously, though, I would be absolutely astonished if
> I was wrong about this. But if it turns out that I *am*
> wrong about this, after the Singularity you are all
> quite welcome to get print-outs of all my SL4
> postings, and stuff them down my throat one at a time
> for being such an idiot.
> I think Eliezer is simply confused. He's pissing
> around in the dark without a clue. Poor fellow.
> Brilliant? Yes. Right? No.
> Ask yourselves:
> Does the idea of general intelligence without
> sentience (consciousness) *really* make sense to you?

Well of course not! The only examples of intelligence humans have ever seen
in their evolutionary past are sentient ones. That doesn't lessen the
possibility that non-sentient intelligences will exist in the future.

> Does the idea of a super-smart intelligence interested
> only in tiling the universe with paper-clips *really*
> make sense to you?

Nope. Neither does a nuclear missile. This doesn't make the threat any less
real.
> To my mind, these ideas are obviously quite absurd.
> Always were. Always have been. Only someone with
> Autism or Aspergers could seriously give them
> credence.

You are half right. These two conditions are marked by an inability to
understand the behavior of other humans. Perhaps someone with one of these
conditions would have an easier time understanding what a real AI would
actually do, as they would not constantly be attempting to empathize with
something that their brain isn't the least bit like.

The list of things that didn't make sense to people in their time is very
long indeed. It contains almost every single technological revolution, from
the telephone to the computer, from relativity to quantum mechanics. Do you
expect something as revolutionary as an AI to actually make sense to us?

> I point to the proven fact that there's a *unity* to
> the universe, in the sense that scientific theories
> from different subject areas have in the past always
> *fitted together* in a coherent way.
> As an example I point to the 4 physics forces:
> Electromagnetism, Gravity, Weak Nuclear, Strong
> Nuclear. Modern physics frameworks (for instance 'the
> standard model') have succeeded in 'unifying' the 4
> forces into a single explanatory framework.
> There is no reason why all facets of the mind should
> not also be *integrated* (unified) into a single
> explanatory framework also.

Physics isn't Psychology.

The Laws of the Universe are simple and pretty as far as we can tell.
The human brain is a hodgepodge of layered complex functional adaptations,
most of which are set up to deal with medium-sized things for medium-length
times on the plains of Africa. Given that the brain was made by a blind
watchmaker who cares far less about consistency than even Microsoft, what
makes you expect there to be underlying principles?

I grant you that because we have the ability to empathize with other humans
(really a poorly designed emulation hack), it may seem, given how much we
have in common, that there are universal principles. This is an illusion
maintained by the fact that most people are very similar mentally. Look
at people with (to use your example) autism, if you want an idea of what
intelligence looks like when it is even just a tiny bit different.

> Take 'Values' on the one hand, and 'Intelligence'
> (ability to make predictions) on the other.
> If it really were the case that you could have a
> super-smart intelligence with any old value system,
> that would mean that it would be impossible to
> combine Values and Intelligence into a single
> explanatory framework. This goes against everything
> we know about the fundamental *explanatory* unity of
> the cosmos.

Again, human values and human intelligence are not fundamental principles.
They are the products of complex functional adaptations.

> As an analogy, I point to physics again:
> Electromagnetic and Weak Nuclear forces. Everyone
> thought they were separate, but then physics showed
> that they were related: under certain conditions they
> combine into a single force: the Electro-weak force.
> And there is every reason for thinking that 'Values'
> and 'Intelligence' are related in some way not yet
> understood, so that a super-smart intelligence must
> correlate with Friendliness as I've claimed. Again,
> if this wasn't true science would be unable to integrate
> values and Intelligence into a single explanatory
> framework, which would run contrary to everything we
> know about the fundamental unity of the cosmos.

See above.

> Thoughts cannot float around free of brains. And
> brains obey physical laws. So it's reasonable to
> suppose that there are 'laws of thoughts' that apply
> to all sentients. Such-and-such a thought has to
> correlate with such-and-such a brain state (otherwise
> functionalism would be false).
> There are basic conditions that need to be met for
> 'cognition' to occur in the first place. Pure
> Self-awareness and ability to take action are not
> themselves a part of the 'Volitional' level. Volition
> is what sentients want, but self-awareness itself
> comes from the basic laws underpinning cognition.

Again, what basic laws?

> For instance, take the ability to detect 'spatial
> patterns'. This pattern-recognition is only possible
> given a certain meta-condition: namely that there is
> some degree of *symmetry* in physical objects. So the
> meta-principle *Symmetry* is a necessary condition of
> cognition. But for a mind which is self-aware it
> *also* becomes a *value* - symmetry is valued because
> it enables cognition to occur in the first place and
> allows self-awareness to begin with.
> This shows that there are meta-values which are
> 'necessary conditions of cognition', and are not
> themselves a part of the level of 'Individual
> Volition', but go beyond this and constitute a sort of
> 'Universal Volition'.

If you are not defining Universal as all of humanity, which would make it
Collective, what or who do you even mean by it? If you are postulating a
deity, it might really be time for somebody to call the list sniper.

> As I said earlier, I think the foundation of values is
> *not* individual (or even collective) Volition, but
> *Self-Actualization* - becoming more aware of our true
> nature. And our 'true nature' is the objective,
> universal principles underpinning self-awareness and
> cognition.

So I get Enlightened when I become truly aware of the hodgepodge that is
the human brain? I seriously doubt it. If we really knew about all the
kludges and piled-up lies that the brain uses to accomplish its evolutionary
business, would we really be all that happy? On the contrary, I suspect it
would offend our moral sensibilities, and make us want to move out of our
wetware and become truly decent people.

-=+Marcello Mathias Herreshoff

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT