Re: "Objective" Morality

From: Marcello Mathias Herreshoff (
Date: Wed Aug 10 2005 - 18:29:32 MDT


Note: I found that this message was getting redundant in some places, so I
refer back to old sections of writing to economize by giving them headers
like: --- HEADER ---

On Wed, Aug 10, 2005 at 05:02:30PM +1200, Marc Geddes wrote:
> --- Marcello Mathias Herreshoff <> wrote:
> > Yes, I realize now that I did gloss over a few of
> > the differences between
> > Objective and Universal Morality, but my argument
> > makes both of these
> > concepts meaningless anyway.
> >
> > As Tennessee Leeuwenburg seems to define them,
> > objective morality is the
> > claim by some person that their personal morality is
> > an objective fact,
> > whereas universal morality claims that there some
> > morality out there which
> > applies universally.
> >
> > If you meant something else by either of these
> > terms, feel free to clarify.
> >
> > How does one find out whether a morality is
> > "applying universally"? I
> > already showed that no experiment will do the trick.
> > Thus that is
> > meaningless too, as there is no way to find out
> > which morality is universal.
> Rubbish. Plenty of physics concepts are supposed to
> 'apply universally'. That doesn't stop us assigning
> rational probabilities to supposed 'universals' on the
> basis of finite evidence. Rationality is a process of
> assessing competing *explanations* to see which works
> best.
I agree with you so far. (except for the "Rubbish")

> To treat Universal Morality scientifically we
> would simply form multiple hypotheses about sentient
> behaviour
Stop right there! Questions about morality are not about what sentients
*actually* do; they are about what they *should* do. If you found a sentient
life form that thought eating human babies was moral, would you change your
opinion on baby eating?

> and use 'inference to the best explanation'
> to cut down the space of probability by selecting the
> hypothesis with the best explanatory power, just as we
> do for every other branch of science.

To summarize, the reason that you can't treat universal morality
scientifically is that there are no testable experiments that could
demonstrate whether any particular morality is in fact the universal one.
There is absolutely nothing wrong with induction. However, induction can't
be used here because there is no evidence on which to use it.

> > Physics isn't Psychology.
> Are you sure? Objective Idealism treats physics as a
> form of cognition you know. How can something be said
> to exist at all if it wasn't being *interpreted* by
> some sort of cognitive process? Everything you know
> about the world requires a mental model to be
> comprehended you realize?

So physics is a form of cognition? Things exist by being interpreted?
Alright then! Let's put that to the test! Nope. Sorry. The test object
failed to disappear when I stopped thinking about it. I was also completely
unsuccessful at telekinesis.

If all reality were just interpreted, don't you think these feats would be
commonplace? They would be as simple as changing one's point of view.

> > The Laws of the Universe are simple and pretty as
> > far as we can tell.
> > The human brain is a hodge podge of layered complex
> > function adaptations, most
> > of which are set up to deal with medium sized things
> > for medium length times
> > on the plains of Africa. Given that the brain was
> > made by a blind watch
> > maker who cares far less about consistency than even
> > Microsoft, what makes
> > you expect there to be underlying principles?
> Did I mention the *human* mind specifically? No. I
> was talking about *sentient minds in general*.

If you make a statement about sentient minds in general, it must also apply to
humans, because they are an instance of sentient minds in general. If you
expect sentient minds in general to have underlying principles, human minds
must also share those same principles.

> Abstract out all the arbitrary hodge podge features of
> the human brain and look *only* at the *necessary*
> features - the general properties of the brain
> required for rationality and self-awareness.

Now there's a tricky one! How do we know which principles are necessary
for sentience? Sentience is not a well-understood phenomenon. Of course,
if I were to use Marc Geddes' definition of sentience (had one been proposed),
then he would naturally be completely right about everything he says about it.

And even if I conceded that definition of the word sentience, we would still
have to worry about all the "non-sentients" which might do nasty things like
baby munching or tiling the universe with paper clips.

> Since
> brains run on physical laws, there should be general
> principles that apply to all sentients.

Firstly, I should point out that "principle" has at least two meanings: a
physical meaning, as in "the principle of least action", and a moral
meaning, as in "a person of principle". They are very different, and you
are, intentionally or unintentionally, mixing them up.

So, sure, the laws of electromagnetism apply to all sentients, as do all the
other established physical principles. But this does not mean that the
resulting intelligences follow common moral principles. No morality is
inherent in the universe for reasons I already explained.

> > Again, human values and human intelligence are not
> > fundamental principles.
> > They are the products of complex functional
> > adaptations.
> Again, did I mention *human* values and *human*
> intelligence specifically. No. I was talking about
> Values and Intelligence *in general*.

> Consciousness, Values and Intelligence *are*
> fundamental properties of the cosmos that need to be
> explained.
--- PROGRAM ANALOGY ---
Really? Is <insert your favorite large computer program here> a fundamental
property of the microchip it is running on? Try imagining your program with
some new feature added, or some old feature discarded and you will see why
that program was not fundamental. In the same way, it is not hard to
hypothesize many different versions of consciousness, values and intelligence.

All three of the properties you mention are thus no more fundamental to the
universe than the program is to the microchip. Our
versions of these three things are complex functional adaptations to our
evolutionary environment. AIs and aliens will have different versions by
reason of different design/evolution.

> Humans are not the only beings that can be
> conscious, have intelligence and have values.
Yes. However, the values may be completely different from ours, and
consciousness is optional. Computer programs can say practically anything.
An intelligence can believe practically anything about what is desirable, or
may not even have that concept in any recognizable form. That is the point.

> And as I pointed out above, if you abstract out all
> the hodge-podge features, you'd be left with the
> general features common to all sentients.

> You can't
> just throw brains together willy-nilly. There are
> common physical principles required for brains to work
> at all.

> > If you are not defining Universal as all of
> > humanity, which would make it
> > Collective, what or who do you even mean by it? If
> > you are postulating a
> > deity, it might really be time for somebody to call
> > the list sniper.
> I never mentioned a deity. Read what I say. I
> clearly defined 'Universal' as something held in
> common *by all logically possible sentients*. This is
> clearly different from a Collective. See the
> difference:
> Collective: Something defined by reference to all
> *currently* existing sentients in some context
> Universal: Something defined by reference to all
> *logically possible* sentients.
The space of all logically possible sentients is absolutely massive!
Besides sentience itself, I very much doubt you will find much in common.

> > So I get Enlightened when I become truly aware of
> > the hodge podge that is
> > the human brain? I seriously doubt it. If we
> > really knew about all the
> > kludges and piled up lies that the brain uses to
> > accomplish its evolutionary
> > business would we really be all that happy? On the
> > contrary, I suspect it
> > would offend our moral sensibilities, and make us
> > want to move out of our
> > wetware and become truly decent people.
> Again, read what I say! I said you get enlightened
> by learning about your *true* nature, not every part
> of your nature. I defined your *true* nature to be
> only that small part of yourself which is *universal*
> - i.e common to all logically possible sentients.
> Again, ignore all the kludges and hodge-podge features
> of your mind. These are not part of your true nature
> as I have defined it here.
> Which parts of your mind enable you to be self-aware,
> to reason and to be altruistic? *These* are your true
> nature. All the other evolutionary kludges are just fluff.

You say my true nature is to be self-aware, to reason and to be altruistic.
I might even admit that that is a definition of sane humanity's true nature,
or at least of what we want it to be, albeit an over-simplified one.

But to say that these are the true nature of sentience is another thing
altogether. You are treating three distinct properties as if they were a
single package.

All three of these properties are probably as un-fundamental as the existence
of rice pudding and income tax. (See PROGRAM ANALOGY)

Why not a mind which can reason but is neither self-aware nor altruistic?
Why not a mind which can reason and is self-aware but not altruistic?

Time and time again, history has recorded humanity believing it is the center
of the universe in some way and inevitably being wrong. I have already shown
that there is no universal morality in any meaningful way. It would be the
height of anthropomorphic arrogance to assume that an intelligence must have
values similar to ours in order to progress beyond any specific point.

-=+Marcello Mathias Herreshoff

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT