From: Jef Allbright (email@example.com)
Date: Sun Feb 29 2004 - 12:24:03 MST
Ben G wrote:
> But what is this "Universal Morality"?
> Your definition gives some properties that UM should have, but that's
> not enough.
> Someone walked into my house this morning and threatened to dissolve
> my children with acid. Should I have shot him with my AK47, or tried
> to wrestle him to the ground and immobilize him? What does UM say
> about this, or any other particular situation?
It appears Marc experienced a significant "Aha".
The key point, it seems to me, is that he postulates a system of
Universal Morality, along with more limited Personal Moralities, and
that goodness is measured by how closely PMs fit UM.
I agree with what I see as this essential point. (Apologies to Marc if
I've oversimplified or misrepresented his meaning.)
Ultimately, an ethical system must be grounded in "what works". This is
in accord with those who say it must be grounded in "survival" or
those who say it must be grounded in "what must ultimately be accepted."
Note: This is not to be confused with the "naturalistic fallacy" of
concluding ought from is.
An effective ethical system is similar to any application of science.
For example, to build a radio, one must have some understanding of
electronics and electromagnetic wave propagation. To build a better
radio, one must understand Shannon information theory. To build an even
better communication system, one must understand the behavior of
communication networks in terms of social dynamics and more.
These increasingly nested context levels can be usefully compared to
nested levels of moral wisdom. As organisms grow in the scope of their
interactions with "Other", they become increasingly aware of the variety
and value of the choices available to them (apologies for the excessive
anthropomorphizing to make the point simply). In this growth, they
approach, but never reach, the "Universal Morality" that I think Marc is
also referring to.
Ethics has been, and continues to be, confusing for two general reasons.
(1) Lack of understanding of the evolved basis for human values
As human culture has evolved, we have invented a progression of ideas
about morality, from simple self-survival to survival and growth of
kin/tribe/nation/..., and a progression of justifications for these
values in terms of our understanding at the time such as "might makes
right" or "god-given laws" or humanistic "self-evident rights", etc. Up
until now, there has been little understanding of how virtually all of
our values are rooted in our evolved state.
(2) Lack of understanding of human point of view as *within* the
process, not external to the process
Here is where we approach the confusion over "observer-centered" and
"non-observer-centered" morality. Western science and society have
inherited a concept of Self as independent of the ongoing processes of
the universe. This permeates our language and philosophy to the extent
that it is very difficult to think about these things. The confusion
over qualia, the so-called "hard problem of consciousness", and related
puzzles rests on a fundamentally misleading belief in "cogito, ergo sum"
that is only now rising far enough into scientific awareness to be poked
at and questioned anew. There is increasing awareness -- based on
scientific experiments, experiences with mind-altering substances, and
mystical experiences -- that the conscious experience of self is not as
fundamental as assumed, and that the experience of direct perception of
reality (including Self) is an illusion. But there is a long road ahead
before enough people understand this for a more enlightened and
scientifically accurate view of Self to play a significant role in
human thinking and policy.
In the bigger picture, it is naive to look for simple answers to moral
questions, because they always depend on context. In Ben's question
above, about how he should respond to the intruder, the decision
will of course be a result of the details known and the values applied
in that particular instance. More specifically, early morality would
have been simple: "I will react with deadly force when I perceive a
deadly attack on my family." More recent moral systems allow that there
might be some question of the right thing to do, based on the dominant
social and religious influences along with the underlying evolved
instinct to defend with deadly force.
At a more highly evolved context level, one would consider a wider range
of choices and values: a greater ability to differentiate between risk
level, risk probability, and risk imminence; the psychological state of
the intruder as well as one's self; an assessment of the relative
intelligence of the intruder and one's capability to reason with him or
to fool him into changing his intentions; all balanced against a greater
awareness of the value of the intruder's life in terms of future
interactions (which may be positive or negative, but are considered), and so on.
I agree with Marc that a Universal Morality can be said to exist as
something that can be measured against and approached, with Personal
Morality being the approximation that we actually have to deal with. It
is identical with the concept of a true physical reality that exists and
can be measured against, while subjective reality is all we can
experience and deal with. My point is that these are actually one and
the same, and that a future ethical system will be based on a science of
complexity dynamics rather than a specific set of rules to be followed.
As with any branch of science, there will be fundamental axioms. These
might be popularly taken as "moral truths" with an almost mystical
flavor such as "Self and Other are distinct and both are essential",
"Growth is essential and is achieved via interaction between Self and
Other", "Growth is maximized via interaction with adjacent Other prior
to distant Other", "Growth is maximized via increasing diversity of
interaction", and so on. During a period of transition, these might be
adopted by some as the basis of a new pseudo-religious morality, but in
the bigger picture they will be a closer approximation to "what works."
>> Marc Geddes wrote:
>> The Fundamental Theorem of Morality
>> This theorem was first publicly posted to the sl4 mailing list on
>> the World Wide Web by Marc Geddes (New Zealand) on 29th February, 2004.
>> Definition of Universal Morality:
>> * UM is normative (all rational beings would converge on a unique
>> solution given sufficient intelligence and time)
>> * UM is morally symmetric (Universal, that which works if all
>> sentients respect it)
>> * UM is consistent (Non-contradictory)
>> * UM is non-observer centered
>> Friendliness and Unfriendliness
>> The completely Friendly (good) sentients are the sentients with
>> Minds congruent with Universal Morality. That is, the Friendly
>> sentients are the sentients with Personal Moralities congruent with
>> Universal Morality. The completely Unfriendly (evil) sentients are
>> the sentients with Minds incongruent with Universal Morality. That
>> is, the completely unfriendly sentients are the sentients with
>> Personal Moralities completely incongruent with Universal Morality.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT