From: Marc Geddes (firstname.lastname@example.org)
Date: Fri May 21 2004 - 01:00:04 MDT
--- Michael Roy Ames <email@example.com> wrote:
> Marc Geddes,
> You wrote:
> > We note that although Universal Morality is
> > in all of us, it is 'filtered' by our 'Personal
> > Morality'. Our Personal Morality interferes with
> > Universal Morality, and as a result we can only
> > see a very 'low resolution' image of 'Goodness'. So
> > learning to be more moral is an 'optimization
> > process': we need to start adjusting our Personal
> > Morality in order to let Universal Morality shine
> > through.
> This sounds like a pitch from a traveling
> guru-spiritualist :)
*Marc shrugs and grins* Sorry. But my description is
basically an accurate plain-English summary of my
theory. There's really no other way to say it.
> There is little evidence that humans share an
> inbuilt universal morality, filtered or unfiltered.
> I'll allow that there are a number of ideas of
> right and wrong that are broadly, but not
> universally, shared by adults. However, morality
> varies widely by age group, by culture, by religion
> and by social class - and is heavily affected by
> one's immediate environment. Your hologram analogy
> and your theory don't fit with these facts.
> Michael Roy Ames
> Michael Roy Ames
I would refer you back to the relevant sections of
the 'Creating Friendly A.I.' document. Go to section
3.4.4 ('The actual definition of...'). The definition
there is described as 'normative', meaning that there
is convergence to a single morality:
[Humanity is diverse, and there's still some variance
even in the panhuman layer, but it's still possible to
conceive of description for humanity and not just any
one individual human, by superposing the sum of all
the variances in the panhuman layer into one
description of humanity. Suppose, for example, that
any given human has a preference for X; this
preference can be thought of as a cloud in
configuration space. Certain events very strongly
satisfy the metric for X; others satisfy it more
weakly; other events satisfy it not at all. Thus,
there's a cloud in configuration space, with a clearly
defined center. If you take something in the panhuman
layer (not the personal layer) and superimpose the
clouds of all humanity, you should end up with a
slightly larger cloud that still has a clearly defined
center. Any point that is squarely in the center of
the cloud is "grounded in the panhuman layer of
from 'Creating Friendly A.I' (Yudkowsky)
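The cloud-superposition picture in that passage can be sketched numerically. This is only an illustrative toy, not anything from CFAI itself: the one-dimensional "configuration space", the shared panhuman center, and all the spread values are assumptions chosen just for this example.

```python
import random

random.seed(0)

# Toy model: each person's preference for X is a "cloud" (here a 1-D
# Gaussian) whose personal center is drawn from a narrow panhuman
# distribution. Superposing everyone's clouds should give a slightly
# larger cloud that still has a clearly defined center.

PANHUMAN_CENTER = 0.0   # assumed shared center of the panhuman layer
PERSONAL_SPREAD = 0.3   # assumed person-to-person variance in centers
CLOUD_WIDTH = 1.0       # assumed width of each individual's cloud

def superpose(num_people=10_000, samples_per_person=10):
    """Sample the superposed cloud of all individuals' preferences."""
    points = []
    for _ in range(num_people):
        personal_center = random.gauss(PANHUMAN_CENTER, PERSONAL_SPREAD)
        for _ in range(samples_per_person):
            points.append(random.gauss(personal_center, CLOUD_WIDTH))
    return points

points = superpose()
n = len(points)
center = sum(points) / n
width = (sum((p - center) ** 2 for p in points) / n) ** 0.5

# The superposed cloud is only slightly wider than any one person's
# cloud (roughly sqrt(1.0**2 + 0.3**2) ~ 1.04), and its center stays
# close to the panhuman center.
print(f"center ~ {center:.2f}, width ~ {width:.2f}")
```

Under these assumptions the superposed cloud ends up barely wider than an individual one, with a well-defined center, which is the qualitative point the quoted passage makes.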
The variance in morality between humans is described
by my term 'Personal Morality'. But this variance
does not mean that there isn't a background 'Universal
Morality' at a high enough level of abstraction.
That's why I said that:
Universal Morality x Personal Morality = Mind
All the variance between humans is caused by the
'Personal Morality' term.
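To make that variance claim concrete, here is one toy numerical reading of the equation. The multiplicative form, the constant value of the universal term, and the filter distribution are all my own assumptions for illustration, not part of the theory itself.

```python
import random

random.seed(1)

# Toy reading of "Universal Morality x Personal Morality = Mind":
# every mind observes the same fixed universal term, multiplied by a
# person-specific filter. All names and numbers are hypothetical.

UNIVERSAL = 1.0  # identical for every mind

def mind(personal_filter):
    return UNIVERSAL * personal_filter

filters = [random.gauss(1.0, 0.2) for _ in range(100_000)]
minds = [mind(f) for f in filters]

mean = sum(minds) / len(minds)
var_minds = sum((m - mean) ** 2 for m in minds) / len(minds)
f_mean = sum(filters) / len(filters)
var_filters = sum((f - f_mean) ** 2 for f in filters) / len(filters)

# Since UNIVERSAL is the same constant for everyone, all variance
# across minds traces back to the personal-filter term.
print(f"variance of minds:   {var_minds:.4f}")
print(f"variance of filters: {var_filters:.4f}")
```

In this sketch the two variances coincide, illustrating the claim that the 'Personal Morality' term accounts for all the variance between humans.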
I said that 'Universal Morality' can be thought of as
'fully present in everyone', because all sentients
are potentially capable of modifying their goal
systems (imagine a seed A.I. or a human that becomes
post-human). Imagine that at some time in the future
you undergo the 'uplifting' process that turns you
into a post-human. When you become post-human, you
would have a far better understanding of 'Goodness'
(because you would be much smarter). Now imagine
running a movie of the uplifting process backwards.
The inverse of the uplifting process can be said to
be 'filtering out' your understanding of morality,
which is in some sense *already present* in you now
(since you are potentially capable of taking all the
steps needed to become post-human).
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
Please visit my web-sites.
Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I, Maths : http://www.riemannai.org