From: Michael Roy Ames (firstname.lastname@example.org)
Date: Sun Nov 24 2002 - 12:32:29 MST
I was asked some questions in the SL4 chat room last night, and
didn't get to answer most of them. Here are my answers, with some
commentary that may be of interest to you.
<ChrisRovner> MRAmes, about your SL4 post: what is the "delta
function" of a chosen action?
<MRAmes> Chris: The 'delta function' of a chosen action is a function
that models the 'delta', or change, in complexity that results from
the action.
<Gordon> Rightness is not dependent on your ability to assess
Rightness. What is Right is Right, regardless of your ability to tell
whether it's right or not
MRA: True. But how *Right* one can knowably *be* depends on your
ability to assess Rightness.
<ChrisRovner> According to MRAmes, if AaR --> infinite, then
Rightness will be in the interval [0,1]. Otherwise it will be in
[a,b] where 0<a<b<1
MRA: False. The measure 'Rightness' will be in the interval [0,1]
regardless of one's level of ability to assess Rightness. But as AaR
increases, one's ability to assess Rightness changes (the window
widens) and one comes closer to the ideal.
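The widening-window idea can be sketched numerically. The post gives no formula for how the window depends on AaR, so the exponential shape below is purely my illustrative assumption:

```python
import math

def rightness_window(aar):
    """Illustrative window [a, b] of Rightness a sentient can assess.

    aar: ability to assess Rightness, >= 0. As aar -> infinity the
    window widens toward the ideal [0, 1]; at aar = 0 it collapses
    to a point. The exponential shape is an assumption, not MRA's.
    """
    width = 1.0 - math.exp(-aar)   # grows from 0 toward 1
    mid = 0.5
    return (mid - width / 2, mid + width / 2)

# The window widens with ability, approaching [0, 1]:
print(rightness_window(0.0))   # (0.5, 0.5) -- no ability, no window
print(rightness_window(2.0))   # a partial window [a, b], 0 < a < b < 1
print(rightness_window(50.0))  # very nearly the full [0, 1]
```

Any monotonically widening mapping would do equally well; the point is only that the bounds [a,b] stay inside [0,1] and approach it as AaR grows.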
<ChrisRovner> I think it would be easier to use a [-1,1] interval;
negative numbers could be interpreted as "Wrongness" (proportional to
the damage inflicted to the System)
MRA: Wrongness is the inverse of Rightness... they are on the same
continuum.
<MichaelA> it's hard to assess what constitutes "damage"
<ChrisRovner> It's hard to assess what constitutes "good"
<ChrisRovner> Equally hard, I mean
<MichaelA> I think it's easier to assess what is "more or less good"
rather than what is "damage"; "damage" is a more vague term
<Gordon> damage is something Bad
<ChrisRovner> Why more vague? Actually, I think it's easier to know
what's destructive than to know what's creative
MRA: All actions increase Entropy, or disorder... at least with our
current understanding of physics ;> Therefore it is a better analogy
to define Rightness on a continuum that does not cross zero. I chose
a positive continuum to eliminate typing all those minus signs :)
MRA: It is *opinion* to judge a given action as bad or good, but it
is *measurable* to judge a given action as a complexity delta.
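The "measurable" claim can be made concrete with a crude proxy: approximating a state's complexity by its compressed length (a standard stand-in for Kolmogorov complexity). Using zlib for this is my assumption, not something the post specifies:

```python
import zlib

def complexity(state: bytes) -> int:
    """Crude complexity proxy: compressed length of the state's description."""
    return len(zlib.compress(state, 9))

def complexity_delta(before: bytes, after: bytes) -> int:
    """The 'delta function' of an action: the change in complexity it caused."""
    return complexity(after) - complexity(before)

# Erasing structure yields a negative delta; adding varied structure
# yields a positive one.
varied = bytes(range(256))    # 256 distinct byte values
uniform = b"\x00" * 256       # same length, no variety
print(complexity_delta(varied, uniform))   # negative: structure destroyed
print(complexity_delta(uniform, varied))   # positive: structure created
```

Compressed length is only one of many possible complexity measures, and a weak one, but it makes the point: the delta is a number one can compute, not an opinion.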
<ChrisRovner> [Friendly AI is] Building something that helps us
understand what the heck we're talking about when we say Right and
Wrong
MRA: Yes. This is a big part of what I see as valuable in Friendly
AI.
<Gordon> in my view, accidental processes can be assigned morality
MRA: Yes, they can be assigned morality, but it is meaningless to do
so. Judging how an accidental process *affects* a sentient is
meaningful, and in that case it would be on the Bad--Good continuum,
not the Right--Wrong continuum.
<ChrisRovner> If the process is not carried out by a sentient being,
then it can't have moral value
<MichaelA> I think there's a lower bound of nervous system complexity
needed to approximate morality with any degree of accuracy
whatsoever, but maybe that's just anthropomorphic on my part
MRA: As intelligence and situational knowledge approaches zero, the
'window' of Rightness that a sentient can choose from approaches
zero. A mouse, for example, is not very sentient - but there is a
spark there... it can make decisions. The Rightness of those
decisions can be judged based on the mouse's available window.
<Eliezer> higher-Quality premises that result in higher-Quality
conclusions means that your own perceptions of Quality have perceived
a step forward
<Eliezer> heh, Pirsig never took it quite this far, but it's a good
<Eliezer> your perceptions of Quality have perceived *better
perceptions of Quality*
MRA: 'Quality' is an interesting idea... I liked it when I originally
read Pirsig. But even then, it bothered me that it was so
subjective. If we cannot test it against something outside of
ourselves, then it's not much use.
One thing that struck me about FAI was the way you suggested it be
grounded in human decision-making characteristics... and not just
those of one human, but the cloud of characteristics that represents
many different humans. This hooks Friendliness into something real;
it makes it a very defensible morality. With the definition of
Rightness, I have tried to describe a way to measure a real situation
in the universe, and to relate that to morality as well. Equating
Rightness in some way with 'complexity change' removes a lot of the
baggage that comes along with human discussions about morality. Does
it throw out the baby with the bath water? Perhaps not. The
definition could be framed in several more colloquial ways:
'To create is better than to destroy',
'Living things are more valuable than dead things',
'A human is more precious than an ant',
'Peace is better than war'.
Perhaps the great majority of moral decisions could be re-assessed in
the form: "What decision will lead to the maximum complexity
increase?"
There may be exceptions to the correspondence - and if there are, I
would very much like to see an example.
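That re-assessment rule reads as a simple argmax. Here is a minimal sketch, reusing compressed length as a hypothetical complexity proxy (my choice, not the post's) over a few toy predicted outcomes:

```python
import zlib

def complexity(state: bytes) -> int:
    """Crude complexity proxy: compressed length of the state's description."""
    return len(zlib.compress(state, 9))

def most_right_action(current: bytes, outcomes: dict) -> str:
    """Pick the action whose predicted outcome maximizes the complexity increase."""
    return max(outcomes, key=lambda a: complexity(outcomes[a]) - complexity(current))

# Toy example: three candidate actions with predicted resulting states.
current = b"\x00" * 128
outcomes = {
    "destroy": b"\x00" * 16,        # less of the same uniform state
    "preserve": current,            # no change
    "create": bytes(range(128)),    # same size, far more varied structure
}
print(most_right_action(current, outcomes))  # "create"
```

Any claimed exception to the correspondence would be an input on which this argmax disagrees with the intuitive moral judgment.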
Michael Roy Ames