Re: All sentients have to be observer-centered! My theory of FAI morality

From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Thu Feb 26 2004 - 23:43:46 MST


 --- Michael Anissimov <michael@acceleratingfuture.com> wrote:
> Marc, have you carefully read
> http://www.intelligence.org/CFAI/anthro.html?
> It poses very convincing rebuttals to what you are
> currently arguing.

I read them, Mike. I didn't find these 'rebuttals'
very convincing.

> Quote:
>
> "There is no reason why an evolved goal system would
> be anything /but/
> observer-focused. Since the days when we were
> competing chemical blobs,
> the primary focus of selection has been the
> individual. Even in cases
> where fitness or inclusive fitness
> <http://www.intelligence.org/CFAI/info/glossary.html#gloss_inclusive_reproductive_fitness>
> is augmented by behaving nicely towards your
> children, your close
> relatives, or your reciprocal-altruism trade
> partners, the selection
> pressures are still spilling over onto /your/ kin,
> /your/ children,
> /your/ partners. We started out as competing blobs
> in a sea, each blob
> with its own measure of fitness. We grew into
> competing players in a
> social network, each player with a different set of
> goals and subgoals,
> sometimes overlapping, sometimes not."
>
> When selection pressures no longer adhere to single
> organisms strongly
> (as in ant colonies), "selfish" behavior gets
> distributed across the
> entire colony rather than each unique individual.
> Many simple forms of
> life share most or all of their genetic material
> with their brethren,
> and therefore behave in purely selfless ways. This
> happens in a very
> predictable way... if a selection pressure came into
> existence that
> selected for genuine benevolence and/or pure
> selflessness, then
> eventually the species in question would evolve that
> way. But no such
> selection pressures exist. Here is something I once
> wrote on another list:
>
> 'Try to imagine an animal that has been evolving for
> millions of years in an
> environment where benevolence is the only effective
> survival strategy,
> and where a certain amount of pseudoselfish behavior
> exists as a
> coincidental subgoal of efficiency. I'm not saying
> that all beings in
> the future should be forced to be like this, but
> it's just a thought
> experiment to show that the existence of perfectly
> selfless beings could
> be possible. If the selection pressures towards
> altruism were intense
> enough, not only would benevolence be the only
> externally observable
> behavior amongst these entities, but the *tendencies
> to resort to
> egotism or notice opportunities for selfish deeds*
> would not merely be absent -
> they would not even be cognitively capable of being
> egoistic unless they
> performed neurosurgery on themselves (or whatever.)
> And why would they
> want to do such a thing? They might use
> computational models to see what
> it would have been like if they had evolved more
> "selfishly", (a vague,
> theoretical, abstract concept to them) and see only
> war,
> negative-sumness, and counterproductivity. One of
> the most disturbing
> things they might notice is that such a culture
> could develop memetic
> patterns which act strongly to preserve the existing
> cognitive template,
> and disbelieve proposed designs for minds reliably
> possessing
> selflessness, even in the context of a highly
> selfish world.'

Arguments from evolution don't say that much.
Firstly, there's a big difference between observation
and experiment; arguments based solely on
observation are notoriously unreliable.

Secondly, the class of beings that matters for the FAI
hypothesis is the SENTIENT beings (those capable of
intelligent, abstract reflection). It's only very
recently that sentients (humans) evolved.
Observations based on non-sentients (like ants) aren't
likely to be that relevant.

>
> Maybe you are conflating the idea of an
> observer-*instantiated* morality
> and an observer-biased one. Personal moralities
> will be instantiated in
> some observer, by definition, but this doesn't mean
> that the observer is
> necessarily *biased* towards him/herself. Instead
> of asking whether
> genuinely selfless beliefs and actions are
> "logically possible"
> (philosophically appealing), maybe you should ask if
> they are physically
> possible; given a long enough time, could a
> neuroscientist modify a
> human brain such that the resulting human was almost
> entirely selfless?
> Heck, there is already evidence that the drug E can
> compel people to act
> selflessly, and that approach is incredibly
> untargeted in comparison to
> what an advanced neurosurgeon could do, never mind
> an AGI researcher
> designing an AI from scratch...
>
> Michael Anissimov
>
>

In the real world, all the sentients we know about
(humans) have both non-observer-centered (altruistic)
components to their morality AND observer-centered
(self-centered) components to their morality. If we
define 'Universal Morality' to mean the
non-observer-centered components, and 'Personal
Morality' to mean the observer-centered components,
then human morality as a whole is described by the
following equation:

Universal Morality x Personal Morality

In other words, the altruistic side of our morality
(Universal Morality) interacts with (is transformed by,
hence the multiplication sign) our self-centered
goals (Personal Morality).

Note I don't dispute the existence of an entirely
non-self-centered (altruistic) morality. I agree that
such a morality exists. It's normative, in the sense
that all ethical sentients would converge on it if
they thought about morality for long enough. That's
why I called it 'Universal Morality'.

However, as an empirical fact, I note that all the
sentients we know about (humans) have self-centered
components to their morality as well as altruistic
ones. These self-centered components are not
normative. There is no unique observer-centered
morality; many different kinds are possible. That's
why I called this side of morality 'Personal
Morality'. So clearly all the sentients we know about
(humans) have a morality which is a mixture of
Universal (altruistic) and Personal (observer-centered)
morality. Since such sentients (of which humans are a
specific example) are clearly possible, the general
solution to FAI morality is given by the equation:

Universal Morality x Personal Morality

A 100% altruistic sentient is a special case of this
general equation. If we eliminated all self-centered
goals from our morality, then this would be equivalent
to setting the 'Personal Morality' component to unity
(1).

Thus

Universal Morality x 1 = Universal Morality

and we are left with a 100% altruistic sentient
(equivalent to a Yudkowskian FAI).
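
To put that special case in symbols (my own shorthand,
not anyone's official notation: M for a sentient's
morality as a whole, U for Universal Morality, P for
Personal Morality, with the 'x' read loosely as the
interaction described above):

\[
  M = U \times P, \qquad P = 1 \;\Rightarrow\; M = U \times 1 = U
\]

That is, a 100% altruistic sentient is just the
degenerate case where the Personal Morality factor is
the identity and contributes nothing of its own.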

Whether a 100% altruistic sentient is logically or
empirically possible is a debatable point. But I do
not, in any event, regard such a sentient as
desirable.

If you completely strip out observer-centered goals
('Personal Morality'), you are left with Universal
Morality. But this (by definition) is normative
(unique; all altruistic sentients converge on it). So
if all sentients followed this morality alone, you
would be left with totally bland uniformity.
Everything unique, everything interesting, everything
personal, everything creative, would have been
stripped away. Would you really like to see everything
reduced to this?

Of course, to be moral we should strive to follow
Universal Morality, BUT THIS DOES NOT IMPLY THAT WE
HAVE TO COMPLETELY STRIP OUT OBSERVER-CENTERED GOALS!
Some observer-centered goals will certainly contradict
Universal Morality, but we have no reason for
believing that ALL observer-centered goals contradict
Universal Morality. That is, we can imagine ourselves
behaving in a manner totally CONSISTENT with altruism
(Universal Morality), whilst at the same time having
additional observer-centered goals. Universal
Morality (altruism) is just a very general set of
constraints. For instance we could have personal
moralities like ('Coke is good for me', 'Pepsi is bad
for me'), ('Ice skating is good for me', 'Running is
bad for me') and so on, which don't contradict
Universal Morality. In other words: not all
observer-centered biases are bad! SOME of them are bad
(for instance behaviour which hurts others), and yes,
we want to strip out those parts of our
observer-centered morality which contradict altruism.
But I see no reason why we should want to strip out
ALL of our observer-centered goals.
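
As a purely illustrative sketch (every name and the
'harms_others' flag below are made up for this email;
it is a toy model, not a proposal for an actual FAI
goal system), here is what "keep only the personal
goals that don't contradict the universal constraints"
might look like in Python:

def is_consistent(goal, universal_constraints):
    # A personal goal survives if it violates no universal constraint.
    return all(constraint(goal) for constraint in universal_constraints)

# Universal Morality: very general constraints, e.g. 'don't harm others'.
universal_constraints = [
    lambda goal: not goal.get("harms_others", False),
]

# Personal Morality: observer-centered preferences unique to this agent.
personal_goals = [
    {"name": "drink Coke rather than Pepsi", "harms_others": False},
    {"name": "go ice skating rather than running", "harms_others": False},
    {"name": "trip a rival to win a race", "harms_others": True},
]

# Strip out only the personal goals that contradict Universal Morality.
kept = [g for g in personal_goals if is_consistent(g, universal_constraints)]
print([g["name"] for g in kept])
# -> ['drink Coke rather than Pepsi', 'go ice skating rather than running']

The only thing stripped out is the goal that clashes
with the universal constraint; the harmless, purely
personal preferences survive, which is all I am
claiming.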

In fact, it's the observer-centered goals that make
individuals unique! It's the observer-centered goals
that allow for individual creativity and uniqueness.
If you ever stripped away all your observer-centered
goals, you would be an empty husk, devoid of
individuality or uniqueness (remember, all you would
be left with is Universal Morality, which is
normative). And that's exactly what a Yudkowskian FAI
would be: an empty husk.

So, even if the stripping out of all observer-centered
goals were possible (which I strongly doubt), I don't
even regard it as desirable.

  

=====
Please visit my web-site at: http://www.prometheuscrack.com



