Re: All sentient have to be observer-centered! My theory of FAI morality

From: Michael Anissimov (michael@acceleratingfuture.com)
Date: Thu Feb 26 2004 - 21:19:26 MST


Marc, have you carefully read http://www.intelligence.org/CFAI/anthro.html?
It presents very convincing rebuttals to the position you are currently
arguing. Quote:

"There is no reason why an evolved goal system would be anything /but/
observer-focused. Since the days when we were competing chemical blobs,
the primary focus of selection has been the individual. Even in cases
where fitness or inclusive fitness
<http://www.intelligence.org/CFAI/info/glossary.html#gloss_inclusive_reproductive_fitness>
is augmented by behaving nicely towards your children, your close
relatives, or your reciprocal-altruism trade partners, the selection
pressures are still spilling over onto /your/ kin, /your/ children,
/your/ partners. We started out as competing blobs in a sea, each blob
with its own measure of fitness. We grew into competing players in a
social network, each player with a different set of goals and subgoals,
sometimes overlapping, sometimes not."

When selection pressures no longer act strongly on single organisms
(as in ant colonies), "selfish" behavior gets distributed across the
entire colony rather than concentrated in each individual. Many simple
forms of life share most or all of their genetic material with their
brethren, and therefore behave in purely selfless ways. This happens in
a very predictable way: if a selection pressure came into existence
that selected for genuine benevolence and/or pure selflessness, then
the species in question would eventually evolve in that direction. But
no such selection pressure exists. Here is something I once wrote on
another list (there is a toy simulation after the quote that makes the
point concrete):

'Try to imagine an animal that evolved for millions of years in an
environment where benevolence is the only effective survival strategy,
and where a certain amount of pseudoselfish behavior exists only as a
coincidental subgoal of efficiency. I'm not saying that all beings in
the future should be forced to be like this; it's just a thought
experiment to show that the existence of perfectly selfless beings
could be possible. If the selection pressures towards altruism were
intense enough, not only would benevolence be the only externally
observable behavior amongst these entities, but the *tendencies to
resort to egotism or to notice opportunities for selfish deeds* would
be absent - such beings would not even be cognitively capable of
egotism unless they performed neurosurgery on themselves (or
whatever). And why would they want to do such a thing? They might use
computational models to see what it would have been like if they had
evolved more "selfishly" (a vague, theoretical, abstract concept to
them), and see only war, negative-sumness, and counterproductivity.
One of the most disturbing things they might notice is that such a
culture could develop memetic patterns which act strongly to preserve
the existing cognitive template, and to disbelieve proposed designs
for minds reliably possessing selflessness, even in the context of a
highly selfish world.'
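
To make the selection-pressure point concrete, here is a toy
Wright-Fisher-style simulation. This is entirely my own illustrative
sketch - the parameter names and payoff numbers are assumptions, not
anything taken from CFAI. In an ordinary environment altruists pay a
private cost for a shared benefit and get selected out; in a
hypothetical environment where benevolence is itself the winning
survival strategy, the altruism gene goes to fixation:

    import random

    def simulate(generations=300, pop_size=1000, benefit=0.5, cost=0.3,
                 benevolence_selected=False, mutation_rate=0.001, seed=0):
        """Toy Wright-Fisher model of a single binary altruism gene."""
        rng = random.Random(seed)
        pop = [rng.random() < 0.5 for _ in range(pop_size)]  # True = altruist
        for _ in range(generations):
            share = sum(pop) / pop_size  # current fraction of altruists

            def fitness(altruist):
                if benevolence_selected:
                    # Hypothetical regime: helping is itself the winning
                    # survival strategy, so altruists out-reproduce.
                    return 1.0 + benefit if altruist else 1.0
                # Ordinary regime: everyone shares the group benefit,
                # but only the altruist pays the cost of producing it.
                base = 1.0 + benefit * share
                return base - cost if altruist else base

            weights = [fitness(a) for a in pop]
            pop = rng.choices(pop, weights=weights, k=pop_size)  # reproduce
            # Rare mutation keeps both variants available to selection.
            pop = [not a if rng.random() < mutation_rate else a for a in pop]
        return sum(pop) / pop_size

    for regime in (False, True):
        print(f"benevolence selected: {regime} -> "
              f"final altruist share {simulate(benevolence_selected=regime):.2f}")

Run it in either regime and the population lands wherever the selection
pressure points within a few dozen generations. The point is just that
"no such selection pressure exists" for us - not that one is impossible.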

Maybe you are conflating the idea of an observer-*instantiated*
morality with an observer-*biased* one. Personal moralities will be
instantiated in some observer, by definition, but this doesn't mean
that the observer is necessarily *biased* towards him/herself. Instead
of asking whether genuinely selfless beliefs and actions are "logically
possible" (a philosophically appealing question), maybe you should ask
whether they are physically possible: given a long enough time, could a
neuroscientist modify a human brain such that the resulting human was
almost entirely selfless? Heck, there is already evidence that the drug
ecstasy can compel people to act selflessly, and that approach is
incredibly untargeted in comparison to what an advanced neurosurgeon
could do, never mind an AGI researcher designing an AI from scratch...

Michael Anissimov
