From: Keith Henson (hkhenson@rogers.com)
Date: Thu Mar 24 2005 - 18:49:54 MST
At 12:21 PM 24/03/05 -0500, you wrote:
> > I don't think I'd take oblivion for any purpose, even
> > to save ten other people. Although I'd really love to
> > be that philosophical, I just can't pay my life for
> > ideals like utilitarianism.
>
>I believe you.
>
>However, some feel differently.  I know several individuals, each of whom
>*would* trade their lives in order to save 10 random strangers with whom
>they have minimal genetic relatedness -- an action that would go against the
>interest of their selfish genome as well as their selfish organism.
That's actually to be expected, but if you look deeper, it isn't so much 
against the gene's interest after all, particularly in the environment in 
which we evolved.
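(For reference, the standard accounting here is Hamilton's rule: a gene for helping spreads when r*b > c, where r is your relatedness to the person helped, b the benefit to them, and c the cost to you, both in units of expected offspring.  Saving 10 strangers at r near 0 gives r*b near 0, far below the cost of your life, so on the naive books it's a dead loss; saving 10 siblings at r = 1/2 can easily pay.  But the naive books leave out the status effects below.)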
One factor is that people in those days didn't jump off cliffs to certain death to save relatives; they took risks in an attempt to save close relatives and/or less related people.
If you take a serious risk and come out alive, your status is enhanced 
(even if you die, the status of your relatives may be enhanced enough to 
partly make up for your loss--consider Todd Beamer, the guy who led the 
attack on the Flight 93 hijackers).  Now in the days in which most of our 
evolution occurred, status was highly associated with improved reproductive 
success, so taking the right kind of risks was rewarded with more children.
Thus we are left with psychological traits that dispose us to take some kinds of risks for others.  (And in the days when these traits were selected, the people you were around were also your more or less close relatives.)
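To see why selection can favor this, here is a toy calculation (the numbers are mine, purely illustrative).  Suppose a rescue attempt kills you with probability p = 0.1, and dying costs your remaining expected reproduction, say c = 3 children.  If surviving rescuers gained enough status to average even s = 1 extra child, then

   expected gain = (1 - p) * s - p * c = 0.9 * 1 - 0.1 * 3 = +0.6 children

and that is before adding any kin-selection credit for whoever you saved.  Genes that bias their carriers toward that kind of bet outreproduce the ones that don't.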
And it certainly works.  My status has been enhanced by times I have taken 
risks to save people or property from fires.  My brother's was enhanced by 
saving a guy from drowning in rough surf off Hawaii.  (My brother is a 
really level-headed guy.  He had rescue training and took as little risk as
possible.  He tells me that if he had not been able to find proper gear, 
the guy would have drowned.  As he put it, there was no point in both of 
them dying.)
>You could argue this isn't "true altruism" because their goal may be
>personal satisfaction or personal ego-boosting or something, rather than
>"pure altruism" -- but I don't tend to find such arguments very meaningful.
I am in agreement about the arguments lacking meaning.  Even if you do understand why people are heroes, it's not as if they ran a spreadsheet on the pros and cons of saving someone; there just isn't time, even if you could quantify all the factors.  And the fact that hero genes do better (or at least *did* do better) does not make them any less worthy of our respect.
However, the theoretical problem with "pure altruism" is that psychological traits which only depress your "inclusive fitness" while enhancing that of unrelated others just don't close the evolution/selection loop.
>As far as I'm concerned, this is an example of genuine altruism, and it's
>not explained via the neo-Darwinist orthodoxy very well.  It's explained by
>the variant of evolutionary theory that emphasizes self-organization and
>dynamical attractors.
As far as I know, neo-Darwinian theory already amounts to "dynamical attractors": attractors shaped by millions of years in hunter-gatherer tribes that got along in good times and killed each other when bad times were a-coming.
>Altruism (in the sense I'm using it here) is a psychological attractor, and
>the quasi-altruism that the selfish genome promotes has pushed some human
>brains toward that attractor.  Guiding AGI's into this psychological
>attractor will be an important topic in AGI psychology...
There is also the need to clearly understand what's involved in the origin 
of human altruism.
Of course, if you do understand it and talk about it, you can expect to take a lot of flak.  For the most part, people are very uncomfortable talking about the parts of our minds that are usually hidden from us.
Keith Henson