From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jan 23 2002 - 21:29:18 MST
Ben Goertzel wrote:
>
> Yes, this is a silly topic of conversation...
Rational altruism? Why would it be? I've often considered starting a
third mailing list devoted solely to that.
> It seems to me that you take a certain pleasure in being more altruistic
> than most others. Doesn't this mean that your apparent altruism is actually
> partially ego gratification? ;> And if you think you don't take this
> pleasure, how do you know you don't do it unconsciously? Unlike a
> superhuman AI, "you" (i.e., the conscious, reasoning component of Eli) don't
> have anywhere near complete knowledge of your own mind-state...
No offense, Ben, but this is very simple stuff - in fact, it's right there
in the Zen definition of altruism I quoted. This is a very
straightforward trap by comparison with any of the political-emotion
mindtwisters, much less the subtle emergent phenomena that show up in a
pleasure-pain architecture.
Ego gratification as a de facto supergoal (if I may be permitted to
describe the flaw in CFAImorphic terms) is a normal emotion, leaves a
normal subjective trace, and is fairly easy to learn to identify
throughout the mind if you can manage to deliberately "catch" yourself
doing it even once. Once you have the basic ability to notice the
emotion, you confront the emotion directly whenever you notice it in
action, and you go through your behavior routines to check if there are
any cases where altruism is behaving as a de facto child goal of ego
gratification; i.e., avoidance of altruistic behavior where it would
conflict with ego gratification, or a bias towards a particular form of
altruistic behavior that results in ego gratification.
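(To put the check in more concrete terms, here's a minimal sketch in
Python of the kind of audit I mean. The Option/Choice classes, the
flag_suspect_choices name, and the idea of numeric "values" you could
actually read off are all invented for illustration - this isn't a
procedure out of CFAI, just the shape of the pattern being searched for.)

  from dataclasses import dataclass

  @dataclass
  class Option:
      name: str
      altruistic_value: float   # estimated benefit to others
      ego_value: float          # estimated ego gratification

  @dataclass
  class Choice:
      options: list             # the Options that were available
      picked: int               # index of the Option actually taken

  def flag_suspect_choices(choices):
      """Flag choices where the picked option did less for others and
      more for the ego than the best altruistic option available -
      the signature of altruism acting as a de facto child goal of
      ego gratification."""
      flagged = []
      for c in choices:
          picked = c.options[c.picked]
          best = max(c.options, key=lambda o: o.altruistic_value)
          if (picked.altruistic_value < best.altruistic_value
                  and picked.ego_value > best.ego_value):
              flagged.append(c)
      return flagged

The point isn't the numbers, which you can't actually compute for a real
mind; it's that the pattern being searched for - altruism losing exactly
when ego gratification is at stake - is well-defined enough to check for.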
I don't take pleasure in being more altruistic than others. I do take a
certain amount of pleasure in the possession and exercise of my skills; it
took an extended effort to acquire them, I acquired them successfully, and
now that I have them, they're really cool.
As for my incomplete knowledge of my mind-state, I have a lot of practice
dealing with incomplete knowledge of my mind-state - enough that I have a
feel for how incomplete it is, where, and why. There is a difference
between having incomplete knowledge of something and being completely
clueless.
> Eliezer, given the immense capacity of the human mind for self-delusion, it
> is entirely possible for someone to genuinely believe they're being 100%
> altruistic even when it's not the case. Since you know this, how then can
> you be so sure that you're being entirely altruistic?
Because I didn't wake up one morning and decide "Gee, I'm entirely
altruistic", or follow any of the other patterns that are the
straightforward and knowable paths into delusive self-overestimation, nor
do I currently exhibit any of the straightforward external signs which are
the distinguishing marks of such a pattern. I know a lot about the way
that the human mind tends to overestimate its own altruism.
I took a couple of years of effort to clean up the major emotions (ego
gratification and so on), after which I was pretty much entirely
altruistic in terms of raw motivations, although if you'd asked me I would
have said something along the lines of: "Well, of course I'm still
learning... there's still probably all this undiscovered stuff to clean
up..." - which there was, of course; just a different kind of stuff.
Anyway, after I had, in *retrospect*, reached the point of effectively
complete strategic altruism, it took me another couple of years to
accumulate enough skill that I could begin to admit to myself that maybe,
just maybe, I'd actually managed to clean up most of the debris in this
particular area.
This started to happen when I learned to describe the reasons why
altruists tend to be honestly self-deprecating about their own altruism,
such as the Bayesian puzzle you describe above. It was only after that,
when I understood not just motivations but also the intuitions used to
reason about motivations, that I started saying openly that yes, dammit,
I'm a complete strategic altruist; you can insert all the little
qualifiers you want, but at the end of the day I'm still a complete
strategic altruist.
And one answer to the little Bayesian puzzle you pose is that if you look
at the internal memories of the people who so easily claim to be 100%
altruistic, there's a great deal of emotional confidence in that altruism,
but not much declarative knowledge about altruism. If you look at my
memories, there's a picture of several years' worth of work in carefully
cleaning up the mind, with the point of effectively complete strategic
altruism being reached several years in advance of the emotional and
rational confidence where I could admit to myself that I'd succeeded.
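To spell out the Bayesian point (a rough sketch with my own labels, not
anything from the original puzzle: A = "actually a complete strategic
altruist", C = "claims to be one", K = "has detailed declarative knowledge
about altruism and its failure modes"):

  P(A \mid C, K) = \frac{P(C, K \mid A) \, P(A)}
                        {P(C, K \mid A) \, P(A) + P(C, K \mid \neg A) \, P(\neg A)}

The easy claimants supply C but not K. If self-deluded non-altruists
rarely supply K, then the likelihood ratio P(C, K | A) / P(C, K | \neg A)
is much larger than P(C | A) / P(C | \neg A), which is the whole force of
the comparison above.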
Let me turn the question around another way, Ben: Suppose that I build a
roughly human (~human) Friendly AI. Ve really is a complete altruist, but
ve's surrounded by people who claim to be altruists but aren't. How does
ve know that ve's a complete altruist? If you're tempted to answer back
"Ve doesn't", then consider the problem as applying to a superintelligence
and ask again.
To condense one particular distinguishing factor into plain English: I
did not get where I am today by trusting myself. I got here by
distrusting myself for an extended period of time.
> > It's the first piece of the puzzle. You start with a description of
> > fitness maximization in game theory; then shift to describing ESS
> > adaptation-executers; then move from ESS in social organisms to the
> > population-genetics description of political adaptations in communicating
> > rational-rationalizing entities; and then describe the (co)evolution of
> > memes on top of the political adaptations. As far as I know, though, that
> > *is* the whole picture.
>
> I suppose it's the whole picture if you construe the terms broadly
> enough....
As I explicitly specified.
> In my view, though, the books you mention take an overly neo-Darwinist view
> of evolution, without giving enough credence to self-organizing and
> ecological phenomena. Ever read the book "Evolution Without Selection" by
> A. Lima-de-Faria? He doesn't discuss evolution of morals, but if he did, he
> would claim that common moral structures have natural symmetries and other
> patterns that cause them to serve as "attractors" for a variety of
> evolutionary processes. In other words, he sees evolution as mostly a way
> of driving organisms and populations into attractors defined by "natural
> forms." This view doesn't necessarily contradict a standard evolutionary
> view, but it deepens it and adds a different angle.
It's amazing how many things the modern synthesis in neo-Darwinism doesn't
fail to take into account. Constraints on evolvability (or, more
generally, fitness landscapes for evolvability) are most certainly among
them. I can't recall reading anything explicitly about, e.g., local
adaptations mirroring local rules and thereby giving rise to
un-selected-for global mirroring of Platonic forms - but I have explicitly
written about it, and specifically in the context of the implications for
human morality.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence