Re: QUES: CFAI [#2]

From: Eliezer S. Yudkowsky
Date: Thu Sep 19 2002 - 15:27:48 MDT

Anand wrote:
> What known cognitive processes associate or may associate with
> altruism? What is the relation or importance or both of said processes
> to Friendly AI development?

Um... a whole bunch. There are the individual emotional tones that make
you happy when you help other people; those would be the most obvious
ones. Some of the more subtle, semantic, and structural characteristics
that are (if I recall correctly) described in CFAI at various points include:

Substitution for speaker deixis in moral arguments. Joe says: "My
philosophy is look out for Joe"; it is interpreted as: Joe advocates the
philosophy "Look out for [number one]".
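A minimal sketch of what that substitution looks like as an operation, assuming a toy representation where first-person pronouns are swapped for the speaker's name (the function name and pronoun table are hypothetical; a real system would need actual parsing):

```python
import re

def substitute_speaker_deixis(speaker: str, utterance: str) -> str:
    """Rewrite a first-person moral statement from the listener's side,
    so the speaker's 'my philosophy' is stored as a claim *about* the
    speaker. Toy heuristic: swap first-person pronouns for the name."""
    replacements = [
        (r"\bmy\b", f"{speaker}'s"),  # possessive
        (r"\bme\b", speaker),         # object
        (r"\bI\b", speaker),          # subject
    ]
    result = utterance
    for pattern, repl in replacements:
        result = re.sub(pattern, repl, result, flags=re.IGNORECASE)
    return result

print(substitute_speaker_deixis("Joe", "My philosophy is look out for me"))
# -> Joe's philosophy is look out for Joe
```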

Representation of moral beliefs as being part of the common pool of
beliefs and working just like any other beliefs, with support,
antisupport, truth or falsity, etc.

Rule of derivative validity - if something is moral there must be a valid
reason why it is moral, and there must be a valid reason why that reason
is valid, etc. In the same way, if something is true, there must be a
valid reason why it is true, and so on; if there's an effect, it must have
a cause, and the cause must have a cause, and so on. (I'm not talking
about whether this is how philosophy *should* work; I'm talking about the
way we *do* think about it by default, bearing in mind that by default we
generally stop examining the chain at some point and just don't think
about extending it any further. This is a little reflex that exists in
the mind. Whether it is logically self-consistent, or can be made so, is
a wholly separate issue.)
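The default-reflex version of derivative validity, including the point where we "just stop examining the chain," can be sketched as a bounded walk over stored justifications (the dictionary representation and function name here are assumptions for illustration):

```python
def trace_justification(claim: str, reasons: dict, max_depth: int = 3) -> list:
    """Follow the chain 'X is valid because Y' until we hit a claim with
    no recorded reason, or give up after max_depth steps -- mirroring the
    default human reflex of not extending the chain indefinitely."""
    chain = [claim]
    for _ in range(max_depth):
        reason = reasons.get(chain[-1])
        if reason is None:
            break  # unexamined foundation: by default we stop asking here
        chain.append(reason)
    return chain

reasons = {
    "stealing is wrong": "it harms the victim",
    "it harms the victim": "harm is bad",
    # "harm is bad" has no stored reason: the chain ends unexamined.
}
print(trace_justification("stealing is wrong", reasons))
# -> ['stealing is wrong', 'it harms the victim', 'harm is bad']
```

Whether the chain *should* terminate, loop, or ground out somewhere self-consistent is, as noted above, a separate question from the fact that minds by default do stop.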

Putting yourself in the other guy's shoes. I didn't spend enough time on
this effect in CFAI.

Knowing what at least some of these processes are, specifically, is a big
help in Friendly AI; what is *vital* in Friendly AI is knowing how to tell
an AI: "Here's an effect called 'altruism'; your job when you grow up
will be to discover all its causes, including the ones your programmers
didn't know about."

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT