Re: Emotional intelligence

From: Dimitry Volfson (
Date: Tue Aug 30 2005 - 11:17:28 MDT

On Tue, 23 Aug 2005 02:00:47 +0100, Chris Paget <> wrote:
> The fundamental driving force behind emotional intelligence (as I see
> it) is a desire to be happy. "Happiness" in the case of AI is just a
> number, as are all emotions.

I have a fundamental disagreement here. Emotions are NOT numbers, and
they should never be reduced to numbers, even in a computer embodiment.
Emotions change the frames through which people see the world.
Basically, a frame can amplify, filter out, or distort elements of
incoming and stored information -- streamlining the recognition,
analysis, and planning of objects, events, attitudes, etc.

When you are happy you "see" the world differently than you "see" it
when you are "upset". A different set of criterial frames acts upon
one's perception and sequencing of the world depending upon the emotion
experienced. Fundamentally, emotions correspond to the activation of
certain frames, which people have named based upon the similarity of
the experience in the body and in cognitive processing.

What does it mean to be jealous, if not to analyze a situation through
a frame of not getting something that someone else is getting, while
being more deserving of it? That framing presupposes a criterion of
deservingness, and the complex emotion of deservingness itself.

> In a biological intelligence there is some
> correlation for emotions as simple vectors - it might, for example, be
> reasonable to measure the fear level of a creature based upon the
> amount of adrenaline in its system.
>
> The catch is, the system cannot make itself happy. All of the input it
> is given affects its emotional state - if it observes people who are
> happy, it becomes happy itself. In essence, it is driven by a desire
> to make other people happy - or, to put that another way, it is driven
> by "friendliness".

A person can make themselves happy. Linking frames of happiness to
outside influences is not the only way to operate as a person.

> Alongside basic happiness sit a number of other basic emotions - fear,
> anger, etc etc. Again, each of these cannot be influenced directly by
> the being itself (at least not by conscious thought), they are
> primarily controlled by external influences.

Again, I fundamentally disagree. A person can "go to a happy place" in
their mind, such as when you remember the fun of being a child chasing
your dog, and this need have nothing to do with "external influences".

> Above these primal emotions sit more complex emotions, such as
> confidence, stubbornness, and optimism. (There is a threshold at

Notice that optimism is a way of framing the future: "seeing",
"feeling", and "hearing" (or valuing) more of the good, and less of the
bad, that might be anticipated.

> which these should be regarded as personality traits, although that's
> not strictly relevant). Each of these more complex emotions has a
> smaller influence on overall happiness, but at the same time are
> influenced less by external factors. As a general rule, every emotion
> is controlled by two things - external influences, and other emotions.
> Any emotion which has a large effect on overall happiness is
> controlled more readily by external influences.
>
> Example: Optimism is strongly affected by external influences.
> Optimism controls how likely you are to take a chance - if the chance
> pays off, your optimism goes up, and you feel happier because of it.
> At the same time, stubbornness affects optimism, but does not directly
> affect happiness much. If you are stubborn in a given situation, you
> are less likely to take a chance, and the happiness increase when that
> chance succeeds is far less.

Doesn't stubbornness mean that once you take a position, you don't
change it? That doesn't really have anything to do with whether or not
one is taking a chance on the position that has been selected, does it?

> Memory is a combination of two things. Firstly, a word - the concept
> in question. Secondly, attached to each word is a set of emotional
> vectors which comprise the being's total experience about that object
> or concept. If the being encounters an object that it recognizes, it
> consults its memory to see how that object has made it feel in the
> past, determines whether the emotions it presents are appropriate for
> its current mental state, and either promotes or avoids the encounter
> accordingly.

There is an automatic revivification, an unconscious framing of the
object in light of the frames and sequences-of-events-and-reactions
(templates) previously experienced. What happens is that a person
responds in the way that is appropriate or familiar as determined by
prior interaction with an object of the same type. In a sense they are
not acting in the present but in the past: due to framing that
de-emphasizes them, they may ignore elements that make the current
situation different from the prior one, and they may see features that
are not as prominent (sometimes totally absent) in the current object
as if they were as prominent as the features in the previous,
remembered object.
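For concreteness, the memory model quoted above (a concept word plus emotional vectors, consulted to promote or avoid an encounter) could be sketched as follows. The concepts, emotion axes, and values are all invented for illustration, and this is a sketch of Chris's proposal, not an endorsement of it.

```python
# Sketch of the quoted memory model: each concept carries emotional
# vectors summarizing past experience; recalling the objects in a scene
# sums their happiness values to decide whether to promote or avoid the
# encounter. Concepts, axes, and numbers are invented for illustration.

memory = {
    "dog":    {"happiness": -0.6, "fear": 0.7},   # once bitten
    "friend": {"happiness": 0.9, "fear": 0.0},
}

def evaluate(concepts):
    """Net happiness across the recognized concepts in a scene."""
    return sum(memory.get(c, {}).get("happiness", 0.0) for c in concepts)

def decide(concepts, threshold=0.0):
    return "promote" if evaluate(concepts) > threshold else "avoid"

print(decide(["dog"]))             # past pain dominates: "avoid"
print(decide(["dog", "friend"]))   # the friend offsets the dog: "promote"
```

Note that this lookup acts purely on stored vectors: exactly the "acting in the past" behavior described above, with nothing in it that can notice how the current dog differs from the remembered one.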

> Example: Let's say that in the past, you have been bitten by a dog.
> Pain is a universal way to reduce happiness (much like pleasure is a
> universal way to increase it), so the emotional vectors associated
> with your memory of a dog include negative happiness. However, if you
> see a person holding a dog and you remember that the person has, in
> the past, made you very happy, you may still decide that the dog is
> not worth avoiding; the negative happiness based upon your experiences
> with dogs is offset by the positive happiness of your experiences with
> the person, and you can make an intelligent decision on how to respond
> to the situation.

Emotional response is not "intelligent"; it relies on unconscious
framing of the situation, which may distort the actual circumstance a
person finds themselves in -- a memory that overrides the actuality of
the present situation.

> Automated learning, in any given situation, is simply the product of
> combining a number of different emotional memories together in order
> to achieve the required goal. You program the system with a number of
> basic operations that could be applied to the task, and let the system
> experiment randomly, learning based upon emotion along the way.
>
> If, for example, correctly recognizing a face stimulates "pleasure"
> (either by seeing that the face is smiling, or by the programmer
> pressing the "pleasure" button), then whatever operations were used
> to perform that recognition (adjust brightness, decrease color depth,
> adjust contrast, etc etc) are then given higher happiness ratings,

Yes, the adjustment of brightness, etc. corresponds to framing. It may
also cause a misrecognition when the same framing is applied to a face
in the future. However, I disagree with the happiness conditioning you
are suggesting. There are intrinsic features of happiness that affect
cognitive processing; in general, when one is happy, cognitive
processing "flows": there is more automatic unconscious processing and
less conscious analysis going on (although this could be said of many
emotions). Positive thoughts spring to mind from the unconscious --
"these people will like you", "you are doing great", "this is getting
you somewhere" -- thoughts that maintain the frame and are related to
prior experience of feeling good.

> and are more likely to be used again. The act of randomly combining
> operations together is based upon emotion, and the success or failure
> of each attempt is similarly stored as emotion. If, for example, the
> computer takes a chance on a new graphics operation when attempting
> the recognition, it will remember whether it tried it before and
> failed based upon the stored value for confidence; if its confidence
> is high at that time then it may still take a chance on it.

Situational confidence may override remembered failure; but what makes
this intelligent?
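To pin down what is being debated, here is a minimal sketch of the quoted learning scheme: operations carry happiness ratings that rise on success and fall on failure, and a confidence level decides whether a previously failed operation gets retried. The operation names, update sizes, and confidence threshold are invented for illustration.

```python
import random

# Sketch of the quoted scheme: candidate image operations carry
# happiness ratings reinforced by a pleasure signal; confidence
# gates whether past failures are retried. All names, update sizes,
# and thresholds are invented for illustration.

ratings = {"adjust_brightness": 0.0, "decrease_color_depth": 0.0,
           "adjust_contrast": 0.0}

def choose(confidence, rng=random):
    """Pick an operation, skipping past failures unless confidence is high."""
    candidates = [op for op, r in ratings.items()
                  if r >= 0.0 or confidence > 0.8]
    return rng.choice(candidates or list(ratings))

def reinforce(op, pleased):
    """Reward or punish the operation that was just tried."""
    ratings[op] += 0.1 if pleased else -0.1

reinforce("adjust_contrast", pleased=True)
reinforce("decrease_color_depth", pleased=False)
print(ratings)
print(choose(confidence=0.5))  # low confidence: failed op excluded
```

Spelled out this way, the scheme is just reward-weighted random search over a fixed operation set, which is why I question where the "intelligence" is supposed to come from.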

I'm interested in your responses.
-- Dimitry

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT