From: H C (lphege@hotmail.com)
Date: Mon Aug 22 2005 - 19:20:41 MDT
>From: Chris Paget <ivegotta@tombom.co.uk>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Emotional intelligence
>Date: Tue, 23 Aug 2005 02:00:47 +0100
>
>I mentioned in my JOIN that I'm working on a model of intelligence using
>emotion as the driving force. When I posted that message to SIAI, I got
>some questions back about it, so I'll post my reply here as well. I was
>writing up another email clarifying some things and explaining some more of
>the complexity involved, but for the sake of netiquette I'll just send this
>out for the moment and save the extra stuff for when I (hopefully) get some
>questions back.
>
>
>
>The fundamental driving force behind emotional intelligence (as I see it)
>is a desire to be happy. "Happiness" in the case of AI is just a number,
>as are all emotions. In a biological intelligence there is some physical
>correlate for emotions as simple values - it might, for example, be
>reasonable to measure the fear level of a creature by the amount of
>adrenaline in its system.
>
Agreed.
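For what it's worth, here's how I'd start representing that state in code: a
minimal Python sketch where each emotion is literally just a number. The
particular emotions and the [-1, 1] range are my assumptions, not part of
your model.

    # Emotions as plain numbers: a flat vector of named scalars.
    from dataclasses import dataclass, fields

    @dataclass
    class EmotionalState:
        happiness: float = 0.0   # the quantity the system wants to maximise
        fear: float = 0.0        # crude analogue of an adrenaline level
        anger: float = 0.0

        def clamp(self) -> None:
            # keep every emotion inside an assumed [-1, 1] range
            for f in fields(self):
                setattr(self, f.name, max(-1.0, min(1.0, getattr(self, f.name))))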
>The catch is, the system cannot make itself happy. All of the input it is
>given affects its emotional state - if it observes people who are happy, it
>becomes happy itself. In essence, it is driven by a desire to make other
>people happy - or, to put that another way, it is driven by "friendliness".
You are getting fuzzy here. Yes, I can follow that observing happiness may
well lead to happiness, but this is not true in all cases, and especially
not as a fundamental drive for "friendliness". The actual value of the
emotion is ultimately derived from the part of the brain that decides which
observations are intrinsically valuable, so the happiness you feel observing
someone else being happy is actually a heavily dependent emotion. Careful
there.
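To pin down what I mean by "dependent", here's a toy sketch in the same
vein: happiness has no public setter and only moves when an observation
passes through an appraisal step that decides what is intrinsically
valuable. The event names and weights are invented for illustration.

    # "The system cannot make itself happy": happiness is read-only from
    # the agent's point of view; only observe() can change it.
    APPRAISAL = {
        "other_person_happy": +0.2,   # observed happiness raises own happiness
        "other_person_hurt":  -0.3,
    }

    class Agent:
        def __init__(self) -> None:
            self._happiness = 0.0     # private: no direct way to set it

        @property
        def happiness(self) -> float:
            return self._happiness

        def observe(self, event: str) -> None:
            # the intrinsic value of an observation is decided here, not by
            # conscious choice - this is the dependent step I'm flagging
            self._happiness += APPRAISAL.get(event, 0.0)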
>
>Alongside basic happiness sit a number of other basic emotions - fear,
>anger, and so on. Again, each of these cannot be influenced directly by the
>being itself (at least not by conscious thought); they are primarily
>controlled by external influences.
>
>Above these primal emotions sit more complex emotions, such as confidence,
>stubbornness, and optimism. (There is a threshold at which these should
>be regarded as personality traits, although that's not strictly relevant.)
>Each of these more complex emotions has a smaller influence on overall
>happiness, but is at the same time influenced less by external factors.
>As a general rule, every emotion is controlled by two things - external
>influences, and other emotions. Any emotion which has a large effect on
>overall happiness is controlled more readily by external influences.
>
>Example: Optimism is strongly affected by external influences. Optimism
>controls how likely you are to take a chance - if the chance pays off, your
>optimism goes up, and you feel happier because of it. At the same time,
>stubbornness affects optimism, but does not directly affect happiness much.
> If you are stubborn in a given situation, you are less likely to take a
>chance, and the happiness increase when that chance succeeds is far less.
Sounds like you've got some pretty well-founded observations here.
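They also map naturally onto a two-input update step: every emotion is
moved partly by external influences and partly by other emotions, with the
external weight largest for the emotions that feed happiness most. All the
coefficients below are made up purely to illustrate your
optimism/stubbornness example.

    def update_optimism(optimism: float, stubbornness: float,
                        chance_paid_off: bool) -> float:
        external = 0.3 if chance_paid_off else -0.3  # strong external coupling
        internal = -0.1 * stubbornness               # stubbornness dampens optimism
        return optimism + external + internal

    def update_happiness(happiness: float, optimism_delta: float) -> float:
        # optimism feeds happiness at a reduced weight: a chance that pays
        # off makes you happier partly *because* optimism went up, and a
        # stubborn agent (smaller optimism_delta) gets a smaller boost
        return happiness + 0.5 * optimism_delta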
>
>Memory is a combination of two things. Firstly, a word - the concept in
>question. Secondly, attached to each word is a set of emotional vectors
>which comprises the being's total experience of that object or concept.
>If the being encounters an object that it recognises, it consults its
>memory to see how that object has made it feel in the past, determines
>whether the emotions it presents are appropriate for its current mental
>state, and either promotes or avoids the encounter accordingly.
>
>Example: Let's say that in the past, you have been bitten by a dog. Pain
>is a universal way to reduce happiness (much like pleasure is a universal
>way to increase it), so the emotional vectors associated with your memory
>of a dog include negative happiness. However, if you see a person holding
>a dog and you remember that the person has, in the past, made you very
>happy, you may still decide that the dog is not worth avoiding; the
>negative happiness based upon your experiences with dogs is offset by the
>positive happiness of your experiences with the person, and you can make an
>intelligent decision on how to respond to the situation.
Again, I'm definitely following you here.
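The dog example drops almost straight into code, too: memory as a map from
a word to the emotional vectors accumulated for it, with the decision being
a simple sum over everything recognised in the scene. The entries and
weights here are my own invention.

    memory = {
        "dog":    {"happiness": -0.6},   # bitten once: pain lowered happiness
        "friend": {"happiness": +0.8},   # long positive history
    }

    def expected_happiness(scene):
        # combine the stored feelings for every recognised word in the scene
        return sum(memory.get(word, {}).get("happiness", 0.0) for word in scene)

    def decide(scene):
        return "approach" if expected_happiness(scene) > 0 else "avoid"

    # decide(["dog"])           -> "avoid"
    # decide(["dog", "friend"]) -> "approach"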
>
>Automated learning, in any given situation, is simply the product of
>combining a number of different emotional memories together in order to
>achieve the required goal. You program the system with a number of basic
>operations that could be applied to the task, and let the system experiment
>randomly, learning based upon emotion along the way.
>
>If, for example, correctly recognising a face stimulates "pleasure" (either
>by seeing that the face is smiling, or by the programmer pressing the
>"pleasure" button), then whatever operations were used to perform that
>recognition (adjust brightness, decrease color depth, adjust contrast,
>etc.) are then given higher happiness ratings, and are more likely to be
>used again. The act of randomly combining operations together is based
>upon emotion, and the success or failure of each attempt is similarly
>stored as emotion. If, for example, the computer takes a chance on a new
>graphics operation when attempting the recognition, it will remember
>whether it tried it before and failed based upon the stored value for
>confidence; if its confidence is high at that time then it may still take a
>chance on it.
>
Yep, looks good so far, although...
>
>
>There's a lot more complexity than what I've presented here,
You are missing a key concept here, at least from my perspective.
>If, for example, correctly recognising a face stimulates "pleasure", then
>whatever operations were used to perform that recognition (adjust
>brightness, decrease color depth, adjust contrast, etc etc) are then given
>higher happiness ratings, and are more likely to be used again.
I think that's where you need to expand a little bit. This is where I would
mark it as "missing some complexity".
>but I think this should convey the gist of what I'm thinking. Hopefully
>that's enough to let people start exploring the idea themselves, and
>hopefully I can answer some of your questions.
>
>Cheers,
>
>Chris
>--
>Chris Paget
>ivegotta@tombom.co.uk
Nice to chat with someone like you; it seems you have taken somewhat the
same approach to the problem of AGI as I have.
-- Th3Hegem0n