From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 30 2003 - 08:52:43 MDT
> An intelligent mind will develop a model of the world that
> equates human happiness with a loving family life, adequate
> food and shelter, physical exercise, freedom, a meaningful
> vocation, friends, etc. And it will equate human unhappiness
> with abusive relations, loneliness, homelessness, hunger, lack
> of freedom, poor health, drug addiction, etc. Its behavior
> will be based on this model, trying to promote the long-term
> happiness of humans.
>
> Human babies love their mothers based on simple values about
> touch, warmth, milk, smiles and sounds. But as the baby's
> mind learns, those simple values get connected to a rich set
> of values about the mother, via a simulation model of the
> mother and surroundings. This elaboration of simple values
> will happen in any truly intelligent AI.
Sure.
If an AGI is given these values, and is also explicitly taught why euphoride
is bad and why making humans appear happy by replacing their faces with
nanotech-built happy-masks is stupid, then it may well grow into a powerful
mind that acts in genuinely benevolent ways toward humans. (Or it may
not -- something we can't foresee now may go wrong...)
I note you have stuck "freedom" in there -- which is related to Eliezer's
"volition" -- and is a mighty can of worms, one a seed AI is gonna have to
learn by example, since there is no viable theory of it right now...
ben