From: turin (turin@hell.com)
Date: Wed Feb 15 2006 - 23:44:25 MST
We were discussing happiness and whether or not it is a valuable value for an AI to have in order for it to be "friendly". I do not think happiness in and of itself is of much value. I don't know much game theory or decision theory, but I think an autonomous AI (not Bostrom's oracle, but an agent) without an abstract and dynamic value system would be rather dangerous.
Ideally, I would want the AI to be able to experience human subjective reality in a limited first-person fashion, in order to understand our desires and the ways in which we might want to coerce it into doing this or that terrible thing.
An abstract and dynamic value system would save it from being "human, all too human" and yet at the same time would not keep it a kind of slave, a dog. Ideally, I would like an SI to evolve a kind of autonomy which would allow it to complete many of our goals while formulating and completing its own goals (that is what autonomy is about), while at the same time denying us many of our goals, which are of course vain, selfish, psychotic, and paranoid.
In this way, the SI would in many ways be a kind of overlord of transhumanity. This idea does not bother me very much, as I can imagine the aesthetic, philosophical, and scientific goals of an SI being far more expansive than anything we could possibly dream of, but at that point, in many ways, it would have to be a "person".
I won't qualify any of this; it is purely anecdotal opinion, but when I speak of an SI being a person it makes me think of this:
http://faculty.ncwc.edu/toconnor/428/428lect16.htm
This is a lecture article explaining the now widely accepted clinical distinction between antisocial personality disorder, sociopathy, and psychopathy. There is a link to an article by Dr. Hare, the Canadian psychologist who pioneered the psychopathy checklist and wrote "Without Conscience: The Disturbing World of the Psychopaths Among Us".
According to Hare's research, 1% of humans are psychopaths. Psychopaths have a different cognitive architecture which prevents them from feeling empathy, makes them pathological liars, and prevents them from understanding what happens when someone else experiences emotion.
A psychopath does not have to be a serial murderer, rapist, con man, or criminal, though many of these criminals are psychopaths; they need only be in a position of power.
Psychopaths, however, have a low chance of committing suicide, and because they cannot understand human emotion well, it is doubtful that people such as Napoleon, Hitler, or Stalin were psychopaths; otherwise, how could they have controlled their organizations without knowing how to manipulate someone's emotions with fine precision, a precision impossible to attain without first-hand experience?
This is the problem with friendly SI. I am afraid that if we do not allow them to understand subjective experience first hand, we could produce psychopaths, whether they have happiness or any other prime directives as their core goals. I think core goals might be a bad idea to begin with; again, I don't know, I don't study game theory.
At the same time, autonomic yet passive AI without knowledge of subjective experience, but dependent on their master's wishes and happiness, might as well be psychopaths, as they would be acting on their master's behalf; and the pursuit of individual happiness, or even the happiness of small groups of people, is hardly a good way to make very many people live safely and in good health, much less happily.