From: turin (turin@hell.com)
Date: Thu Feb 16 2006 - 16:11:32 MST
--- Brian Atkins <brian@posthuman.com> wrote:
> This is the problem with friendly SI. I am afraid that if we do not allow
> them to understand first hand subjective experience, we could produce
> psychopaths
>
I was being hyperbolic here, let me qualify this statement. I don't think an autonomous friendly SI which does not understand first-person subjective experience would magically become a psychopath. But I am worried that -any- autonomous friendly SI that does not understand first-person subjective experience will be impoverished in its decision making, insofar as that decision making relates to humans, whose existence is for the most part centered around our own subjective experiences.
The architecture need not produce a psychopath; that is merely an extreme example. We could make sociopathic, obsessive-compulsive, or manic-depressive SIs. I am not talking about their cognitive architecture here; I am talking about their behavior, the way they socially interact with humans. I do not expect their cognitive architecture to resemble ours in the slightest. I think our cognitive architecture has gotten us into a lot of trouble. I am speaking here in part metaphorically, but also behaviorally.
I would like for the SI, when it tells someone "Hello," to understand what saying hello means to me, whether or not it means the same thing by "Hello" as I do.
This seems to me to require giving the SI an understanding of human subjectivity. It does not require the SI to possess subjective states itself, but I wonder if there is an advantage to subjectivity as such, or if an SI can really make good decisions without its own subjectivity.
Then the question of general SI subjectivity comes into play. This is something that is difficult to quantify, and so we end up with armchair philosophy. We talk often about feasibility, survival, etc., but I am interested in what an autonomous SI would want to do aesthetically, philosophically, and scientifically if it possessed its own subjectivity. There is, in any effort to maximize our own survival or efficiency, a danger of losing subjectivity itself, which is something we as humans value. I personally would like some of the SIs to be autonomous and possess subjectivity, because otherwise it seems the future would be impoverished.
How to do this safely? I don't know. As I said, I don't think happiness, or human subjectivity in and of itself, is of much value, and I am curious as to what shape SI subjectivity would take and whether any subjectivity can be considered "friendly".
In truth, I would rather we build autonomous and "awake" SIs with subjectivity that were psychotic and tyrannical in the old sci-fi horror movie sense and wiped out our species, than merely build braindead SIs completely under our control, used as powerful slaves to maintain 21st century lifestyles and values until the end of the universe. Neither scenario is very likely; I am merely trying to illustrate the point that you have to risk survival to be creative. 1 in 10 NASA astronauts have died (granted, NASA often makes major errors in safety due to red tape), but to be explorers one has to face a certain amount of risk.
I don't think there is such a thing as a clean Singularity. It would be nice if it turned out to be all peaches and cream the way Kurzweil hopes, and I like the idea of being a femtotech ghost, -but- for creativity within the Singularity, or to bring about any sort of Singularity at all, we have to take risks, like giving SI subjectivity.
I would like to do so patiently, soberly, and in a Promethean fashion, but shit happens... this whole problem of friendly AI troubles me.
I am coming from a different background than most of the people on this list, but Socrates asks this: "who is the friend?"
Is the friend the slave, the person who does everything you say? Then we just make braindead SI, like a hammer. Or does the friend present opposition, and if so, what is acceptable opposition? When we speak of friends we are speaking of other people; if we do not consider SIs people, or do not think that the concept of the person is important in relation to SI, we shouldn't use the word "friendly" at all in reference to SI.
The idea of the friend and subjectivity has implications beyond SI, as it relates to the human institution of slavery as well as animal labour. I am just trying to start a discussion about the "inner life" of machines.
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:30 MST