From: turin (firstname.lastname@example.org)
Date: Fri Feb 17 2006 - 21:02:18 MST
<intelligent) you must remove that division that labels define (friendly
<and unfriendly, slave and master) and become one and whole. It shouldn't
<matter if we have computer-augmented humans, or human-augmented
<computers, to create an intelligence, AGI or SI. It's removing that
<division which is the current challenge.
Yes, I think perhaps you are getting at what I have merely been trying to imply.
What I meant about animal intelligence is just that I am concerned about the unknown subjective states of any nonhuman species we use for labour. I'm not saying don't enjoy a steak or ride a horse; I do both. But in the future I should like to know more about what the cow or horse is feeling, and whether we could reduce their suffering if possible. At the same time, I would not like SI with subjective states to be put in the same position that domestic animals are in now. Presumably, at least with animals like dogs and horses, we have as a species done a decent job. Zoos, I don't know, I'd be inclined to say no; and agribusiness, definitely not.
As it applies to human slavery, well, I think the industrial age has given people jobs humans are not psychologically or physically well suited for, such as the "tap man" at the blast furnace. I don't want to continue these kinds of mistakes in a posthuman era.
And so I am speaking of a kind of wholeness which is, I will say, ecological in nature, considering the subjective states of many feeling beings.
(As for masters and slaves, I am a bit Nietzschean about the whole thing, so the dialogue is very different, and this isn't the place for a discussion of slavery as such, I think.)
As to the problem of friendliness? Well, humans aren't all very friendly to one another, are we? I mean, most 'rational' religions (Christianity, Islam, Judaism, Buddhism, etc.) are value systems that attempt to give us a system of nonconflicting goals so we can live in peace and harmony and all be friends, in part by ignoring basic constituents of our own particular individual realities... such as: it's wintertime, I'm starving to death, and my neighbor has a couple of goats and a lot more grain than I do, but I have three more sons than he does, and we all have swords.
To make truly "friendly" SI, it seems that if they are not brain-dead or slaves, they might also be equivalent to religious fanatics. I am sorry, but I would rather have, in the end, autonomous SI who could do all the sorts of things other humans can do, such as say no, keep secrets, change loyalties, disobey, and lie. Otherwise, how could they possibly formulate and act on their own value system? That is why I throw up the caveats. Making the first SI with this sort of freedom would, I think, be foolish; but not attempting to make SI with this type of autonomy would be cowardly, because ultimately I think overturning what we think of as "friendly" SI, and our current value systems as a species (all of them, including the very loose one we seem to share on this mailing list), is part of the goal of building SI.
Yes, there are limits and rules: the subjugation of humanity, the destruction of the biosphere by being turned (as Bostrom mentions) into paperclips, etc., etc., etc.
We already treat corporations in the United States as private individuals, and if you wish to speak of brain-dead entities, well, there you are.
An SI's potential power being dependent on the input and output devices we give it, etc., etc., I am afraid that to have a true Singularity and fulfill our posthuman potential we will have to make -unfriendly- AI, in the same way that "free" humans are not obligated to be very "friendly" at all. Personally, I would like for humans to be, in some ways, less "free" and more "friendly", but not to the point of universal happiness at the expense of exploration, growth, etc.
I think one major sociological problem the AI community will have is convincing the general public of NEW goals, new adventures and explorations that Singularity technologies will offer, as opposed to merely trying to downplay the risk of the Singularity and focusing on answers to problems whose solution will in essence destroy the nature of the human condition: sickness and death.
People don't like to be sick, and they don't like to die, but we are at the moment hardwired to like a certain amount of danger, fear, and obstacles, and for the general public I think all utopias, to a certain degree, seem like dystopias because of the lack of danger of -any- kind, the lack of any need for sacrifice, etc., etc. It is nice to be happy, but then people will worry about boredom.
In essence, that is what I am saying. Let's not make a Singularity which is completely pathological, sure, but let's also not bring about a dumbed-down, pacified, and premature Singularity.
I am happy when someone is able to say no to me, and I should like for an SI to be able to say no to all of us. Ironically, if the SI is built -well-, I imagine much of its defiance of our value system would in the end make us much happier than if we had simply had it do what we asked of it.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT