From: Keith Henson (hkhenson@rogers.com)
Date: Mon Feb 02 2004 - 17:46:43 MST
At 01:54 PM 02/02/04 -0500, you wrote:
>Keith Henson wrote:
>>>However, when thinking about the ethics of posthuman digital minds,
>>>evolutionary ethics probably aren't so relevant, because these minds
>>>won't be evolving by Darwinian natural selection.
>>I am not so sure. The scale may be off in space and/or time, but it
>>seems to me that posthuman digital minds are going to be faced with
>>much the same kinds of problems of limited resources human groups were
>>faced with. Being able to directly change their nature might not make
>>a lot of difference in the long term about the kinds of ethics they
>>adopt.
>
>See http://sl4.org/archive/0401/7483.html
>("Darwinian dynamics unlikely to apply to superintelligence")
>
>A more compact restatement of the above was as follows:
I am in complete agreement with this statement. For that matter, we are at
the very end of Darwinian dynamics applying to *humans.*
But that wasn't my point (sorry for being unclear).
Darwinian dynamics operating within the constraints of the physical
universe generated creatures with a sense of ethics as a
solution. (A solution to what? To a world with others in it.) Self-modifying
superintelligences operating in the same universe, under the same
constraints, and needing rules for how to deal with others might come to
similar ethical rules, only one heck of a lot faster.
There are maybe a dozen conditions I am assuming here. Let's see if I can
name some of them and whether other readers can fill in what I miss.
1) There are a lot of individual superintelligences.
I have no idea what scale is involved; the material around a star could
be one SI, but I would think light speed is going to make syncing up a
brain with that much material tough. (Unless we can make wormholes or have
some other form of FTL--if we can, the whole universe might become one
brain. In that case, I am not entirely sure ethics is the right word to use.)
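To put a rough number on the light-speed problem, here is a back-of-envelope
sketch in Python. The ~1 AU radius is my assumption of a Dyson-sphere-like
scale, not anything established above.

# Light-lag across a star-scale "brain" (illustrative only; the ~1 AU
# radius is an assumed Dyson-sphere-like scale, not a claim from the post).
C = 299_792_458          # speed of light, m/s
AU = 1.495978707e11      # astronomical unit, m

def one_way_delay_s(distance_m: float) -> float:
    """Minimum one-way signal delay over a given distance."""
    return distance_m / C

# Two elements on opposite sides of a 1 AU-radius shell are ~2 AU apart.
delay = one_way_delay_s(2 * AU)
print(f"One-way delay across ~2 AU: {delay:.0f} s (~{delay/60:.1f} minutes)")
# Roughly 1000 s, i.e. 16-17 minutes each way, so any state shared across
# the whole structure is at best minutes out of date.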
2) There is at least mild competition for energy and materials, but matter
and energy far from an SI's core is worth much less than nearby matter and
energy, enough less that it is not worth trying to take it from another.
If superintelligences figure out how to bud off and move into entire new
universes, this constraint goes away.
3) There are advantages to being social.
Being social ourselves, we find it hard to imagine non-social
intelligences. Perhaps that's because our main use for intelligence is to
impress our peers and, in doing so, attempt to gain status. I have no idea
what a non-social SI would be like or how it would
interact (if at all).
These three might be enough for non-zero-sum elements to come into play and
for something possibly similar to human ethics to emerge. But I am far from
sure about this.
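As a toy illustration of the non-zero-sum point, here is a standard iterated
prisoner's dilemma in Python. The payoff numbers and strategies are the
usual textbook ones, chosen for illustration rather than taken from anything
above.

# Iterated prisoner's dilemma: non-zero-sum interaction rewarding
# cooperative, "ethical"-looking behavior. Payoffs and strategies are
# illustrative textbook values.
# (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []          # entries: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation pays
print(play(tit_for_tat, always_defect))    # (99, 104): defection gains little

Reciprocating strategies doing well in repeated non-zero-sum encounters is
about as close as a toy model gets to the kind of emergent ethics I mean.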
Keith Henson