Re: Ethics was In defense of physics

From: Keith Henson (hkhenson@rogers.com)
Date: Sun Feb 15 2004 - 01:17:58 MST


At 10:24 PM 14/02/04 -0500, you wrote:
>Keith Henson wrote:
>>This is important because the *ethics* of superhuman intelligences depend
>>on the underlying physics.
>>If you have FTL, the ethics of the future are more on the scale of taking
>>care of your body vs having to deal with other independent superhuman
>>intelligences.
>
>You're natural-selection-o-morphizing here. You deal with cognitive
>systems that tend to have "utility functions" (not real utility functions,
>of course, but cognitive dynamics that make choices) centered around their
>own physical locations. Actually, the utility doesn't follow the
>location, it follows the genes - but one's own genes tend to be reliably
>located in the same place as one's own brain. In biological cases where
>this is violated, and to the exact extent it is violated, the "utility"
>always follows the genes, not the physical location.
>
>"Self"-centered utility with copying deixis is an easy idiom for natural
>selection (though, even there, it is frequently violated). It need not
>apply at all to superintelligences. There is no reason why an
>optimization control process transforming distant matter into a copy of
>the optimization control process would need to imbue that copy of the
>control process with different optimization criteria than the original,
>i.e., an expected utility optimization process extending itself throughout
>space has no conflicts of interest with distant parts of itself. This can
>hold true for arbitrarily great communication delays without introducing
>any obvious difficulties I can see.

I see what you are saying here. The idea is analogous to cells in a body
(with common genes) not being in conflict. Cells in a body sometimes do go
wild (cancer), but with good error-correcting codes, whatever is at the core
of an AI could copy itself to the end of the universe without making an error.
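
Just to make that concrete: even the crudest scheme, a repetition code with
majority voting, shows how copying fidelity can be pushed arbitrarily high.
This is only an illustrative Python sketch; the error rate and repetition
factor are made-up numbers, not a claim about how an AI core would actually
be protected.

import random

def encode(bits, n=5):
    # Repeat each bit n times (a simple repetition code).
    return [b for b in bits for _ in range(n)]

def noisy_channel(bits, flip_prob=0.05):
    # Flip each bit independently with probability flip_prob.
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits, n=5):
    # Majority vote over each block of n repeats.
    return [1 if sum(bits[i:i + n]) > n // 2 else 0
            for i in range(0, len(bits), n)]

message = [random.randint(0, 1) for _ in range(1000)]
received = decode(noisy_channel(encode(message)))
errors = sum(m != r for m, r in zip(message, received))
print(f"residual errors after one noisy hop: {errors} of {len(message)} bits")

Stronger codes (and re-encoding at every hop) can drive the residual error
rate as close to zero as you like, which is all the argument needs.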

It seems to me that the core would have to be absolutely impervious to
outside influences (which is in conflict with intelligence, to the extent
that intelligence has to do with learning); otherwise, units at the far ends
of communication delays would diverge. I suppose every AI could broadcast
its total information stream into memory and receive the memory dumps from
every other AI, treating the experience (memory) of the other AIs with equal
weight to its own. That would keep at least the close ones in sync, but if
there are growing numbers of these things, the storage problem gets out of
hand no matter what medium is used. (In fact, that might make the case for
very few AIs. Even one per star would get out of hand.)
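
To put rough numbers on the storage problem (the stream rate and the
populations below are invented purely for the arithmetic): each AI has to
archive every other AI's stream, so per-AI storage grows linearly with the
number of AIs and the total grows with the square.

# Back-of-the-envelope scaling for the "everyone archives everyone" scheme.
# The stream rate and populations are arbitrary assumptions for illustration.
stream_rate = 1e12                # bytes/s of experience per AI (assumed)
seconds_per_year = 3.156e7

for num_ais in (10, 1_000, 100_000_000):
    per_ai = (num_ais - 1) * stream_rate * seconds_per_year   # one AI's yearly archive
    total = num_ais * per_ai                                  # whole population's yearly archive
    print(f"{num_ais:>11,} AIs: {per_ai:.2e} bytes/yr each, {total:.2e} bytes/yr total")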

The problems this creates are bad enough that far-apart AI cores would be
forced to consider themselves different "individuals" just by the weight of
their different (unsynced) post-creation experiences. I think this is true
even if the closer ones engaged in total mind melding.

I like the idea that AIs could avoid conflicts of interest in more ways than
humans can. People would probably fight a lot less if they walked away from
every meeting not knowing which one they were.

I am sure you will point out flaws in the above reasoning. Please do; it is
an interesting topic.

> > It also leads to some very interesting questions about how physically
> > large a superhuman intelligence can be. At some point there is no
> > utility in absorbing more matter.
>
>I'm planning to talk about this part with Ben.

With FTL there doesn't seem to be an obvious limit. Without it . . .
eventually your brain collapses into a gravitational singularity.
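
For what it's worth, here is the back-of-the-envelope version of that limit,
assuming a uniform-density brain (the density figure is a pure guess; the
only physics used is the Schwarzschild radius R_s = 2GM/c^2):

import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
rho = 1000.0     # assumed density of the brain's material, kg/m^3 (roughly water)

# A uniform sphere collapses when its radius equals its Schwarzschild radius:
#   2 * G * (4/3 * pi * rho * R^3) / c^2 = R   =>   R = c * sqrt(3 / (8 * pi * G * rho))
r_max = c * math.sqrt(3.0 / (8.0 * math.pi * G * rho))
au = 1.496e11    # meters per astronomical unit
print(f"critical radius at that density: {r_max:.2e} m (~{r_max / au:.1f} AU)")

At water density that works out to a radius of a few AU; denser material hits
the limit sooner, and without FTL no amount of clever engineering gets around it.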

Keith Henson


