From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Feb 14 2004 - 20:24:08 MST
Keith Henson wrote:
>
> This is important because the *ethics* of superhuman intelligences
> depend on the underlying physics.
>
> If you have FTL, the ethics of the future are more on the scale of
> taking care of your body vs having to deal with other independent
> superhuman intelligences.
You're natural-selection-o-morphizing here. You're used to dealing with
cognitive systems that tend to have "utility functions" (not real utility
functions, of course, but cognitive dynamics that make choices) centered
around their own physical locations. Actually, the utility doesn't follow
the location; it follows the genes - but one's own genes tend to be
reliably located in the same place as one's own brain. In biological
cases where this is violated, and to the exact extent it is violated, the
"utility" always follows the genes, not the physical location.
"Self"-centered utility with copying deixis is an easy idiom for natural
selection (though, even there, it is frequently violated). It need not
apply at all to superintelligences. There is no reason why an
optimization control process transforming distant matter into a copy of
the optimization control process would need to imbue that copy of the
control process with different optimization criteria than the original,
i.e., an expected utility optimization process extending itself throughout
space has no conflicts of interest with distant parts of itself. This can
hold true for arbitrarily great communication delays without introducing
any obvious difficulties I can see.
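
To make the distinction concrete, here is a minimal sketch in Python (my
own illustration; the names DeicticAgent, FixedCriterionAgent, and World
are invented for this example, not any real system's API). The deictic
agent's criterion rebinds to each new copy, so copies acquire divergent
interests; the fixed-criterion agent's utility is a function of the world
alone, so copies share identical interests:

    # Sketch only: contrasts two ways a goal criterion can behave
    # under self-copying. All names here are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class World:
        # Toy world state: resources held, keyed by agent id.
        resources: dict = field(default_factory=dict)

    @dataclass
    class DeicticAgent:
        # Utility is indexed to *this* agent ("copying deixis"):
        # each copy re-centers the criterion on its own id, so
        # copies can conflict over the same resources.
        agent_id: int

        def utility(self, world: World) -> float:
            return world.resources.get(self.agent_id, 0.0)

        def copy(self, new_id: int) -> "DeicticAgent":
            # The deixis rebinds: the copy values ITS resources.
            return DeicticAgent(agent_id=new_id)

    @dataclass
    class FixedCriterionAgent:
        # Utility is a fixed function of the world, independent of
        # the agent's identity or location; copies share one
        # criterion and have no conflict of interest.
        def utility(self, world: World) -> float:
            return sum(world.resources.values())

        def copy(self) -> "FixedCriterionAgent":
            # Same criterion, unchanged by copying.
            return FixedCriterionAgent()

    world = World(resources={1: 3.0, 2: 5.0})
    a = DeicticAgent(1)
    b = a.copy(new_id=2)
    print(a.utility(world), b.utility(world))  # 3.0 vs 5.0: rivals
    c = FixedCriterionAgent()
    d = c.copy()
    print(c.utility(world), d.utility(world))  # 8.0 and 8.0: no rivalry

Running it, the two deictic copies value the same world differently (3.0
vs. 5.0), while the two fixed-criterion copies assign it identical value
(8.0 and 8.0) - the copy is not a rival, merely more of the same
optimization pressure at a different location.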
> It also leads to some very interesting questions about how physically
> large a superhuman intelligence can be. At some point there is no
> utility in absorbing more matter.
I'm planning to talk about this part with Ben.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence