From: John Smart (firstname.lastname@example.org)
Date: Tue Feb 27 2001 - 16:32:31 MST
> There are no military applications of superintelligence.
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
c) a statement that once a system becomes REALLY SUPERINTELLIGENT, it will no
longer have any motivation to serve any particular country above any other one...
[My guess as to your intended meaning]
Let's assume your meaning is c). Then I have a question for you, which is:
why couldn't the CIA create a self-modifying AI whose supergoal was "Serve
the USA", i.e., "Be Friendly to the USA"?
You posit that the supergoal "Be Friendly to Humans" can remain a fixed
point throughout the successive reason-driven self-modification events that
will constitute the path from initial AI to superintelligent AI. But I'm not
seeing why the supergoal "Serve the USA" couldn't be equally adequate.
Interesting question, Ben. I think it is enlightening to examine the
difference between service and friendliness. To my mind, Eli is right, in
the way you've suggested here.
It seems that, in the biological record, all complex systems merge
symbiotically with the complex systems around them, and the more complex
they are, the more extensive the merging (Margulis, Symbiotic Planet). The
more complex systems never remain subordinate, which argues against the
"serving" supergoal. Is their merging friendly? Symbiosis as a mechanism
produces ever greater local computational complexity, so we might call it
collectively, algorithmically friendly, as an uninterrupted supergoal. Is it
friendly to the individual systems involved? I think the record again shows
that it is, as a function of complexity, but I wish to make that case in
detail at a later date.
Understanding Accelerating Change
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT