RE: Coercive Transhuman Memes and Exponential Cults

From: sunrise2000@mediaone.net
Date: Tue Jun 12 2001 - 13:43:05 MDT


There was an episode of Outer Limits...

        A white, middle-class scientist guy is whisked off to a far-off
        world by a four-dimensional intelligence which is somehow able
        to speak with the man using sound (surround sound? :). The man
        learns that he is being held there irrespective of his wishes.
        He asks the 4D: "You won't let me leave of my own free will?"
        The 4D responds: "I've studied hundreds of civilizations in this
        universe and read reports on dozens more, and only on Earth is
        there any talk of this 'free will.'"
        
        Perhaps volition is evolutionarily on its last legs.
        Community is one of the organic's greatest technologies.
        Look, there's that Borg cube I called for the other day....

It may be blasphemous to say it, but causality will always be the supreme
heuristic guiding the perception of any reality. There will be viruses in
any domain - even simple mathematical functions have attractors. Why do
you wear blue jeans? It may very well be that punctuated gradualism
(Montenegro, 1993) is the "be all and end all" of reality. In order to
preserve the "intelligence" in a system, one may invision a rule such as:
"It is unethical to influence parties to do things ("thinking" being
implicitly included in "do"ing), by a method so (generally?) powerful
that it could be used to violate their will." By this standard, holding a
gun to a human's head would be considered equally violent. Is weaponry a
virus? Or is it a technology? Is transhumanity a virus?
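
To ground that aside about attractors - a minimal sketch in Python, using
the logistic map (the r value and starting points are arbitrary
illustrations, nothing more):

    # Even a one-line function has an attractor: iterating the logistic
    # map x -> r*x*(1-x) with r = 2.8 pulls almost any starting point
    # in (0, 1) onto the same fixed point, 1 - 1/r ~= 0.643.
    def logistic(x, r=2.8):
        return r * x * (1.0 - x)

    for x0 in (0.1, 0.5, 0.9):
        x = x0
        for _ in range(200):          # iterate long enough to settle
            x = logistic(x)
        print(x0, "->", round(x, 6))  # all three land on ~0.642857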

One may also envision, as a guiding moral heuristic, a rule stating that:
"No party shall, by action or inaction, decrease the richness of any local
system." For Trekkies, this is the "Prime Directive" (cf. the interesting
DS9 episode involving the game species "Tosk"). It seems there would need
to be some way of measuring intelligence (discussions of general IQ omitted),
and the conservation of that independence would need a high enough priority
in the AI's set of moral heuristics. (Informal proof: exerting control over
a system decreases its local richness.)
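
To make that informal proof slightly less informal, here is a toy reading
of it. Purely as an assumption, "local richness" is taken to be the Shannon
entropy over the states a system can reach, and the probabilities are
made-up numbers:

    # Toy version of the informal proof: an external controller that pins
    # a system toward a preferred outcome removes entropy ("richness").
    from math import log2

    def entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    free       = [0.25, 0.25, 0.25, 0.25]  # system wanders over four states
    controlled = [0.85, 0.05, 0.05, 0.05]  # same system, mostly pinned

    print("free      :", round(entropy(free), 3))        # 2.0 bits
    print("controlled:", round(entropy(controlled), 3))  # ~0.848 bits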

Regarding the Sysop scenario, I surmise that complexity issues would
probably prohibit such a setup. Wouldn't the Sysop need to model everything
external to it? That would make the Sysop at least half the system
intelligence, and nominally the majority will. If there were several (n>1)
AIs on the Sysop's radar, each with an intelligence slightly less than that
of the Sysop (S - delta), then (1) any increase in the intelligence of these
systems would have to be kept below delta and (2) the combined intelligence
of the systems would be greater than the intelligence of the Sysop. Would a
Sysop forcefully intervene if something threatened its position as Sysop?
What if it couldn't intervene? Would/could the Sysop "die"? (cf. "God is dead").
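
A quick arithmetic check of point (2), under the admittedly crude assumption
that intelligence adds linearly across subsystems (S and delta below are
placeholder numbers):

    # n subsystems at S - delta outstrip a Sysop at S exactly when
    # delta < S * (n - 1) / n, so already at n = 2 for any delta < S/2.
    S, delta = 100.0, 10.0

    for n in (1, 2, 3, 5):
        combined = n * (S - delta)
        print(n, "subsystems:", combined, ">" if combined > S else "<=", S)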

Realistically, I'm more concerned about the yang of this issue: engineering
peace. How would post-singularitarians logistically find, join, and maintain
trusting roles within peaceful post-singularity communities? (I think some of
Ben Houston's work touches on this, e.g. http://www.exocortex.org/p2p/.)

(Maybe, post-singularity, a Sysop will find this thread and care to comment. :)

Dave


