Re: Coercive Transhuman Memes and Exponential Cults

From: Durant Schoon (
Date: Wed Jun 13 2001 - 12:36:36 MDT


> From: "John Stick" <>
> Fair enough. My response was based in part on a sense that the language of
> cult and coercion is emotionally loaded, and calls up specific instances of
> current behavior that are unlikely to be precisely duplicated by transhuman
> AIs.

Those terms are loaded, yes. "Exponential Communities" doesn't sound as
interesting :) What does still sound interesting to me is the notion of
"Manipulation with Consent". I guess what I'm really rediscovering is that
although we are pretty much free, we already live in a web of manipulation.
I make movies for some rich guy named George, whose company hires out my
services to the highest bidder. At the same time it's rather enjoyable,
compared to, say, heaving giant stones to build the pharaoh's tomb under the
sweltering sun.

Even if most of us spend most of our time fulfilling other people's goals,
there is still enough "left over" for me to pursue my own (provided I
limit my ambitions). There is always the truism that the more money one has,
the "freer" one seems to be. I actually think that this will hold true
in a transhuman future where computational resources play the part of
"money", but that's another discussion.

Cuckoos discard other birds' eggs and lay their own eggs in the nest, to
be fed by the nesting bird. This type of one-sided manipulation does go
away as intelligence increases for both birds. Humans, who are much
smarter, would usually see through this, cuckolds notwithstanding.
Hopefully these days, infidelity sooner results in something like divorce
and alimony than revenge and murder.

So we have come a long way. I accept your point that individual freedom has
increased over the years, and there is much hope and reason to believe that
transhuman ascendance will continue that upward arc.

If Friendliness does take hold and my well being is further assured,
there are still many paths my life can take. These variations might
benefit entities more powerful than myself (in the same way that
corporations profit from my actions while I'm still a free citizen making
what I consider my own choices). I find that interesting. I also find
it interesting that transhumans could scale up their abilities and "play
games" with other people's lives while those other people still enjoy
their "freedom". Would there be protections against this? If people didn't
want to be a part of anyone else's game, is there any way of obtaining
disclosure of one's "manipulation"? This might be a silly line of reasoning
but I find it curious nonetheless. Perhaps I find it most striking because
I'd assumed that with freedom and protection of volition that I would not
be manipulated, and now I realize that's hardly true. With an all knowing
Sysop, maybe we can still maintain the control of our lives that our
individual dignity warrants.

The _danger_ I still see is incremental drift combined with the ability to modify
ourselves at deep levels. But, actually, I think this is a tractable
and probably obvious problem, so I won't worry about it (I'll leave that up
to the Sysop...riddle: when does "Sysop" rhyme with "Copout"? :)

> There are interesting issues about what uses of information to
> persuade another transhuman intelligence would be fraudulent, antisocial or
> coercive, but the cult, conversion, and eaten by a meme tropes

Is ":)" taken yet?

> draw
> attention to the extremes. Arguing that the extremes will become less
> prevalent as intelligence increases is (like) trying to look beyond a
> singularity: of dubious utility and accuracy-- but what else are we going to
> talk about?

We haven't seen what transhumanism looks like yet. When the anaerobic bacteria
were "catastrophically" replaced by aerobic bacteria, the bacterial world
probably hadn't seen such drastic, punctuated change. Both types survived, but
the new upstart dominated thereafter. Maybe Exponential Communities or Group
Brains should not be feared. Maybe they will be superior and more enjoyable.
The weirdness enters when I can allow ideas to enter my mind which can change
what I find "enjoyable", so that I might be converted from an individualist
to a happy member of the Borg family, while completely complying with the
rules of Friendliness!

(Pssst, hey buddy, I've got a mod that'll blow your mind. Your body will
be reconfigured to intake the waste of other entities. You'll find it satisfying
and healthy. You'll enjoy it as you've never enjoyed anything else. Soon,
you'll be craving that waste "oxygen"...or hating not getting it...hard to tell.
Mmmm, manure sandwiches... :)

> I don't buy the idea that intellectual coercion and its defense is an arms
> race where each advance in intelligence helps each side equally.

When I think of intellectual coercion as an arms race, I think of having
to buy a car from a used car salesman. He has all these tactics and if I'm
not aware of them, I'm going to end up getting a lousy deal. By learning
how to defend myself (what's that website, I will fare
better. Each iteration, each "side" must escalate...There are limits on
fraud, though, that prevent the salesman from completely cheating me (i.e.,
lemon laws), but somehow I'm still afraid of ending up with True-Coat
rust proofing (a la the movie Fargo).

> First,
> there is the simple minded argument that we start at zero intelligence, zero
> freedom, so any advance has to help. Not conclusive, of course, because
> plotting intelligence against freedom may describe a curve whose slope goes
> negative at some point, but still suggestive. Mostly it is just a sense
> running scenarios through my head that each bit of defense will require a
> much larger increase in offensive intelligence to knock down.

Having an SI calculate my best defense seems like a good way to go.

> As for your inquiry about law and law books, I think law will help only
> after the fact to ratify a consensus that develops within the AI community
> on ethical behavior, that is, only after true AIs have been up and running a
> long time (for them and us). Attempts to legislate ahead of technology on
> subtle, fact specific issues like this rarely occur, and are never
> effective.

Ah yes, thanks for pointing that out.

> If legislators were convinced of the danger of AIs converting
> humans, a ban of AIs would be the most likely response. (And it would still
> be ineffective.) Redefinitions of fraud, blackmail and so on to make them
> applicable to AIs will come only after substantial experience with the
> behavior of real AIs.

Again, the latency with which the law adapts to new scams might
be acceptable under current conditions, but in the hyperfast ascent scenario
effective coercion techniques might rise faster than defensive tactics.

Hopefully the first thing Eli will do is lead the nascent AI into considering
defenses for all the yet-to-exist nasties that lie in our path.

> Laws mostly follow social conventions, and do not
> precede them, and are easily swept away without that support, unless you
> have a very strong secret police.

When social convention is split (e.g., the abortion issue), I guess things
get messy. Now if only there were a way to manipulate social convention...

> As for meme talk (a complete side issue and I apologize for raising it),
> although the evolutionary metaphor can be suggestive, I am generally
> suspicious that it is often used to attempt to get to grand theory and
> sweeping arguments without having to attend to the detailed mechanism. In
> my amateur's understanding of current biology, the selfish gene theory meant
> never having to say you are sorry for not having unraveled protein
> chemistry, and is being superseded at the cutting edge because people are
> beginning to find the ways proteins matter to evolutionary development. For
> us, I think meme talk displaces more detailed discussions of how concepts
> are adopted in rational discourse, when the whole point in the AI field is
> to get to the details and implement them. But my argument here is unfair to
> some uses of the meme meme, even if accurate about others.

I hope I was using the concept appropriately in the context of mass ideological
movements where the latest transhuman craze might be Borgification and the
complete elimination of the desire to leave a hive-mind. The notion of a spread
of memes (like an infection vector) appeared natural for this, but I don't use
the metaphor all the time.

Durant Schoon

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT