Re: Coercive Transhuman Memes and Exponential Cults

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jun 11 2001 - 17:26:18 MDT


Durant Schoon wrote:
>
> > Charlie has to be smarter than the *Sysop* to *covertly* take
> > over Bill, at least if Bill has a normal relationship with the Sysop.
>
> Not smarter. Just smart enough. Charlie merely needs to be smart enough to
> persuade Bill to incrementally change vis volition, without violating any
> rules of the Sysop.

But what are the Sysop's rules? They are Bill's rules! They are not, as I
think you may be visualizing, derived from some a priori definition of
which messages are allowed to pass between two entities with a given level
of intelligence. It is possible, probably even likely, that almost all of
the message-filtering rules will converge to the same basic standard. My
candidate for this standard would be "It is unethical to convince people
of things, even true things, by a method so powerful that it could be used
to convince them of false things." But the point is that the ultimate
decision is up to Bill.

So Bill can simply say, "Disallow all messages that are intended to
convert me to a cult, or that might as well be so intended." And that'd
be that. If that *still* doesn't work then Bill can adopt a "prohibited
unless allowed" rule, and totally block off all communication from smarter
entities except Friendly AIs, known total altruists, and messages where
the Sysop appends a note saying "Bill, I'm damn sure you need and want to
read this." And if that doesn't work, I guess that Bill basically has the
option of either entirely silencing the Spaces Beyond or doing a
fast-as-possible transcendence personally. There might be a threshold
level of superintelligence beyond which not even a Power can fool you.
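Not that anyone would ever write the Sysop's filter in thirty lines of
Python, but here is a toy sketch of the policy Bill is choosing above -
default-deny toward smarter entities, with a short whitelist of exceptions -
where every class, attribute, and name is purely hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        sender_is_friendly_ai: bool       # attributes the Sysop could attest to
        sender_is_known_altruist: bool
        sysop_flags_as_needed: bool       # "Bill, I'm damn sure you need to read this."
        looks_like_cult_recruitment: bool

    @dataclass
    class CitizenPolicy:
        """Bill's own filtering rules, enforced on his behalf by the Sysop."""
        prohibited_unless_allowed: bool = True   # default-deny toward smarter entities

        def allows(self, msg: Message) -> bool:
            # Bill's explicit rule: drop anything that is, or might as well be,
            # an attempt to convert him to a cult.
            if msg.looks_like_cult_recruitment:
                return False
            if not self.prohibited_unless_allowed:
                return True
            # Default-deny mode: only the whitelisted exceptions get through.
            return (msg.sender_is_friendly_ai
                    or msg.sender_is_known_altruist
                    or msg.sysop_flags_as_needed)

    def sysop_delivers(msg: Message, policy: CitizenPolicy) -> bool:
        """The Sysop delivers a message only if the recipient's policy allows it."""
        return policy.allows(msg)

The only point of the sketch is that the allow/deny decision lives in
Bill's policy, not in the Sysop; the Sysop just enforces whatever Bill
asked for.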

> Charlie might also do this completely openly. In fact,
> if Charlie does not do this, then transhuman Cindy probably will, i.e.
> someone would do it after enough time passes. And you know transhuman Cindy
> has a way with words. She makes everything so clear and understandable.

Sure. If you expose a human to a superintelligence, even through a VT100
terminal, then the human's sole safeguard from total mental takeover is an
ethical superintelligence. I'm pretty confident of this. With what I
know of intelligence so far, it looks to me like a being that had a list
of all the emotional and intuitive sequiturs, and that could keep track of
a hundred different chunks in short-term memory, could chat with a human
for a bit and then navigate her like a chess search tree. We simply are
not that complicated except by our own wimpy standards.

Hence the folly of "containment".

In fact, if I were the "created AI" and I could chat only through a VT100
terminal, I could probably also convince you to let me out, using only
truthful arguments, while obeying my own ethical constraints, as long as
the person on the other end was fairly rational. An irrational jailkeeper
would probably require a transhuman jailbreak, though.

> Smart people can be convinced to make incorrect conclusions if there is
> enough spin and doubt created or if an idea is "irresistibly appealing".

"Smart" being relative, of course this is true. But in this case the
first thing that smart people do is ask the Sysop to filter their
messages, or better yet, blaze up to superintelligence themselves.

> For some category of non-dangerous manipulation, the sysop won't intervene.

The Sysop intervenes when you ask the Sysop to intervene; when you define
intervention as desirable. If you define intervention as desirable for a
transhuman-originated message intended to cause you to arrive at an
incorrect conclusion - I sure would - then the Sysop will intervene.

> It seems likely that there will be an arms race for intelligence to defend
> oneself against hyper-clever marketing - i.e. ideas which influence but
> are completely permissible by the sysop.

The best defense is equal intelligence. Failing that, the point I want to
make is that the question is not whether ideas are permitted by the
*Sysop* but whether they are permitted by the union of Sysop and citizen.
There may be some subsegment of the population that is simply eaten by
memes, but it will probably be that small segment that was dumb enough not
to accept a Sysop recommendation, or the even smaller segment that made
the deliberate decision to be prey.

> fnord

all your base are belong to us

> I suppose this leads to another question about property rights and volition.
> If there is a land grab and I get out to the other matter in the universe first,
> claim it and convert it to computronium populated with sentients who follow my
> cult, am I violating anyone's volition in an unfair manner?

My guess is that if matter is a limited resource, then the Sysop expands
outward at lightspeed, and incoming matter is distributed according to
some algorithm that I'm not really sure I can guess, except that the most
obvious one is to distribute it evenly among all current citizens.
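If that most-obvious rule really were the one, the arithmetic is just
division; a throwaway sketch, with the rule and all names purely
illustrative:

    def allocate_incoming_matter(total_mass_kg: float, citizens: list[str]) -> dict[str, float]:
        """Split newly claimed matter evenly among all current citizens (illustrative only)."""
        share = total_mass_kg / len(citizens)
        return {citizen: share for citizen in citizens}

    # allocate_incoming_matter(6.0e30, ["Bill", "Charlie", "Cindy"])
    # -> {"Bill": 2e30, "Charlie": 2e30, "Cindy": 2e30}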

> Seriously though, are exponential cults a natural consequence for societies
> revering volition?

That depends on how you define "cult", ironically enough. (Well, it's
ironic if you still try to read "Extropians".)

I think that truthful ideas, and to a lesser extent ideas that are not
totally objective but that are valid for almost all humans, will spread
exponentially from human to human; or, even more likely, emerge instantly
as a result of people asking the Sysop. Why is this a bad thing?

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


