From: Durant Schoon (durant@ilm.com)
Date: Tue Jun 12 2001 - 15:01:11 MDT
> From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
>
> Durant Schoon wrote:
> >
> > > Charlie has to be smarter than the *Sysop* to *covertly* take
> > > over Bill, at least if Bill has a normal relationship with the Sysop.
> >
> > Not smarter. Just smart enough. Charlie merely needs to be smart enough to
> > persuade Bill to incrementally change vis volition, without violating any
> > rules of the Sysop.
>
> But what are the Sysop's rules? They are Bill's rules! Rather than, as I
> think you may be visualizing, some a-priori definition of which messages
> are allowed to pass between two entities with a given level of
> intelligence.
Ok, they are Bill's rules.
Let's suppose Bill could be happy choosing any number of directions
in his life. Maybe a new question I'm raising is: Is it ethical
for other entities to influence Bill toward particular (happy)
targets? There may be targets which are equally appealing to Bill
but which have various and varied benefits for others. Will there be
limits placed on more powerful entities competing for the side
benefits of Bill's decisions?
Concrete Example: Whether I buy Guess jeans or Levi's, my happiness as
a result of my actions might be pretty much the same. To the two
corporations Guess Corp. and Levi's Corp., however, my decision could
make all the difference in the world*.
Your argument may be that from Bill's point of view, this does not
matter. But I'm wondering if this becomes more complicated when the
possibility that Bill would be happy in an exponential cult is
non-zero and/or he's able to rewire how much he'd like being in one.
If Bill is naturally prone to cult membership but just hasn't found
the right cult yet, then I suppose the Sysop would help him find True
Happiness and that would be the right thing to do.
Sysop: "No really Brother Bill, you would be very happy here. I have
complete knowledge of your psychological subnetworks and, although you
may not realize it yet, this is a good fit for you. Your wife Barbara
will probably leave you, but your kids will still visit you and you'll
prefer the trade-off in time. As always, the choice is yours..."
Ok, I admit this example is a little unfair <evil smirk>.
If Bill is not naturally prone to cults, should there be protection
against aggressive/effective strains (the kind that make you want to
join)? Maybe this will just be part of the standard filter. It also
implies that exponential cults are bad things to be avoided.
* Guess Corp. may not be allowed to hold my dog hostage until I buy
their jeans, but they are allowed to clutter my view with billboards
of pretty people.
> It is possible, probably even likely, that almost all of
> the message-filtering rules will converge to the same basic standard.
I'm assuming that people will explicitly set (or already have an
internal setting for) a personal tolerance for the amount of change
they are willing to undergo. Once the standardized message-filtering
is applied, further settings based on one's acceptable mutation rate
may come into play.
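To make that concrete, here's a rough sketch in Python of what I'm
imagining. Everything in it - the names, the numbers, the idea that the
Sysop can even estimate "volition drift" per message - is invented
purely for illustration:

    # Purely hypothetical sketch: a two-stage message filter.
    # Stage 1 is the standardized filter everyone gets; stage 2 is my own
    # cap on how much a single message is allowed to change me.

    def standard_filter(message):
        """Generic rules the Sysop might apply to everyone."""
        return "VIRAL_BRAIN_SCRIPT" not in message["payload"]

    def personal_filter(message, acceptable_mutation_rate=0.05):
        """My own tolerance for change, applied after the standard rules."""
        return message["estimated_volition_drift"] <= acceptable_mutation_rate

    def deliver(message, acceptable_mutation_rate=0.05):
        return (standard_filter(message)
                and personal_filter(message, acceptable_mutation_rate))

    # A persuasive note that would shift my volition by an estimated 12%
    # gets bounced under a 5% tolerance.
    note = {"payload": "You would really love our reading group...",
            "estimated_volition_drift": 0.12}
    print(deliver(note))   # prints False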
I do like the idea that generic, reasonable rules seem possible and
further that they could be tailored to individuals. What I think is
difficult is knowing *which* rules to choose before it is too
late. But that's a job for the Sysop, as you say. I have seen some
pretty disgusting images on the web that I wished I hadn't seen
(www.rotten.com **) and I can't un-see them now. I knew I probably
didn't want to look at them, but I went ahead and did it anyway. I
suppose life will be very different when the Sysop can whisper in my
ear: "Durant, you'll like this but not that." I'll also, presumably,
be able to un-see things (i.e. edit my memories...or better, hide them
indefinitely).
Perhaps this will all be covered in the welcome video: "Hello, if
you're watching this video, you've taken the first step and just
increased your intelligence three orders of magnitude from human
average. Before you complete the process and accept full write
permission to your volitional centers, we strongly suggest you apply
the standard filter to avoid automatically opening Viral Brain Script
files. These files have a tendency to induce a cultlike following and
a herd mentality when choosing certain consumer products."
** WARNING: www.rotten.com has images which could be construed as
extremely grotesque (ok, it's just plain disturbing). There's also
www.goatse.cx, which I found in someone's sig w/o a warning and
decided to view - it's also extremely graphic.
> My
> candidate for this standard would be "It is unethical to convince people
> of things, even true things, by a method so powerful that it could be used
> to convince them of false things."
Ah yes, good point, but I don't think I could argue with it ;)
> But the point is that the ultimate
> decision is up to Bill.
When I was a young boy, I didn't like tomatoes or onions. At some
point I tried them individually and changed my mind. Now I really love
them. It seems like it boils down to how much we want to change in the
future. I don't, for example, think I'll ever enjoy a manure sandwich,
no matter how good it tastes...not even smothered in sauteed onions
and topped with vine-ripened, garden-sweet tomatoes. In fact I
probably wouldn't want to modify myself to like them, even if there
were no health concerns. But can I really say that?...maybe I'd be
missing something...
> So Bill can simply say, "Disallow all messages that are intended to
> convert me to a cult, or that might as well be so intended." And that'd
> be that. If that *still* doesn't work then Bill can adopt a "prohibited
> unless allowed" rule, and totally block off all communication from smarter
> entities except Friendly AIs, known total altruists, and messages where
> the Sysop appends a note saying "Bill, I'm damn sure you need and want to
> read this."
That gives me an idea! I'll start The Cult of Known Altruists...oh
wait, you beat me to it ;-)
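More seriously, that "prohibited unless allowed" policy is simple
enough to write down as a sketch (the sender classes and the Sysop
flag below are just placeholders I made up):

    # Hypothetical sketch of Bill's default-deny rule: block everything
    # from smarter entities unless the sender is whitelisted or the Sysop
    # has flagged the message as must-read.

    ALLOWED_SENDER_CLASSES = {"friendly_ai", "known_total_altruist"}

    def bill_accepts(message):
        if message.get("sender_class") in ALLOWED_SENDER_CLASSES:
            return True
        if message.get("sysop_flag") == "you_need_and_want_to_read_this":
            return True
        return False   # everyone else from the Spaces Beyond is silenced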
> And if that doesn't work, I guess that Bill basically has the
> option of either entirely silencing the Spaces Beyond or doing a
> fast-as-possible transcendence personally.
Resistance is not futile ... but only for ultra-paranoid hermits :)
> There might be a threshold
> level of superintelligence beyond which not even a Power can fool you.
That would be interesting, but I'm not sure why it would be so. With
limits placed on other entities, maybe you could get close to that
threshold.
> > Charlie might also do this completely openly. In fact,
> > if Charlie does not do this, then transhuman Cindy probably will, ie.
> > someone would do it after enough time passes. And you know transhuman Cindy
> > has a way with words. She makes everything so clear and understandable.
>
> Sure. If you expose a human to a superintelligence, even through a VT100
> terminal, then the human's sole safeguard from total mental takeover is an
> ethical superintelligence. I'm pretty confident of this. With what I
> know of intelligence so far, it looks to me like a being that had a list
> of all the emotional and intuitive sequiturs, and that could keep track of
> a hundred different chunks in short-term memory, could chat with a human
> for a bit and then navigate her like a chess search tree. We simply are
> not that complicated except by our own wimpy standards.
yup.
> Hence the folly of "containment".
yup.
> In fact, if I were the "created AI" and I could chat only through a VT100
> terminal, I could probably also convince you to let me out, using only
> truthful arguments, while obeying my own ethical constraints, as long as
> the person on the other end was fairly rational. An irrational jailkeeper
> would probably require a transhuman jailbreak, though.
Hmm, it might depend on how "irrational", but ok.
> > Smart people can be convinced to make incorrect conclusions if there is
> > enough spin and doubt created or if an idea is "irresistably appealing".
>
> "Smart" being relative, of course this is true.
Yes, intelligence is relative and context-dependent. I should have
been less categorical. There are no stupid people, only stupid
questions :)
> But in this case the
> first thing that smart people do is ask the Sysop to filter their
> messages, or better yet, blaze up to superintelligence themselves.
With filtering we have the equally-appealing-to-Bill problem above.
With blazing up, we get an arms race...maybe not a bad thing, maybe
unavoidable.
> > For some category of non-dangerous manipulation, the sysop won't intervene.
>
> The Sysop intervenes when you ask the Sysop to intervene; when you define
> intervention as desirable. If you define intervention as desirable for a
> transhuman-originated message intended to cause you to arrive at an
> incorrect conclusion - I sure would - then the Sysop will intervene.
Ah, but it doesn't have to lead to an incorrect conclusion, i.e.
involve outright deception (the equally-appealing thing again). Maybe
I can help myself out of this conundrum, though, simply by asking the
Sysop to disclose the ulterior motives behind any messages directed to
me. If that works, maybe I'd feel more in control. Maybe I could avoid
the things my current self wants my future self to avoid.
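Something like this, maybe (again, entirely invented - I have no idea
how a Sysop would actually label motives):

    # Hypothetical: instead of blocking messages outright, have the Sysop
    # annotate each one with the ulterior motives it detects, and let my
    # current self veto whole categories on behalf of my future self.

    AVOID_FOR_FUTURE_SELF = {"cult_recruitment", "volition_rewrite"}

    def screen(message, sysop_detected_motives):
        flagged = set(sysop_detected_motives) & AVOID_FOR_FUTURE_SELF
        if flagged:
            return None   # held back, along with the reasons why
        return message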
> > fnord
>
> all your base are belong to us
:)
> > I suppose this leads to another question about property rights and volition.
> > If there is a land grab and I get out to the other matter in the universe first,
> > claim it and convert it to computronium populated with sentients who follow my
> > cult am I violating anyone's volition in an unfair manner?
>
> My guess is that if matter is a limited resource, then the Sysop expands
> outward at lightspeed, and incoming matter is distributed according to
> some algorithm that I'm not really sure I can guess, except that the most
> obvious one is to distribute it evenly among all current citizens.
I suppose that would work. Now I'm tempted to speculate about what a
person should do once an SI is created (new thread). Eli's answer
would probably be: Ask the SI. Now I wonder what the first thing an SI
would do is...Collect lots of money and quietly spread PR to set up a
graceful announcement? Hmm, maybe there's a good story there somewhere
about The Grand Debutante Ball...
Limited resources also make me wonder about the old "Sentients as
Temporary Variables" thread, i.e. infinitely forking new sentients, but
that doesn't seem like an intractable problem, really. Maybe just a
difficult one to get right.
> I think that truthful ideas, and to a lesser extent ideas that are not
> totally objective but that are valid for almost all humans, will spread
> exponentially from human to human; or, even more likely, emerge instantly
> as a result of people asking the Sysop. Why is this a bad thing?
It is perhaps a hopeful sign that we humans are able to place reason
over non-reason (ok, it could be said we are joining the Cult of
Reason). I do believe that some mechanisms should arise to control the
degree to which we are manipulated in the future, though. Hopefully
our natural aversion to outside manipulation will factor largely in
our ascendance. I'm still curious about how much other entities will
be allowed to "direct me" indirectly, in cases where the outcomes
don't matter much to me.
-- Durant Schoon