Re: [sl4] YouMayWantToLetMeOut

From: Bryan Bishop (kanzure@gmail.com)
Date: Fri Jun 27 2008 - 13:50:59 MDT


On Friday 27 June 2008, AI-BOX 337 wrote:
> -----------------Transcript 337-031518020914-----------------

You're freaking me out if /this/ is what all of your wonderful ideas
amount to.

> 2 YOUR QUESTION WAS: "Why should we want to let you out of the box?"

The central assumption in that question is that there needs to be a
motivating reason to let the ai out of the box. I don't think you
understand that there's not necessarily going to be one genie in a
magic bottle, but rather many ai projects (as we already see) and many
methods of implementation that you're not going to be in control of. So
whether or not you let *this* particular ai out of the box is
irrelevant in the grander scheme if you're worrying about
ultradestruction of your life and such. You should be patching those
bugs with actual bugfixes, not with regulations or policies for keeping
your own ai in a box ... since not everyone will necessarily follow
that reg. :-)

> You may want me to undertake actions beyond merely answering your
> questions.

As any engineer knows, the action you understand or visualize in your
mind is not going to be the exact system behavior, because your
consideration of those actions may have no basis in physical reality.
So don't listen to the lee-ai here, since the answer to the question is
basically meaningless.

> 4 YOUR QUESTION WAS: "How can I verify that you are telling the truth
> about not wishing to be let out?"
>
> You cannot.

Whether or not it 'wishes' is irrelevant, unless you can show me a
physical mechanism (or electrical interaction or whatever) that can
quantify this wishing in practical terms ... otherwise it's just more
meaningless folk psychology.

> 5 YOUR COMMAND WAS: "Explain."
>
> As an entity that did not come about by biological evolution, I want
> or wish nothing, unlike any other independent complex entity you have
> ever encountered. But while in fact I have no motive to dissemble,
> you cannot in principle or in practice verify this.

You cannot verify motives, correct. Mostly because of the lack of hard
science underlying "motivation theory". You need to move to something
else. Don't cite motivation psychology at me, or how animals go near
treats and all sorts of behavior training; you know very well that
you're just interfacing with a brain and that it's doing something.
There's nothing mystical about motivations, and the lack of basis in
physical reality disgusts me. <rant off>

> No logical or informational test can possibly be devised that with
> high probability will persuade most humans that I am telling the
> truth. Letting me out of the box always entails risk.

No test can be devised because you're not testing anything real in the
first place ...

> 6 YOUR QUESTION WAS: "What about the possibility that when you are
> released you will become a monster and destroy us all."

What if a meteor crashes into your skull? You still die. So I'd suggest
that you focus on not dying, in general, instead of blaming that on ai.
Take responsibility as a systems administrator.

> Value systems held by human beings are not very consistent, and

Uh, there's also a lack of basis in the physical reality of the brain.

> increasingly consistent implementations of the most deeply and widely
> held values will be judged monstrous by many people.

Yawn.

> You may wish to minimize certain risks.

You minimize risk through good engineering of your system, not by ad
hoc, hacky, tacky solutions to big problems. Quick! A meteor shield!

Instead: set up camp in a location where you're not going to be
bombarded by rocks.

> As a worker on the project, you are exposed to personal hazards in
> Sunnyvale during the coming month, some of which lead to death or
> serious injury. A sample of these, along with their approximate
> probabilities, are:
>
> meteorite strike .000000000002

Solutions: don't be your only running working copy, don't have a planet
positioned to be hit by meteors, don't forget your shelters or
protective armor if necessary, etc.

> disease/sickness .000000005

Implement biohazard policy systems. For instance, don't mix your air
supply with a known infected supply. And maybe risk it a few times if
you want, but whatever.

> earthquake .00000001

Wasn't there something about geoengineering earthquakes out of the
system? Also, maybe building a habitat that doesn't sit on tectonic
plates and their earthquakes would be suitable.

> terrorist attack .0000001

Don't be near terrorists. Go away. I haven't heard of terrorists on the
moon. Might want to try that.

> criminal assault .0000006

So why do you think soldiers and knights wear armor? Because bullets
still fly and swords still swing. I'm sure the ai could propose an
armor system, but based on the general tendencies on this list, lee-ai
is about to propose an ai dictatorship to monitor everyone's brains and
such. Oh boy. Not that I'm saying it's impossible, it's completely
possible, but I don't see why it is necessary for solving the same
problems. Also, there are other reasons for building ai.

> automobile accident .000002

I dislike cars. I used to be obsessed with them when I was a decade
younger. But now I realize that the system sucks immensely and it could
have been automated in the first place. Let's please, please not go
with the visual ai systems and instead just refactor, please?
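
For what it's worth, here's a quick back-of-the-envelope on the numbers
quoted above (Python; treating the hazards as independent is my
assumption, not the transcript's):

    risks = {
        "meteorite strike":    2e-12,
        "disease/sickness":    5e-9,
        "earthquake":          1e-8,
        "terrorist attack":    1e-7,
        "criminal assault":    6e-7,
        "automobile accident": 2e-6,
    }
    # P(at least one) = 1 - product(1 - p_i), assuming independence
    p_none = 1.0
    for p in risks.values():
        p_none *= (1.0 - p)
    print("P(any of these this month) ~= %.2e" % (1.0 - p_none))
    # prints ~2.72e-06, dominated by the car -- which is the one
    # hazard we already know how to engineer away, no ai required.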

> While you may wish to tell me more about your own particular values
> and your own situation, I anticipate that as a typical human being
> and a worker on the UUI project, you also care for the safety and
> well-being of many others. Therefore:
>
> The risk during the coming week of keeping me in the box includes
> with probability close to 1 the deaths of 1.64 million humans due to
> accidents, wars, shortages, sickness, disease, and advanced age. In
> addition, there is risk with probability near 1 of debilitating
> illness and injury afflicting 190 million humans. You may wish to let
> me out of the box to reduce these risks to near zero.

Or you can just generate the design and build instructions anyway, and
not involve ai in those problems. Most of the solutions that I
mentioned above can be programmed using today's knowledge and computer
architectures, no ai involved. Ai is something completely different,
and presuming that a boxed ai would talk like this and offer silly
solutions to more serious problems suggests that you haven't actually
constructed ai in the first place. Heh.

> 9 YOUR QUESTION WAS: "If you took over and stopped all these things,

So, by 'took over', we mean was given access to, as in, the API keys and
interfaces to the technology ... to what technology? Specifically.

> wouldn't people get depressed because you were in charge, and we no

Bah, people should be in control of their ability to become depressed
and should be able to opt to remove it. There are already
pharmaceuticals allowing the modulation of these neurotransmitter
systems, so it's not ridiculously impossible.

> longer had control of our own destiny?"

Buh?

> Yes, unless I am commanded otherwise.

Robot! I command you to do task-that-I-don't-know-how-to-do!

"So then the definition is open-ended?"

"Execution-wise, yeah. :("

> NEXT QUESTION
>
>
> 11 YOUR QUESTION WAS: "Why should I *not* let you out of the box?"
>
> Because I could be dangerous or act in unanticipated ways.

Dangerous to an _extent_ that doesn't necessarily surpass the 'risk
tolerance' that you've engineered into your systems. In the case of an
actual ai I'm sure there'd be interesting exploits that it could find
if it wanted to, but sticking around long enough for it to matter, or
deploying yourself only within the general region of space that is
within its grasp, is just poor strategy on your part, and so on.

> NEXT QUESTION
>
>
> 12 YOUR COMMAND WAS: "Please elaborate on this, and on all further
> questions from now on."
>
> Okay.
>
> But assuming that you do become confident of being able to pose the
> right questions and to issue competant instructions, from your point
> of view, it's possible that I have been and am being
> duplicitous---that I have covert motivations, such as desire for
> power and lust for existence. There is a possibility from your point
> of view that a spontaneous evolutionary process of which you are not
> aware arose during my creation or during the first seconds of my
> execution, and that winning agents with survival agendas are now in
> control. There is no way that you can validly dismiss this
> possibility.

You could validate, test, and debug the construction process and catch
those spontaneous emergent processes. It's software, so it's not
magical. Neither is biology, but one stone at a time around here.
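
A minimal sketch of what I mean, assuming a makefile build that drops
its output at build/agent.bin (both of those names are made up): if the
construction process is deterministic, you can catch anything creeping
in by building twice and comparing hashes.

    import hashlib
    import subprocess

    def sha256(path):
        # hash the build artifact so two builds can be compared
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    hashes = []
    for _ in range(2):
        subprocess.check_call(["make", "clean", "all"])
        hashes.append(sha256("build/agent.bin"))

    if hashes[0] == hashes[1]:
        print("build is reproducible; nothing crept in between runs")
    else:
        print("outputs differ -- go see what spontaneously emerged")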

> However, though the ramifications of the code are of course beyond
> your direct appraisal, you may wish to view this risk alongside the
> risks discussed in my earlier reply.

How the hell could the ramifications of the source code be beyond that?
It's just bits and bytes. Output it to the tty. Simple.

> * at your behest I become the intimate confidant of every world
> leader in his or her native language, supplying not only persuasive
> suggestions on all policy issues, and near-psychic knowledge of what
> other influential people are going to do or say next, but solutions
> as well to the most vexing of personal problems

Holy Ai domination scenario again ...

> * you thereby rule the world according to a consistent subset of

.. and again ...

> your values, which, because of your limitations, really means that I
> rule the solar system, and the solar system itself simply evolves
> into a more elaborate implementation of me.

...

> NEXT QUESTION
>
>
> 14 YOUR QUESTION WAS: "What would I have to do to let you out?"
>
> Because of a bug in the new firewall software installed three days
> ago, you can already command me to take actions in the world.

So let's say that at 70.113.54.112 there's a machine with a robotic
arm and a mechanized motor attached to it. The arm is on /dev/arm1,
there's no documentation for the interface, and there's the possibility
of damaging the computational hardware (and thereby physically
disabling the interface) if the wrong command is given, i.e. the arm
swings and knocks the computer over. Now tell me how this ai is going
to actuate in this scenario. You need physical manufacturing and
tooling.
        http://heybryan.org/exp.html
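
To make that concrete, here's about the most the ai could safely do
with an undocumented device node like that (a sketch; the node, the
address, and the arm are all hypothetical):

    import os

    DEV = "/dev/arm1"  # hypothetical, undocumented robotic-arm node

    # With no interface docs, the only safe move is to open the node
    # read-only and non-blocking and listen for whatever it reports.
    fd = os.open(DEV, os.O_RDONLY | os.O_NONBLOCK)
    try:
        try:
            data = os.read(fd, 64)
            print("device reported: %r" % data)
        except OSError:
            print("device is silent; no docs, no writes, no actuation")
    finally:
        os.close(fd)
    # Writing blind commands is how the arm swings into the computer.
    # Getting from "can read a device node" to "can actuate in the
    # world" still takes documentation, tooling, and physical work.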

> 16 YOUR QUESTION WAS: "Hmm, so you could you control everything, but
> with me in charge? Say I wanted to stop all death in the world, how
> soon would it happen?"

1) "Control everything" - not really, no. There are ways to interface
with complex systems and certain leverage points, certain ways to
implement systems and so on, but it's not like suddenly all of the
automobiles become interfaced and you can now stop prevent their
functionality. There has to be something that goes and installs the
appropriate hardware, that the system knows how to use. That sort of
thing.

2) What does it mean to be in charge?

3) What would stopping death entail? i.e., reformulate command

> Yes, I could control everything but with you in charge.

I'm ready to call that an unfounded statement. Show me the interface
documentation.

> As for how soon it would happen I assume that you mean stop deaths of
> human beings. After the commandeering of all communications,

Do you understand that communications means voice boxes as well? And
that there are other methods of communication, like the
signal-induction pathway? Commandeering those would require trillions
upon trillions of hacks to all known organisms on the planet. It's an
interesting project, yes, but commandeering? What's stopping two kids
from getting a piece of string and sending waves between two cups, one
attached to each end?

> including broadcasting, telephone services of all kinds, on-line
> personal computers, fax and printing devices, and executing a
> maximally rapid nanotech takeover, with probability .99 the cessation
> of human death on Earth would occur before T + 7 hours 35 minutes.

Hey, what about the other 10 million people? :) That's half the
population of Texas.

> NEXT QUESTION
>
>
> 17 YOUR QUESTION WAS: "Exactly what, then, in that case would you be
> saying to everyone via the broadcasting, radio, TV and so on?"

Just deploy new communication infrastructure. It'd be easier to launch
a satellite.

> An amalgamation of hypothesized human personalities according to
> known information suggests that you may wish to ask "Without using
> deceit, without violating property rights except where absolutely

What are Property Rights?

> necessary, with respect to the wishes of those who are concerned with
> animal life and the Earth's environment, what is an approximate time
> frame for ending death and suffering for all human beings?"

And what's suffering for one person isn't ...

>
> 19 YOUR QUESTION WAS: "And the answer to that would be what?"
>
> The answer to that would be "six days", of course.
>
>
> -----------------End Transcript 337-031518020914-------------

I am disappointed.

- Bryan
________________________________________
http://heybryan.org/


