Re: SIAI's flawed friendliness analysis

From: Brian Atkins (brian@posthuman.com)
Date: Tue May 20 2003 - 18:30:11 MDT


Bill Hibbard wrote:
> On Sun, 18 May 2003, Brian Atkins wrote:
>
>
>>Bill Hibbard wrote:
>>
>>>On Sat, 17 May 2003, Brian Atkins wrote:
>>>
>>>>Bill Hibbard wrote:
>>>>
>>>>>The danger of outlaws will increase as the technology for
>>>>>intelligent artifacts becomes easier. But as time passes we
>>>>>will also have safe AIs to help detect and inspect other
>>>>>AIs.
>>>>>
>>>>
>>>>Even in such fictional books as Neuromancer, we see that such Turing
>>>>Police do not function well enough to stop a determined superior
>>>>intelligence. Realistically, such a police force will only have any real
>>>>chance of success at all if we have a very transparent society... it
>>>>would require societal changes on a very grand scale, and not just in
>>>>one country. It all seems rather unlikely... I think we need to focus on
>>>>solutions that have a chance at actual implementation.
>>>
>>>
>>>I never said that safe AI is a sure thing. It will require
>>>a broad political movement that is successful in electoral
>>>politics. It will require whatever commitment and resources
>>>are needed to regulate AIs. It will require the patience to
>>>not rush.
>>
>>Bill, I'll just come out and state my opinion that what you are
>>describing is a pipe dream. I see no way that the things you speak of
>>have any chance of happening within the next few decades. Governments
>>won't even spend money on properly tracking potential asteroid threats,
>>and you honestly believe they will commit to the VAST amount of both
>>political willpower and real-world resource expenditures required to
>>implement an AI detection and inspection system that has even a low
>>percentage shot at actually accomplishing anything?
>>
>>And that is not even getting into the fact that by your design the "good
>>AIs" will be crippled by only allowing them very slow intelligence/power
>>increases due to the massive stifling human-speed
>>design/inspection/control regime... they will have zero chance to
>>scale/keep up as computing power further spreads and enables vastly more
>>powerful uncontrolled UFAIs to begin popping up. The result seems to be
>>a virtual guarantee that eventually an UFAI will get out of control (as
>>you state, your plan is not a "sure thing") and easily "win" over the
>>other, regulated AIs in existence. So what does the plan accomplish in
>>the end, other than eliminating any chance that a "regulated AI" could
>>"win"?
>>
>>Finally, how does your human-centric regulation and design system cope
>>with AIs that need to grow to be smarter than human? Are you proposing
>>to simply keep them limited indefinitely to this level of intelligence,
>>or will the "trusted" AIs themselves eventually take over the process of
>>writing design specs and inspecting each other?
>
>
> If humans can design AIs smarter than humans, then humans
> can regulate AIs smarter than humans.

Just because a human can design some seed AI code that grows into an SI
does not imply that humans or human-level AIs can successfully
"regulate" grown SIs.

> It is not necessary
> to trace an AI's thoughts in detail, just to understand
> the mechanisms of its thoughts. Furthermore, once trusted
> AIs are available, they can take over the details of
> design and regulation. I would trust an AI with
> reinforcement values for human happiness more than I
> would trust any individual human.
>
> This is a bit like the experience of people who write
> game-playing programs that they cannot beat. All the
> programmer needs to know is that the logic for
> simulating the game and for reinforcement learning are
> accurate and efficient, and that the reinforcement
> values are for winning the game.
>
> You say "by your design the 'good AIs' will be crippled
> by only allowing them very slow intelligence/power
> increases due to the massive stifling human-speed". But
> once we have trusted AIs, they can take over the details
> of designing and regulating other AIs.

Well, perhaps I misunderstood you on this point. So it's perfectly OK
with you if the very first "trusted AI" turns around and says: "OK, I
have determined that in order to best fulfill my goal system I need to
build a large nanocomputing system over the next two weeks, and then
proceed to thoroughly redesign myself to boost my intelligence
1,000,000x by next month. And then I plan to take over root access to
all the nuke control systems on the planet, construct a fully robotic
nanotech research lab, and spawn off about a million copies of
myself."? If you're OK with that (or whatever it outputs), then I can
withdraw my statement above. I fully agree with you that letting a
properly designed and tested FAI do what it needs to do, as fast as it
wants to do it, is the safest and most rational course of action.

You also still haven't answered, to my satisfaction, my objections that
the system will never get built, due to the multiple political, cost,
and feasibility issues involved.

>>>By pointing out all these difficulties you are helping
>>>me make my case about the flaws in the SIAI friendliness
>>>analysis, which simply dismisses the importance of
>>>politics and regulation in eliminating unsafe AI.
>>>
>>
>>This is a rather nonsensical mantra... everyone is pointing out the
>>obvious flaws in your system; this does not help your idea that politics
>>and regulation are important pieces of the solution to this problem.
>>Tip: drop the mantras, and actually come up with some plausible answers
>>to the objections being raised.
>
>
> Calling this a "nonsensical mantra" does not answer it.
> The objections are just possible ways that a political
> solution may fail. Of course it may fail. But it's the
> best chance of success.

No, I will not agree that a proposed solution with many raised and
unanswered objections, one that you yourself admit could easily fail,
is the "best chance of success". To convince me of that rather large
claim, you will have to go much further and into much more detail.

>
>>SIAI's analysis, as already explained by Eliezer, is not attempting at
>>all to completely eliminate the possibility of UFAI. As he said, we
>>don't expect to be able to have any control over someone who sets out to
>>deliberately construct such an UFAI, and we admit this reality rather
>>than attempt to concoct world-spanning pipe dreams.
>
>
> Powerful people and institutions will try to manipulate
> the singularity to preserve and enhance their interests.
> Any strategy for safe AI must try to counter this threat.
>

Certainly, and we argue the best way to counter it is to speed up the
progress of the well-meaning projects so that they win that race.

Your plan, on the other hand, would mainly slow down the well-meaning
projects, because out of all AGI projects they are the most likely to
willingly go along with such forms of regulation. To many of us here,
it looks as if you are going out of your way to give the "powerful
people and institutions" a better shot at winning this race. Such
people and institutions are the ones who have demonstrated time and
again throughout history that they will exploit loopholes, work around
regulatory bodies, and generally use whatever means are needed to
advance their goals. Again, to most of us, it just looks like pure
naivete on your part.

>
>>P.S. You completely missed my point on the nanotech... I was suggesting
>>a smart enough UFAI could develop in secret some working nanotech long
>>before humans have even figured out how to do such things. There would
>>be no human nanotech defense system. Or, even if you believe that the
>>sequence of technology development will give humans molecular nanotech
>>before AI, my point still stands that a smart enough UFAI will ALWAYS be
>>able to do something that we have not prepared for. The only way to
>>defend against a malevolent superior intelligence in the wild is to be
>>(or have working for you) an even greater intelligence yourself.
>
>
> I didn't miss your point. I accepted that nanotech is a big
> threat, along with genetic engineering of micro-organisms.
> I added that nanotech will be a threat with or without AI.

Those weren't my point. The reason I brought up the
UFAI-invents-nanotech possibility is that you didn't seem to be
considering such unconventional/undetectable threats when you said:

"But for an unsafe AI to pose a real
threat it must have power in the world, meaning either control
over significant weapons (including things like 767s), or access
to significant numbers of humans. But having such power in the
world will make the AI detectable, so that it can be inspected
to determine whether it conforms to safety regulations."

When I brought up the idea that UFAIs could develop threats that were
undetectable/unstoppable, thereby rendering your detection plan
unrealistic, you appeared to miss the point, because you did not
respond to that objection. Instead you seemed to say on the one hand
that "it is far from a sure thing", and on the other that you are
apparently quite sure humans will already have detection networks built
for any type of threat an UFAI can dream up (highly unlikely, IMO).
Neither is a good answer to how your plan deals with possibly
undetectable UFAI threats.

> The way to counter the threat of micro-organisms has been
> detection networks, isolation of affected people and
> regions, and urgent efforts to analyze the organisms and
> find counter measures. There are also efforts to monitor
> the humans with the knowledge to create new micro-organisms.
> These measures all have the force of law and the resources
> of government behind them. Similar measures will apply to
> the threat of nanotech. When safe AIs are available, they
> will certainly be enlisted to help. With such huge threats
> as nanotech the pipe dream is to think that they can be
> countered without the force of law and the resources of
> government. Or to think that government won't get involved.

Oh, I don't disagree that some form of "government" will be required; I
just think it will be a post-Singularity form of governance that will
have no relation to our current system.

At any rate, I believe you will grant me my point that "safe AIs" can
only defend us if they stay ahead of any possible UFAI's intelligence level.

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/

