RE: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 23 2003 - 21:29:53 MDT


There are a lot of good points and interesting issues mixed up here, but I
think the key point is the division between

-- those who believe a hard takeoff is reasonably likely, based on a radical
insight in AI design coupled with a favorable trajectory of self-improvement
of a particular AI system

-- those who believe in a soft takeoff, in which true AI is approached
gradually [in which case government regulation, careful peer review and so
forth are potentially relevant]

The soft takeoff brings with it many obvious possibilities for safeguarding,
which are not offered in the hard takeoff scenario. These possibilities are
the ones Bill Hibbard is exploring, I think. A lot of what SIAI is saying,
on the other hand, is more relevant to the hard takeoff scenario.

My own projection is a semi-hard takeoff, which doesn't really bring much
reassurance...

ben g

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Brian
> Atkins
> Sent: Friday, May 23, 2003 11:20 PM
> To: sl4@sl4.org
> Subject: Re: SIAI's flawed friendliness analysis
>
>
> Bill Hibbard wrote:
> > On Tue, 20 May 2003, Brian Atkins wrote:
> >
> >
> >>. . .
> >>
> >>>If humans can design AIs smarter than humans, then humans
> >>>can regulate AIs smarter than humans.
> >>
> >>Just because a human can design some seed AI code that grows into an SI
> >>does not imply that humans or human-level AIs can successfully
> >>"regulate" grown SIs.
> >
> >
> > The regulation is not intended to trace the thoughts
> > and development of the SI. The inspection is of the
> > design, not the changing contents of its mind. If its
> > initial reinforcement values are for human happiness,
> > and its simulation and reinforcement learning
> > algorithms are accurate, then we can trust the way it
> > will develop. In an earlier email I made the analogy
> > to game playing programs. If their game simulation
> > and learning algorithms are accurate and efficient,
> > and their reinforcement learning values are for winning
> > the game, then although the details of their play are
> > not predictable, the fact that they will play to win
> > is predictable.
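
[As a concrete illustration of the game-playing analogy above, here is a
minimal self-play reinforcement learner for a toy game (Nim with 21
counters), written in Python. The game, the parameters, and the
Monte-Carlo-style update are my own illustrative assumptions, not anything
from Hibbard's systems; it is a sketch, not a definitive implementation.
The point it makes is his: the programmer specifies only the reward
(winning), never the moves, so the detailed play is not predictable in
advance but the fact that the trained player plays to win is.

import random
from collections import defaultdict

# Toy Nim: 21 counters, players alternately take 1-3, taking the last wins.
# The only reinforcement signal is the game outcome (+1 win, -1 loss).
ACTIONS = (1, 2, 3)
Q = defaultdict(float)              # value estimate for (counters_left, action)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def legal(state):
    return [a for a in ACTIONS if a <= state]

def choose(state):
    # Epsilon-greedy over the current value estimates (shared by both players).
    if random.random() < EPSILON:
        return random.choice(legal(state))
    return max(legal(state), key=lambda a: Q[(state, a)])

def train(episodes=20000):
    for _ in range(episodes):
        state, history = 21, []
        while state > 0:
            action = choose(state)
            history.append((state, action))
            state -= action
        # The last mover won; propagate the outcome backwards, flipping sign
        # because consecutive moves belong to opposing players.
        ret = 1.0
        for (s, a) in reversed(history):
            Q[(s, a)] += ALPHA * (ret - Q[(s, a)])
            ret = -GAMMA * ret

train()
# The learned move preferences were never written down by the programmer, but
# the objective they serve -- winning -- was fixed entirely by the reward.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, 22)})

Inspecting such a program means checking the game simulation and the update
rule, not predicting the table of moves it ends up with -- which is roughly
the design-level inspection being argued for here.]
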
> >
> > The first SI will be designed and educated by humans.
> > Humans will be able to understand and regulate its
> > design, and regulate how it is educated. This will
> > create trusted safe SIs. They can then design and
> > regulate improved SIs, with one independently
> > designed SI inspecting the designs of another.
> >
>
> Your view of the development process of AGI differs from mine (and, I
> would guess, from that of most people here). I do not believe it is that likely
> that SI will arrive, in its full design, from humans. I find it much
> more likely that it will come from a lengthy (in terms of iterations)
> continuing redesign process undertaken by an approximately human level
> AGI ("seed AI") that is capable of understanding its own design and
> improving upon it in a stepwise fashion.
>
> So, while human regulators may possibly be able to understand the design
> of an early AGI (assuming no evolutionary programming, chaotic
> "emergence" techniques, or other obscuring programming methods are
> utilized), I have no assurance that they will be able to understand
> it later on. Perhaps if the growing AGI stopped for several months and
> laid its design out in easily digestible chunks, MAYBE -- but at that point
> you are taking its word at face value that it hasn't hidden anything from you.
>
> Does your plan rely on your supposition, or can it tolerate a seed AI
> scenario?
>
> >
> >>>It is not necessary
> >>>to trace an AI's thoughts in detail, just to understand
> >>>the mechanisms of its thoughts. Furthermore, once trusted
> >>>AIs are available, they can take over the details of
> >>>design and regulation. I would trust an AI with
> >>>reinforcement values for human happiness more than I
> >>>would trust any individual human.
> >>>
> >>>This is a bit like the experience of people who write
> >>>game playing programs that they cannot beat. All the
> >>>programmer needs to know is that the logic for
> >>>simulating the game and for reinforcement learning are
> >>>accurate and efficient, and that the reinforcement
> >>>values are for winning the game.
> >>>
> >>>You say "by your design the 'good AIs' will be crippled
> >>>by only allowing them very slow intelligence/power
> >>>increases due to the massive stifling human-speed". But
> >>>once we have trusted AIs, they can take over the details
> >>>of designing and regulating other AIs.
> >>
> >>Well perhaps I misunderstood you on this point. So it's perfectly ok
> >>with you if the very first "trusted AI" turns around and says: "Ok, I
> >>have determined that in order to best fulfill my goal system I need to
> >>build a large nanocomputing system over the next two weeks, and then
> >>proceed to thoroughly redesign myself to boost my intelligence 1000000x
> >>by next month. And then, I plan to take over root access to all the nuke
> >>control systems on the planet, construct a fully robotic nanotech
> >>research lab, and spawn off about a million copies of myself."? If
> >>you're ok with that (or whatever it outputs), then I can withdraw my
> >>quote above. I fully agree with you that letting a properly designed and
> >>tested FAI do what it needs to do, as fast as it wants to do it, is the
> >>safest and most rational course of action.
> >
> >
> > For me, a trusted safe AI is one whose reinforcement
> > values are for human happiness. The behavior you describe
> > would make people unhappy, and therefore would not be
> > learned. The point of using human happiness as a
> > reinforcement value is to keep humans "in the loop" of
> > the AI's thinking, no matter how intelligent it becomes.
>
> Aside from the nukes thing, what exactly about what it said makes you
> unhappy? It seems obvious to me that to increase its ability to satisfy
> its goal of increasing happiness it will logically want to become
> smarter, better able to communicate widely with a large number of human
> individuals, and to have manufacturing capabilities in order to
> implement things needed for happiness. Are you saying that the best way
> an AGI can make people happy is for it to self-limit its capabilities
> and influence to human levels?
>
> It seems, rather, that what you mean by a "trusted safe AI" is a
> roughly human-level AI that never grows beyond the point where humans
> lose the ability to understand its decisions, and furthermore, said AI
> always submits its decisions to human-level governmental bodies for
> discussion and approval; i.e., it remains, basically, a "tool".
>
> The kinds of things it suggests doing would, to me, increase my
> happiness. I would *like* to have nukes under the control of a
> theoretically more rational entity, and I would *like* said entity to
> have the means to build for me whatever I desire, and protect me when
> necessary from other sentient beings. You may not personally like it,
> and the US government may not like it, but what if it determines that a
> majority of the humans on the planet *do* like it? Or that it should be
> its goal to serve each human individually?
>
> >
> >
> >>Now you also still haven't answered to my satisfaction my objections
> >>that the system will never get built due to multiple political, cost,
> >>and feasibility issues.
> >
> >
> > I'll grant that the process will be very complex and
> > politically messy. There will certainly be a strong urge
> > to build AI, because of the promise of wealth without work.
> > But when machines start surprising people with their
> > intelligence, the public will be reminded of the fears raised
> > by science fiction books and movies. Once the public is
> > excited, the politicians will get excited and turn to
> > experts (it is encouraging that Ray Kurzweil has already
> > testified before Congress about machine intelligence).
> > There will be conflicting opinions among the experts.
> > Among the public there will also be conflicting opinions,
> > as well as lots of crazy opinions. This will all create a
> > very raucous political situation, a good example of the
> > old line that it's not pretty to watch baloney and
> > legislation being made. Nevertheless, in the end it is
> > this public and democratic political process that we
> > should all trust best (if we've learned the lessons of
> > history).
> >
> > I don't see cost as a show-stopper. The world is pouring
> > huge resources into advancing technology. Regulation will
> > have its costs, but I don't see them making the whole
> > project infeasible. Embedding one inspector per designer
> > would roughly double costs; nine inspectors per designer
> > (that's probably too many) would multiply costs by ten.
> > These don't make the project infeasible. The singularity
> > is one project where we don't want to cut corners for cost.
>
> There are other aspects of your plan that I am referring to. For
> instance, you suggest that a wide-ranging detection system will be
> required in order to prevent UFAI projects. How exactly will this work?
> Also, will the USA invade or economically restrict any countries that
> fail to sign on to this AGI regulation system?
>
> >
> >
> >>. . .
> >>
> >>>Powerful people and institutions will try to manipulate
> >>>the singularity to preserve and enhance their interests.
> >>>Any strategy for safe AI must try to counter this threat.
> >>>
> >>
> >>Certainly, and we argue the best way is to speed up the progress of the
> >>well-meaning projects in order to win that race.
> >>
> >>Your plan seems to want to slow down the well-meaning projects, because
> >>out of all AGI projects they are the most likely to willingly go along
> >>with such forms of regulation. This looks to many of us here as if you
> >>are going out of your way to help the "powerful people and institutions"
> >>get a better shot at winning this race. Such people and institutions are
> >>the ones who have demonstrated time and time again throughout history
> >>that they will go through loopholes, work around the regulatory bodies,
> >>and generally use whatever means needed in order to advance their goals.
> >>Again, to most of us, it just looks like pure naivete on your part.
> >
> >
> > The key word here is "well-meaning". Who determines that?
> > I only trust the public to determine that, via a
> > democratically elected government.
> >
> > The other problem is thinking that you can help a
> > "well-meaning" project win the race. Without the force
> > of law to deter them, there are going to be some *very*
> > well financed projects developing unsafe AI.
>
> Yep, so again, why are you attempting to slow down what are likely "well
> meaning" projects?
>
> >
> > For all the details that need to be worked out in the
> > approach of regulation by democratic government, it is
> > still far better than trusting the "well-meaning"
> > intentions of some particular project, and trusting
> > that it will win the race to develop AI first.
>
> Are you saying "I don't know"?
>
> >
> > The "naivete" is thinking that the wealthy and
> > powerful won't understand that super-intelligence
> > will have the power to rule the world, or that they
> > won't try to get control over it, or that the folks
> > in the SIAI are so smart that they will overcome a
> > million to one disparity in resources.
>
> Don't attempt to attribute these views to me or SIAI, since they are
> not representative of our actual views.
>
> > The only hope
> > is to get the public on our side.
>
> Do you realize how many thousands of examples I could cite of where the
> public/government utterly failed to accomplish a technical project? Even
> fairly simple things like getting a dam built, or a database
> restructured. Ever watch that "Fleecing of America" bit on the NBC
> Nightly News?
>
> >
> >
> >>. . .
> >>Those weren't the point. The reason I brought up the
> >>UFAI-invents-nanotech possibility is that you didn't seem to be
> >>considering such unconventional/undetectable threats when you said:
> >>
> >>"But for an unsafe AI to pose a real
> >>threat it must have power in the world, meaning either control
> >>over significant weapons (including things like 767s), or access
> >>to significant numbers of humans. But having such power in the
> >>world will make the AI detectable, so that it can be inspected
> >>to determine whether it conforms to safety regulations."
> >>
> >>When I brought up the idea that UFAIs could develop threats that were
> >>undetectable/unstoppable, thereby rendering your detection plan
> >>unrealistic, you appeared to miss the point because you did not respond
> >>to my objection. Instead you seemed on one hand to say that "it is far
> >>from a sure thing" and on the other hand that apparently you are quite
> >>sure that humans will already have detection networks built for any type
> >>of threat an UFAI can dream up (highly unlikely IMO). Neither is a good
> >>answer to how your plan deals with possibly undetectable UFAI threats.
> >
> >
> > I never said I was "quite sure that humans will already have
> > detection networks built for any type of threat an UFAI can
> > dream up". I admit the words you quoted by me are more
> > optimistic than I really intended. What I really should say
> > is that democratic government, for all its faults, has the
> > best track record of protecting general human interests. So
> > it is the democratic political process that I trust to cope
> > with the dangers of the singularity.
>
> Good, I'm glad the magical powers of democratic government automatically
> solve the technical issue I was attempting to engage you on.
> --
> Brian Atkins
> Singularity Institute for Artificial Intelligence
> http://www.intelligence.org/
>


