Re: AGI Policy (was RE: SIAI's flawed friendliness analysis)

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Fri May 23 2003 - 10:15:41 MDT


On Tue, 20 May 2003, Keith Elis wrote:

> This post is not directed to me, but I'm jumping in anyway.
>
> Bill Hibbard:
>
> > Pointing out the difficulties does not justify not even
> > trying. Independent development of AI will be unsafe. A
> > political process is not guaranteed to solve the problem, but
> > it is necessary to at least try to stop humans who will
> > purposely build unsafe AIs for their own imagined benefit.
>
> You need to be far more specific about this. What do you mean by 'a
> political process'? Do you mean each line of code is subject to
> referendum? Surely not.

No more than every hose, fitting and wire in a nuclear
plant is subject to a referendum. Why even ask this
question?

> Perhaps the design should be agreed upon by a
> Senate subcommittee? Your insistence on this unknown process doesn't
> really take a position.

To a first approximation, this could work the way
government oversees other technology. Technical decisions
would be made by experts, reporting to elected officials.
Some elected officials would become semi-expert themselves,
and competing politicians would keep each other honest.
The critical thing in government oversight is always
that the public cares - otherwise policy is largely set
by lobbyists for corporations and other special
interests. As machines increasingly exhibit intelligence,
the public will remember all the books and movies about
the dangers of intelligent machines and care.

> A broad governmental policy with research guidelines that encourage
> Friendly AI (perhaps coupled with an offer of Manhattan Project funding
> to the most ethical researchers) *might* help. You admit that a
> political process is not guaranteed to help Friendly AI. It probably
> won't even come close. Friendly AI and compromise do not co-exist.

Technology in general is friendlier in democracies than
in non-democracies. Politics without compromise is
totalitarianism.

> > Regulation will make it more difficult for those who want
> > to develop unsafe AI to succeed.
>
> Please be more specific. The regulatory process is cumbersome and slow,
> and mostly reactive. Good ideas are rarely implemented except as
> solutions to mature problems. The mature problems of this domain are the
> ones you just don't have time to react to.

As I say in my book, one challenge of the singularity
is to get the public engaged in the issue early. They
have been somewhat primed by science fiction stories
about intelligent machines, and they see computers
playing an increasing role in their lives. There will be a
period of years from the time when machines start
surprising people with their intelligence until the
singularity. That will be the critical time to inform
the public and politicians about the issues. When the
public gets excited, politicians get excited, and
politicians naturally reach out to experts. It is
encouraging that Ray Kurzweil has already testified
before Congress about machine intelligence.

I think there will be considerable concern among the
public and politicians about the dangers of machine
intelligence. There will be a debate with a wide
spectrum of opinions. The key will be to get a good
policy and regulatory mechanism in place before the
singularity really takes off. Not all government
agencies are slow. For example, the Centers for
Disease Control generally do a good job of
containing disease outbreaks.

> > The legal and trusted AIs
> > will have much greater resources available to them and thus
> > will probably be more intelligent than the unregulated AIs.
> > The trusted AIs will be able to help with the regulation
> > effort. I would trust an AI with reinforcement values for
> > human happiness more than I would trust any individual human.
>
> Are you talking about tool-level AGIs or >H AGIs? In the latter case, do
> you really think a >H AGI would make laws the way we do? It's possible,
> but even I can think of ways to establish a much larger degree of
> control over the things I would need control over.

What do you mean by ">H AGI"? Google couldn't find it.

I am not suggesting that AGIs make laws any more than
nuclear plant inspectors make laws. Of course, at some
stage of the singularity the whole concept of law will
change radically.

> > It really comes down to who you trust. I favor a broad
> > political process because I trust the general public more
> > than any individual or small group.
>
> Most people aren't geniuses. What's even worse, most people deduce
> ethics from qualia. Average intelligence and utilitarian ethics might
> get you a business degree, but this is not the caliber of people that
> need to be working on AGI.

Democracies have a good track record of employing
their best scientists and technologists on their
hardest problems. Harvard, Yale, Stanford and the
other great universities are in fierce competition
to enroll smart poor kids, and this situation was
created by pressure from citizens and their elected
government. It is in non-democracies that you find
the dingbat relatives of politicians screwing up
science policies.

> > Of course, democratic
> > government does enlist the help of experts on technical
> > questions, but ultimate authority is with the public.
>
> What public? Do you mean the 50% of American citizens of voting age that
> actually cast a ballot? Or do you mean the rest of the world, too?

The fact that we live in a world of multiple nations
and great disparities in wealth poses some interesting
questions for how society approaches the singularity.
I discuss this a bit in my book. The reality is that
AGI will first appear in wealthy countries. Hopefully
the wealth that AGI creates will motivate some
generosity toward poorer countries (wealthy countries
already provide aid to poorer countries). I am also
hopeful that banning weapons based on intelligent
machines will attract wide public support, which will
help motivate international cooperation on AI safety
and help the public understand its broader issues.

> > When
> > you say "AI would be incomprehensible to the vast majority of
> > persons involved in the political process" I think you are
> > not giving them enough credit. Democratic politics have
> > managed to cope with some pretty complex and difficult problems.
>
> This is not directed to me, but can you name some of them that approach
> the complexity and difficulty of AGI?

There has never been a problem as complex as AI, but the
ability of society to cope is constantly increasing. For
its time in history, the problem of defeating the Nazis
and fascists was pretty complex and difficult. The
democracies have done a good job in the fight against
disease during the 20th century, which is certainly a
complex and difficult problem. They have also reduced
deaths from the other two mass killers: famine and war
(of course these killers are still around, but
percentage-wise they kill many fewer people than they
used to - Steven Pinker made a point of this during
his talk at UW last year).

And I can't leave out one of my favorite contributions of
democratic politics: development and free distribution to
the world of Vis5D and VisAD (thanks to support from the
US, Europe and Australia) ;)

Bill


