Re: Why friendly AI (FAI) won't work

From: Kaj Sotala (xuenay@gmail.com)
Date: Thu Nov 29 2007 - 12:09:46 MST


On 11/29/07, Harry Chesley <chesley@acm.org> wrote:
> Robin Lee Powell wrote:
> > OMFG has that topic been done to death. Read the archives on AI
> > boxing.
>
And nothing that I've read about it has yet convinced me. What I've seen ...

I'm not entirely sure I agree with the traditional AI boxing
arguments, either - even if boxing weren't a bulletproof method of
ensuring a Friendly AI, it'd sure be a heck of a lot *easier* to
build an AI that was content to remain in its box than to build an
AI that knew the morally correct course of action in every
situation.

However, even if you reject the AI boxing arguments, there's still a
crucial flaw in the idea of just restricting the AI's output, one
which I find much easier to accept. Quoted from
http://www.saunalahti.fi/~tspro1/objections.html#advisors :

"Objection 13: Couldn't AIs be built as pure advisors, so they
wouldn't do anything themselves? That way, we wouldn't need to worry
about Friendly AI.

Answer: The problem with this argument is the inherent slowness of
all human activity - things are much more /efficient/ if you can cut
humans out of the loop and let the system carry out decisions and
formulate objectives on its own. Consider, for instance, two
competing corporations (or nations), each with its own advisor AI
that only carries out the missions it is given. Even if the advisor
were the one collecting all the information for the humans (a
dangerous situation in itself), the humans would still have to spend
time deciding how to have the AI act in response to that
information. If the competitor had turned over all control to its
own, independently acting AI, that AI could react much faster than
one that relied on humans to give it all its assignments. The
temptation to build an AI that could act without human intervention
would therefore be immense.

Also, there are numerous people who would /want/ an independently
acting AI, for the simple reason that an AI built only to carry out
goals given to it by humans could be used for vast harm - while an
AI built to actually care for humanity could act in humanity's best
interests, in a neutral and bias-free fashion. In either case, then,
the motivation to build independently acting AIs is there, and the
cheaper computing power becomes, the easier it will be for even
small groups to build them.

It doesn't matter if an AI's Friendliness could trivially be
guaranteed by giving it a piece of electronic cheese, if nobody
cares about Friendliness enough to think of giving it some cheese,
or if giving the cheese costs too much in terms of what you could
achieve otherwise. Any procedure which relies on handicapping an AI
enough to make it powerless also handicaps it enough to severely
restrict its usefulness to most potential funders. Eventually there
will be somebody who chooses not to handicap their own AI, and then
the guaranteed-to-be-harmless AI will end up dominated by the more
powerful one."

-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
Organizations worth your time:
http://www.intelligence.org/ | http://www.crnano.org/ | http://lifeboat.com/

