Singularity Objections: Friendliness, alternatives

From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:40:37 MST


 Alternatives to Friendliness
Couldn't AIs be built as pure advisors, so they wouldn't do anything themselves?

The problem with this argument is the inherent slowness of all human
activity: things are far more efficient when humans can be cut out of
the loop and the system can formulate objectives and carry out
decisions on its own. Consider, for instance, two competing
corporations (or nations), each with its own advisor AI that only
carries out the missions it is given. Even if the advisor collected
all the information for the humans (a dangerous situation in itself),
the humans would still have to spend time deciding how the AI should
act in response to that information. If a competitor turned over all
control to its own, independently acting AI, that AI could react much
faster than one that relied on humans to give it all its assignments.
The temptation to build an AI that can act without human intervention
would therefore be immense.

Also, numerous people would want an independently acting AI for a
simple reason: an AI built only to carry out goals given to it by
humans could be used for vast harm, while an AI built to genuinely
care for humanity could act in humanity's best interests in a neutral,
unbiased fashion. In either case, the motivation to build
independently acting AIs exists, and as computing power becomes
cheaper, it will become easier for even small groups to build them.

It doesn't matter if an AI's Friendliness could trivially be
guaranteed by giving it a piece of electronic cheese, if nobody cares
enough about Friendliness to think of giving it the cheese, or if
giving the cheese costs too much relative to what could be achieved
otherwise. Any procedure that relies on handicapping an AI enough to
make it powerless also handicaps it enough to severely restrict its
usefulness to most potential funders. Eventually somebody will choose
not to handicap their own AI, and then the guaranteed-to-be-harmless
AI will end up dominated by the more powerful one.

    * A human upload would naturally be more Friendly than any AI.
          o Rebuttal synopsis: Just look at the atrocities committed
by people with unlimited power, and ask how Friendly that looks.
    * Trying to create a theory which absolutely guarantees Friendly
AI is too unrealistic and ambitious a goal; it would be better to
attempt a theory of "probably Friendly AI".
          o Rebuttal synopsis: This is probably true.
    * We should work on building a transparent society where no
illicit AI development can be carried out.
          o Rebuttal synopsis: That is a good goal, and worth pursuing
simultaneously with actual Friendliness development.

 - TOm



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT