Suicide by committee (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jun 27 2002 - 04:11:39 MDT


James Higgins wrote:
>
> You don't endorse the existence of an organization to promote the meme
> of SI friendliness? Are you kidding? I thought that was YOUR ENTIRE
> GOAL! Is this only ok if you are the one doing it?

It is not my entire goal. It is nowhere near my goal. My goal is to
*actually ensure* that any seed AI built is *successfully* Friendly.

Right now I can, in all humility, speak my complete mind about Friendly AI.
This is "humility" because it means not trusting that Earth will be okay if I
start deciding on behalf of others what they ought to know. Doing this means
saying, along with reassuring true things, frightening true things,
academically out-of-fashion true things, and politically incorrect true
things.

Right now I can focus solely on creating a theory that actually works,
instead of one that looks good on paper. I don't think that's possible once
the problem is turned over to committees.

This is a Singularity problem. You cannot solve Singularity problems with
silly little human solutions like committees. All you can do is create the
illusion of effectiveness and authority, while actually involving petty
politics in the problem and thereby destroying all hope of a correct
solution. I will say it again: You cannot solve Singularity problems by
inventing committees. The inability of any human to be entrusted with AI
morality is a Singularity problem. If the question were getting people to
trust AI morality, instead of *how to actually do it* - or, more to the
point, if I were dumb enough to see the issues in those terms - then yes, I
could have "solved" this problem by creating a committee to decide on AI
morality, which would have a greater appearance of authority and
trustworthiness. But you cannot solve Singularity problems like that.
Political problems, yes; Singularity problems, no.

My interest is *not* in convincing people that solutions will work. I want
a solution that *does work*. I suppose, as a secondary goal, that I want
people to know the truth, but that is not primary; solving the problem is
primary. It is not supposed to be persuasive; it is supposed to ACTUALLY
WORK. Lose sight of that and it's game over.

I know a *lot* of AI morality concepts that sound appealing but are utterly
unworkable. For that reason, above all else, I am scared to death of
committees. The result seems very predictable, and it will be absolutely
deadly.

>> That said: This is a fucking stupid suicidal idea.
>
> Well, alrighty then. Could you please clarify your point a bit? It
> sounds like you're reacting in a completely irrational manner, heavily
> influenced by emotions. I don't see anything suicidal about promoting
> Friendliness in regard to the Singularity or trying to ensure that the
> Singularity is attempted in a reasonable and safe manner.

A Friendly AI designed by a committee? Why aren't more people panicking
over this? It sounds like the backstory of Ed Merta's "Worst-Case Scenario".

And no, I will not design a Friendly AI to please a committee either. I
will not design a Friendly AI for any purpose other than being Friendly. If
such a committee exists, I will attempt to convince it that its first duty is
to disband. It is inherently difficult to convince a committee of this,
*regardless of whether it is true*, which in itself shows that a committee
is a bad idea. Committees don't know what they don't know.

Here's an idea: Instead of convening the Committee to Fuck Up Friendly AI,
let's convene the Committee to Decide Whether the CFUFAI Should Exist in the
First Place, with a clear understanding that the members of CDWCSEFP will
probably *not* serve on CFUFAI.

Look at the mess we have right here on SL4! You can't agree over whether
CFUFAI should be a purely advisory organization, a small transhumanist
organization with real powers (enforced how?), or a government committee;
you can't agree whether or not military AI development is inherently
frightening...

The natural solution is the one we have right now. If an AI project refuses,
out of sheer pigheadedness, to listen to advisory boards that are actually
useful, then *for that reason* the project's hubris will likely be punished
with a failure in the domain of AI as well.
If a project fails to publish sufficient documentation of its Friendliness
efforts or fails to convey an adequate understanding of its reasoning, then
for that reason the project will have more difficulty finding funding (and,
more importantly, Singularity-savvy workers). If it's a commercial or
government project that can get funding anyway, then it certainly isn't
going to halt just because your little transhumanist committee says so.

The natural situation is probably as good as we're going to get. Random
people fighting over who gets to give orders to AI projects will simply make
things much, much worse. If you want to influence the Singularity, do the
moral thing and devote your entire life to doing nothing else, thereby
gaining some measure of influence according to your talent and effort. I
see no benefit for the Singularity in transforming this admittedly imperfect
situation into a human tribal fight over who gets to sit back far removed
from the actual efforts and give orders. I think it guarantees that the
final resting state will be an unworkable compromise solution designed to
please everyone and include a tidbit for every special interest. And that's
if the situation is in the hands of a committee of transhumanists. An
actual Congressional committee would either (a) completely fail to get the
point or (b) impose an outright ban on AI, guaranteed.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
