Re: Suicide by committee (was: How hard a Singularity?)

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Jun 27 2002 - 14:01:24 MDT


At 03:38 PM 6/27/2002 -0400, Eliezer S. Yudkowsky wrote:
>James Higgins wrote:
> >
> > God damn it. Do you speak English? Do you actually READ what I type?
> > How many times do I have to say that NO ONE wants to turn the problem
> > over to committee! As Ben put it, this would be an advisory board.
> > What, you're telling us neither you nor anyone working on such
> > things could even benefit from advice from your peers?
>
>Ahem. Do YOU actually read what you type?

Yes, I am very self-aware.

>James Higgins wrote:
> >
> > Ideally, I think deployment (kick off) of a Singularity project would be
> > impossible without the agreement of this group. (The keys would not be
> > in the possession of the developers.) All 10 people would have to agree
> > in order to launch a Singularity attempt. Ideally this same group would
> > oversee all potential Singularity projects, so that they could analyze,
> > compare and pick the one with the best potential to be launched.
>
>ADVISORY BOARD, MY LEFT FOOT.
>
>I understand that you later, in addition to the suggestion above, floated
>a modified suggestion for an advisory board. Just to be absolutely clear,
>I am panicking over the suggestion above and not the suggestion of an
>advisory board. SIAI has already considered creating an advisory board.

Ah, I see. Glad we're finally discussing it. If you had raised this point
earlier, instead of "panicking" (your word), we could have cleared this up
quite easily. This, once again, is why I don't think intelligence directly
equates to wisdom. I'm not saying you have no wisdom, Eliezer, just that
you don't show as much of it on SL4 as I would like. Please see the reply
I just posted (in response
to this exact same quote you used in your reply to Ben) for clarification
on my viewpoint.

> >> My interest is *not* in convincing people that solutions will work. I
> >> want a solution that *does work*. I suppose, as a secondary goal, that
> >> I want people to know the truth, but that is not primary; solving the
> >> problem is primary. It is not supposed to be persuasive, it is
> >> supposed to ACTUALLY WORK. Lose sight of that and game over.
> >
> > Yes, but have you actually considered the idea that you could be wrong?
> > That YOUR ideas may not work?
>
>Oh, for the love of -
>
>Would I have spent O(N^2) hours working out a goal system architecture
>that lets an AI sanely spot and fix architectural errors in its own goal
>system if I thought I was infallible?
>
>Now ask me whether I think your "not a committee, just 10 bickering people
>with individual veto power over the Singularity" idea is going to work.

I think the advisory board would be a good thing and would help. The
committee I originally suggested probably would not work, but its real goal
was to try to rein you in a bit (since you seemed to be getting more and
more detached from reality). :)

> > Have you considered what would happen if
> > this were the case? As they say, two heads are better than one (which,
> > btw, I very much agree with). So having a few different people all
> > working on the Friendliness problem would be highly beneficial. If for
> > no other reason than it would give all of them alternate ideas to look at
> > and think about, which may then improve their own designs.
>
>CFAI is on the Web.

?

I'm not certain how that relates to the point that having more solutions to
choose from, and to take ideas from, would be better.

> >> I know a *lot* of AI morality concepts that sound appealing but are
> >> utterly unworkable. For that reason, above all else, I am scared to
> >> death of committees. It seems very predictable what the result will be
> >> and it will be absolutely deadly.
> >
> > Would you please get off the committee thing. You're sounding like a
> > broken record because you just keep repeating the same point, over and
> > over. Advisory board, not committee (I should NEVER have called it such
> > - my bad). Everyone can always use some good advice.
>
>Okay, if the resolution of this is that you call for the creation of the
>advisory board and the earlier committee concept gets sunk, buried, and
>burned, then I'd accept that as a temporary "win" for the good guys. Not
>that the notion isn't going to shamble forth from its grave and have to be
>defeated again every six months.

See, we do agree (at least sometimes). That is all I wanted you to agree
with. If you can convince me that you're not egomaniacal and don't have a
self-important viewpoint, then I'll work just as hard as you to defeat the
idea of "The Committee" should it shamble forth from its grave...

> > It seems to be inherently difficult to convince you that you're not God
> > and shouldn't personally be making all the decisions that will
> > permanently seal the fate of the human race.
>
>If I thought that I'd be advocating the idea of SIAI having veto power
>over all AI projects, on the basis of our greater expertise, dedication,
>altruism, and understanding of the Singularity. *NOBODY* should have that
>power. Not me, not your committee, not anyone. Once we start fighting
>over who gets to give orders instead of working out the best engineering
>solution, the main issue has been lost.
>
>No, I'm not God. Why would your ten people with veto power be God? What
>exactly has been gained here? Maybe you need to stew over the problem and
>let it slowly drive you insane for a few weeks, rather than picking the
>first solution that comes to mind.

They wouldn't be God, of course. Although it strikes me that creating the
Singularity is as close to playing God as one can get. But I digress.

James Higgins
