Re: How hard a Singularity?

From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Jun 26 2002 - 15:42:55 MDT


At 05:03 PM 6/26/2002 -0400, Eliezer S. Yudkowsky wrote:
>James Higgins wrote:
> > At 03:59 PM 6/26/2002 -0400, Eliezer S. Yudkowsky wrote:
> > I never said they could, or should, DESIGN anything. Simply approve
> > designs.
>
>I think that handing something to a committee imposes an upper limit on the
>intelligence of the resulting decisions. Committees can be smart but they
>cannot be geniuses. If Friendly AI requires genius, then turning over the
>problem to a committee guarantees failure, just as it would for the problem
>of AI itself.

Aargh, this is frustrating.

The committee is there for RISK MANAGEMENT, which should very much be done
thoroughly on a task as consequential as creating a Singularity. They do not
have to, collectively, understand all the inner workings of the
design. They simply have to be convinced to a reasonable degree that the
design, as a whole, is safe. There are many examples of this in present-day
life, where an entity is responsible for ensuring safety. If it is
impossible for a group of 10 intelligent people to agree that it is safe to
launch a Singularity then, frankly, it shouldn't be launched.

> > I believe your goal, Eliezer, is to make the Singularity as friendly and
> > safe as possible, is it not? If so you should welcome such a committee
> > as a way to ensure that the safest and most friendly design is the one
> > launched.
>
>I should NOT welcome such a committee unless I believe the ACTUAL EFFECT of
>such a committee will be to ensure that the safest and most friendly design
>is launched. Friendly AI design is not as complex as AI design but it is
>still the second most complicated thing I have ever encountered in my life.
> I would trust someone who built an AI to make it Friendly. I would not
>trust a committee to even understand what the real nature of the problem
>was. I would trust it to spend its whole time debating various versions of
>Asimov Laws, never moving on the issue of structural Friendliness.

So, Eliezer, you're saying that if YOU were appointed to such a committee
you would suddenly stop thinking rationally and start spouting off Asimov
Laws and such? You think we should throw darts at the white pages to pick
the members of the committee or something? You're making my case for me
here as to why a single individual should not be trusted with this decision.

> > You should under no circumstances fear such a committee since, if you
> > really are destined to engineer the Singularity, the committee would
> > certainly concede that your design was the best when it was presented to
> > them.
>
>That's outright silly. One, I don't think that destiny exists in our
>universe, so I can't have one. Two, there is no reason why a committee
>would be capable of picking the best design when the problem is inherently
>more complex than the intelligence of a committee permits. The committee
>will pick out a set of Asimov Laws designed by Marvin Minsky in accordance
>with currently faddish AI principles. If the committee has to build their
>own AI, they'll pick a faddish design and fail. I will not provide an AI
>for them if they are not smart enough to build it themselves.

Careful, you're starting to look like little more than an egomaniac,
Eliezer. Using irrational arguments to defend the position that you alone
should be free to decide the fate of the human race as a whole won't work
forever.

>The fact that, at this moment, it takes (I think) substantially more
>intelligence to *build* an AI, at all, than to build a Friendly AI, is one
>of the few advantages that humanity has in this - although Moore's Law is
>slowly but steadily eroding that advantage. I have not and never will
>propose that SIAI (a 501(c)(3) nonprofit) be given supervisory capacity
>over the Friendliness efforts of other AI projects, regardless of whether
>future circumstances make this a plausible outcome.
>
>It is terribly dangerous to take away the job of Friendly AI from whoever
>was smart enough to crack the basic nature of intelligence! Friendly AI
>is not as complex as AI but it is still the second hardest problem I have
>ever encountered. A committee is not up to that!

A committee may not be up to designing a Friendly AI (design by committee
is slow, for one thing), but there is no reason it could not decide whether
a given design was SAFE. You seem rather convinced that human beings can't
be trusted to make their own decisions (based on the post-Singularity
speculation you've posted), so why should we trust whoever gets there first
to make such major decisions? Just because someone is INTELLIGENT enough
to design an AI doesn't mean they are WISE enough to use it
properly. Intelligence does not equate to wisdom.

> >> Sometimes committees are not very smart. I fear them.
> >
> > I don't like committees either, and I can understand why you, in
> > particular, would fear such a committee. It would take away your ability
> > to single handedly, permanently alter the fate of the human race. Which
> > is exactly why such a committee would be a good thing. Such decisions are
> > too big for any one person to make.
>
>Then they're too big for N people to make and should be passed on to a
>Friendly SI or other transhuman.

So, how do you propose we find a Friendly SI or transhuman to judge which
Singularity attempts will be safe? Since the decision would need to be
made prior to the existence of any Friendly SIs or transhumans, that would
seem to be quite difficult.

>Friendly AI is a test of intelligence. If the minimum intelligence to
>crack Friendly AI is more than the maximum intelligence of a committee,
>turning the problem over to a committee guarantees a loss.

Neither Friendly AI nor the Singularity is a TEST of any kind. Nor is
either a competition! No one should be in a race to create the Singularity
in order to prove anything. Such thinking will certainly be the demise of
us all.

James Higgins


