From: James Higgins (firstname.lastname@example.org)
Date: Wed Jun 26 2002 - 17:28:54 MDT
At 06:50 PM 6/26/2002 -0400, Eliezer S. Yudkowsky wrote:
>>They
>>do not have to, collectively, understand all the inner working of the
>>design. They simply have to be convinced to a reasonable degree that the
>>design, as a whole, is safe. There are many such examples of this in
>>present day life, where an entity is responsible for ensuring safety. If
>>it is impossible for a group of 10 intelligent people to agree that it is
>>safe to launch a Singularity then, frankly, it shouldn't be launched.
>I see. 10 intelligent people? Let me inquire about the selection process
>for this committee. Is it a government committee? Does it have legal
>enforcement powers? Is it sponsored by a nonprofit such as, oh,
>SIAI? Who gets to pick the guardians? I might *maybe* consider trying to
>justify matters to a majority of a committee composed of Nick Bostrom,
>Mitchell Porter, Vernor Vinge, Greg Egan, and... damn, can't think of
>anyone else offhand who isn't already on my side.
See, you have some suggestions for committee members already.
>>>It is terribly dangerous to take away the job of Friendly AI from
>>>whoever was smart enough to crack the basic nature of intelligence!
>>>Friendly AI is not as complex as AI but it is still the second hardest
>>>problem I have ever encountered. A committee is not up to that!
>>A committee may not be up to designing a Friendly AI (because design by
>>committee is slow for one) but there is no reason they could not decide
>>if a given design was SAFE.
>The task of verification is easier than the task of invention but is *not*
>easy in any absolute sense.
I never said it would be easy.
>>so why should we trust
>>whoever gets there first to make such major decisions? Just because
>>someone is INTELLIGENT enough to design an AI doesn't mean they are WISE
>>enough to use it properly. Intelligence does not equate to wisdom.
>"Intelligence does not equate to wisdom."
>How many times I've heard that...
>Do you think it's possible to build an AI without wisdom? Forget whether
>you think I'm wise. Forget whether I, personally, manage to create AI.
>Consider how many times AI projects have failed, and the reasons for which
>they failed. Consider how much self-awareness it takes, or creates, to
>reach an understanding of how minds work. Building AI feeds off
>self-awareness and, in feeding on it, hones it. If you don't believe this
>of me, fine; predict that I will fail to build AI.
>This is the task of building a mind. It isn't a small thing to succeed in
>building a real mind. I would consider it a far greater proof of wisdom
>than membership in any committee that ever existed.
I would not. Building an AI would require intelligence, not wisdom. I'm
not certain whether a person without a great deal of wisdom could construct
an AI with a great deal of wisdom, though; I haven't actually given that
much thought.
>If I'm as lousy at this job as you seem to think, then I will fail to
>build AI in the first place. The problem is harder than that.
I never said you were lousy at anything, just overconfident and possibly
egomaniacal. Nor have I ever commented on your degree of intelligence (I
do, in fact, think you're intelligent). I'm not certain that you are wise,
though, primarily because you seem far too convinced that your ideas,
above everyone else's, are correct.
>>>Then they're too big for N people to make and should be passed on to a
>>>Friendly SI or other transhuman.
>>So, how do you propose we find a Friendly SI or Transhuman to judge which
>>Singularity attempts will be safe?
>I don't see how intelligence on that particular task is any greater for N
>people than for one person, nor do I see how the moral problem gets any
>better unless you're conducting a majority vote of the entire human
>species, and even then I'm not sure it gets any better; minorities have
A majority vote of the human species would never succeed. People (and I
don't like this opinion) can't even be trusted to elect government
officials in most cases. Not to mention that I just read that only 25% of
the US population has much concept of what the Scientific Method really
is. So this would obviously not work.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT