From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 26 2002 - 16:50:13 MDT
James Higgins wrote:
>
> Aargh, this is frustrating.
>
> The committee is there for RISK MANAGEMENT, a task which should very
> much be done thoroughly on something like creating a Singularity.
A task at which committees are known to abjectly suck.
> They
> do not have to, collectively, understand all the inner workings of the
> design. They simply have to be convinced to a reasonable degree that
> the design, as a whole, is safe. There are many examples of this
> in present-day life, where an entity is responsible for ensuring
> safety. If it is impossible for a group of 10 intelligent people to
> agree that it is safe to launch a Singularity then, frankly, it
> shouldn't be launched.
I see. 10 intelligent people? Let me inquire about the selection process
for this committee. Is it a government committee? Does it have legal
enforcement powers? Is it sponsored by a nonprofit such as, oh, SIAI? Who
gets to pick the guardians? I might *maybe* consider trying to justify
matters to a majority of a committee composed of Nick Bostrom, Mitchell
Porter, Vernor Vinge, Greg Egan, and... damn, can't think of anyone else
offhand who isn't already on my side.
> So, Eliezer, you're saying that if YOU were appointed to such a committee
> you would all of a sudden stop thinking rationally and start spouting
> off Asimov Laws and such? You think we should throw darts at the white
> pages to pick the members of the committee or something? You're making my
> case for me here as to why a single individual should not be trusted
> with this decision.
I think that I would not be altered by my appointment to such a committee,
but I have no confidence in my ability to keep a committee thinking
rationally. I can solve a problem posed by Nature because Nature is not as
perverse as humans. I can *maybe* explain matters to a committee composed
of the smartest people I know, but I make no guarantees.
>> It is terribly dangerous to take away the job of Friendly AI from
>> whoever was smart enough to crack the basic nature of intelligence!
>> Friendly AI is not as complex as AI but it is still the second hardest
>> problem I have ever encountered. A committee is not up to that!
>
> A committee may not be up to designing a Friendly AI (because design by
> committee is slow, for one) but there is no reason they could not decide
> if a given design was SAFE.
The task of verification is easier than the task of invention but is *not*
easy in any absolute sense.
> You seem rather convinced that human
> beings can't be trusted to make their own decisions (based on
> post-Singularity speculation you've posted)
Eh? What? I think that there's a *possibility* that you won't be able to
trust *all* of the people *all* of the time post-Singularity, and I've
posted arguments that this need not be a catastrophe.
> so why should we trust
> whoever gets there first to make such major decisions? Just because
> someone is INTELLIGENT enough to design an AI doesn't mean they are WISE
> enough to use it properly. Intelligence does not equate to wisdom.
Ah.
"Intelligence does not equate to wisdom."
How many times I've heard that...
Do you think it's possible to build an AI without wisdom? Forget whether
you think I'm wise. Forget whether I, personally, manage to create AI.
Consider how many times AI projects have failed, and the reasons for which
they failed. Consider how much self-awareness it takes, or creates, to
reach an understanding of how minds work. Building AI feeds off
self-awareness and, in feeding on it, hones it. If you don't believe this
of me, fine; predict that I will fail to build AI.
This is the task of building a mind. It isn't a small thing to succeed in
building a real mind. I would consider it a far greater proof of wisdom
than membership in any committee that ever existed.
If I'm as lousy at this job as you seem to think, then I will fail to build
AI in the first place. The problem is harder than that.
>> Then they're too big for N people to make and should be passed on to a
>> Friendly SI or other transhuman.
>
> So, how do you propose we find a Friendly SI or Transhuman to judge
> which Singularity attempts will be safe?
I don't see how intelligence on that particular task is any greater for N
people than for one person. Nor do I see how the moral problem gets any
better unless you're conducting a majority vote of the entire human species,
and even then I'm not sure it does; minorities have rights too.
>> Friendly AI is a test of intelligence. If the minimum intelligence to
>> crack Friendly AI is more than the maximum intelligence of a
>> committee, turning the problem over to a committee guarantees a loss.
>
> Neither Friendly AI nor the Singularity is a TEST of any kind. Neither
> is it a competition! No one should be in a race to create the
> Singularity to prove anything. Such thinking will certainly be the
> demise of us all.
The Singularity is not a race, but it is a test. All problems posed by
Nature are tests - goals, problems, challenges, whatever you wish to call
them. I'm not sure what dreadful connotations this has for you, but it seems
pretty innocent to me. Friendly AI is a problem domain in which the qualities
most in demand for success are creative intelligence, reflectivity, altruism,
and good intentions for the Singularity, *in that order*.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence