From: Peter C. McCluskey (pcm@rahul.net)
Date: Thu Feb 14 2008 - 17:06:24 MST
rolf.h.d.nelson@gmail.com (Rolf Nelson) writes:
>>If many of the people offering resources to the
>> project don't understand the design, then there is an incentive for people
>> without serious designs to imitate serious researchers.
>
>
>You know who else has this problem? Every other AGI project on the planet.
It might happen to be true of every active AGI project at the moment.
>So I would think that's included in the prior estimate. SIAI has, at least for
The 0.01 to 0.0001 probability that I gave was my estimate of what we can
tell from the failure of projects whose approach was understood and taken
seriously by a number of people with credentials as serious AI researchers.
My impression is that SIAI hasn't described enough of a plan for such
people to form an opinion on whether it should be considered a serious
attempt to build an AGI.
>The fact that we're trying to be Friendly should drop the odds by an order
>of magnitude, but I personally have to raise it back up an order of
>magnitude based on my assessment of the SIAI team, and by a belief that
>because FAI is a better idea than UFAI, SIAI will continue to be able to
>expand the community and recruit extremely bright and motivated people, even
I'll remain sceptical of this ability to attract extremely bright people
until I see signs that it is happening.
>Now, one can say that if something really could save mankind at 200:1 odds,
>it would already have been done by someone else, so I must be overconfident.
I attach little weight to this argument.
>Now, here's another twist about asteroid detection: *if* FAI is
>impossible, then asteroid detection doesn't save humanity anyway: it just
>delays mankind's demise until someone creates a UFAI. So when you give
If you're very confident that humanity is doomed without FAI, your
conclusion is reasonable. But I see no reason for that confidence. Models
where a number of different types of AI cooperate to prevent any one AI
from conquering the world seem at least as plausible as those that imply
we're doomed without FAI.
>> I'm tempted to add in some uncertainty about whether the AI designer(s)
>> will be friendly to humanity or whether they'll make the AI friendly to
>> themselves only. But that probably doesn't qualify as an existential risk,
>> so it mainly reflects my selfish interests.
>
>
>No, you're not being selfish. That's a legitimate concern with any AGI
>project, and again one that the "do what I tell you" UFAI projects
>especially neglect.
I'm being selfish in the sense that I treat it almost the same as I treat
existential risks, when a genuine altruist would treat existential risks
as significantly more harmful.
>If you don't agree, you don't agree, but personally I converge with the FAI
>community on a number of issues. Yudkowsky used the term "playing to win"
>recently on OB with regard to decision theory and rationality, that's not an
>attitude you see most places.
I find it hard to observe what attitude people take when this issue
actually matters, so it's hard to tell how unusual his attitude is.
> The desire to attempt sometimes to adjust for
>overconfidence is fairly rare outside the FAI community. Yudkowsky and
>Vassar are both people I'm willing to put into the extremely small category
>of "people who are smarter than the people who are smarter than me."
>Yudkowsky's CEV seems the ideal type of approach for an AGI. So it's
>overdetermined that I support the current community.
I suspect CEV, if implemented as I understand it, would take long enough
(due to the CPU time needed to run it and the time needed to acquire
enough information to adequately model all humans) that it would leave
important risks unaddressed.
Eliezer may be smarter than me, but I see no sign that he's the smartest
person I know.
>> >all, if I didn't have confidence, I would try to bring about the creation of
>> >a new community, or bring about improvements of the existing community,
>>
>> Does that follow from a belief about how your skills differ from those
>> of a more typical person, or are you advocating that people accept this
>> as a default approach?
>
>
>Default. If I thought the FAI community were idiots, I wouldn't work with
>them. If I'm correct and they're idiots, it's a win because I was right to
>start a new, non-idiot community. If I'm wrong and I'm the idiot, then it's
>also a win because I got myself out of their way.
I think you overestimate the ability of a relatively typical person to
start a useful FAI community.
>If you mean "negligible" as in "the odds of winning the lottery are
>negligible", then I disagree. We live in a market economy; you could save up
>and endow a research grant to study unification. We live in a civil society;
>you might be able to convince someone else to help you, *if* you have
>compelling reasons why they should do so. But yes, some people will be more
>effective in given domains than others. You should multiply the
>effectiveness with which you can attain a goal, times the desirability of
>achieving the goal; there's no point where you should skip the multiplying
>and say "this is not my core competency, so I'm going to go skiing instead,
>even though I don't like skiing, just because I'm good at it."
I'm unsure how much of that paragraph I understand. I think my endowing
a research grant to study unification has odds of success only modestly
better than those of winning the lottery, so after multiplying the FAI
equivalent by the desirability, I end up with results that are rather
sensitive to whether I'm feeling optimistic or pessimistic this week.
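To make that sensitivity concrete, here is a minimal sketch of the
multiply-effectiveness-by-desirability calculation quoted above. The
probabilities and the normalized payoff are hypothetical placeholders,
not estimates anyone in this thread has given; the point is only that an
order-of-magnitude swing in the probability estimate swings the product
by the same order of magnitude.

    # All numbers below are hypothetical placeholders.
    def expected_value(p_success, value_if_success):
        # Rolf's rule: multiply effectiveness (probability of attaining
        # the goal) by the desirability of achieving the goal.
        return p_success * value_if_success

    VALUE = 1.0  # normalized desirability of the goal

    optimistic  = expected_value(1e-6, VALUE)  # an optimistic week
    pessimistic = expected_value(1e-7, VALUE)  # a pessimistic week

    print("optimistic week:  %.1e" % optimistic)
    print("pessimistic week: %.1e" % pessimistic)
    print("ratio: %.0fx" % (optimistic / pessimistic))

Whether the product says the effort is worthwhile thus depends heavily on
which week's probability estimate you trust.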
--
------------------------------------------------------------------------------
Peter McCluskey          | The road to hell is paved with overconfidence
www.bayesianinvestor.com | in your good intentions. - Stuart Armstrong