From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Sun Feb 10 2008 - 15:22:12 MST
On Feb 9, 2008 11:55 AM, Peter C. McCluskey <pcm@rahul.net> wrote:
Peter, thank you for the analysis. Here's my own take, which is neither a
rebuttal nor the "FAI community's perspective", just my own personal
analysis.
> I'd say the number of smart people who have mistakenly thought they
> could create an important AI breakthrough suggests we should assume
> any one AGI effort should have a success probability somewhere around
> 0.01 to 0.0001.
If we say "total success in the next 20 years," then I will personally
estimate 0.005 for a typical AGI project.
> If many of the people offering resources to the
> project don't understand the design, then there is an incentive for people
> without serious designs to imitate serious researchers.
You know who else has this problem? Every other AGI project on the planet.
So I would think that's already included in the prior estimate. SIAI has, at
least for now, less moral hazard here than most potential corporate or
academic AGI projects, though if FAI gets "too popular" with funders this
could become a serious problem in the future. If only we had such problems. :-)
The fact that we're trying to be Friendly should drop the odds by an order
of magnitude, but I personally have to raise them back up an order of
magnitude based on my assessment of the SIAI team, and on a belief that,
because FAI is a better idea than UFAI, SIAI will continue to be able to
expand the community and recruit extremely bright and motivated people, even
compared with other AGI teams. I can't really adjust for overconfidence here,
because it's not clear to me what the beliefs and motivations are of the
majority who work on UFAI. So my own estimate is that SIAI directly saves
mankind at about 200:1 odds.
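To make the arithmetic explicit, here's a back-of-the-envelope sketch in
Python; the factors are just the rough order-of-magnitude adjustments
described above, nothing more precise:

    # Rough sketch of the estimate above (my own placeholder numbers):
    p_typical_agi = 0.005        # typical AGI project, "total success in 20 years"
    friendliness_penalty = 0.1   # trying to be Friendly drops the odds ~10x
    team_adjustment = 10.0       # my assessment of the SIAI team raises them back ~10x

    p_siai = p_typical_agi * friendliness_penalty * team_adjustment
    print(p_siai)                # 0.005, i.e. roughly 200:1 odds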
Now, one can say that if something really could save mankind at 200:1 odds,
it would already have been done by someone else, so I must be overconfident.
True in theory. But, in making the decision "should I help SIAI",
compensating for overconfidence shouldn't flip me all the way from "I should
clearly do X" to "I should not do X, because most people don't do X."
Instead, "If you have a pet project that can save everyone with 200:1 odds,
then go do it" seems like a good moral rule, even if most people who believe
that have historically been wrong.
Now, here's another twist about asteroid detection: *if* FAI is impossible,
then asteroid detection doesn't save humanity anyway; it just delays
mankind's demise until someone creates a UFAI. So when you give resources to
SpaceGuard, you're conditioning on the hope that someone else will spend
resources to (1) figure out how best to invest in FAI and then (2) actually
invest in FAI at some point.
(I don't mean to pick especially on SpaceGuard; obviously, saving mankind at
~1M:1 odds is more worthwhile than anything *I've* done with my life
to date. :-)
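To spell out that conditioning, here's a sketch; the 50% figure for FAI
eventually being solved is purely a made-up illustration, and only the
structure of the multiplication matters:

    # Sketch of the conditioning argument (numbers are illustrative only):
    p_asteroid_averted = 1e-6    # chance SpaceGuard is what saves us from an impact (~1M:1)
    p_fai_eventually = 0.5       # made-up chance that someone else later solves and funds FAI

    # SpaceGuard only "saves humanity" in the long run if FAI also gets solved;
    # otherwise it just buys time before someone creates a UFAI.
    p_spaceguard_saves_humanity = p_asteroid_averted * p_fai_eventually
    print(p_spaceguard_saves_humanity)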
> I'm tempted to add in some uncertainty about whether the AI designer(s)
> will be friendly to humanity or whether they'll make the AI friendly to
> themselves only. But that probably doesn't qualify as an existential risk,
> so it mainly reflects my selfish interests.
No, you're not being selfish. That's a legitimate concern with any AGI
project, and again one that the "do what I tell you" UFAI projects
especially neglect.
> > Perhaps my own conclusions differ from yours as follows: first of all, I
> > have confidence in the abilities of the current FAI community; and second of
>
> Can you describe reasons for that confidence?
If you don't agree, you don't agree, but personally I converge with the FAI
community on a number of issues. Yudkowsky used the term "playing to win"
recently on OB with regard to decision theory and rationality; that's not an
attitude you see in most places. The willingness to even sometimes attempt
to adjust for overconfidence is fairly rare outside the FAI community.
Yudkowsky and Vassar are both people I'm willing to put into the extremely
small category of "people who are smarter than the people who are smarter
than me." Yudkowsky's CEV seems like the ideal type of approach for an AGI.
So it's overdetermined that I support the current community.
> > all, if I didn't have confidence, I would try to bring about the creation of
> > a new community, or bring about improvements of the existing community,
>
> Does that follow from a belief about how your skills differ from those
> of a more typical person, or are you advocating that people accept this
> as a default approach?
Default. If I thought the FAI community were idiots, I wouldn't work with
them. If I'm correct and they're idiots, it's a win because I was right to
start a new, non-idiot community. If I'm wrong and I'm the idiot, then it's
also a win because I got myself out of their way.
> There are a number of tasks for which the average member of this list is
> likely to be aware that he would have negligible influence, such as unifying
> relativity with quantum mechanics.
If you mean "negligible" as in "the odds of winning the lottery are
negligible", then I disagree. We live in a market economy; you could save up
and endow a research grant to study unification. We live in a civil society;
you might be able to convince someone else to help you, *if* you have
compelling reasons why they should do so. But yes, some people will be more
effective in given domains than others. You should multiply the
effectiveness with which you can attain a goal by the desirability of
achieving that goal; there's no point where you should skip the multiplying
and say "this is not my core competency, so I'm going to go skiing instead,
even though I don't like skiing, just because I'm good at it."
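As a toy version of that multiply-don't-skip rule (the numbers here are
hypothetical, just to show why a low-effectiveness, high-stakes option can
still win):

    # Toy illustration of "multiply effectiveness by desirability" (hypothetical numbers):
    def expected_value(effectiveness, desirability):
        return effectiveness * desirability

    # A tiny chance of an enormously desirable outcome can still dominate:
    print(expected_value(1e-4, 1e9))   # 100000.0 -- long-shot, huge-stakes goal
    print(expected_value(0.5, 10.0))   # 5.0      -- the "core competency" option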