From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Mon Feb 18 2008 - 17:40:57 MST
> The 0.01 to 0.0001 probability that I gave was my estimate of what we can
> tell from the failure of projects whose approach was understood and taken
> seriously by a number of people with credentials as serious AI
> researchers.
> My impression is that SIAI hasn't described enough of a plan for such
> people to form an opinion on whether it should be considered a serious
> attempt to build an AGI.
So the mean probability of success for a current AGI project that other
people are funding, but that doesn't claim to be understood by at least
three academic AI professors, is even less than the range you gave (that is,
orders of magnitude less than even one-in-ten-thousand)? That's probably
the main area where we disagree, then.
> If you're very confident that humanity is doomed without FAI, your
> conclusion is reasonable. But I see no reason for that confidence. Models
> where a number of different types of AI cooperate to prevent any one AI
> from conquering the world seem at least as plausible as those that imply
> we're doomed without FAI.
Is there a specific model you're talking about? It doesn't matter whether
the architecture is diverse or monolithic as long as it works, so are you
bringing this up because you believe a friendly outcome will somehow emerge
naturally, without anyone having to lift a finger to design friendliness into
the total system? In that case, that may be another core area of
disagreement between you and me.
-Rolf