From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun Jan 23 2005 - 09:37:15 MST
Eliezer Yudkowsky wrote:
>
> Undoubtedly some will reply, "Given the potential payoff, you should buy
> that lottery ticket even if the odds were a billion to one." (Since
> what's wrong with the lottery ticket isn't just the tiny odds, but that
> the payoff doesn't match the tiny odds, especially when the diminishing
> marginal utility of money is taken into account. Otherwise, there'd be
> an industry in millions of people clubbing up to buy a sufficient number
> of lottery tickets, then dividing the proceeds.)
>
> I do not answer in this way, because I am on the inside, not the
> outside, of SIAI. I do not think that a billion-to-one chance of
> humanity surviving is acceptable. I do not think that a hundred-to-one
> chance is acceptable. I would not be investing my effort in SIAI unless
> I thought that, given some set of actions cognitively reachable to me,
> the odds could reach at least one-in-ten and preferably five-in-ten. I
> may not regard these odds as fixed because they depend on my actions.
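To put rough numbers on the quoted lottery point: the trouble isn't only the
tiny odds, it's that the expected payoff doesn't cover the ticket price, and
diminishing marginal utility makes the gamble worse still. A minimal sketch
in Python, with a made-up price, jackpot, odds, and bankroll chosen purely
for illustration:

    import math

    # Toy expected-value / expected-utility check on a single lottery ticket.
    # Price, jackpot, odds, and bankroll are invented illustration figures.
    price = 1.0                  # ticket price in dollars
    jackpot = 100_000_000.0      # payoff if the ticket wins
    p_win = 1.0 / 300_000_000    # chance the ticket wins
    wealth = 50_000.0            # hypothetical buyer's bankroll

    # Raw expected monetary value of the ticket: already negative.
    ev = p_win * jackpot - price
    print("expected monetary value:", ev)                 # about -0.67

    # Under diminishing marginal utility (log utility of wealth), buying
    # looks even worse than the raw expected value suggests.
    eu_buy = (p_win * math.log(wealth - price + jackpot)
              + (1.0 - p_win) * math.log(wealth - price))
    eu_skip = math.log(wealth)
    print("buying beats not buying:", eu_buy > eu_skip)   # False

If the payoff did match the odds - a positive expected value - then pooling
millions of tickets would wash out the variance and beat the marginal-utility
penalty, which is the clubbing-up industry the quoted paragraph notes we
don't see.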
A clarification:
These are my ass numbers on what I think are the best achievable *complete*
odds of success including all adversities (floods, wildfires, killer bees,
nanowar, an AGI project crossing the finish line before an FAI project, and
failure of Friendliness). They are *not* meant to describe the achievable
odds of Friendliness|AGI-success. As I indicated before, I think that we
ought to be able to get a theoretical guarantee on that. Even in fallible
human practice, a theoretical guarantee ought to drive the conditional
probability of failure at that junction down into the range of ten percent,
maybe even as low as 1 in 300 - not historically out of the question for an
engineering project based on clearly understood principles, with engineers
given free rein to implement multiple rings of safety margin.
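To make the distinction concrete, here is a rough sketch of how a conditional
failure rate at the Friendliness junction combines with the other adversities
into complete odds. Only the Friendliness figures (10% conditional failure,
or 1 in 300 in the better case) come from the paragraph above; the other two
probabilities are placeholders I am inventing purely for illustration:

    # Toy decomposition of the *complete* odds of success into a chain of
    # factors; the non-Friendliness numbers are invented placeholders.
    p_no_external_catastrophe = 0.80   # placeholder: floods, wildfires, killer bees, nanowar
    p_fai_finishes_first      = 0.40   # placeholder: FAI crosses the finish line before an AGI project
    p_friendliness_given_agi  = 0.90   # 1 - 0.10; or 1 - 1/300 with a theoretical guarantee

    p_complete = (p_no_external_catastrophe
                  * p_fai_finishes_first
                  * p_friendliness_given_agi)
    print(p_complete)   # about 0.29 - in the one-in-ten to five-in-ten range

Swapping in the 1-in-300 conditional failure rate only moves the product from
about 0.29 to about 0.32; the complete odds are dominated by the other
factors, which is why they are a different quantity from the conditional odds
of Friendliness given AGI success.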
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence