# Re: When does it pay to play (lottery)?

From: maru (marudubshinki@gmail.com)
Date: Mon Jan 24 2005 - 16:27:06 MST

W/r/t the lottery: the initial chance doesn't really matter. What
matters is whether, when you plug that initial chance and the possible
payoff into your utility formula, the expected utility is high enough to
justify the expense and opportunity costs.
As far as AI goes, I think (you don't have to agree with me here) that,
barring existential threats or a wholesale abandonment of tech, AI is
pretty close to a certainty: as others have pointed out, within the next
100 years it will be entirely feasible to brute-force it. So the
relevant probability is the chance that an FAI will be first, and that
FAI is possible at all. Do you think the odds are worse than several
hundred million to one (the most favorable odds for a multimillion
jackpot I've heard) that an FAI will be first?

~Maru

Slawomir Paliwoda wrote:

> Maru wrote:
>
>> Your first paragraph doesn't make much sense. Of course if the event
>> you are betting on fails, you get nothing or less. The events where
>> you do win compensate, probabilistically, for when you don't. That's
>> why the expected utility can never be equal to the reward of a
>> success (unless the probability is 1; but Bayesian reasoners that we
>> are, we know that assigning probability 1 is off-limits), but must
>> always be less, to compensate for the failures. So expected utility
>> *is* important.
>
>
>
> If expected utility is important when the probability of winning is
> virtually zero, then a rational person should be buying lottery
> tickets, shouldn't he?
>
>
>> If you regard the probability of a win as infinitesimal (but not 0,
>> Bayesian reasoners that we are...), then the expected utility is
>> likewise minuscule.
>> And my assessment of the chances that AI will be done, and a win is
>> had, is based on my own understanding of AI and the difficulty
>> thereof, and of the computing power available now and in the future
>> (which allows more brute-force techniques to be used, lowering the
>> difficulty of AI). I cannot believe that it is easier to win a
>> Mega-millions jackpot with any one randomly selected ticket than for
>> all of humanity to develop an AI in the next 100 years.
>
>
> AI may indeed be developed in the next 100 years, but I bet you mean
> FAI, not merely AI. So let me ask you this. Considering that the
> development of safe FAI is at least as hard as, or even harder than,
> the task of creating functional AI, and that in order for humanity to
> benefit from FAI at all, the thing must work perfectly on the first
> try, are you still convinced that the probability of winning the
> lottery is *significantly* smaller than that of creating successful
> AI and FAI?
>
> (And as for calculating the total expected utility of supporting FAI
> research, let's not forget to factor in the expected utility of