From: Slawomir Paliwoda (velvethum@hotmail.com)
Date: Sun Jan 23 2005 - 15:05:07 MST
Maru wrote:
> Your first paragraph doesn't make much sense. Of course if the event you
> are betting on fails, you get nothing or less. The events where you do
> win compensate, probabilistically, for the ones where you don't. That's why
> the expected utility can never equal the reward of a success (unless the
> probability is 1; but being the Bayesian reasoners we are, we know that a
> probability of exactly 1 is impossible), but must always be less, to account
> for the failures. So expected utility *is* important.
If expected utility is important when the probability of winning is
virtually zero, then a rational person should be buying lottery tickets,
shouldn't he?
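To put rough numbers on that (a minimal back-of-the-envelope sketch only; the odds, jackpot, and ticket price below are illustrative assumptions, not official Mega Millions figures):

    # Back-of-the-envelope expected-value calculation (all numbers illustrative).
    p_win = 1.0 / 175000000        # assumed odds of hitting the jackpot
    jackpot = 100000000.0          # assumed jackpot size, in dollars
    ticket_price = 1.0             # assumed ticket price, in dollars

    expected_value = p_win * jackpot - ticket_price
    print("Expected value per ticket: %.2f dollars" % expected_value)
    # Prints roughly -0.43 with these numbers: the huge prize does not rescue
    # the near-zero probability, so the expected value stays negative.

With numbers in that ballpark the expected value per ticket is negative, which is exactly why a near-zero probability of winning is not rescued by a large prize.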
> If you regard the probability of a win as infinitesimal (but not 0,
> Bayesian reasoners that we are...), then the expected utility is likewise
> minuscule.
> And my assessment of the chances that AI will be developed, and that it
> will be a win, is based on my own understanding of AI and its difficulty,
> the chance of a win, and the computing power available now and in the
> future (which allows more brute-force techniques to be used, lowering the
> difficulty of AI). I cannot believe that it is easier to win a
> Mega Millions jackpot with any one randomly selected ticket than for all of
> humanity to develop an AI in the next 100 years.
AI may indeed be developed in the next 100 years, but I bet that when you were
writing about expected utility you were thinking about the benefits of FAI, not
merely AI. So let me ask you this. Considering that developing safe FAI is at
least as hard as, or even harder than, creating functional AI, and that in
order for humanity to benefit from FAI at all, the thing must work perfectly
on the first try, are you still convinced that the probability of winning the
lottery is *significantly* smaller than that of successfully creating AI and
FAI?
(And as for calculating the total expected utility of supporting FAI
research, let's not forget to factor in the expected utility of
inadvertently creating UFAI.)
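For concreteness, here is one way that combined calculation could be sketched; every probability and utility below is an invented placeholder, not anyone's actual estimate:

    # Toy total-expected-utility calculation for supporting FAI research.
    # Every probability and utility here is an invented placeholder.
    p_fai = 0.01          # assumed chance the effort yields working FAI
    p_ufai = 0.005        # assumed chance it inadvertently yields UFAI
    u_fai = 1000000.0     # assumed utility of success (arbitrary units)
    u_ufai = -10000000.0  # assumed (large negative) utility of UFAI

    total_eu = p_fai * u_fai + p_ufai * u_ufai
    print("Total expected utility: %+.0f" % total_eu)
    # With these placeholders the UFAI term dominates and the total is -40000;
    # different estimates can flip the sign, which is the real disagreement.

Whether the total comes out positive or negative depends entirely on the estimates plugged in, which is where the real disagreement lies.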
Slawomir