From: maru (firstname.lastname@example.org)
Date: Sun Jan 23 2005 - 12:44:50 MST
Your first paragraph doesn't make much sense. Of course if the event
you are betting on fails, you get nothing (or less). The events where you
do win compensate, probabilistically, for the ones where you don't. That's
why the expected utility can never equal the reward of a success (unless
the probability is 1; but Bayesian reasoners that we are, we never assign
a probability of exactly 1), and must always be less, to account for the
failures. So expected utility *is* important. If you regard the
probability of a win as infinitesimal (but not 0, Bayesian reasoners that
we are...), then the expected utility is likewise minuscule.
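To make the arithmetic concrete, here is a toy sketch. All the probabilities and payoffs below are made up for illustration; none of them come from this thread:

```python
# Expected utility of a simple bet: probability of winning times the
# payoff if you win. Whenever the probability is below 1, this is
# strictly less than the payoff itself -- the "discount" that
# compensates for the failure cases.
def expected_utility(p, payoff):
    return p * payoff

# Hypothetical lottery: roughly 1-in-175-million odds on a $100M jackpot.
# The expected utility comes out to well under the price of a $1 ticket.
lottery_eu = expected_utility(1 / 175_000_000, 100_000_000)

# Hypothetical AI bet: even a deliberately small probability attached to
# an enormous payoff can dominate the lottery's expected utility.
ai_eu = expected_utility(0.001, 1e12)

print(lottery_eu, ai_eu)
```

The point is only that expected utility scales with both factors at once, so a tiny probability does not by itself make the expected utility negligible -- that depends on the size of the payoff too.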
And my assessment of the chances that AI will be done, and will be a win,
comes from my own understanding of AI and its difficulty, the chance of
a win, and the computing power available now and in the future (which
allows more brute-force techniques to be used, lowering the difficulty
of AI). I cannot believe that it is easier for any one randomly selected
ticket to win a Mega Millions jackpot than for all of humanity to
develop an AI in the next 100 years.
Slawomir Paliwoda wrote:
> Taking into account expected utility of FAI seems like a good way of
> justifying the argument that it is rational to support SIAI while it
> is irrational to buy lottery tickets until one realizes that all
> potential benefits to humanity from having FAI around are entirely
> dependent on the successful event actually happening. In other words,
> a person won't benefit from FAI any more than from a winning lottery
> ticket if FAI doesn't get made and he doesn't win the lottery. In both
> cases the utility will equal zero. This makes the issue of expected
> utility of FAI completely irrelevant to the question of whether one
> should support SIAI rather than buy lottery tickets, if we treat both
> "winning" events as having virtually no chance of happening.
> So Eliezer's argument implicitly tries to convince people not to
> bother investing in anything with a low probability of success,
> regardless of the return value on the investment (otherwise we should
> be encouraged to buy lottery tickets), inadvertently also making a
> good case for not supporting SIAI, since the FAI project likewise has
> a low probability of success in the eyes of many.
>> I consider the chance of Yudkowsky et al. succeeding to be considerably
>> better (but still not acceptable) than the billions-against lottery
> Can I ask what you are basing this statement on?
>> Add in the higher return, and the expected utility is considerably
>> higher for donating to the Institute than for a lottery ticket.
> True, but, like I said, the amount of expected utility is orthogonal
> to the issue.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT