Re: Friendly Existential Wager

From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Jun 28 2002 - 14:07:54 MDT


At 03:25 PM 6/28/2002 -0400, Mark Walker wrote:
>Pascal's wager, you'll recall, goes roughly like this. 1. If there is a God
>and you are a believer, then up you go to heaven. (Best outcome). 2. If
>there is no God and you are a believer then little is lost. 3. If there is no
>God and you are not a believer then little is gained. 4. If there is a God
>and you don't believe then down you go to hell to eat flaming shit. (Worst
>outcome).
>
>E.Y. thinks Friendliness first, B. G. thinks AGI first. Who is right?
>Suppose we don't know. How should we act? Well, either attempting to design
>for Friendliness before AGI will be effective in raising the probability of
>a good singularity, or it will not. The outcomes, from best to worst, are as follows:
>
>1. We believe (and act as if) Friendliness should come first and it is true
>that Friendliness should come first. (Best outcome).
>2. We believe (and act as if) Friendliness should come first and it is false
>that Friendliness should come first. (Slightly negative).
>3. We believe (and act as if) Friendliness should not come first and it is
>false that Friendliness should come first. (Slightly positive outcome).
>4. We believe (and act as if) Friendliness should not come first and it is
>true that Friendliness should come first. (Worst outcome).
>
>1 is best because we have effectively raised the probability of a good
>singularity. Of course we should not make light of 2 and 3. If it takes X
>years to figure out Friendliness and it is inefficient to focus on it now
>(as B.G. maintains), then a lot of time (= a lot of lives) could be wasted.
>With 4, at best we squander our opportunity to raise the probability of a
>good singularity; at worst we are responsible for a bad singularity
>(an unforeseen hard takeoff). Thus, given our uncertainty and what is at
>stake, I think we should act as if Friendliness should come first.
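A minimal sketch of the wager arithmetic being quoted above, with purely
illustrative payoff numbers chosen only to respect the stated ordering
(outcome 1 best, 2 slightly negative, 3 slightly positive, 4 worst); the
probability that Friendliness really must come first is assumed, not given:

    # Illustrative payoffs; only the ordering matters, not the magnitudes.
    payoff = {
        ("friendliness_first", True):   10,   # outcome 1: best
        ("friendliness_first", False):  -1,   # outcome 2: slightly negative
        ("agi_first",          False):   1,   # outcome 3: slightly positive
        ("agi_first",          True):  -100,  # outcome 4: worst
    }

    def expected_value(strategy, p_friendliness_needed):
        """Expected payoff of a strategy given the (unknown) probability
        that Friendliness really should come first."""
        return (p_friendliness_needed * payoff[(strategy, True)]
                + (1 - p_friendliness_needed) * payoff[(strategy, False)])

    for p in (0.1, 0.5, 0.9):
        ev_f = expected_value("friendliness_first", p)
        ev_a = expected_value("agi_first", p)
        print(f"p={p}: friendliness-first EV={ev_f:+.1f}, AGI-first EV={ev_a:+.1f}")

With any payoffs shaped like these, even a modest probability that
Friendliness is needed makes the Friendliness-first strategy come out ahead,
which is the force of the quoted argument.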

Pascal's wager makes perfect sense where only a single individual is
involved. If *I* don't believe and God exists, *I* go to hell. It is not the
case that if *we* don't believe in God *I* go to hell, nor that if *I* don't
believe in God *we* go to hell. If only one person (or one team) were working
on the AI/Singularity problem, then the obvious logical course of action
would be #1. This is not the case, though. Multiple people/teams can pursue
the different possibilities in parallel. This is actually the best course of
action, assuming we can prevent a hard takeoff, since we don't know which
answer is correct. To illustrate this, consider this rule applied to
Pascal's wager: as long as at least one person believes in God, everyone
goes to heaven (if God exists).
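A minimal sketch of that parallel-pursuit point, under the same illustrative
labels as above and the stated assumption that a hard takeoff can be held
off long enough for the Friendliness work to matter:

    def portfolio_outcome(teams, friendliness_needed):
        """Rough qualitative outcome for a set of teams working in parallel."""
        if friendliness_needed:
            # One team on Friendliness covers the dangerous case, like
            # "one believer saves everyone" in the modified wager.
            return "good" if "friendliness_first" in teams else "worst"
        else:
            # Friendliness work wasn't needed; AGI-first teams still progress.
            return "good" if "agi_first" in teams else "slightly negative"

    teams = ["friendliness_first", "agi_first", "agi_first"]
    for world in (True, False):
        print(f"Friendliness needed={world}: {portfolio_outcome(teams, world)}")

The mixed portfolio avoids the worst cell of the matrix in either world,
which neither single-strategy bet can claim.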

James Higgins


