Re: Friendly Existential Wager

From: James Higgins (jameshiggins@earthlink.net)
Date: Fri Jun 28 2002 - 14:59:51 MDT


At 04:25 PM 6/28/2002 -0400, Mark Walker wrote:

>----- Original Message -----
>From: "James Higgins" <jameshiggins@earthlink.net>
> >
> > Pascal's wager makes perfect sense where only a single individual is
> > involved. If *I* don't believe and God exists, *I* go to hell. This is
> > not if *we* don't believe in God *I* go to hell, nor if *I* don't
> > believe in God *we* go to hell. If only one person (or one team) were
> > working on the AI/Singularity problem then the obvious logical course
> > of action would be #1. This is not the case, though. Multiple
> > people/teams can pursue the different possibilities in parallel. This
> > is actually the best course of action, assuming we can prevent a hard
> > takeoff, since we don't know which answer is correct. To illustrate
> > this, consider this rule applied to Pascal's wager: as long as at
> > least one person believes in God, everyone would go to heaven (if God
> > exists).
> >
>I hope you are right; however, I fear that the parallel may run the other
>way: that as long as one person does not believe in God we are all damned.
>Why? If a group races ahead and does not attempt to implement Friendliness
>then all may be lost. (I should clarify that we need only assume this is a
>necessary condition, not a sufficient condition, for averting disaster.)

Ah, but that is a completely different issue. If a single person/team that
lacks sufficient wisdom to implement the necessary safeguards gets to
trans-human AI first, we are surely doomed. This has been part of my point
on here the last week or so.

However, any research prior to hard takeoff is good research. The key is
to keep a Singularity from being initiated before we (in a broad sense) are
ready to proceed in the best manner possible.

James Higgins



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT