From: Norman Noman (email@example.com)
Date: Mon Jan 12 2009 - 02:23:02 MST
On Mon, Jan 5, 2009 at 4:59 AM, Stathis Papaioannou <firstname.lastname@example.org> wrote:
> 2009/1/5 Norman Noman <email@example.com>:
> > Of course, here we have just two AIs with relatively simple
> > goals. In real life it would be dizzyingly complicated. But, I contend,
> > less significant. A cooperation between all potential powers, in proportion
> > to their likelihood to exist, would look very different than an individual
> > acting alone.
> This is the main problem with Rolf's Gambit / Pascal's Wager. You
> convince yourself that you should pay a small amount for a large
> reward of small probability, which seems reasonable until every other
> tribe, prophet or special interest group demands payment for their
> unprovable reward or punishment. And then there are all the special
> interest groups that might possibly exist, or come to exist.
The difference between Pascal's Wager and Rolf's Gambit is that gods do not
exist, and AIs do. In fact, for Rolf's Gambit to work, an AI has to already
exist.
The probability that the Christian god exists is more or less zero, because
there's no mechanism by which he could come to exist; indeed, by most
formulations his very nature is incoherent and badly defined. The probability
that a friendly AI exists, or a maverick AI which wants to calculate the
digits of pi, is clearly NOT zero, because we're trying to MAKE AIs and
there's nothing physically stopping us from doing so.
> It's only
> if you find evidence in support of one god among all the possible gods
> that you should pay it special attention.
Why is "these are the kinds of AIs people are likely to try to create, and
these are the ways they are likely to screw up" not evidence enough for you
that the resulting AI is likely to exist?
This archive was generated by hypermail 2.1.5 : Wed Jun 19 2013 - 04:01:43 MDT