Re: Fermi Paradox explained (was Re: Memory as Simulation)

From: Olie Lamb
Date: Tue Mar 07 2006 - 17:24:45 MST

Please excuse the lameness!

On 3/8/06, Phillip Huggan wrote:

> The simplest refutation of SA is that it is either evil,

How does that make it a refutation? Who says that bigger intelligences have
to be friendly, let alone that they can't be nasty? If anyone could prove
that, it would make the Institute's role a lot easier.

We already know that we're not in a friendly simulation (thank you, argument
from Evil, I mean, Banana test). How does this show anything about anything?

> or as Eliezer said pointless.

An entirely different matter.

[non-SL4 material follows]

> A Fermi Paradox refutation ...

There's another hypothesis for why we haven't encountered a post-singularity
intelligence: repeatedly improving intelligence is not inevitable.

Certainly, our society seems to be headed that way (barring severe
misfortune), but with a different culture, a different approach to
technology from early on, there is no inevitability about that group
exceeding a certain level of general intelligence within a very long
timespan.

How likely is this? Not likely enough to explain why many
Extra-Terrestrial Intelligences would all fail to exceed certain technology
levels over many millions of years.
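The intuition here can be made concrete with a toy calculation (illustrative only; the per-civilization "stall" probability and the civilization counts below are made-up numbers, not estimates from the post):

```python
# Toy model: suppose each civilization independently "stalls" -- never
# pursues repeated intelligence improvement -- with probability p.
# The chance that ALL of n civilizations stall is p**n, which collapses
# quickly as n grows, even when p is high for any single civilization.

def all_stall_probability(p: float, n: int) -> float:
    """Probability that every one of n independent civilizations stalls."""
    return p ** n

# Even a 90% per-civilization stall rate vanishes in aggregate:
for n in (1, 10, 100):
    print(n, all_stall_probability(0.9, n))
```

So explaining a silent sky this way requires either very few civilizations or a stall probability implausibly close to one for every single one of them.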

Other hypotheses for why we haven't encountered post-singularity tech
include: other existent post-singularity entities might not be interested
in expansion ("colonising" etc.). They might also not be interested in
interfering with technological development, although this is again not
likely.
The potential for a society to experience a Technological Singularity does
not "solve" the Fermi Paradox. It does make it more interesting,
as the models are more complicated. It certainly makes SETI seem less
likely to get a positive result from their experiment. However, absence of
proof is still not proof of absence (purple ravens, of course, being duly
accounted for).

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT