From: Olie L (firstname.lastname@example.org)
Date: Tue Jan 10 2006 - 18:12:37 MST
(Responding to Psy Kosh's (<email@example.com>) observations re replicators
and the Fermi paradox, below.)
This seems like a perfectly reasonable application of the same meta-idea as
the Anthropic principle:
Intelligence that goes looking for other existing intelligences is most
likely to have evolved independently, and only where there has not been some
pre-existing intelligence consuming the available resources.
Please consider, for a second, whether it matches your view of Friendliness
to consume the resources available to future intelligences (and the
ecosystems that cultivate them) before they have an opportunity to utilise
them.
It seems to me that stunting the development of future intelligences through
resource depletion is just as inconsiderate as stunting the development of
present ones.
Now, is it just me, or is this another example of how consumptive* attitudes
towards resources - that they are there to be used at the first available
opportunity - have a strong tendency to produce selfish behaviour?
All that dead rock out there /can/ be seen as a resource available for
exploitation. But it can also be viewed as having value in itself.
There is wisdom in being prudent - doing the most with limited resources,
rather than immediately reaching for more.
I'm not saying that using more resources is necessarily a bad thing, but
using more resources where fewer resources would suffice _is_ Waste.
Christian Zielinski <firstname.lastname@example.org> posted the following to
the list:
"With growing technology the energy consumption of such a civilisation
should grow too.
Let's assume, for the sake of argument, that it is a linear relation."
I don’t see that technology need have a linear relationship with energy
consumption. Quite the contrary – I think that most technology revolves
around reducing the resources needed to achieve outcomes. A bigger
engineering project isn’t an advancement of technology; it’s just more of
that one tech. A mound of 1000 blocks uses no more tech than a mound of 100
blocks. However, a tower of 100 blocks fitted together in a structurally
sound manner /is/ an instantiation of greater technology. A taller
structure using more bricks isn’t technology – using glue is.
If it is possible to perform computations with negligible energy
consumption, as reversible computing suggests, then there would be no need
for an advanced civilisation to keep increasing its energy consumption.
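For scale, Landauer's principle puts a floor of kT ln 2 joules on each
*irreversible* bit erasure, and reversible computing aims to sidestep even
that floor. A back-of-the-envelope sketch (the bit count of 10^20 is an
arbitrary illustrative figure, not anyone's estimate):

```python
import math

# Landauer limit: minimum energy to irreversibly erase one bit is k_B * T * ln 2.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

e_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J per erased bit
total = e_per_bit * 1e20            # erasing 10^20 bits costs under a third of a joule

print(f"{e_per_bit:.3e} J per bit erased; {total:.3f} J for 1e20 bits")
```

Even at this irreversible floor the numbers are tiny, and a fully reversible
computer is not, in principle, bound by it at all.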
So, why would an advanced intelligence “spam the galaxy with replicators or
themselves or something?” Why would we spam other people’s inboxes?
To quote Calvin:
“Sometimes I think the surest sign that intelligent life exists elsewhere in
the universe is that none of it has tried to contact us.”
* I went to a dictionary to check that this was a "real" word. Turns out
the nearest dictionary available, published in 1962, didn't have any other
meaning for consumption except "sickness".
>From: Psy Kosh <email@example.com>
>Subject: Fermi's paradox and observer selection effects?
>Date: Tue, 10 Jan 2006 03:04:20 -0500
>Just a wee bit of wild speculation I had:
>Let's consider universes in which some other intelligent species
>developed slightly before us and sent out Von Neumann replicators, or
>themselves, or something.
>Well, if they had, then we'd expect to see such probes, and perhaps them,
>all over... But wait, if they were, especially via the Von Neumann
>spamming of the galaxy, including our own personal favorite planet,
>where would we evolve?
>ie, I'm going to suggest the possibility that once at least one
>species has spammed the galaxy with replicators or themselves or
>something, then it'd be much less likely that human (or human like)
>life could then develop, since the resources of the world would
>already be in the process of being munched on by the replicators and
>or the members of the species themselves. Any galaxy already taken
>over would have fewer places and/or opportunities for something
>"essentially like us" (I leave it to you to decide what the proper
>reference class is here) to develop.
>So we may then be able to validly say that we have no right to be
>surprised at the lack of signs of such, since the lack of such may be
>a near prerequisite for us existing.
>Of course, we'd also expect as a prior "number of galaxies in all
>possible universes with something showing up sufficiently before
>humans to 'take over' before we show up" > "number of galaxies in
>which we're among the first, sufficiently early at least that
>nothing's 'taken over' yet."
>So the exact relation of p(humans developing | someone else took over
>first) * p(someone else took over first) vs. p(humans developing | no one
>took over first) * p(no one took over first) is the question. My claim
>is that the second conditional is significantly larger than the first,
>and my wild-eyed speculation is that the ratio is sufficiently large
>compared to the ratio of the first prior to the second that we
>shouldn't actually be surprised at, well, at the Fermi "paradox".
>Or is this all just complete nonsense? :)
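Psy Kosh's comparison can be made concrete with a toy Bayes calculation. All
the numbers below are invented purely for illustration - the point is only
the direction of the update:

```python
# Toy observer-selection update, with invented illustrative numbers.
p_taken = 0.9            # prior: some earlier species "took over" our galaxy
p_h_given_taken = 0.001  # chance human-like life still develops in that case
p_h_given_free = 0.1     # chance human-like life develops in an untouched galaxy

# Bayes' rule: P(taken over | we exist)
num = p_h_given_taken * p_taken
den = num + p_h_given_free * (1 - p_taken)
posterior_taken = num / den

print(f"P(taken over | we exist) = {posterior_taken:.3f}")
```

With these made-up numbers, even a 90% prior that someone got there first
drops to a posterior of roughly 8% once we condition on our own existence -
which is exactly the "no right to be surprised" point.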
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT