From: Eric Burton (brilanon@gmail.com)
Date: Fri Sep 26 2008 - 17:02:38 MDT
> Nope. The reason it hasn't happened yet is because if it did, we would not be here to
> ask the question.
This seems, to me, like the anthropic principle applied to Doomsday. It's
never been demonstrated that a process as rich and extensive as a
planetary biosphere can be stopped in its tracks. There are no control
groups! For all we know biological evolution is a stronger force than
thermodynamic entropy, guaranteed by the laws of physics to conquer
dumb matter over some intractable span of time.
For a single-planet species with no off-world contacts, this is
necessarily an open question. Mars, for instance, could turn out to be
an extinguished biome. But then we could point to the overall
perniciousness of living creatures within the Sol system...
I think life may be pushed into simpler, marginal, or extreme forms by
catastrophes of the largest order we'd worry about, but for how long?
A billion years? Three billion? Sooner or later the ice will thaw.
Stars burn for a very long time, and this could be happening in many
places at once.
On 9/26/08, Matt Mahoney <matmahoney@yahoo.com> wrote:
> --- On Fri, 9/26/08, Stuart Armstrong <dragondreaming@googlemail.com> wrote:
>
>> > If winning an election is good, then becoming supreme
>> > dictator of the earth is better. If running a successful
>> > company is good, then acquiring all of the world's
>> > wealth and starving the rest of the population is better.
>>
>> I never said that they would be safe tests. They are tests
>> you should only run if the AI is friendly to start with.
>
> So we need to test for friendliness *before* we test for intelligence. How
> do you propose we do that?
>
> -- Matt Mahoney, matmahoney@yahoo.com
>
>