From: Olie Lamb (neomorphy@gmail.com)
Date: Sun Apr 22 2007 - 07:47:09 MDT
Just on the original comment:
On 4/16/07, kevin.osborne <kevin.osborne@gmail.com> wrote:
>
> recipe for: a little future shock moment.
>
> proposition: take the Anissimov/Yudkowsky view on the seriousness of
> Friendly A.I. and other existential risks as a given.
>
> empirical observation: as per Fermi, the aether is silent and
> lifeless. all other intelligent species on all other near-space
> life-supporting worlds have failed to reach uplift.
>
> theory: the galaxy is dead and void. existential risk has proven
> lethal and/or progress-suppressive in all prior cases.
>
> prediction: our chances of reaching/surpassing/outliving the
> Singularity are negligible -> nil.
>
Yes, but the observed emptiness is also a pretty good indication that
there's no big consumptive/expansionist ("hegemonising") intelligence out
there; at the very least, there's no observable evidence that societies
that build AGIs are likely to end up with a paperclipper.
-- Olie