From: Gordon Worley (redbird@mac.com)
Date: Thu Apr 19 2007 - 16:33:45 MDT
I attempted to send this at 7:00 am EDT on the 16th, but it seems the
delivery failed. My apologies if this is double-posted.
On Apr 16, 2007, at 3:40 AM, kevin.osborne wrote:
> proposition: take the Anissimov/Yudkowsky view on the seriousness of
> Friendly A.I. and other existential risks as a given.
>
> empirical observation: as per Fermi, the aether is silent and
> lifeless. all other intelligent species on all other near-space
> life-supporting worlds have failed to reach uplift.
>
> theory: the galaxy is dead and void. existential risk has proven
> lethal and/or progress-suppressive in all prior cases.
>
> prediction: our chances of reaching/surpassing/outliving the
> Singularity are negligible -> nil.
To me the most interesting question in this scenario is: is the
theory of Friendly AI unique? That is, did all these other
civilizations fail because they never stumbled upon Friendly AI, or
did they fail despite having their own Eliezer? The latter case is
the more interesting one to examine, insofar as it matches the
following scenario:
The theory of Friendly AI is fully developed, and a Friendly AI path
to the Singularity is the first one enacted (after all, we may
create something that isn't itself a Friendly AI but that will
figure out how to create one). Even so, once this path is enacted,
what are the chances that something still causes an existential
disaster? I suspect they are lower than on a non-Friendly AI path to
the Singularity, but how much lower? Is the difference large enough
to warrant the extra time, money, and effort that Friendly AI
requires?
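To make "how much lower" concrete, here is a toy expected-value
comparison; every number in it is a placeholder I am inventing for
illustration, not an estimate I would defend:

# Toy expected-value comparison for the question above. Every number
# here is a made-up placeholder; only the shape of the comparison matters.

p_disaster_fai     = 0.10  # hypothetical: existential disaster on a Friendly AI path
p_disaster_other   = 0.50  # hypothetical: disaster on a non-Friendly AI path
value_of_success   = 1.0   # normalized value of a good Singularity
cost_of_fai_effort = 0.05  # hypothetical extra time/money/effort, same units

ev_fai   = (1 - p_disaster_fai) * value_of_success - cost_of_fai_effort
ev_other = (1 - p_disaster_other) * value_of_success

print("EV(Friendly AI path)     =", round(ev_fai, 2))
print("EV(non-Friendly AI path) =", round(ev_other, 2))
print("extra effort worthwhile? ", ev_fai > ev_other)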
That said, Kevin, I think your scenario is a little too conditional
to be shocking. Notably, it requires assuming both that there are
other intelligent civilizations in the universe and that an uplifted
society would generate radio traffic within our light cone. I would
be more shocked if you could make the argument with fewer of what I
consider low-probability assumptions. Of course, if you can convince
me that my probability assignments are wrong, I may yet be duly
shocked.
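To put the "too conditional" point in sketch form: the scenario only
gets the product of its assumptions. The probabilities below are
placeholders, not my actual assignments:

# Toy illustration of how conditional assumptions compound. Each
# probability is a placeholder, not an actual assignment of mine.

assumptions = {
    "other intelligent civilizations exist":    0.5,
    "an uplifted society emits radio traffic":  0.5,
    "that traffic falls within our light cone": 0.5,
    "we would have detected it by now":         0.5,
}

p_all = 1.0
for name, p in assumptions.items():
    p_all *= p

print("P(every assumption holds) =", p_all)  # about 0.06 with these placeholders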
-- -- -- -- -- -- -- -- -- -- -- -- -- --
Gordon Worley
e-mail: redbird@mac.com PGP: 0xBBD3B003
Web: http://homepage.mac.com/redbird/