From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jul 13 2005 - 18:56:10 MDT
justin corwin wrote:
> On 7/13/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>
>
>>Even if you nuke the entire world back to the seventeenth century and the UFAI
>>survives on one Pentium II running on a diesel generator you're still screwed.
>
> You are absolutely right. But if I nuke the whole world and get it, or
> if it runs out of diesel, or if it's too stupid to follow the
> civilization reboot because it's running on low hardware, or if we
> implement locally secured computers next time, then we aren't.
> Possibilities are not inevitabilities, and the possibility that the AI
> might kill you doesn't mean you can't do anything.
>
> I recognize you're trying to focus AI survival strategy on the top
> level case, but other cases exist, and establishing what they are is
> potentially important.
Rather than argue over whether you'd be "inevitably" screwed in the specific
event of facing down a transhuman UFAI, let's hear what you think you can do
about that. Perhaps it will turn out to be a good idea regardless, like an
Internet off switch. Or if you cannot say what anyone could do, then arguing
about possibility seems pointless.
In truth it seems pointless to me anyway, like the conversations commonly seen
on the Extropians list about how the poster would redesign the world political
system, if the poster had that power. The poster doesn't have that power, and
never will. I do not expect to see an organized governmental agency that even
understands the concept of UFAI, let alone has the authority to launch a
saturation nuclear attack on human infrastructure, this side of the
Singularity. But I suppose it is at least worth discussing.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence