Re: Beyond evolution

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Feb 04 2001 - 21:53:03 MST


"Eliezer S. Yudkowsky" wrote:
>
> Multiple entities are multiple points of failure of Friendliness, and
> even greater dangers.
>
> A failure of Friendliness in a transcending seed AI results in a total
> takeover, regardless of what a Friendly AI thinks about the Sysop
> Scenario. Once an AI has *reached* the Sysop point, you're either screwed
> or saved, so forking off more Sysops after that is a particularly
> pointless risk.

Note: If you construct 1000 seed AIs and ask them *not* to do a total
takeover, then you are *guaranteed* a total takeover by the first AI to
undergo a failure of Friendliness. The AIs that remain Friendly will honor
your request and stand aside, so the first one that fails acts unopposed.
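
A back-of-the-envelope sketch of why "guaranteed" is barely an
exaggeration, assuming each seed AI independently fails Friendliness with
some probability p (the independence assumption and the numbers below are
illustrative, not figures from the argument above):

    # Sketch, assuming independent per-AI Friendliness failures with
    # probability p (an illustrative assumption, not a claimed figure).
    # With n AIs, P(at least one failure) = 1 - (1 - p)**n, which rises
    # toward 1 rapidly as n grows -- and the Friendly majority, asked not
    # to take over, does nothing to stop the first failure.
    def p_any_failure(p: float, n: int = 1000) -> float:
        """Probability that at least one of n independent seed AIs fails."""
        return 1.0 - (1.0 - p) ** n

    for p in (0.001, 0.01, 0.1):
        print(f"p = {p}: P(any failure among 1000) = {p_any_failure(p):.5f}")

Even a one-in-a-thousand per-AI failure rate puts the odds of a failure
somewhere among 1000 AIs above 63%; at one in a hundred, it is effectively
certain.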

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


