From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 28 2001 - 16:18:23 MDT
James Higgins wrote:
>
> And if done perfectly it might be a really good thing. But even the
> smallest mistake in how this is done could lead to a horrible existence for
> some or all of the individuals involved.
If so, it won't be just *any* "small mistake"; it will be a "smallest
mistake" with some extremely unusual properties - i.e., a small mistake
that manages to totally destroy the mistake-recovery mechanisms, tear the
AI loose from the entire goal system's grounding, sever the AI's
connection with the human programmers, escape detection, and so on.
> Plus it will, by nature, be
> impossible to change, ever. I don't like that too much either.
I don't see there being much middle ground between "Impossible to change,
ever" and "Easily breakable by a single hostile superintelligence, so it
might as well not be there in the first place". If there is a feasible and
desirable middle ground, then a Friendly Transition Guide would move to
occupy it, rather than implementing a Sysop Scenario.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence