Re: Augmenting humans is a better way

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jul 28 2001 - 16:45:08 MDT


At 06:18 PM 7/28/2001 -0400, Eliezer wrote:
>James Higgins wrote:
> > And if done perfectly it might be a really good thing. But even the
> > smallest mistake in how this is done could lead to a horrible existence for
> > some or all of the individuals involved.
>
>If so, it won't be just *any* "small mistake", it will be a "smallest
>mistake" with some extremely unusual properties - i.e., a small mistake
>that manages to totally destroy the mistake-recovery mechanisms, tear the
>AI loose of the entire goal system's grounding, destroy the AI's
>connection with the human programmers, escape detection, and so on.

As a software architect & programmer (and a really good one, if I do say so
myself), I know that avoiding bugs is incredibly difficult. The more complex
the system, the more this is the case. So I'm personally having a very
difficult time believing that all will go well. Maybe the mistake-recovery
mechanism will overcompensate and induce mistakes of its own. Maybe the goal
system will be modified and the AI will decide it needs different goals. A
complex system is one thing; a system that learns and self-modifies is
another. All bets are off, IMHO.
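To make the overcompensation worry concrete, here's a toy sketch (entirely
my own construction; the gain value is made up for illustration). The
correction loop applies a fix larger than the measured error on each step,
and once the gain passes 2.0, every "recovery" step enlarges the very error
it was supposed to remove:

    # Toy illustration: a "mistake-recovery" loop tuned too aggressively.
    # Each step measures the error and applies a correction larger than
    # the error itself. The update is state' = (1 - GAIN) * state, so any
    # GAIN above 2.0 leaves a multiplier below -1 and the error grows:
    # the recovery mechanism is the thing inducing the mistakes.

    TARGET = 0.0
    GAIN = 2.5   # hypothetical overcompensation factor; 1.0 would be exact

    state = 1.0  # start one unit away from the target
    for step in range(8):
        error = state - TARGET
        state -= GAIN * error  # each "fix" overshoots past the target
        print(f"step {step}: state = {state:+.3f}")
    # Prints an oscillating, growing sequence: -1.500, +2.250, -3.375, ...

Nothing in a real seed AI's recovery code would be this crude, of course,
but the point stands: the repair machinery is itself part of the system
that can be wrong.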

I'm not saying it isn't worth doing; I just have serious doubts, at present,
about its chances of success.

> > Plus it will, by nature, be
> > impossible to change, ever. I don't like that too much either.
>
>I don't see there being much middle ground between "Impossible to change,
>ever" and "Easily breakable by a single hostile superintelligence, might
>as well not be there in the first place". If there is a feasible and
>desirable middle ground, then a Friendly Transition Guide would move to
>occupy it, rather than implementing a Sysop Scenario.

I realize why this is the case, but it makes error recovery after the fact
impossible. Thus it has to work perfectly the first time, which is where
my doubt comes in.

James Higgins


