From: Emil Gilliam (emil@emilgilliam.com)
Date: Sat Jul 17 2004 - 19:19:05 MDT
Again, quoting the Collective Volition page:
 >   Friendly AI requires:
 >
 >   1.       Solving the technical problems required to maintain a
 >            well-specified abstract invariant in a self-modifying goal
 >            system. (Interestingly, this problem is relatively
 >            straightforward from a theoretical standpoint.)
 >
 >   2.       Choosing something nice to do with the AI. This is about
 >            midway in theoretical hairiness between problems 1 and 3.
 >
 >   3.       Designing a framework for an abstract invariant that
 >            doesn't automatically wipe out the human species. This is
 >            the hard part.
It seems that in order to understand a Friendly abstract invariant with 
the depth that (2) demands, and to understand what does or does not fit 
the spirit of that invariant as humans would judge it, a seed AI would 
have to know an immense number of details about human brains. If so, 
there may be no practical way for the seed AI to learn all these 
details without scanning actual humans -- but, under SIAI's current 
strategy, we don't want it to have any capability of that sort until 
takeoff time, and by then the job of the seed AI programmers should be 
*done*.
Is "finding a way out of this deadlock" a useful way of characterizing 
any part of (3)'s complexity?
- Emil