From: Stephen Reed (reed@cyc.com)
Date: Mon Jun 24 2002 - 16:32:16 MDT
On 24 Jun 2002, James Rogers wrote:
> ...I don't
> think it is unreasonable to assert that architectural self-modification
> is an unnecessary capability as all likely human implementations of an
> AI will almost have to be optimal (or close approximations) to even be
> practical.
>
> If this is the case and the first "real" AGI architecture is a close
> approximation of optimal, then the qualitative bootstrap process will
> essentially be hardware limited no matter how intelligent the AGI
> actually is. Obviously there has to be some self-modification at higher
> abstractions or a system couldn't learn, but that doesn't need to impact
> the underlying architecture (and is essentially orthogonal to the
> question in any case).
In my opinion, an AGI stemming from Cyc would have most of its
behavior defined as algorithmic knowledge in the knowledge base, where
it could be acquired, planned, and reasoned with in a first-class
fashion. Thus I would agree with you, if only trivially, that the code
itself need not be modified: Cyc would combine the primitive code
routines in novel ways as its KB-stored behaviors bottom out into
instantiated sequences of code.
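To make that concrete, here is a minimal sketch (in Python, purely
illustrative; the PRIMITIVES, KB, expand, and execute names are my own
toy inventions, not anything from Cyc's actual machinery) of behaviors
stored as first-class KB data that bottom out into fixed code routines:

    # Behaviors live in a declarative KB as data; the executable
    # primitives are fixed code routines. "Self-modification" happens
    # entirely at the KB level; the primitive code is never rewritten,
    # only recombined.

    PRIMITIVES = {  # fixed, immutable "code" layer
        "move-to":  lambda state, arg: {**state, "location": arg},
        "pick-up":  lambda state, arg: {**state, "holding": arg},
        "put-down": lambda state, arg: {**state, "holding": None},
    }

    # KB-stored behavior: data the system can inspect, plan over,
    # and rewrite.
    KB = {
        "fetch": [("move-to", "shelf"), ("pick-up", "book"),
                  ("move-to", "desk"), ("put-down", "book")],
    }

    def expand(behavior):
        """Bottom a KB behavior out into an instantiated code sequence."""
        for step, arg in KB[behavior]:
            yield PRIMITIVES[step], arg

    def execute(behavior, state):
        for routine, arg in expand(behavior):
            state = routine(state, arg)
        return state

    # Learning a new behavior edits the KB, never the code:
    KB["tidy"] = KB["fetch"][2:]  # reuse a fragment in a novel way
    print(execute("fetch", {"location": "start", "holding": None}))
    # -> {'location': 'desk', 'holding': None}

The point of the sketch is that adding "tidy" changes only data; the
primitives are untouched, which is the sense in which the underlying
architecture stays fixed while behavior still improves.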
I generally agree with implementers here who envision bootstrapping AGI
from self-improving behaviors.
-Steve
--
===========================================================
Stephen L. Reed                  phone: 512.342.4036
Cycorp, Suite 100                fax:   512.342.4040
3721 Executive Center Drive      email: reed@cyc.com
Austin, TX 78731                 web:   http://www.cyc.com
         download OpenCyc at http://www.opencyc.org
===========================================================