From: James Higgins (jameshiggins@earthlink.net)
Date: Mon Jun 24 2002 - 18:39:55 MDT
At 05:18 PM 6/24/2002 -0600, Ben Goertzel wrote:
>I think it is impossible for a nontrivial mind to *fully* understand
>itself, but that a partial understanding is adequate for making
>significant intelligence-improving optimizations....
An interesting hypothesis that we will hopefully get to test in the not too
distant future...
>I'd also like to point out that a "hard takeoff" can happen without an AGI
>improving its code at all, merely by the AGI inventing *better and better
>hardware infrastructures* for itself, and implementing itself on better
>and better hardware, thus making itself smarter and smarter while leaving
>its software basically the same...
Duh. Before I started heavily reading this list I thought of the
Singularity primarily in terms of performance gains, not intelligence
gains. So I knew this but somehow lost track of it (probably because this
list seems to focus almost exclusively on intelligence gains over raw
performance gains). You are, of course, correct.
James Higgins