RE: How hard a Singularity?

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Jun 24 2002 - 17:18:52 MDT


I think it is impossible for a nontrivial mind to *fully* understand itself,
but that a partial understanding is adequate for making significant
intelligence-improving optimizations....

I'd also like to point out that a "hard takeoff" can happen without an AGI
improving its code at all, merely by the AGI inventing *better and better
hardware infrastructures* for itself, and implementing itself on better and
better hardware, thus making itself smarter and smarter while leaving its
software basically the same...

In reality I suspect we'll see the semihard takeoff occur via a combination
of AI-driven software and hardware improvements...

-- Ben G
  -----Original Message-----
  From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf Of
  James Higgins
  Sent: Monday, June 24, 2002 4:35 PM
  To: sl4@sysopmind.com
  Subject: RE: How hard a Singularity?

  At 06:12 PM 6/24/2002 -0400, Smigrodzki, Rafal wrote:

    I would call it dead certain in favor of a hard takeoff, unless all the
    intelligences at the core of that hard takeoff unanimously decide
    otherwise.

    ### I do not share this certainty, but then you are the better-informed
    person. However, I'd like to posit the following objection - what if there
    are some natural-law-type limits to the total problem-solving ability that
    can be controlled by a single self-aware unit? There are limits to the size
    of a dinosaur that are not apparent when you are building a mouse. The SI
    *could* find itself limited by the sheer complexity of the processing
    needed to produce additional growth. BTW, I find it quite useless to
  A good point, which reminds me of something I was thinking about the other
day.

  It seems to me that there is a reasonable probability that it may be
impossible for a mind to understand its own inner workings. In other words,
a sentient's mind may be beyond the complexity threshold that it itself can
fully comprehend. Thus it may only be possible to comprehend a mind that is
an order of magnitude less intelligent/complex (pick your own numbers, I'm
just using this one as an example). If this were the case, a hard takeoff
would be impossible and it would take much, much longer to make positive
progress than generally anticipated. I'd like to hear some rational thoughts
on this matter.

  James Higgins
