From: James Higgins (jameshiggins@earthlink.net)
Date: Mon Jun 24 2002 - 16:35:26 MDT
At 06:12 PM 6/24/2002 -0400, Smigrodzki, Rafal wrote:
>I would call it dead certain in favor of a hard takeoff, unless all the
>intelligences at the core of that hard takeoff unanimously decide otherwise.
>
>### I do not share this certainty, but then you are the better informed
>person. However, I'd like to posit the following objection - what if there
>are some natural-law-type limits to the total problem-solving ability that
>can be controlled by a single self-aware unit? There are limits to the
>size of a dinosaur, not apparent when you are building a mouse. The SI
>*could* find itself limited by the sheer complexity of the processing
>needed to produce additional growth. BTW, I find it quite useless to
A good point, which reminds me of something I was thinking about the other day.
It seems to me that there is a reasonable probability that it may be
impossible for a mind to understand its own inner workings. In other words,
a sentient mind may be beyond the complexity threshold that it itself can
fully comprehend. Thus it may only be possible to comprehend a mind that
is an order of magnitude less intelligent/complex (pick your own number;
I'm just using that one as an example). If that were the case, a hard
takeoff would be impossible and it would take much, much longer to make
positive progress than is generally anticipated. I'd like to hear some
rational thoughts on this matter.
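
To make the shape of that argument a bit more concrete, here is a toy sketch
in Python (the comprehension ratio and per-step gain are arbitrary numbers of
my own, not claims about real minds). It just contrasts a mind that fully
understands itself, so the whole improvement compounds each step, with one
that can only redesign the fraction of itself it can comprehend:

def grow(steps=20, gain=0.5, comprehension_ratio=None, initial=1.0):
    """Toy trajectory of a self-improving mind's 'complexity'.

    comprehension_ratio=None -> the mind fully understands itself, so the
                                whole gain compounds each step (hard takeoff).
    comprehension_ratio=0.1  -> only the tenth of itself it can comprehend
                                is open to redesign each step (slow growth).
    """
    c = initial
    trajectory = [c]
    for _ in range(steps):
        # How much of itself the mind is able to model and therefore improve.
        improvable = c if comprehension_ratio is None else c * comprehension_ratio
        c += gain * improvable
        trajectory.append(c)
    return trajectory

if __name__ == "__main__":
    hard = grow()
    limited = grow(comprehension_ratio=0.1)
    print("full self-comprehension:", [round(x, 1) for x in hard[::5]])
    print("tenth-of-self at a time:", [round(x, 1) for x in limited[::5]])

Both curves still grow, but the second is a crawl by comparison; and if the
threshold is a hard wall (a mind simply cannot model anything as complex as
itself), then direct self-redesign never gets off the ground at all, which is
the stronger version of the worry above.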
James Higgins