Re: An essay I just wrote on the Singularity.

From: Perry E. Metzger (perry@piermont.com)
Date: Fri Jan 02 2004 - 13:06:02 MST


Randall Randall <randall@randallsquared.com> writes:
>> "Combining a few issues here. I believe that strong
>> superintelligence is possible. Furthermore, I believe that to argue
>> to the contrary is amazingly rank anthropocentrism, and should be
>> laughed at. Beyond that, I think full AI is possible. It's the
>> combination of the two that's interesting."
>>
>> I agree that people who believe strong superintelligence is
>> impossible are memetically distant enough from Singularity-aware
>> thought that trying to avoid offending/confusing them is pointless.
>
> This is not at all true. I think it's quite arguable that
> strong superintelligence is impossible. For instance, it
> might be that physical law doesn't permit complexity above
> some ceiling. If so, it might be that the smartest possible
> humans are already very close to that limit.
>
> This might not seem very likely to you, but unless you
> can logically rule it out, it isn't incompatible with
> a technologically induced singularity (in the weak sense).

I tend to think one can logically rule this one out -- strong
superintelligence seems compatible with physical law. Why? Because we
already know of ways to make components that are far faster and far
smaller than human neurons. One can therefore expect at least a few
orders of magnitude of improvement from moving to non-biological
substrates. Even if all we did was build a human who could think a
few hundred million times faster, that would be a pretty amazing
improvement -- and we can be (I think) reasonably sure that much is
possible.
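To put rough numbers on the speed claim, here's a trivial
back-of-envelope sketch in Python; the specific rates (a couple
hundred hertz for neuron firing, a couple gigahertz for logic
switching) are illustrative assumptions of mine, not measurements:

    # Rough substrate-speed comparison. Both rates are
    # illustrative assumptions, not measured values.
    neuron_rate_hz = 2e2   # peak neuron firing rate, ~200 Hz
    logic_rate_hz = 2e9    # commodity logic switching, ~2 GHz

    speedup = logic_rate_hz / neuron_rate_hz
    print(f"raw speedup ~ {speedup:,.0f}x")  # ~10,000,000x

Even that crude ratio is seven orders of magnitude, before counting
any advantage in size, packing density, or signal propagation speed.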

That we humans could design such a thing in a way that ensures the
"Friendliness" of the outcome, though, seems dubious to me at best.

-- 
Perry E. Metzger		perry@piermont.com

