Re: How hard a Singularity?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jun 22 2002 - 18:09:02 MDT


James Higgins wrote:
>
> Let's say we could, 6 years from now, upload Eliezer & Ben into (separate)
> hardware. The resulting intelligence would be equivalent to what it was
> prior to the upload, but it would be running on computing hardware. Let's
> also go with Ben's suggestion that the amount of hardware required will
> be substantial (which seems likely).
>
> Now, can either of you explain to me why a human-equivalent intelligence
> will, all of a sudden, be capable of leaps & bounds of technology that
> were otherwise impossible, just because it is running on silicon??? It
> seems likely that it would take a human-equivalent AI roughly as long as
> a single human (discounting sleep, eating, etc.) to do the same amount of
> work! It doesn't think smarter (yet), so why in the heck should new
> architectural designs and technologies spring forth from its mind like a
> fountain? As I see it, it would do no more for the project than employing
> 3 or more engineers (discounting morale & financial boosts due to the
> success, of course)...

Because the upload, if she's smart, will not work as a researcher on some
other, ordinary technological project; she will concentrate on improving
herself. The very first change the upload makes that successfully increases
her own intelligence (though an unprepared upload might take much more than
a month to manage this) will make every successive improvement easier, and
each of those improvements will make the ones after it easier still - a
runaway positive feedback loop.
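
A toy numerical sketch of that loop, purely illustrative (all constants are
made up: research speed starts at human-equivalent 1.0, each successful
improvement multiplies it by a factor of 1.1, and the time to find the next
improvement shrinks in proportion to current speed):

# Toy model of the feedback loop above; not a prediction, just the
# shape of the dynamics. All parameter values are arbitrary.

def months_to_reach(target_speed, gain_per_improvement=1.10,
                    months_for_first_improvement=1.0):
    """Months elapsed until research speed exceeds target_speed.

    Each improvement multiplies speed by gain_per_improvement; the
    time to find the next improvement shrinks in proportion to the
    current speed (that proportionality is the feedback step).
    """
    speed, months = 1.0, 0.0
    while speed < target_speed:
        months += months_for_first_improvement / speed  # faster mind, faster research
        speed *= gain_per_improvement                   # the improvement compounds
    return months

for target in (2, 10, 100):
    print(f"{target:>4}x human speed after ~{months_to_reach(target):.1f} months")

With these made-up constants, the first doubling takes about six months, but
going from 10x to 100x takes less than one additional month: the compounding,
not the starting speed, carries the argument.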

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
