How to gauge positive progress (was How hard a Singularity?)

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jun 22 2002 - 18:55:14 MDT


At 08:09 PM 6/22/2002 -0400, you wrote:
>James Higgins wrote:
> >
> > Let's say we could, 6 years from now, upload Eliezer & Ben into (separate)
> > hardware. The resulting intelligence would be equivalent to what it was
> > prior to the upload, but it would be running on computing hardware. Let's
> > also go with Ben's suggestion that the amount of hardware required will
> > be substantial (which seems likely).
> >
> > Now, can either of you explain to me why a human-equivalent intelligence
> > will, all of a sudden, be capable of creating leaps & bounds of
> > technology that were otherwise impossible, just because it is running on
> > silicon??? It seems likely that it would take a human-equivalent AI
> > roughly as long as a single human (discounting sleep, eating, etc.) to do
> > the same amount of work! It doesn't think smarter (yet), so why in the
> > heck should new architectural designs and technologies spring forth from
> > its mind like a fountain? As I see it, it would do no more for the
> > project than employing 3 or more engineers (discounting morale & financial
> > boosts due to the success, of course)...
>
>Because the upload, if she's smart, will not concentrate on working as a
>researcher on some other, ordinary technological project; she will
>concentrate on improving herself. The very first change that upload makes
>which successfully increases her own intelligence (though it might take
>much more than a month to manage this, for an unprepared upload) will
>increase the ease of all successive improvements, which will increase the
>ease of further improvements even further - a runaway positive feedback loop.

Well, obviously we agree that the upload *should* work on itself/himself/
herself. But will she agree to that?

What about the very first change the upload makes which lowers her own
intelligence? Will she know the difference? Would you? Many people think
they are as cognitively capable when drunk as when sober, but they
aren't. And while drunk, it is very difficult to judge this for yourself
(unless you're VERY drunk, obviously). How could you/she tell which
changes were improvements? What if the improvements added subtle deficits
that were not readily obvious? I believe this would be very slow
going. Over time you would make progress, and the pace of progress would
gradually improve. It would likely take major improvement, however, to get
to the point where she was capable of making substantial progress on her
own. For quite some time she would just be one more mind working on the
project.
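
To make that concrete, here is a rough Python sketch of the kind of check
I have in mind. The battery format and the mind.answer interface are purely
illustrative assumptions of mine, not anything from an actual system: score
every modified copy against the unmodified baseline per category, so a gain
in one area can't mask a subtle deficit in another.

    # Rough sketch; 'mind.answer' and the battery format are my own
    # illustrative assumptions, not part of any proposal above.

    def score(mind, exercises):
        """Fraction of (question, answer) exercises answered correctly."""
        return sum(mind.answer(q) == a for q, a in exercises) / len(exercises)

    def is_improvement(baseline, candidate, battery, margin=0.02):
        """battery: dict mapping category name -> list of (question, answer).

        Accept only if the candidate beats the baseline overall AND does
        not regress in any single category -- the 'subtle deficit' check.
        """
        total_gain = 0.0
        for category, exercises in battery.items():
            gain = score(candidate, exercises) - score(baseline, exercises)
            if gain < -margin:          # hidden regression in this category
                return False
            total_gain += gain
        return total_gain > margin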

---
The best way I can see to make rapid progress, if that is your goal, would
be to have numerous copies of the AI running (let's say 10).  Each AI runs
on equivalent hardware and is cloned (copied) from one original source
(thus all are identical at the instant of copying).
Start off by working on the problem with your staff + the 10 AIs (maybe
equivalent to 30+ additional staff members).  Get proposals on tweaks to
make in order to improve intelligence.  Pick 5, stop the AIs, copy one to
all hardware, implement each change on 2 AIs, and restart the AIs.  Give
all of them the exact same input and run them through a series of mental
exercises/tests.  Record their answers (maybe letting them pick several
answers for each question and having them rank them) along with the time
required to answer.  Once finished, discuss the results (including the AIs,
I would imagine).  Determine which change made the best improvement.  At
this point you have 2 options:
         1.  Implement the change on all 10 AIs (possibly undoing their
previous modification and/or re-cloning them).
         2.  Leave them as-is.
Repeat the process and pick 5 more changes to try.
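
For what it's worth, here is a rough Python sketch of one round of that
loop, just to pin the procedure down.  Everything in it (tweaks as
callables, the mind.answer interface, copy.deepcopy standing in for
cloning) is an illustrative assumption of mine, not a spec:

    import copy
    import time

    def run_battery(mind, battery):
        """Give identical input; record answers and the time to answer."""
        correct, elapsed = 0, 0.0
        for question, expected in battery:
            start = time.monotonic()
            answer = mind.answer(question)
            elapsed += time.monotonic() - start
            correct += (answer == expected)
        return correct / len(battery), elapsed

    def trial_round(original, proposed_tweaks, battery):
        """One round: pick 5 tweaks, clone 2 copies per tweak, test all."""
        results = {}
        for tweak in proposed_tweaks[:5]:       # pick 5 changes to try
            clones = [copy.deepcopy(original) for _ in range(2)]
            for clone in clones:
                tweak(clone)                    # implement the change
            results[tweak] = [run_battery(c, battery) for c in clones]
        # Rank by average accuracy, breaking ties by lower total time.
        def merit(tweak):
            accuracies = [acc for acc, _ in results[tweak]]
            times = [t for _, t in results[tweak]]
            return (sum(accuracies) / 2, -sum(times))
        best = max(results, key=merit)
        return best, results   # then: re-clone from the best, or leave as-is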
---
The above process should produce positive progress fairly quickly.  As the
AIs become smarter, they will usually be the ones suggesting the best ideas
to try.  Coming up with mental exercises will become difficult as they get
smarter, though.  One possibility is to have them devise the exercises
themselves, giving it significant thought.  Then, when cloning them, go
back to a state from before they created the exercises, so that no copy
being tested has ever seen the test.
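A tiny sketch of that rollback trick (devise_exercises is a hypothetical
method of mine, not anything specified above):

    import copy

    def devise_fresh_battery(mind):
        """Snapshot first, so the exercises' author can be discarded."""
        snapshot = copy.deepcopy(mind)     # state before exercise creation
        battery = mind.devise_exercises()  # give it significant thought
        return snapshot, battery  # re-clone from snapshot; test with battery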
This method actually has another significant benefit: it would keep the
human creators in the loop (and in charge) for some time, which could be
used as an effective delay/slowdown if desired.  It would also place them
in a reasonable position to monitor and gauge progress, allowing them to
make their own tweaks and, if necessary, pull the plug quite easily.

One obvious downside, however, is that you could be seen as killing 9
sentient intelligences with every cloning operation (since you effectively
delete 9 of them).
James Higgins

