From: James Higgins (jameshiggins@earthlink.net)
Date: Mon Jul 30 2001 - 17:53:01 MDT
At 06:52 PM 7/30/2001 -0400, you wrote:
>At 6:05 PM -0400 7/30/01, Carl Feynman wrote:
>>Let's be optimistic and say that Webmind had an AI with a capacity of
>>0.5 brains. It will take Moore's Law about 16 years to upgrade their
>>machine to 1.3 kilobrains. If we assume that the rate of progress in AI
>>algorithms (doubling every two years) continues, and that the AI field
>>is working on the right problems, the time is decreased to about ten
>>years. Still pretty long.
>
>Um, I'm not sure how you got 16 years, but the Moore's Law analysis can't
>be right. Doubling the speed of a 0.5-brain AI just means that a moron
>will have twice as many moronic thoughts in a year. Moore's Law alone
>can't get us to transhuman AI, but it does help once we have at least a
>1-brain AI. When the AI reaches human-level intelligence, I'd assume it
>can think about the things humans can, like how to write better
>algorithms, at which point Moore's Law will matter.
>
>I think you're mixing up speed increases with getting smarter.
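(For reference, a quick back-of-envelope check of the 16-year and ten-year figures quoted above. The 18-month hardware doubling time is my assumption; the quoted post only states the 2-year algorithm doubling time and the end results.)

  import math

  # Doublings needed to go from 0.5 brains to 1.3 kilobrains.
  start, target = 0.5, 1300.0
  doublings = math.log2(target / start)        # ~11.3 doublings

  hw_years = doublings * 1.5                   # hardware alone (assumed 18-month doubling): ~17 years
  combined_rate = 1 / 1.5 + 1 / 2              # doublings per year, hardware plus 2-year algorithm doubling
  both_years = doublings / combined_rate       # ~9.7 years

  print(round(doublings, 1), round(hw_years, 1), round(both_years, 1))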
Well, actually, increased processing speed alone could get us to transhuman
AI. It would just take a very, very long time. Assume an AI that could
come up with the same ideas as one AI researcher, but worked at 30% of the
speed. Since the AI could run 24/7 (168 hours a week), it would equal a
single AI researcher working about 50 hours/week. Now, give it enough time
for that exact same software to run 1 million times as fast on better
hardware, and you would very likely have a winner, because a single AI
researcher could then explore 1 million times as much territory in the same
amount of time. Much of this would lead nowhere, but some great insights
would certainly emerge as well. Add to this the fact that all of this
knowledge, from both the successful and the unsuccessful lines of research,
would be available to this single individual. Having that much raw
experience alone should lead to much more productive research. Give this
thing a year and you should have incredible progress, even though it isn't
any smarter than a single human working in the same field.
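(A minimal sketch of the arithmetic above, in Python; the 30% rate and the million-fold speedup are the hypothetical numbers from the paragraph, not measurements of any real system.)

  hours_per_week = 24 * 7      # the AI runs continuously: 168 hours/week
  relative_speed = 0.30        # hypothetical: 30% of a human researcher's pace
  print(hours_per_week * relative_speed)             # ~50.4 researcher-hours per week

  speedup = 1_000_000          # hypothetical future hardware speedup
  print(hours_per_week * relative_speed * speedup)   # territory explored scales with raw speed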
To put it another way, if Ben Goertzel had a million years to work on AI
(without serious distractions like medical/financial problems), how much
progress do you think he would have made by the end of that time? Quite
substantial; I'd actually bet he would have solved all of the hard problems
by then. The only potential problem is that, working in complete isolation,
he might stagnate at some point. But this never needs to be a problem for
the AI, since it could have dozens (if not more) of scientists to discuss
ideas with.
The big hurdle is getting a general AI that is roughly equivalent to a
single human scientist implemented and working. If we can do that, I
believe the Singularity will be inevitable.
James Higgins