From: Brian Atkins (brian@posthuman.com)
Date: Tue Jul 31 2001 - 01:04:49 MDT
Gordon Worley wrote:
>
> At 6:05 PM -0400 7/30/01, Carl Feynman wrote:
> >Let's be optimistic and say that Webmind had an AI with a capacity of
> >0.5 brains. It will take Moore's Law about 16 years to upgrade their
> >machine to 1.3 kilobrains. If we assume that the rate of progress in AI
> >algorithms (doubling every two years) continues, and that the AI field
> >is working on the right problems, the time is decreased to about ten
> >years. Still pretty long.
>
> Um, I'm not sure how you got 16 years, but the Moore's Law analysis
> can't be right. Just by doubling the speed of a 0.5 brains AI, all
> it means is that a moron will have twice as many moronic thoughts in
> a year. Moore's Law alone can't get us to transhuman AI, but it does
> help once we have at least a 1 brain AI. When the AI reaches human
> level intelligence, I'd assume it can think things humans can, like
> how to write better algorithms, at which point Moore's Law will
> matter.
>
> I think you're mixing up speed increases with getting smarter.
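(Aside: since Gordon asked where the 16 years came from, here is a
quick back-of-the-envelope in Python. Going from 0.5 brains to 1.3
kilobrains is a factor of 2600, about 11.3 doublings; the 18-month
Moore's Law doubling period is my assumption, the 2-year algorithm
doubling is from Carl's post:)

    import math

    doublings = math.log2(1300 / 0.5)      # 0.5 brains -> 1.3 kilobrains: ~11.3
    moore = 1.5                            # assumed years per hardware doubling
    algo = 2.0                             # years per algorithm doubling (Carl's figure)

    print(doublings * moore)               # hardware alone: ~17 years (Carl said ~16)
    print(doublings / (1/moore + 1/algo))  # hardware + algorithms: ~9.7 years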
Think about this: brain size didn't change much between apes,
Neanderthals (or whatever preceded humans), and us, yet the increase
in what we call intelligence was rather huge. So what if, rather than
using a doubling in computing power to run the mind twice as fast, we
instead used it to double the mind's size, leading to a huge increase
(probably much more than a doubling) in its complexity/intelligence/
creativity?
So to recap, we seem to have evolutionary evidence that intelligence
scales much faster than linearly (perhaps even exponentially) with
available computing power, and therefore the idea of comparing "AI
years" to human work-years is probably not going to work. Likewise,
the evolutionary evidence makes it quite possible that a 1.3-kilobrain
AI would not simply be equal to 1300 humans working on stuff. It
might very well be hugely more intelligent than any human, allowing
it to easily spot all the wrong research paths.
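(To make the linear-vs-superlinear distinction concrete, a toy sketch;
the 1.5 exponent is a pure assumption, picked only to show the shape
of the claim, not a real scaling law:)

    # Toy comparison only: what does 1300 brains' worth of hardware buy
    # under two scaling assumptions? The 1.5 exponent is made up.
    def linear(c):
        return c            # 1300 brains = 1300 human-equivalents

    def superlinear(c):
        return c ** 1.5     # assumed faster-than-linear scaling

    print(linear(1300))       # 1300 humans working on stuff
    print(superlinear(1300))  # ~46872: a qualitatively different mind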
Of course, this depends on your AI design being able to scale its
intelligence as processing power is added. If it simply runs faster,
or runs more copies of itself, that is not going to be anywhere near
as good.
P.S. I basically plagiarized all this from our "Seed AI" webpage.
P.P.S. I just did a little Googling on Einstein's brain. Apparently
it was actually a bit smaller than average, yet a certain area of it
relating to visualization/mathematics was 15% larger than average. So
if that is indeed what made him special, you can see how a very small
increase in available computing resources might vastly increase
intelligence/abilities.
--
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.intelligence.org/