Re: Floppy take-off

From: Jimmy Wales (jwales@bomis.com)
Date: Tue Jul 31 2001 - 16:00:45 MDT


James Higgins wrote:
> To put it another way, if Ben Goertzel had a million years to work on AI
> (without serious distraction like medical/financial problems), how much
> progress do you think he would have made on AI by the end of that
> time? Quite substantial, I'd actually bet that he would have solved all of
> the hard problems by then. The only potential problem is, if he was
> working in complete isolation would he stagnate at some point? But this
> never needs to be a problem for the AI since it could have dozens (if not
> more) scientists to discuss ideas with.

I (loosely) agree with this, but there is an interesting phenomenon
that should be noted. Someone mentioned a gas station operator the
other day, and that's a reasonable example. But keeping in mind that
it's possible to find very bright people almost anywhere whose life
circumstances have led them to a position of that sort, I'll use a
fictional example instead: Homer Simpson.

I've known some dumbasses in my life, as I'm sure you all have unless you have
lived a life of incredible privilege. You could give Homer Simpson one million
years and he couldn't solve the AI problems that Ben Goertzel will solve *this
year*.

But is Ben Goertzel "one million times smarter" than Homer Simpson? Not by any
of the usual measures! Maybe twice as smart, let's say, or ten times as smart.
Doesn't matter. By any conventional measure (IQ, neuron firings, whatever), it
isn't a million times.

But the relatively small *degree* of increase in intelligence from Homer to Ben
gives rise to an extraordinary difference in *kind* of thinking.

I think that this is what Gordon meant. If you have an AI with half of human
intelligence and you just speed it up by a factor of 2, you've now given your
idiot the ability to think up more stupid thoughts more quickly. That doesn't
mean it will make more progress, since it will still be unable to judge
correctly which ideas are right and which are wrong.

Still, I agree with this:
> The big hurdle is getting general AI that is roughly equivalent to a single
> human scientist implemented and working. If we can do that I believe the
> Singularity will be inevitable.

It will be interesting to see if there is a long delay between building our first
"human level dumbass" and our first "human level scientist", keeping in mind that
building even 1 billion human level dumbasses isn't likely to help us much in getting
that scientist built.

--Jimbo

-- 
*************************************************
*            http://www.nupedia.com/            *
*      The Ever Expanding Free Encyclopedia     *
*************************************************