obstacles to unbounded intelligence

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jan 26 2002 - 18:13:16 MST


Hi all,

I was explaining some Singularity stuff to my kids this morning, and my
12-year-old said "Yes, but what if after a machine gets ten times as
smart as people, IT figures out a reason why it's incredibly hard to make
machines any smarter than that... a reason that we can't understand because
we're too dumb."

Coincidentally, last night I was doing some simple calculations
regarding the time complexity of learning computer programs and I
obtained some results that sort of point in this direction. Of
course, it's all very heuristic and inconclusive, but it's food for
thought...

The question I was addressing was the *expected time complexity* of
*learning programs expressed as binary function trees* via focused
Monte Carlo search or genetic programming (which shares some
properties with focused Monte Carlo search).

Roughly speaking, the time complexity of these program learning methods
is logarithmic in the size of the search space.

On the other hand, the number of program trees on n nodes is roughly

        K^n C(n)

where K is the number of functions that can occur at the nodes, and C(n)
is the n'th Catalan number.
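
To make the count concrete, here is a quick Python sketch that
tabulates K^n C(n) for small n (the value K = 10 is an arbitrary
illustrative choice, not a number from my notes):

    # Number of binary function trees on n internal nodes: K^n * C(n),
    # where C(n) is the n'th Catalan number and K is the number of
    # functions that can occur at a node (K = 10 here, purely for
    # illustration).
    from math import comb

    def catalan(n):
        # C(n) = (2n choose n) / (n + 1)
        return comb(2 * n, n) // (n + 1)

    K = 10
    for n in range(1, 11):
        size = K**n * catalan(n)
        print(f"n = {n:2d}   search space size ~ {size:.4e}")

Even at these tiny sizes the raw count blows up fast, which is why
it's the logarithm of the search space size that matters for the time
complexity.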

For small n, C(n) behaves roughly like c^n, which means that GP or focused
Monte Carlo type methods are going to have a time complexity *linear* in
n (linear in the program size).
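
To spell out the arithmetic behind that: with C(n) ~ c^n, the search
space holds roughly K^n c^n programs, so a method whose running time
is logarithmic in the size of the search space needs time on the
order of

        log( K^n c^n )  =  n log K + n log c

which grows linearly in n.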

But when n gets big enough (say, n > 100 trillion according to my
calculations), C(n) behaves roughly like c^(n^2), which means that
these learning methods are going to have a time complexity
*quadratic* in n.
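
Taking that large-n estimate C(n) ~ c^(n^2) at face value, the same
logarithm becomes

        log( K^n c^(n^2) )  =  n log K + n^2 log c

so the n^2 term dominates and the learning time goes quadratic in n.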

It's interesting how big this number is. The human brain has around a
trillion neurons.... Of course, the mapping from neurons into
function nodes in program trees is incredibly rough, but the order of
magnitude agreement is still interesting, especially since, in a
sense, the brain was learned by a kind of "genetic programming."

As I said, this little narrow calculation is in no way conclusive
about anything general. It's not even directly relevant to my own AI
work, since both Webmind and Novamente use evolutionary methods only
to create *small* program trees, which are then pieced together using
other techniques (association-finding, probabilistic inference). But
I share it because it's a concrete example of a general *type* of
result that we've been vaguely talking about on this list for a
while: results showing that after a brain/mind reaches a certain
*very large* size, learning and adapting it becomes significantly
more difficult.

-- Ben G

Last night I was doing some simple calculations on the time
complexity of learning computer programs, and came across something
that reminded me of all this Singularity talk...

The conclusion I found is that the learning of program trees by Monte
Carlo or evolutionary type methods begins to get *radically more
difficult* when the program trees exceed around 10^14 nodes in
size.... The reason is that when the number of nodes N exceeds around
10^14, the number of possible program trees begins to grow like
e^(N^2) rather than just e^N.

The attached file contains my calculations, which are really just rough
notes, not written for comprehensibility or expository purposes...

What this indicates is that there *may* be tricky mathematical issues
that arise when one tries to make programs vastly exceeding the human
brain in intelligence. The human brain has about 10^12 neurons, which
is interesting in the light of this calculation....

Of course I'm not claiming to have shown anything with any
definitiveness here... just more food for thought...

-- ben g


