Detailed Explanation re: PAPER: Theory of Universal AI based on Algorithmic Complexity

From: James Rogers (jamesr@best.com)
Date: Tue Apr 17 2001 - 19:00:15 MDT


Well, semi-detailed perhaps. I'm short on time as always. I am not so
interested in the guy's implementation ideas as I am in the mathematics, at
least insofar as the mathematics provides something resembling a foundation
for AI -- it serves as a basis for a hard discussion of limits and
possibilities.

The non-computability aspect of the general UI model is overplayed and not
particularly relevant. Resource-bounded UI variants are provably optimal
within the given constraints (it isn't clear to what extent the author is
aware of work in this area) and demonstrably very doable on modern
silicon. Since all practical implementations will be resource bounded, I
don't see a problem with this as long as you approach it as such.
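
To make that concrete, here is a toy Levin-style search in Python.
Everything in it is invented for illustration (the instruction set, the
budget schedule, the target); it is not the author's construction, just the
flavor of a resource-bounded variant: shorter programs get exponentially
more run time, and the search stops at the first program that reproduces
the data.

import itertools

OPS = "+-*o"  # toy instruction set: increment, decrement, double, output

def run(program, max_steps):
    # Interpret a toy program under a hard step budget.
    acc, out, steps = 0, [], 0
    for op in program:
        steps += 1
        if steps > max_steps:
            return None  # budget exhausted; treat as non-halting
        if op == "+":
            acc += 1
        elif op == "-":
            acc -= 1
        elif op == "*":
            acc *= 2
        else:
            out.append(acc)
    return out

def levin_search(target, max_phase=12):
    # Phase t spends roughly 2**t steps in total, so a program of
    # length l gets about 2**(t - l) steps: short hypotheses run longest.
    for t in range(1, max_phase):
        for l in range(1, t + 1):
            budget = 2 ** (t - l)
            for program in itertools.product(OPS, repeat=l):
                if run(program, budget) == target:
                    return "".join(program)
    return None

print(levin_search([1, 2, 4]))  # prints a short generator such as "+o+o*o"

The budget schedule is the whole trick: it keeps every phase bounded while
still covering the program space, and the same scheduling idea is what
makes bounded variants optimal up to a multiplicative constant.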

At 07:24 AM 4/16/2001 -0400, Ben Goertzel wrote:
>At the end of the paper, after lots of nice math, the author suggests that
>in order to get a realistic AI to work, one will have to introduce lots of
>specialized algorithms to deal with different particular types of learning
>confronted by the mind. But he doesn't go into details.

His math is good, but I don't feel that his implementation ideas are very
mature. Which is fine; I was only really referring to the math. He comes
up short in a few areas due to lack of breadth, but I think it was meant to
be an overview-ish paper of a specific aspect anyway. The biggest
shortcoming is that he doesn't seem to be very knowledgeable on the topic
of bounded models, which, while only peripherally related to his particular
paper, are really pretty important in the context of tractable AI
implementations using UI.

>With this small comment at the end, he's ~starting~ -- just barely -- to
>veer toward the really interesting part of AI. The next step to observe is
>that you need a bunch of specialized methods, all able to learn from each
>other rather than working against each other. This is what I've called
>"emergent intelligence."

I am myself strongly in favor of large-scale specialization like you
describe above (which I go into somewhat below). However, I model it as
vast clusters of bounded, specialized UI components -- more importantly, as
environmentally-driven, self-specializing components (excepting those cases
where you don't want to wait for UI convergence).
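
As a crude illustration of the self-specializing part (a toy of my own
devising, not a formal construction): give a pool of small, bounded
predictors a mixed input stream, and let each observation be claimed, and
learned from, by whichever predictor currently models it best. Which
component specializes in what is decided by the environment, not by the
designer.

import random
from collections import defaultdict

class BoundedPredictor:
    # Deliberately tiny component: Laplace-smoothed symbol counts per context.
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def prob(self, ctx, sym):
        c = self.counts[ctx]
        return (c[sym] + 1) / (sum(c.values()) + 2)

    def update(self, ctx, sym):
        self.counts[ctx][sym] += 1

random.seed(0)
pool = [BoundedPredictor() for _ in range(4)]
stream = [("text", random.choice("ab")) for _ in range(500)] + \
         [("sensor", random.choice("01")) for _ in range(500)]
random.shuffle(stream)

for ctx, sym in stream:
    # Only the current best predictor of an observation gets to learn
    # from it (ties broken randomly), so components drift apart.
    best = max(pool, key=lambda p: (p.prob(ctx, sym), random.random()))
    best.update(ctx, sym)

for i, p in enumerate(pool):
    print("component", i, {c: sum(s.values()) for c, s in p.counts.items()})

The per-component state stays tiny, and which component ends up owning
which piece of the stream is an outcome of the stream itself.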

>As for the mind having a Universal Intelligence core, I believe that it
>really works like this. Among the many specialized modules in the mind,
>there are some (more than one) with universal intelligence ability, as well
>as some without it. Since universal intelligence is only definable up to an
>arbitrary constant, it's of at best ~heuristic~ value in thinking about the
>construction of real AI systems. In reality, different universally
>intelligent modules may be practically applicable to different types of
>problems.

Although one can actually demonstrate that all intelligence can be realized
in a UI core, some specializations are extremely inefficient when done this
way. Numerical computation is a good example of this; silicon does it very
well natively, so I am easily persuaded that an engineered solution is
better than waiting for a useful convergence of a UI core to do the same
(and even then, it will be very, very slow at run-time). The poor number
crunching ability of humans may be related to this.

The upside to this is that one can always use a UI to implement
capabilities that we don't care to implement in a hardware-optimal way.
Alternatively, one could explicitly engineer the relationship between
multiple UIs to partially optimize performance for a particular
specialization.
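
A trivial sketch of that engineered relationship (the module names and the
routing rule are invented for the example; the "generic" module is just a
stand-in for what a converged UI might produce):

import time

def native_mul(a, b):
    # Engineered fast path: silicon already does this natively.
    return a * b

def emulated_mul(a, b):
    # Stand-in for a generic, converged solution: multiplication
    # reconstructed from the one primitive it has (addition).
    total = 0
    for _ in range(b):
        total += a
    return total

def solve(op, a, b, engineered=True):
    # The engineered relationship: route silicon-friendly work to the
    # native module and fall back to the generic one otherwise.
    modules = {"mul": native_mul if engineered else emulated_mul}
    return modules[op](a, b)

for flag in (True, False):
    start = time.perf_counter()
    result = solve("mul", 123456, 1_000_000, engineered=flag)
    print(flag, result, "%.4fs" % (time.perf_counter() - start))

Both modules compute the same function; making the division of labor
explicit is worth several orders of magnitude at run-time even in this toy.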

Personally, I believe that resource-bounded UIs are the fundamental
building blocks of intelligence, with the exact organization of any
particular UI being the consequence of its specialization and
environment. I have strong evidence (though no proof yet -- to my
knowledge, no one has attempted to formally characterize anything like
this) that the re-synthesis of vast clusters of specialized, bounded UIs
optimally approximates the enormous single-UI model used by the author of
the paper. (Note that this is only true if the specialization in question
was self-generated due to environmental stimulus.) The difference is that,
unlike the author's general UI model, convergence is demonstrably
tractable.
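
For flavor, here is the simplest toy in the direction I mean (the expert
classes are invented, and this is nowhere near a formal characterization):
a Bayes mixture over a few small, specialized predictors. The mixture's
cumulative log-loss can exceed that of its best member by at most the log
of the cluster size -- the standard dominance bound -- so the cluster
behaves almost as well as whichever specialist fits the stream.

class ConstantBias:
    # Specialist: predicts 1 with fixed probability p, regardless of history.
    def __init__(self, p):
        self.p = p
    def prob(self, history, sym):
        return self.p if sym == 1 else 1 - self.p

class Alternator:
    # Specialist: expects the stream to flip on every step.
    def prob(self, history, sym):
        if not history:
            return 0.5
        return 0.9 if sym != history[-1] else 0.1

experts = [ConstantBias(0.2), ConstantBias(0.8), Alternator()]
weights = [1.0 / len(experts)] * len(experts)  # uniform prior over the cluster

data = [0, 1, 0, 1, 0, 1, 0, 1]  # an alternating environment
for i, sym in enumerate(data):
    history = data[:i]
    likes = [e.prob(history, sym) for e in experts]
    mix = sum(w * l for w, l in zip(weights, likes))  # mixture prediction
    weights = [w * l / mix for w, l in zip(weights, likes)]  # Bayes update
    print("t=%d  P(sym)=%.3f  weights=%s"
          % (i, mix, ["%.3f" % w for w in weights]))

Scaled up in my head to vast clusters of bounded UIs, that log(cluster
size) overhead is what makes me think the re-synthesis can be a good
approximation -- but again, conjecture, not proof.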

The concepts and mechanisms of adaptive partitioning are potentially too
lengthy to approach in email (there is a fair bit of background to
cover). Maybe if we cross paths I can spend some time on it.

> > Now it looks as if the UI-core architecture
> > is just too slow to plausibly be an SI architecture.
>
>I think that the space of UI algorithms is pretty large.

As a minor nitpick, UI *is* an algorithm (a meta-algorithm?).

If you use UI like the author seems to suggest, it would take a very long
time for you to get useful convergence. However, there are a lot of ways
to use it where you can get convergence in a "reasonable" amount of time,
particularly in cases of small, specialized UIs.

Something else to consider: For tightly bounded problems, non-optimal UI
implementations can actually offer better approximations than optimal ones,
but they don't scale as well with resource availability.
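
A contrived toy of what I mean, with the caveat that it illustrates rather
than proves anything: under a tight evaluation budget, a crude hill-climber
out-approximates an exhaustive enumerator on a small search problem, but
only the enumerator is guaranteed the exact answer once the budget covers
the whole space.

import random

N = 16
random.seed(1)
target = [random.randint(0, 1) for _ in range(N)]

def score(s):
    # Number of bits matching the hidden target (higher is better).
    return sum(a == b for a, b in zip(s, target))

def exhaustive(budget):
    # "Optimal" in the limit: enumerates bitstrings in a fixed order.
    best = 0
    for i in range(min(budget, 2 ** N)):
        s = [(i >> k) & 1 for k in range(N)]
        best = max(best, score(s))
    return best

def hill_climb(budget):
    # Non-optimal: greedy bit flips, no guarantees, fast to get close.
    s = [random.randint(0, 1) for _ in range(N)]
    best = score(s)
    for _ in range(budget - 1):
        k = random.randrange(N)
        t = s[:]
        t[k] ^= 1
        if score(t) > best:
            s, best = t, score(t)
    return best

for budget in (100, 2 ** N):
    print("budget", budget, "exhaustive", exhaustive(budget),
          "hill_climb", hill_climb(budget))

With a budget of 100 the climber is typically at or near the maximum of 16
while the enumerator is still stuck in one corner of the space; at 2**16
the enumerator is exact by construction, which is the kind of scaling the
climber can't promise.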

>One might say that the UI concept is valuable in a theoretical sense: It
>helps us clarify the nature of the problem of intelligence. On the other
>hand, I have found that what it mostly does is to make explicit what those
>of us with a strong AI predisposition already feel. In many discussions
>with AI skeptics, I have not found the UI arguments to help me to convince
>them that AI is possible. They'll reject the mathematical definition of
>intelligence underlying the UI approach, just as surely as they'll reject
>the statement "AI is possible."

I concur; I had made a similar conjecture long before I was able to come up
with a formal mathematical model. The nice thing about having a formal
mathematical basis is that it gives one substantially more confidence that
derivations at least rest on a solid footing.

One of the things I really dislike about AI research in general is that it
is chock full of really shaky conjecture, which makes these things hard to
discuss; they approximate religious doctrine at times. Oh well.

Cheers,

-James Rogers
  jamesr@best.com


