RE: PAPER: Theory of Universal AI based on Algorithmic Complexity

From: Ben Goertzel (ben@webmind.com)
Date: Mon Apr 16 2001 - 05:24:29 MDT


> > The theory of Universal Intelligence isn't so
> > valuable because it is a
> > solution to the problem of AI (although it does give
> > it an excellent
> > mathematical basis), rather it is valuable because
> > it gives us specific
> > implementation problems to solve, that when solved,
> > should theoretically
> > result in a functional AI. Being able to know what
> > needs to be done is a
> > big step in the right direction.
>
> This, I don't understand. What are the implementation
> problems it gives us to solve?

I echo Mitch's skepticism.

At the end of the paper, after lots of nice math, the author suggests that
in order to get a realistic AI to work, one will have to introduce lots of
specialized algorithms to deal with the particular types of learning
problems the mind confronts. But he doesn't go into details.

With this small comment at the end, he's ~starting~ -- just barely -- to
veer toward the really interesting part of AI. The next thing to observe is
that you need a bunch of specialized methods, all able to learn from each
other rather than working against each other. This is what I've called
"emergent intelligence."

As for the mind having a Universal Intelligence core, I believe that it
really works like this. Among the many specialized modules in the mind,
there are some (more than one) with universal intelligence ability, as well
as some without it. Since universal intelligence is only definable up to an
arbitrary constant, it's of at best ~heuristic~ value in thinking about the
construction of real AI systems. In reality, different universally
intelligent modules may be practically applicable to different types of
problems.
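
(To spell out the "arbitrary constant" point, in my own gloss rather than
the paper's notation: by the invariance theorem, for any two universal
machines U and V there is a constant c_{U,V}, independent of x, such that

    K_U(x) <= K_V(x) + c_{U,V} for all x,

where K_M denotes algorithmic complexity relative to machine M. The
constant washes out asymptotically, but it can be large enough to swamp any
problem of realistic size, which is why I say the definition is of at best
heuristic value for engineering.)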

Examples of universally intelligent modules in Webmind are: the evolution
module (implementing a type of genetic programming) and the reason module
(implementing a kind of first-order & higher-order term logic). [These are
just two examples; there's at least one more.] Both of these modules, like
the others with more limited scope (natural language, data processing,
association-finding, etc.), act on the same data structure (something called
a Relationship, of which semantic nodes and links are special cases).
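
To make the shared-representation idea concrete, here is a minimal sketch
in Python, with invented names -- this is not actual Webmind code, which is
organized quite differently:

    class Relationship:
        """A typed link of some strength between two mental entities;
        semantic nodes and links are special cases."""
        def __init__(self, rel_type, source, target, strength):
            self.rel_type = rel_type
            self.source = source
            self.target = target
            self.strength = strength

    class Module:
        """Every specialized module reads and writes the same store of
        Relationships, so each can build on what the others learn."""
        def process(self, store):   # store: a collection of Relationships
            raise NotImplementedError

    class EvolutionModule(Module):
        def process(self, store):
            # evolve programs scored against the store, writing useful
            # discoveries back as new Relationships
            ...

    class ReasonModule(Module):
        def process(self, store):
            # derive new Relationships by uncertain term-logic inference
            ...

The shared structure is what lets the modules learn from each other rather
than work against each other, per the "emergent intelligence" point above.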

> Now it looks as if the UI-core architecture
> is just too slow to plausibly be an SI architecture.
>

I think that the space of UI algorithms is pretty large.

For instance, UI algorithms based on exhaustively or randomly searching the
space of all programs satisfying some criterion are, of course, too slow to
be usable.
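
(A toy illustration of the blowup, in Python -- nothing from the paper,
just the arithmetic. There are about 2^(n+1) bit-string programs of length
up to n, so a loop like this is hopeless for any n large enough to encode
interesting behavior:)

    from itertools import product

    def exhaustive_search(satisfies, max_len):
        # Try every bit-string "program" up to max_len, returning the
        # first that meets the criterion -- ~2^(max_len+1) candidates.
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                program = "".join(bits)
                if satisfies(program):
                    return program
        return None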

On the other hand, using GP to search a space of programs, while no better
than exhaustive search in the worst case, is in practice generally much
better. (For some types of problems it's a good approach, for others not,
but I doubt there are any realistic problems where it's worse than
exhaustive or Monte Carlo search.)
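
(Schematically, and with everything problem-specific left abstract, the
kind of GP loop I mean -- a bare-bones sketch, not our actual evolution
module, which adds a good deal more machinery:)

    import random

    def gp_search(random_program, crossover, mutate, fitness,
                  pop_size=100, generations=50):
        # Keep a population instead of enumerating program space;
        # selection pressure biases the search toward fit regions.
        population = [random_program() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                children.append(mutate(crossover(a, b)))
            population = parents + children
        return max(population, key=fitness)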

As with GP, something similar can be said about higher-order inference
using uncertain term logic, which can leverage past knowledge to help
create programs solving future problems, in a way that is highly efficient
in certain domains.
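
(To give the flavor -- a cartoon of one uncertain-deduction rule, not the
actual truth-value formulas, which are considerably more involved:)

    def deduce(s_ab, n_ab, s_bc, n_bc):
        # From "A implies B" and "B implies C", each with a strength in
        # [0,1] and an evidence count, infer "A implies C": strengths
        # multiply along the chain, and evidence for the conclusion is
        # discounted below that of the weaker premise.
        s_ac = s_ab * s_bc
        n_ac = min(n_ab, n_bc) * s_ac
        return s_ac, n_ac

Past knowledge enters as a store of such weighted implications, which
inference can compose toward a new problem instead of searching blindly.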

One might say that the UI concept is valuable in a theoretical sense: It
helps us clarify the nature of the problem of intelligence. On the other
hand, I have found that what it mostly does is to make explicit what those
of us with a strong AI predisposition already feel. In many discussions
with AI skeptics, I have not found the UI arguments to help me to convince
them that AI is possible. They'll reject the mathematical definition of
intelligence underlying the UI approach, just as surely as they'll reject
the statement "AI is possible."

ben


