RE: PAPER: Theory of Universal AI based on Algorithmic Complexity

From: Ben Goertzel
Date: Mon Apr 16 2001 - 05:40:56 MDT

Your partitioning algorithm sounds very interesting.

Do you have it written up somewhere?


> -----Original Message-----
> From: [] On Behalf Of James Rogers
> Sent: Monday, April 16, 2001 12:12 AM
> To:
> Subject: Re: PAPER: Theory of Universal AI based on Algorithmic
> Complexity
> On 4/15/01 5:07 PM, "Mitchell Porter" <> wrote:
> > The main problem with his optimal AI is that
> > in its idealized form, it is noncomputable,
> > and even in its resource-bounded form it
> > still scales horribly (see the second comment under
> > "Outlook"). This is because it's meant to
> > deal with every possible environment, so it
> > runs afoul of no-free-lunch theorems. What
> > interests me is whether these concepts can
> > illuminate 'specialized AI' that works with
> > domain-specific representations and algorithms.
> I should have mentioned that I have never tried to apply these
> constructs on a global basis, as there are obvious problems with
> computational intractability. I had already developed an interesting
> adaptive partitioning algorithm (something completely outside that
> paper) that allows the problems to be computationally tractable in
> implementation.
> Human brains appear to partition from a blank substrate at birth into
> a cluster of specialized partitions created as a consequence of the
> environment they are subjected to. You create domains as needed on
> the fly, adding depth of specialization where necessary. This is a
> rather conservative approach to learning (learning only what the
> environment forces on you), but it creates the type of specialization
> that makes intelligence feasible, and it creates areas of
> intelligence that have proven utility to the human/AI. I don't know
> anybody who is a universal intelligence; humans seem to be large
> collections of specializations of varying depth, which, judging from
> the consequences of the mathematics in question here, grants a strong
> evolutionary advantage over being a Universal Intelligence (which
> *can't* specialize significantly for most intents and purposes).
> My history on this is kind of backwards. I developed an interesting and
> unusual adaptive partitioning algorithm for some data mining research, and
> while doing related research, tripped across some interesting papers on
> Kolmogorov complexity and universal predictor functions. I immediately
> noticed the potential relationship between algorithmic complexity and AI,
> and after further research, realized that I also had an algorithm that
> allowed the AI model I had developed to become tractable.
> Basically, I took a Universal Intelligence model and derived a
> functionally equivalent specialization model that is computationally
> constrained by experience/environment.
> The theory of Universal Intelligence isn't so valuable because it is
> a solution to the problem of AI (although it does give it an
> excellent mathematical basis); rather, it is valuable because it
> gives us specific implementation problems to solve that, when solved,
> should theoretically result in a functional AI. Knowing what needs to
> be done is a big step in the right direction.
> Cheers,
> -James Rogers
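
[Editor's note: the partitioning algorithm James describes — spawning
specialized domains on demand and deepening them only where the
environment forces it — is never written up in this thread. The
following is purely an illustrative sketch of that general idea, not
his algorithm: a toy partition tree over a 1-D input space whose
regions split only when their local prediction error stays high.]

```python
# Toy sketch of "conservative" adaptive partitioning (assumption: the
# thread gives no details, so everything here is illustrative). Each
# region keeps a trivial running-mean predictor; a region specializes
# (splits in two) only when experience shows its model is poor.

class Partition:
    def __init__(self, lo, hi, depth=0):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.n = 0            # samples seen in this region
        self.mean = 0.0       # running-mean prediction for this region
        self.err = 0.0        # accumulated absolute prediction error
        self.children = None  # None while this region is a leaf

    def leaf_for(self, x):
        """Descend to the most specialized region containing x."""
        if self.children is None:
            return self
        mid = (self.lo + self.hi) / 2
        return self.children[x >= mid].leaf_for(x)  # bool indexes 0/1

    def observe(self, x, y, err_threshold=0.5, min_samples=8, max_depth=6):
        leaf = self.leaf_for(x)
        leaf.err += abs(y - leaf.mean)
        leaf.n += 1
        leaf.mean += (y - leaf.mean) / leaf.n
        # Specialize only when the environment forces it: enough data,
        # persistently high error, and depth budget remaining.
        if (leaf.n >= min_samples
                and leaf.err / leaf.n > err_threshold
                and leaf.depth < max_depth):
            mid = (leaf.lo + leaf.hi) / 2
            leaf.children = (Partition(leaf.lo, mid, leaf.depth + 1),
                             Partition(mid, leaf.hi, leaf.depth + 1))

    def predict(self, x):
        return self.leaf_for(x).mean


root = Partition(0.0, 1.0)
# A discontinuous target: 0 on [0, 0.5), 2 on [0.5, 1). A single global
# mean fails here, so the root is driven to split near the boundary.
for i in range(2000):
    x = (i % 100) / 100.0
    y = 0.0 if x < 0.5 else 2.0
    root.observe(x, y)
```

After training, `root.children` is populated and each half of the
input space predicts its own constant: depth was added only where the
single global model demonstrably failed, which is the "learn only what
the environment forces on you" behavior described above.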

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT