**From:** Ben Goertzel (*ben@goertzel.org*)

**Date:** Tue Mar 05 2002 - 23:15:46 MST

Hi all,

I mentioned a week ago that I had an idea for how to mathematically formalize the different levels of self-modifiability that I had informally presented.

Well, I haven't had time to do a totally rigorous mathematical treatment, and I don't know when I'll have that kind of time to devote to something that isn't Novamente engineering.

However, I have written up a semi-mathematical paper, the last section of which gives a formalization of the self-modifiability levels. The previous parts of the paper give formal definitions for things like "intelligence" and "mind", some of which are needed for the self-modifiability definitions.

I also describe herein one "AI proof of principle" -- a design for an insanely inefficient AI system that would be intelligent if it could be run on a powerful enough computer. This idea is close in spirit to the theory of Solomonoff Induction.
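
To give a rough feel for what a Solomonoff-style predictor does -- and this is just a toy sketch in the same spirit, NOT the construction from the paper -- the idea is to weight every program consistent with your observations by 2^-(program length) and predict by weighted vote. Real Solomonoff induction ranges over all programs for a universal machine and is uncomputable, so the Python snippet below shrinks the program space down to a tiny class of periodic bit-sequence generators (the generator class and all names here are my illustration, nothing more):

```python
from itertools import product

# Toy Solomonoff-style induction. The universal program space is replaced
# by a tiny enumerable class of "programs": repeating bit patterns.

def generators(max_period=4):
    """Enumerate toy programs: repeating bit patterns of length 1..max_period.
    A pattern's description length is just its length in bits."""
    for period in range(1, max_period + 1):
        for bits in product("01", repeat=period):
            yield "".join(bits)

def predict_next(history, max_period=4):
    """Weight each generator consistent with `history` by 2^-length
    (shorter programs get more prior weight), then return the weighted
    probability that the next bit is '1'."""
    total = weight_one = 0.0
    for prog in generators(max_period):
        produced = (prog * (len(history) // len(prog) + 2))[:len(history) + 1]
        if produced[:len(history)] == history:   # program reproduces the data
            w = 2.0 ** -len(prog)                # a priori weight
            total += w
            if produced[len(history)] == "1":
                weight_one += w
    return weight_one / total if total else 0.5  # no consistent program: ignorance

print(predict_next("010101"))  # 0.0 -- every consistent generator predicts '0' next
```

The real thing replaces "repeating bit patterns" with "all programs", which is exactly what makes it insanely inefficient (indeed uncomputable) yet intelligent in principle.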

Please note, this little article is NOT directly related to the Novamente AI system. It gives some formalizations that have been useful to me in thinking about Novamente, but only in a very general sense. The "AI proof of principle" described in the article has very little to do with Novamente, which is tremendously more complex and more efficient (it's a real AI design, not a proof of principle).

As an aside, however: the formal definition of pattern given in the paper *is* used in Novamente in a couple of places (the system tries to evolve/infer schema and compound relationships that are "patterns" in the formal sense given here).
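
Roughly speaking, the definition says a pattern in an entity X is a process that produces X while being simpler than X. Here's a toy rendering of that idea -- the byte-length complexity measure and all the names below are my illustration, not the paper's notation:

```python
# Toy version of the "pattern" idea: a process is a pattern in X if it
# produces X and its description is simpler than X. Byte length is used
# as a crude stand-in for the complexity measures the paper works with.

def is_pattern(process_repr, produce, x):
    """process_repr: textual description of the process (its 'size');
    produce: callable that runs the process; x: the entity in question."""
    return produce() == x and len(process_repr) < len(x)

x = "ab" * 50                       # 100 characters, obviously structured
proc_repr = "lambda: 'ab' * 50"     # a 17-character description of a generator
print(is_pattern(proc_repr, lambda: "ab" * 50, x))       # True: a pattern in x

random_x = "qjwxzkvbpmdgthrnlscf"   # no shorter generator comes to mind
print(is_pattern(random_x, lambda: random_x, random_x))  # False: not simpler
```

One could also grade a pattern's intensity by how much simpler the process is, e.g. 1 - len(process_repr)/len(x).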

Constructive feedback will be much appreciated.

I know the definitions given here are not at all practical to compute. They are also sufficiently complex that proving anything nontrivial about them would be a huge pain. So I think their main role is conceptual.

The paper is at www.goertzel.org/dynapsyc/2002/FormalTheoryIntelligence.htm

I tried to proofread all the equations, but I tend to be poor at this, so it's possible there are a couple of notational slip-ups in here.

-- Ben
