Probabilistic Philosophy of Mind

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Jan 15 2005 - 08:33:14 MST


Stephen,

This long email is a (lightly edited) excerpt from one of the
long-in-progress books on my Novamente AI system and the ideas underlying
it.

It outlines a philosophy of mind that has probability theory at the
foundation, and explains how this philosophy of mind relates to our
probability-theory-based AI design.

First of all: There are many different ways to formulate the "psynet model
of mind", the philosophy of mind that lies at the foundation of the
Novamente design. One formulation that has particular resonance with the
AI design details is what we call the "probabilistic psynet model." This is
a way of expressing and verbalizing the psynet model that introduces
probabilistic notions from the very start - convenient because Novamente's
core AI techniques are probabilistically based.

The key principles of the psynet model of mind, expressed in
probability-theory-friendly form, are as follows (of course, a few
principles could be added or omitted without changing the essence of the
formulation - the psynet model is a theory with fuzzy boundaries):

1) An intelligent system is a system that can achieve complex goals in
complex environments.

2) A mind is the set of patterns in, or associated with, an intelligent
system.

3) Intelligence is a matter of (implicitly or explicitly) finding procedures
that, if executed, have a high probability of leading to goal achievement in
the observed context.

4) A pattern is defined probabilistically (f is a pattern in X if f
produces X and the reference simple-entity-generator is more likely to
produce f than to produce X; a minimal computational sketch of this
definition is given just after this list).

5) In order to achieve intelligence, an intelligent system must execute a
variety of procedures in a variety of contexts, and then create abstractions
allowing it to generate new procedures that will hopefully be effective in
new contexts. These "abstractions" are patterns relating procedures and
goal-achievement.

6) An intelligent system must have a way to take patterns and generate new
ones from them. As patterns are defined probabilistically, this takes the
form (implicitly or explicitly) of probabilistic inference.

7) Patterns directly relating procedures and goal-achievement spawn other
patterns (patterns between patterns). These patterns tend to be organized
in a "dual network" structure, in which a hierarchical structure and a
heterarchical (associational) structure overlap.

8) An intelligent system will tend to construct a probabilistic model of
itself, which is represented by a subnetwork of the dual network that may be
called the "self" of the system

9) Given finite resources devoted to pattern learning, an intelligent system
must choose which aspects of its world and itself to pay more attention to.
This is an example of goal-directed learning, in that the system must learn
which attention-allocation procedures, overall, tend to lead to better goal
achievement. This learning is a process of pattern recognition, similar in
nature to the other probabilistic pattern recognition processes that the
mind carries out.

10) Emergence occurs spontaneously in pattern-networks: i.e., often, when
two patterns that were difficult to find are placed together in the same
memory, new patterns spanning the two difficult-to-find patterns become
relatively easy to find - such spanning patterns arise "emergently."

11) Linguistic communication involves minds transmitting abstract patterns
to one another via the use of symbols (which are themselves a particular
type of pattern). This allows minds to acquire very complex
pattern-networks without learning all the patterns themselves, and it
encourages the creation of patterns spanning multiple individual minds, in
effect binding multiple minds into an overall "mindplex."
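
To make principle 4 concrete, here is a minimal Python sketch of the
pattern definition, using raw description length as a crude stand-in
for the reference simple-entity-generator (so "more likely to produce
f" becomes "f has a shorter description"). The names and the toy
"program" are illustrative assumptions, not anything from the
Novamente codebase:

    def simplicity(entity: bytes) -> float:
        # Crude proxy for the reference simple-entity-generator:
        # shorter descriptions stand in for "more likely to be produced".
        return 1.0 / len(entity)

    def is_pattern(f_code: bytes, produce, x: bytes) -> bool:
        # f is a pattern in x if executing f yields x, and f is
        # "more likely" (here: shorter) than x itself.
        return produce(f_code) == x and simplicity(f_code) > simplicity(x)

    # Toy example: an 11-byte program standing in for 2000 bytes of data.
    x = b"ab" * 1000
    f_code = b"'ab' * 1000"
    produce = lambda code: eval(code).encode()  # "run" the tiny program
    print(is_pattern(f_code, produce, x))       # True: f is a pattern in x

The same definition carries over unchanged when the generator is a
genuine probabilistic model rather than a simple length count.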

Does such a probabilistically-expressed theory of mind imply that every mind
must explicitly make use of probability theory formulas? Of course not.
Clearly, it's possible to create AI architectures that are explicitly
probabilistic, and others that are not founded on probabilistic notions at
all, yet (similarly to the human brain) behave overall in accordance with
approximate probabilistic calculations. The human brain is clearly
implicitly but not explicitly probabilistic; Novamente is explicitly
probabilistic, but is far from the only explicitly probabilistic AI design.

Now, how does Novamente embody these general probabilistic/philosophical
principles? A very quick summary (probably only partially comprehensible at
this stage - the list should be reread after the rest of the book has been
digested!) is given here.

In essence, Novamente consists of a large pool of patterns observed in the
world and itself, represented in terms of a special hypergraph-based
formalism that involves various types of nodes and links. Some nodes and
links represent probabilistic patterns directly, others represent the
interaction of other nodes and links, thus allowing complex patterns to be
built up from multiple nodes and links. New nodes and links are learned by
a variety of processes, all of which make use of two basic tools:
probabilistic term logic and the Bayesian Optimization Algorithm.
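
To give a concrete flavor of this formalism (a toy sketch only: the
field names here are simplified assumptions, and real Novamente Atoms
carry considerably more structure), the core bookkeeping is a typed
node/link store in which every element carries a probabilistic truth
value, roughly a strength (a probability estimate) plus a confidence
(how much evidence backs it):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TruthValue:
        strength: float     # estimated probability of the pattern holding
        confidence: float   # how much evidence backs that estimate

    @dataclass
    class Atom:
        atom_type: str      # e.g. "ConceptNode" or "InheritanceLink"
        name: str = ""
        outgoing: List["Atom"] = field(default_factory=list)
        tv: TruthValue = field(default_factory=lambda: TruthValue(0.5, 0.0))

    # A link is just an Atom whose outgoing list targets other Atoms, so
    # complex patterns can be built up by linking links to links.
    cat = Atom("ConceptNode", "cat")
    animal = Atom("ConceptNode", "animal")
    cats_are_animals = Atom("InheritanceLink", outgoing=[cat, animal],
                            tv=TruthValue(0.95, 0.8))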

More explicitly, what this boils down to is:

1) Procedures are represented as objects called "combinator trees", built
from a special vocabulary defined by the Combo programming language.
Combinator trees contain special nodes and links representing patterns and
procedures.

2) The environment of the system is represented via perceptual nodes and
links, which are interpreted as probabilistic patterns in the environment.

3) The internal state of the system is represented via feeling-nodes and
associated links, which are interpreted as probabilistic patterns in the
system itself.

4) Goals are represented as procedures (i.e. as predicates whose output is
true to the degree that the goal-state they embody is satisfied).

5) Patterns may be represented explicitly as procedures (i.e. combinator
trees wrapped in PredicateNodes).

6) In simple cases, patterns may also be represented as Node or Link objects
representing conditional probabilities or Boolean combinations thereof.

7) Learning of patterns is carried out via two methods (a toy numerical
example in the spirit of 7a is sketched just after this list):
a. PTL, which performs probabilistic inference on Node and Link objects
b. BOA, which is specialized to efficiently recognize simple patterns among
general procedures

8) Attention allocation is carried out using BOA and PTL applied to the
problem of learning what it is valuable to pay attention to in which
contexts.
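
As a taste of what 7a means in practice, here is a toy deduction step
in Python. The formula below is a simplified independence-based rule,
an assumption made for illustration rather than the exact Novamente
formula; among other things, real PTL propagates confidence values as
well as strengths:

    def ptl_deduction(sAB: float, sBC: float, sB: float, sC: float) -> float:
        # Estimate P(C|A) from P(B|A), P(C|B) and the term probabilities
        # P(B) and P(C), assuming independence.  The first term covers
        # the case where A leads through B; the second, the case where
        # it does not.
        return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

    # Toy example: cat->mammal (0.9), mammal->animal (0.95),
    # with base rates P(mammal) = 0.1 and P(animal) = 0.2.
    print(ptl_deduction(0.9, 0.95, 0.1, 0.2))   # ~0.867
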
Note that points 1-5 are highly general statements about representation.
They simply state that we're using a certain general mathematical formalism
to represent procedures, patterns and goals, and that patterns may involve
tokens that represent percepts or feelings. Of course, there are already
major choices being made at this level: to use combinator trees instead of
e.g. neural networks; to use the same representational approach for patterns
and procedures.
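
For illustration, here is roughly what points 1 and 4 amount to, as a
Python sketch rather than actual Combo syntax; the operator names
(sense_food and so on) are invented for the example:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ComboNode:
        op: str                       # operator or terminal symbol
        children: List["ComboNode"] = field(default_factory=list)

    # A toy procedure tree: "if food is sensed, grab it, else wander".
    procedure = ComboNode("if", [
        ComboNode("sense_food"),
        ComboNode("grab"),
        ComboNode("wander"),
    ])

    # A goal is a procedure of the same kind, one whose root evaluates
    # to a truth value measuring how fully the goal-state is satisfied.
    goal = ComboNode("holding_food")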

Point 6 represents another major choice: basically, to bias the system
toward the recognition of patterns compactly representable in terms of
probabilistic term logic.

Point 7a identifies a specialized learning method (PTL) that works on
special, simple patterns and is relatively inexpensive. Point 7b identifies
a generic learning method (BOA) that can be used on any procedures, but is
fairly resource-intensive. Finally, 8 states that attention allocation will
be handled using these same methods (rather than some other heuristic
technique).

As noted above, there are many other choices that could have been made -
Novamente is certainly not the only approach to AGI consistent with the
probabilistic variant of the psynet model, let alone the only valid approach
to AGI. All we claim is that it seems to be a valid approach to AGI, and
one that we have managed to spell out in great detail, and are partway
through implementing and tuning.

Unfortunately, since there is hardly any funding oriented toward AGI, the
work is going very slowly, and we are spending most of our time working on
various short-term narrow-AI applications of the partially-complete
Novamente codebase. But we'll get there....

-- Ben Goertzel

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Stephen
> Tattum
> Sent: Saturday, January 15, 2005 6:21 AM
> To: sl4@sl4.org
> Subject: Fuzzy vs Probability
>
>
> I was looking over the Singularity Institute page on becoming a seed AI
> Programmer the other day and I couldn't help but feel that there is an
> overwhelming bias towards Bayesian reasoning, and I have noticed that a
> lot of contributors to sl4 hail this as all-powerful - should they?
> Check out this paper by Bart Kosko (clearly a 'brilliant' individual)
> and his other work -
>
> http://sipi.usc.edu/~kosko/ProbabilityMonopoly.pdf
> http://sipi.usc.edu/~kosko/
>
> I couldn't help noticing also that generally there are gaps in the
> plan. As a philosopher I saw the omission of any philosophy of mind -
> crucial to any AI discussions and for any 'deep understanding' of the
> issues actually outlined - strange... I have witnessed in the past
> prejudice against philosophy and philosophers here too (apology already
> accepted of course) and I wondered if the project of creating AI is
> being pushed forward before it is ready. Now I believe that the
> singularity is inevitable and I am not suggesting that the institute is
> wrong, just that creating an Artificial General Intelligence needs more
> emphasis on the general. Any thoughts?
>
>


