Re: Towards a prototype mini-AI

From: Slawomir Paliwoda (velvethum@hotmail.com)
Date: Wed Jan 29 2003 - 11:51:26 MST


> So - there are five levels: code, modality, concept,
> thought, deliberation. What I'm going to do is just
> informally sketch something which *sounds to me* like
> it has all those levels.

But do we really know how minds work, or are we just going to code
whatever we know
and hope that whatever we create turns out to be a mind?

> Code - no problem here, every program is made of code.

Does any mind use code? It seems to me that the main purpose of code is to
keep a record of the structure, so that the AI will have access to that code
once it knows how to manipulate it.

> Modality - Above all, a modality seems to be a *feature
> extractor*, operating in a specific domain. The input is
> some sort of raw data set, static or dynamic, and the
> output is a representation as a list of features, or
> objects with features. Well, neural nets can do all of
> that.
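
For concreteness, here is a toy sketch of a "modality" in that sense: a
feature extractor mapping raw input to a list of named features. Everything
in it (the function name, the features, the thresholds) is my own invention,
just to make the idea tangible:

    # Toy "visual modality": raw binary image in, feature list out.
    def visual_modality(raw_pixels):
        height = len(raw_pixels)
        width = len(raw_pixels[0]) if height else 0
        filled = sum(cell for row in raw_pixels for cell in row)
        features = []
        if height and filled / (height * width) > 0.5:
            features.append("mostly-filled")
        if height and height == width:
            features.append("square-frame")
        return features

    print(visual_modality([[1, 1], [1, 0]]))  # ['mostly-filled', 'square-frame']

A neural net would learn such features instead of having them hand-written,
but the input/output contract is the same: raw data in, feature inventory out.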

> Concept, thought, deliberation - In LOGI, these are
> described as analogous to word, sentence, stream of
> consciousness. So it seems that *propositional content*
> (think Prolog) might be enough for those top two
> levels -

Okay, but what is the nature of concepts? What will the "code" of a
concept be? They will not be run through traditional compilers, I assume,
will they?

Next, what is the nature of the "computation" that might be performed on
those concepts? What should interaction between concepts look like? If you
load "lightbulb" and "triangle" and want to create "triangular lightbulb"
inside the mental workspace, what will need to happen between the two chunks
of code, and, more importantly, how will it contribute to the intelligence of
the mind? In other words, how would we know that implementing concepts and a
mental workspace creates an intelligent system rather than a sophisticated
representational system?
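
To make the question concrete, the most naive sketch I can imagine treats
concepts as plain feature sets and "blending" as set union plus hand-written
conflict resolution (all the names and the conflict table are invented for
illustration):

    # Naive blend: union of feature sets, with a hard-coded conflict
    # table deciding which shape feature survives.
    lightbulb = {"glass", "glows", "bulb-shaped"}
    triangle = {"three-sided", "flat", "triangular"}

    def blend(head, modifier, conflicts=(("bulb-shaped", "triangular"),)):
        merged = head | modifier
        for loser, winner in conflicts:
            if loser in merged and winner in merged:
                merged.discard(loser)  # arbitrary choice: modifier wins
        return merged

    print(blend(lightbulb, triangle))
    # e.g. {'glass', 'glows', 'flat', 'three-sided', 'triangular'}

The union itself is trivial; knowing which features conflict and which one
should win is where all the apparent intelligence hides, and here it is
simply hand-coded. That is exactly the gap I am asking about.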

And what mechanisms might govern reasoning with the structures made of
concepts?
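
One minimal, Prolog-flavored possibility is forward chaining over
propositions whose terms are concepts; the facts and rules below are
invented, and the point is only to show the shape of such a mechanism:

    # Forward chaining over (subject, predicate) propositions.
    facts = {("bulb", "glows"), ("bulb", "is-glass")}
    rules = [
        # (set of premises, conclusion)
        ({("bulb", "glows")}, ("bulb", "emits-light")),
        ({("bulb", "emits-light"), ("bulb", "is-glass")},
         ("bulb", "is-lamp-part")),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(("bulb", "is-lamp-part") in facts)  # True

But notice that this only pushes the question down a level: where do the
rules come from, and what grounds them in the modalities?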

>'sentences' in some internal language of
> representation. The AI needs to produce descriptive
> sentences which express what its modalities tell it,
> and normative sentences which express its intentions.
> If you can get those, there is a wealth of symbolic AI
> work on putting them to use in a rational fashion.
>
> So, the key concept seems to be 'concept', and the way
> it bridges the gap between subsymbolic feature extraction
> and symbolic-level propositional processing. Well, one
> simple way to do that is to have concepts which are just
> lists of features.

If a concept is proposed to be a list of features, then how would those
features interact with other features to produce something meaningful?
Wouldn't a mere list of features lack the "substance" that all
concepts/symbols necessarily require?
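
Here is that worry in miniature (the example is contrived): a flat feature
list throws away relational structure, so two quite different situations can
collapse into the same representation:

    # Flat feature lists: the "who does what to whom" structure is gone.
    water_in_glass = ["water", "glass", "contains"]
    glass_in_water = ["water", "glass", "contains"]
    print(water_in_glass == glass_in_water)  # True, yet the situations differ

    # A structured representation keeps the roles apart.
    s1 = ("contains", "glass", "water")  # the glass contains the water
    s2 = ("contains", "water", "glass")  # the water contains the glass
    print(s1 == s2)                      # False

Whatever the missing "substance" is, at minimum it seems to include this
kind of role and relation structure.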

> Each modality outputs an inventory
> of features possessed by its current input; the art of
> appropriate concept formation is all about picking out
> just a few of those features as 'co-relevant', worthy
> of being grouped together. And Copycat provides a model
> for *that* process.
>
> It looks to me like we already have all the ingredients,
> not for a seed AI, but certainly for a "general AI" as
> defined by LOGI.
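
Copycat's actual machinery (codelets, the Slipnet, temperature) is much
richer than anything I can sketch here, but even the crudest stand-in for
"picking out co-relevant features", say grouping features that co-occur
across observations, shows the shape of the process. The data and threshold
below are invented:

    from collections import Counter
    from itertools import combinations

    # Feature inventories produced by a modality over several inputs.
    observations = [
        {"red", "round", "glows"},
        {"red", "round", "small"},
        {"red", "round", "glows"},
        {"blue", "flat"},
    ]

    # Features that co-occur often enough get grouped as "co-relevant".
    pair_counts = Counter()
    for obs in observations:
        for pair in combinations(sorted(obs), 2):
            pair_counts[pair] += 1

    co_relevant = [pair for pair, n in pair_counts.items() if n >= 3]
    print(co_relevant)  # [('red', 'round')]

Co-occurrence counting is obviously not Copycat; it is only meant to make
"co-relevant, worthy of being grouped together" concrete enough to argue
about.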

I think we have a good direction, but lack the means of reaching the
destination.

Slawek


