From: Ben Goertzel (ben@goertzel.org)
Date: Sat Apr 13 2002 - 23:51:30 MDT
hi,
> Recently (I'm not sure if I was doing this during the whole of the DGI
> paper) I've been trying to restrict "sensory" to information produced by
> environmental sense organs. However, there are other perceptions than
> this. Imaginative imagery exists within the same working memory that
> sensory information flows into, but it's produced by a different source.
> Abstract imagery might involve tracking "objects", which can be very
> high-level features of a sensory modality, but can also be features of no
> sensory modality at all.
I think this is an acceptable use of the term "perceptual", but only if you
explicitly articulate that this is how you're using the word, since it
differs from the common usage in cognitive psychology.
Your usage is part of common English, of course. We can say, e.g., "I sense
you're upset about this," or "I perceived I was going through some kind of
change" -- abstract sensations & perceptions. These common English usages
aren't reflected in the way the terms have become specialized within
cognitive psych, that's all.
These issues are impossible to avoid in getting scientific about the mind.
For instance, the way we use the word "reason" in Novamente includes things
that not all logicians would consider "reason" -- very speculative kinds of
reasoning.
> Right - but it's associated with the behaviors, in abstract imagery, of
> other math concepts. That's why you can't discover complex math concepts
> without knowing simple math concepts; the complex math concepts are
> abstracted from the abstract imagery for complex behaviors or complex
> relations of simple math concepts. But there is still imagery. It is not
> purely conceptual; one is imagining objects that are abstract objects
> instead of sensory objects, and imagining properties that are abstract
> properties instead of sensory properties, but there is still
> imagery there.
> It can be mapped onto the visual imagery of the blackboard and so on.
I am not at all sure you are right about this. I think that abstract
reasoning can sometimes proceed WITHOUT the imagining of specific objects.
I think what you're describing is just *one among many modes* of abstract
reasoning. I think sometimes we pass from abstraction to abstraction
without the introduction of anything that is "imagery" in any familiar
sense.
And the mapping onto the visual blackboard may be a very very distortive
mapping, which does not support the transformations required for accurate
inference in the given abstract domain. Visual thinking is not suited for
everything -- e.g. it works well for calculus and relatively poorly for
abstract algebra (which involves structures whose symmetries are very
DIFFERENT from those of the experienced 3D world).
Again, I think if you're right, it can only be by virtue of having a very
very very general notion of "imagery" which you haven't yet fully
articulated.
> Be it noted that this is a somewhat unusual hypothesis about the mind, in
> which "propositional" cognition is a simplified special case of mental
> imagery.
I have seen yet more extreme versions of this hypothesis. One of the papers
submitted for the "Real AI" volume argues that all cognition is largely an
application of our 3D scene processing circuitry.
I still don't really feel I know what you mean by "mental imagery" though.
> Abstract imagery uses non-depictive, often cross-modal layers that are
> nonetheless connected by the detector/controller flow to the depictive
> layers of mental imagery.
Again, I think this is a very important *kind* of abstract thought, but not
the only kind.
> Maybe I should put in a paragraph somewhere about "sensory perception" as
> a special case of "perception".
Definitely.
> I think that human concepts don't come from mixing together the internal
> representations of other concepts.
I think that some human concepts do, and some don't.
This is based partly on introspection: it feels to me that many of my
concepts come that way.
Now, you can always maintain that my introspection is inaccurate.
Of course, your intuition must also be based largely on your own
introspection (cog sci just doesn't take us this far, yet), which may also
be inaccurate.
I am largely inclined to believe that both of our introspections are
accurate *in terms of what they observe*, but that both of our
introspections are incomplete.
Two factors here: we may have different thought processes; and we may each
for whatever reason be more conscious of different aspects of our thought
processes.
Certainly the combinatory aspect of cognition that I describe has
significant neurological support. Edelman's books since Neural Darwinism,
among other sources, emphasize this aspect of the formation of neural maps.
> I think that's an AI idiom which is not reflected in the human mind.
> Humans may be capable of faceting concepts and putting the facets together
> in new ways, like "an object that smells like coffee and tastes like
> chocolate", but this is (I think) taking apart the concepts into kernels,
> not mixing the kernel representations together.
Well, Edelman and I disagree with you -- he basing his view mostly on his
neurological theory, I basing mine mostly on my introspective intuition.
What support do you have for your belief, other than that when you
introspect you do not feel yourself to be combining concepts in such a way?
> Now it may perhaps be quite useful to open up concepts and play with their
> internals! I'm just saying that I don't think humans do it that way and I
> don't think an AI should start off doing it that way.
My intuition is quite otherwise. I think that very little creative
innovation will happen in a mind that does not intercombine concepts by
"opening them up and playing with their internals."
> > I don't think that a new math concept I cook up necessarily has anything
> > to do with imagery derived from any of the external-world senses. Of
> > course connections with sensorimotor domains can be CREATED, and must be
> > for communication purposes. But this may not be the case for AIs, which
> > will be able to communicate by direct exchange of mindstuff rather than
> > via structuring physicalistic actions & sensations.
>
> The new math concept has plenty to do with imagery, it's just not sensory
> imagery.
OK, then I still don't know what you mean by "imagery".
> > I don't understand why you think a baby AI can't learn to see the Net
> > incrementally.
>
> It doesn't have a tractable fitness landscape. No feature structure to
> speak of. No way to build up complex concepts from simple concepts.
I think this is quite wrong, actually. There is an incredible richness, for
example, in all the financial and biological databases available online.
Trading the markets is one way to interact with the world online, a mode of
interaction that incorporates all sorts of interesting data. Chatting with
biologists (initially in a formal language) about info in bio databases is
another. I think an AI has got to start with nonlinguistic portions of the
Net, then move to linguistic portions that are closely tied to the
nonlinguistic portions it knows (financial news, Gene Ontology gene
descriptions, etc.).
I think the implicit fitness landscapes in the financial trading and
collaborative biodatabase analysis spaces are quite tractable.
> It's all complexity in a form that's meant to be perceived by other
> humans.
Not really. Most trading is done by programs, and so is most biodata
analysis.
The Net is not just Web pages.
> > "Differential functions on [-5,5] whose third derivative is
> confined to the
> > interval [0,1]"
>
> This isn't much of a concept until it has a name. Let's call a function
> like this a "dodomorphic fizzbin".
Well, I disagree -- I have concepts like this all the time with no names.
Naming such a concept only occurs, for me, when I want to communicate it to
others, or write it down for myself.
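To be concrete -- this is my own set-builder formalization, not Eliezer's
wording -- the concept in question is just the set

  \{\, f : [-5,5] \to \mathbb{R} \;\mid\; f''' \text{ exists and } 0 \le f'''(x) \le 1 \text{ for all } x \in [-5,5] \,\}

and I can hold that set in mind and reason about it with no label attached
to it at all.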
> > then how is this concept LEARNED? I didn't learn this, I just INVENTED
> > it.
>
> You learned it by inventing it. The invention process took place on the
> deliberative level of organization. The learning process took place on the
> concept level of organization and it happened after the invention. You
> created the mental imagery and then attached it to a concept. These things
> happen one after the other but they are still different cognitive processes
> taking place on different levels of organization.
Somehow, you're telling me it's not a "concept" until it's been named??
I don't see why such a concept has to be "attached" to anything to become a
"real concept"; it seems to me like it's a "real concept" as soon as I start
thinking about it...
I guess I still don't fully understand your notion of a "concept".
> > Evolutionary & hypothetically-inferential combination of existing
> > concepts & parts thereof into new ones, guided by detected associations
> > between concepts. With a complex dynamic of attention allocation guiding
> > the control of the process.
>
> I would have to say no to this one, at least as an idiom for human
> intelligence. There's a repertoire of background generalization processes
> but they act on current imagery (generalized perceptual imagery), not the
> representations of stored concepts - as far as I know. It might be a good
> ability for AIs to have *in addition* to the levels-of-organization idiom,
> but it can't stand on its own.
You're right that this kind of cognition can't stand on its own. But I
don't think the levels-of-organization cognitive structure/dynamic can do
much good on its own either: it can *stand* on its own, but it can't run....
It's fine for dogs and bunnies ... but I think the crux of what makes human
cognition special is the synergization of the levels-of-organization
cognitive structure/dynamic with the evolutionary/inferential
structure/dynamic, as I just described.
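To give a very crude cartoon of the kind of dynamic I mean -- purely an
illustration in throwaway Python, not the actual Novamente design, with every
name and number in it made up for the example -- picture concepts as feature
sets that get "opened up" and recombined, with an association score and an
attention weight steering which new combinations survive:

import random

def combine(parent_a, parent_b):
    """'Open up' two concepts and recombine their internal features."""
    pool = list(parent_a | parent_b)
    k = random.randint(1, len(pool))
    return frozenset(random.sample(pool, k))

def association(concept, context):
    """Crude association score: overlap with the currently attended context."""
    return len(concept & context) / max(len(concept), 1)

def evolve(concepts, context, attention, generations=200):
    """Evolutionary combination guided by association and an attention weight."""
    population = list(concepts)
    for _ in range(generations):
        a, b = random.sample(population, 2)
        child = combine(a, b)
        # attention allocation: context-relevant candidates are kept more often
        if association(child, context) * attention > random.random():
            population.append(child)
    return population

if __name__ == "__main__":
    coffee = frozenset({"smells-like-coffee", "bitter", "hot", "drink"})
    chocolate = frozenset({"tastes-like-chocolate", "sweet", "solid", "food"})
    context = frozenset({"drink", "sweet", "hot"})  # what attention is on now
    print(evolve([coffee, chocolate], context, attention=0.8)[-3:])

Of course the real dynamic involves probabilistic inference and far richer
representations; the point is only that new candidates are built from the
*internals* of existing concepts, not just from their labels.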
> 1: "I need to get this paper done."
> 2: "I sure want some ice cream right now."
> 3: "Section 3 needs a little work on the spelling."
> 4: "I've already had my quota of calories for the day."
> 5: "Maybe I should replace 'dodomorphic' with 'anhaxic plorm'."
> 6: "If I exercised for an extra hour next week, that would make
> up for it."
>
> If these thoughts are all (human idiom) mental sentences in the internal
> narrative, I wouldn't expect them to be pronounced simultaneously by a deep
> adult voice and a squeaky child voice. Rather I would expect them to be
> interlaced, even though {1, 3, 5} relate to one piece of open goal imagery
> and {2, 4, 6} relate to a different piece of goal imagery. So the
> deliberative tracks {1, 3, 5} and {2, 4, 6} are simultaneous, but the
> thoughts 1, 2, 3, 4, 5, 6 occur sequentially.
This is not how my mind works introspectively. In my mind, 1, 2 and 3
appear to me to occur simultaneously.
Now, you can tell me that I'm deluded and they REALLY occur sequentially in
my mind but I don't know it.
But I'm not going to believe you unless you have some REALLY HARD
neurological proof.
> If it's a static mapping, based on a (claimed) correspondence, it's a
> "sensory" mapping. (I know this is overloading 'sensory' in an entirely
> different sense, dammit. Maybe I should replace 'sensory' within SPDM.
> "Correlative?" "Correspondence?")
I don't know what the right word is, because I'm not sure I understand what
you mean yet.
Generally speaking, I think there are two things going on here that are
causing me to be confused:
1) You're using terms in ways that are not incorrect but are just a little
nonstandard, without giving quite explicit enough definitions for them.
2) You're taking intuitions gained from your own introspection, combined
with your study of cognitive science, and generalizing them to come to
conclusions that disagree with the conclusions I've gained from my own
introspection, combined with my study of cognitive science.
Until problem 1 is more fully resolved, I'm not going to be able to really
assess how deep problem 2 is.
Of course, problem 1 doesn't make the paper a bad paper by any means. I
think this kind of problem runs through nearly all work on cognitive science
and serious AI. People use words in slightly different ways and talk past
each other; it's the norm, and one of the many reasons progress is so slow!
And problem 2 isn't necessarily a "problem" at all, in the sense that
different people are validly going to have different intuitions, and right now
there is often not enough data from cog sci or AI to prove one intuition or
another correct.
Pei Wang (one of my chief Webmind collaborators) and I often spent a long
time arriving at a mutually comprehensible language. Then, once we had, we
could make 75% of our differences go away. The other 25% we just had to
chalk up to different intuitions, and move on...
-- ben G