Re: DGI Paper

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Apr 13 2002 - 20:07:52 MDT


Ben Goertzel wrote:
>
> By a "reflective percept" you mean a perception of something inside the mind
> rather than something in the external world?

Yes - a cognitive event that can, at the least, be perceived; that is often
visualizable as reflective imagery; and that can sometimes even be taken as
an action.

> > "Differential operator" is abstract but that doesn't mean it's
> > non-perceptual. It means that its important perceptual correlates are
> > abstract perceptual models and realtime skills in abstract
> > models,
>
> I don't think I understand your use of the terms "percept" and "perception".
> Could you tell me how you define these things?
> You seem to be using them much more broadly than I do, which may be the
> source of much of my confusion.

Recently (I'm not sure if I was doing this during the whole of the DGI
paper) I've been trying to restrict "sensory" to information produced by
environmental sense organs. However, there are other perceptions than
this. Imaginative imagery exists within the same working memory that
sensory information flows into, but it's produced by a different source.
Abstract imagery might involve tracking "objects", which can be very
high-level features of a sensory modality, but can also be features of no
sensory modality at all. Even abstract concepts tend to be associated with
sensory percepts of one kind or another - "differentiation" with the symbol
"d/dx", or with a visual image of a tangent to a curve - but the *relevant*
behaviors of abstract imagery are usually interactions with other abstract
objects and abstract properties, as opposed to sensory behaviors of the kind
tracked by sensory modalities proper. Often, though, these abstract
behaviors can map onto sensory behaviors; hence metaphor.

> Sure, but when one comes up with a NEW mathematical concept, sometimes it is
> not associated with ANY visual, auditory or otherwise "imagistic" stuff.
> It's purely a new math concept, which then has to be, through great labor,
> associated with appropriate symbols, pictures, names, or what have you.

Right - but it's associated with the behaviors, in abstract imagery, of
other math concepts. That's why you can't discover complex math concepts
without knowing simple math concepts; the complex math concepts are
abstracted from the abstract imagery for complex behaviors or complex
relations of simple math concepts. But there is still imagery. It is not
purely conceptual; one is imagining objects that are abstract objects
instead of sensory objects, and imagining properties that are abstract
properties instead of sensory properties, but there is still imagery there.
It can be mapped onto the visual imagery of the blackboard and so on.

Be it noted that this is a somewhat unusual hypothesis about the mind, in
which "propositional" cognition is a simplified special case of mental
imagery. Abstract imagery uses non-depictive, often cross-modal layers that
are nonetheless connected by the detector/controller flow to the depictive
layers of mental imagery. For example, consider the rats who were trained
to press lever A on seeing two flashes *or* hearing two sounds, and trained
to press lever B on seeing four flashes *or* hearing four sounds, who
spontaneously pressed lever B on seeing two flashes *and* hearing two
sounds. I would explain this by reference to an Accumulator Model that is
smoothly extracted from depictive sensory perceptions but leaves behind the
"topographic" structure of those sensory modalities. The Accumulator Model
might even be "depictive" in the sense of having a quantitative scale mapped
to a linear stretch of rat neurons, but it's not "depictive" in a way that
topographically maps to the sensory imagery. For this reason, the rats'
Accumulator Model can be cross-modality. I suspect that "objects" and
"properties" within abstract
imagery may exist at a similarly high, cross-modality level - while still
being part of the overall modality system, permitting analogic mappings and
so on, and being smoothly connected to depictive sensory workspaces.

> > Far as I know, they're all perceptual in the end. It's just that the
> > perceptual idiom - modalities, including feature structure,
> > detector/controller structure, and occasionally realtime motor structure -
> > extends far beyond things like vision and sound, to include
> > internal reality
> > as well.
>
> This is getting to the crux of my issue, I think. You define "perception"
> as a kind of abstract structure/process, but in the paper I don't think it's
> entirely clear that this is how you're defining "perception". At least it
> wasn't that clear to me. I generally think of perception as having to do
> with the processing of stimuli from the external world.

Maybe I should put in a paragraph somewhere about "sensory perception" as a
special case of "perception".

> Based on your very broad definition of perception, I'm not sure how to
> distinguish it from cognition. I guess in your view perception serves
>
> 1) to process external-world data
> 2) as one among many cognitive structures/processes
>
> I don't think this is the standard use of the term "perception", though
> there's nothing particularly wrong with it once it's understood.

Perception is a very broad part of the mind - hence the "modality level of
organization" - and virtually everything that goes on inside the mind is
going to involve percepts/imagery in one way or another. It won't all be
sensory imagery, however. Senses and sensory imagery are a big chunk of the
modality level of organization, but they are not - quite - a level of
organization in themselves.

> I'm still not sure however that a new abstract math concept that I conceive
> in the bowels of my unconscious is "perceptual in the end." I think that
> its conception may in some cases NOT involve feature structures and
> detector/controller structures. A new math concept may arise thru
> combinatory & inferential operations on existing math concepts, without any
> of the perceptual/motor hierarchy-type structures you're describing.

I think that human concepts don't come from mixing together the internal
representations of other concepts. I think that's an AI idiom which is not
reflected in the human mind. Humans may be capable of faceting concepts and
putting the facets together in new ways, like "an object that smells like
coffee and tastes like chocolate", but this is (I think) taking apart the
concepts into kernels, not mixing the kernel representations together.

Now it may perhaps be quite useful to open up concepts and play with their
internals! I'm just saying that I don't think humans do it that way and I
don't think an AI should start off doing it that way. I think it needs to
happen through the modality level of organization, not through the internal
representations of concepts.

> Math concepts are not the only example of this, of course, they're just a
> particularly clear example because of their highly abstract nature.

Math concepts are *abstract* but not *non-perceptual*. Perceiving yourself
manipulating mathematical objects is still perceiving, and you can
generalize a concept kernel over it.

> The key point is still, however, whether by "perceptual modalities" you mean
> modalities for sensing the external world, or something more abstract.

I am referring to the superclass that contains "sensory modalities" but also
other things.

> I don't think that a new math concept I cook up necessarily has anything to
> do with imagery derived from any of the external-world senses. Of course
> connections with sensorimotor domains can be CREATED, and must be for
> communication purposes. But this may not be the case for AI's, which will
> be able to communicate by direct exchange of mindstuff rather than via
> structuring physicalistic actions & sensations.

The new math concept has plenty to do with imagery, it's just not sensory
imagery.

> > I think some thoughts rely on reflective imagery or imagery which is not
> > visualized all the way down to the sensory level.
>
> Again this same language. You're talking about some kind of "visualizing"
> at a non-sensory level. I'm not sure what you mean by "visualizing" then.

I mean creating mental imagery within a perceptual workspace that doesn't
flow all the way down to the visual or auditory modalities - although of
course a non-deaf human will almost always activate the auditory modalities
because our symbol tags are auditory.

> > "Smooth" in fitness landscapes means that similar things are separated by
> > short distances, and especially that incremental improvements are short
> > distances. In the case of a modality smoothing a raw scene, you can think
> > of distance as being the distance between feature detectors instead of the
> > distance between raw pixels, or "distance" as being inversely proportional
> > to the probability of that step being taken within the system.
>
> This is just a terminology point, but I still think that your terminology is
> not the standard one.

Quite possibly. "Smoothed fitness landscapes" or "smoother fitness
landscapes" might be better. "Rugged fitness landscapes" or "fractal
fitness landscapes" might be more mathematically accurate, but to someone
who doesn't know the math, it says exactly the opposite of what I want to
say! Regardless of whether the landscapes are rugged or fractal in an
absolute sense, it is their smoothness that is salient in DGI's discussion.

Maybe "tractable fitness landscapes"? Got any suggestions?

> I still believe that, in the standard terminology, a fitness landscape that
> has local minima and maxima at all perceivable scales is not "smooth" in
> standard usage. It's fractal.
>
> The processing done in visual & auditory cortex often resembles
> windowed-fourier or wavelet transforms, and this does result in a kind of
> smoothing in that hi-frequency components are omitted.
>
> Anyway it would be good if you just clarified in the text what you meant by
> "smooth" -- it's certainly no big deal.

I'll add it to the list. In fact, I'll start keeping track of the list
instead of trying to keep it in my head. (No guarantees about whether I'll
find the time, though; like I said, I'm already over my time budget.)
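
For what it's worth, here is a toy Python illustration of the sense of
"smoothed" I intend - the fitness function and the 8-bit encoding are made
up, but they show how the same landscape is rugged over raw-genotype
neighbors and smooth over an extracted feature coordinate:

    TARGET = 200

    def fitness(x):
        return -abs(x - TARGET)              # single peak at TARGET

    def bit_flip_neighbors(x, bits=8):
        return [x ^ (1 << i) for i in range(bits)]

    # Raw-genotype distance: one bit flip is one "step", but it can move
    # the phenotype by up to 128, so fitness jumps around - rugged.
    print(max(abs(fitness(n) - fitness(100))
              for n in bit_flip_neighbors(100)))

    # Feature-level distance: stepping the extracted quantity by 1 always
    # changes fitness by exactly 1 - smooth, and hill-climbable.
    print(abs(fitness(101) - fitness(100)))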

> > The Net can't help you here. You can't have a modality with a
> > computationally tractable feature structure unless your target environment
> > *has* that kind of structure to begin with. If you're going to put a baby
> > AI in a rich environment, the richness has to be the kind that the baby AI
> > can learn to see incrementally.
>
> I don't understand why you think a baby AI can't learn to see the Net
> incrementally.

It doesn't have a tractable fitness landscape. No feature structure to
speak of. No way to build up complex concepts from simple concepts. It's
all complexity in a form that's meant to be perceived by other humans.
Stevan Harnad once compared the symbol grounding problem to trying to learn
Chinese as a first language using a Chinese-Chinese dictionary; if you're
blind and deaf then reading Chinese webpages won't help. Strip away all the
semantic aspects of the Web and I don't quite see how the remaining
perceptible structure is a good environment in which to practice thinking -
a lot of directory and link structure, but that isn't a significant amount
of complexity as feature structures go. Likewise for motor correlations
between HTTP requests and returned pages, and correlations between recurring
opaque Chinese symbols. The tractable part is too simple to be useful and
the useful part is too complex to be tractable.

> > What I mean is that noticing a perceptual cue that all the
> > billiards in the
> > "key" group are red, and that all the billiards in the "non-key" group are
> > not red, is not the same as verifying that this is actually the case. The
> > cognitive process that initially delivers the perceptual cue, the
> > suggestion
> > saying "Hey, check this out and see if it's true", may not always
> > be the one
> > that does the verification.
>
> So the verification is just done by more careful study of the same perceived
> scene, in this case?

In this case, yeah. The key point is that the cueing process can be simpler
and sloppier and even of an entirely different computational nature than the
process that goes through and verifies the initial suggestion.
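
A minimal sketch of the cue/verify split, with invented names - the point
is just that the process raising the suggestion and the process checking it
can be entirely different computations:

    import random

    def perceptual_cue(key_group, sample_size=3):
        """Cheap, sloppy suggestion: glance at a few key billiards."""
        sample = random.sample(key_group, min(sample_size, len(key_group)))
        return all(color == "red" for color in sample)

    def verify(key_group, non_key_group):
        """Careful check of the whole perceived scene."""
        return (all(c == "red" for c in key_group) and
                all(c != "red" for c in non_key_group))

    key = ["red", "red", "red", "red"]
    non_key = ["blue", "green", "yellow"]
    if perceptual_cue(key):               # "hey, check this out"
        print(verify(key, non_key))       # True: the cue happened to be right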

> > > Complex concepts are certainly "invented" as well, under the normal
> > > definition of "invention." ...
> > >
> > > The concept of a pseudoinverse of a matrix was invented by Moore and
> > > Penrose, not learned by them. I learned it from a textbook.
> > >
> > > The concept of "Singularity" was invented as well...
> >
> > Well, you can learn a concept from the thoughts that you invent -
> > generalize
> > a kernel over the reflective perceptual correlates of the
> > thoughts. But the
> > concept-creating cognitive process will still reify ("learn") a
> > perception,
> > and the deliberative thought process that created the abstract/reflective
> > perceptions being reified will still be inventive.
>
> I don't understand this. If I create a silly concept right now, such as,
> say,
>
> "Differential functions on [-5,5] whose third derivative is confined to the
> interval [0,1]"

This isn't much of a concept until it has a name. Let's call a function
like this a "dodomorphic fizzbin".

> then how is this concept LEARNED? I didn't learn this, I just INVENTED it.

You learned it by inventing it. The invention process took place on the
deliberative level of organization. The learning process took place on the
concept level of organization and it happened after the invention. You
created the mental imagery and then attached it to a concept. These things
happen one after the other but they are still different cognitive processes
taking place on different levels of organization.
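
As a very rough sketch (invented names; nothing here beyond the sequencing
claim itself):

    concepts = {}

    def invent():
        # Deliberative level: construct new mental imagery by combining
        # the behaviors of existing concepts.
        return {"domain": (-5, 5), "derivative_order": 3,
                "derivative_range": (0, 1)}

    def learn(imagery, name):
        # Concept level: generalize over the imagery and bind it to a tag.
        concepts[name] = imagery

    learn(invent(), "dodomorphic fizzbin")  # invention first, learning after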

> Evolutionary & hypothetically-inferential combination of existing concepts &
> parts thereof into new ones, guided by detected associations between
> concepts. With a complex dynamic of attention allocation guiding the
> control of the process.

I would have to say no to this one, at least as an idiom for human
intelligence. There's a repertoire of background generalization processes
but they act on current imagery (generalized perceptual imagery), not the
representations of stored concepts - as far as I know. It might be a good
ability for AIs to have *in addition* to the levels-of-organization idiom,
but it can't stand on its own.

> > No, you often have mental imagery that depicts ongoing cognition
> > within more
> > than one train of thought, and you switch around the focus of attention,
> > which means that more than one deliberative track can coexist. You still
> > think only one thought at a time. Or do you mean that you pronounce more
> > than one mental sentence at a time? You've got to keep the thought level
> > and the deliberation level conceptually separate; I said "one thought at a
> > time", not "one deliberation at a time".
>
> I don't understand how you define "thought", then. Could you give me a
> clearer definition?
>
> And please don't use a variant of the "there can only be one at a time"
> restriction in the definition! ;)

Deliberation is embodied in the cyclic interaction of thoughts and mental
imagery. If you have two separate pieces of goal imagery and switch your
focus of attention back and forth between them, they will attract thoughts
of two different kinds and you will be able to carry out two interlaced
tracks of deliberation. (And of course humans represent goals in a much
more complicated way than just imagery, which also contributes to our having
several going at once.) So it might be like:

1: "I need to get this paper done."
2: "I sure want some ice cream right now."
3: "Section 3 needs a little work on the spelling."
4: "I've already had my quota of calories for the day."
5: "Maybe I should replace 'dodomorphic' with 'anhaxic plorm'."
6: "If I exercised for an extra hour next week, that would make up for it."

If these thoughts are all (human idiom) mental sentences in the internal
narrative, I wouldn't expect them to be pronounced simultaneously by a deep
adult voice and a squeaky child voice. Rather I would expect them to be
interlaced, even though {1, 3, 5} relate to one piece of open goal imagery
and {2, 4, 6} relate to a different piece of goal imagery. So the
deliberative tracks {1, 3, 5} and {2, 4, 6} are simultaneous, but the
thoughts 1, 2, 3, 4, 5, 6 occur sequentially.
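
A toy sketch of that interlacing - the scheduling rule is invented, but it
shows two goal tracks producing strictly sequential thoughts:

    from collections import deque

    paper = deque(["I need to get this paper done.",
                   "Section 3 needs a little work on the spelling.",
                   "Maybe I should replace 'dodomorphic' with "
                   "'anhaxic plorm'."])
    ice_cream = deque(["I sure want some ice cream right now.",
                       "I've already had my quota of calories for the day.",
                       "If I exercised for an extra hour next week, that "
                       "would make up for it."])

    focus = [paper, ice_cream]
    while any(focus):
        track = focus[0] if focus[0] else focus[1]
        print(track.popleft())            # exactly one thought at a time
        focus.reverse()                   # attention switches between goals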

> So far as I know, the physiology of human consciousness indicates that
> humans can have multiple perceptual-cognitive-active loops of conscious
> awareness running at once.

???

> I guess that if you count kinesthetic sensation as a sense, then all motor
> actions can be mapped into the domain of sensation and considered that way.
> In practice of course, these particular "sensory mappings" (that are really
> motor mappings ;) will have to be treated pretty differently than the other
> sensory mappings.

If it's a static mapping, based on a (claimed) correspondence, it's a
"sensory" mapping. (I know this is overloading 'sensory' in an entirely
different sense, dammit. Maybe I should replace 'sensory' within SPDM.
"Correlative?" "Correspondence?")

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


