Re: Is generalisation a limit to intelligence?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Dec 02 2000 - 14:21:18 MST


Ben Goertzel wrote:
>
> > To sum it up: Is there some way to combine the fuzzy quality that
> > intelligence relies on with the rigid quality of not making a single
> > mistake? Is generalisation a limit to intelligence?
>
> With infinite memory, a mind could store both the generalization it had
> made from some raw data, and all the raw data itself. Thus it could use
> the generalization to make predictions, and it could also use the raw data
> to continually re-evaluate its generalization to be sure it still worked.

"Infinite memory" depends on whether you just want to record experiential,
external-reality data, or record internal-reality data as well, or just
record a few checkpoints and store the seed data in the random-number
generators. Storing every internal state is not possible, but it may not
take all *that* much memory to create a perfect flight recorder capable of
exactly recreating the mind's state at any given clock tick. There will,
however, be a substantial computational cost to accessing that "memory",
one which grows the further back in time you reach. Let's say you've got a mind
running on 100TB of RAM with experience coming in at 1GB/second (including
'Net connection) and access to 100PB of disk space. If half the disk
space is used for experience, then the mind can store 50,000,000 seconds
of experience - around one and a half years. If the remaining 50PB is
used for snapshots, then that's enough to store a total of 500 snapshots.
The first 50 snapshots could be hundredth-second snapshots, the next 50
snapshots could be tenth-second snapshots, and so on, going up
(theoretically) to 50 different million-second snapshots. Accessing an
exact internal state, to the exact clock tick, that occurred within the
last half second (50 hundredth-second snapshots), would cost at most one
hundredth of a second. All of this assumes infinite disk access speeds.
In any case, the upshot is that a Diasporan upload might find "perfect"
memory to not actually be that much of a demand, especially if they have
vastly more living space than they're using (yet!).
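In rough Python, that back-of-the-envelope looks like this; the figures are
the ones above, but the tenfold tier spacing and the replay-cost model are
just one way of filling in the details:

    # Figures from the paragraph above (assumptions: decimal units, one
    # full-RAM-sized snapshot per checkpoint, infinite disk access speed).
    RAM_BYTES = 100e12          # 100 TB of mind-state per snapshot
    DISK_BYTES = 100e15         # 100 PB of disk
    EXPERIENCE_RATE = 1e9       # 1 GB/second of incoming experience

    experience_store = DISK_BYTES / 2      # half the disk for raw experience
    snapshot_store = DISK_BYTES / 2        # half the disk for snapshots

    seconds_of_experience = experience_store / EXPERIENCE_RATE
    total_snapshots = int(snapshot_store / RAM_BYTES)
    print(seconds_of_experience / (365.25 * 86400), "years of experience")  # ~1.6
    print(total_snapshots, "full snapshots")                                # 500

    # One possible tier layout: 50 snapshots per tier, spacing growing tenfold,
    # from hundredth-second up to million-second snapshots (450 of the 500).
    # To recover an exact clock tick, load the nearest earlier snapshot and
    # replay forward; the worst case is one tier's spacing worth of replay.
    for tier in range(9):
        spacing = 10.0 ** (tier - 2)       # 0.01, 0.1, ..., 1,000,000 seconds
        print(f"50 snapshots at {spacing:g} s spacing "
              f"(worst-case replay to an exact tick: {spacing:g} s)")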

Leave that aside. Ben Goertzel clarified a part of Gandara's thesis that
hadn't made any sense to me; namely, that forced generalization may result
in necessary creativity. I had thought Gandara was proposing that perfect
memory *prevented* generalization. I think that perhaps the answer here
may lie in the difference between *producing* creativity and *verifying*
creativity; creativity, especially the kind of brilliant solutions
produced by generalization, is often much easier to verify than produce.
Thus you can create all the degrees of generalization, from zero to nearly
complete abstraction, run them all, and pluck off the best solutions from
each. Or you could just use the most frequently-useful generalization
most of the time, while still checking more specific and more general
levels every so often, just in case... the kind of "heuristics tuning
heuristics" thing that EURISKO was so good at.

As Ben Goertzel wrote:
>
> But, the refutation: A sufficiently intelligent, self-aware system is
> quite capable of modifying itself to make itself MORE ERROR-PRONE if it
> finds through experimentation that this makes it more intelligent ;>

Precisely.

Given a specified memory base, you have the precise experiences; the
first-order generalization from a group of nearly identical experiences;
the second-order generalization from groups of experiences that share many,
but not all, characteristics yet still share a common usage or
manipulability; the third-order generalization from groups that are not
treated in the same way but still have a single characteristic of interest;
and so on ("Apple 3132", "Red Delicious Apple", "Apple", "Plant Life"). I
don't see that storing all of your experiences, or none of them, changes
the nature of this classical hierarchy; rather, perfect storage increases
your ability to generalize specifically from Apple 3132 if you wind up in a
situation that is almost exactly like Apple 3132 rather than like any of
the other categories you have available. This is very rarely required of an
intelligent being, which is why we don't store all of our memories using
snapshots...
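As a toy sketch, perfect storage only adds one maximally specific rung at
the bottom of that ladder; matches() is a hypothetical test of whether a
stored category applies closely enough to the current situation:

    # Toy sketch of the classical hierarchy, most specific category first.
    # Perfect storage adds the bottom rung ("Apple 3132"); the rest of the
    # ladder is unchanged.
    HIERARCHY = ["Apple 3132", "Red Delicious Apple", "Apple", "Plant Life"]

    def best_generalization(situation, matches):
        for category in HIERARCHY:        # walk from specific toward abstract
            if matches(situation, category):
                return category           # generalize from the most specific fit
        return None                       # nothing applies; fall back to raw learning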

Actually, I don't know that! It could be precisely the other way around.
It could be that generalizing from Apple 3132 is incredibly useful, but we
don't have the neural disk space available to store specific experiences,
so we've evolved this entire mode of thinking that's so utterly tuned to
generalization that, in our poverty of existence, it looks to us like
generalizing from precise experiences is "very rarely required". But I
think evolution is right on this one. Precise experiences are almost
always superfluous. I propose the snapshot method simply on the principle
that information should never be lost... if you have the disk space
available.

> However, we lack a quantitative science that can tell us exactly how
> quickly the error rate approaches zero as the memory (&, in a real-time
> situation, processing power able to exploit this memory) approaches
> infinity. Eliezer and I differ in that I believe such a science will
> someday exist ;>
> We also differ in that he intuits this error rate approaches zero faster
> than I intuit it does.

Let us also note that there is a single cause behind both of my beliefs; I
believe that generalizing is a creative and intelligent task, which to me
means there's room for arbitrarily brilliant solutions. The error rate
approaches zero very quickly, not for mathematical reasons, but because
someone came up with a brilliant solution - or a thousand different
brilliant solutions for a thousand different domains. Each different
solution has a different mathematical behavior, if it has any mathematical
behavior at all. The act of coming up with a brilliant solution changes
whatever mathematical behavior previously existed, and will not be a
continuation of the previous curve. This drama is played out on so many
different levels of the system - mathematics is so easily broken by the
application of intelligence, at any level - as to render it likely that
such a quantitative science will simply not be used at all.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


