From: James Rogers (jamesr@best.com)
Date: Wed Oct 23 2002 - 12:32:16 MDT
On Wed, 2002-10-23 at 03:39, Bill Hibbard wrote:
> On Tue, 22 Oct 2002, James Rogers wrote:
> > This is a nonsensical assertion on a number of levels, and I fear that it
> > effectively pollutes those things derived from the assumption that this
> > makes any kind of sense.
>
> I agree that it is nonsense to say that intelligence is
> learned, but that's not what I said. I said intelligent
> behaviors are learned.
The way it was written, you also said that "Intelligent behavior cannot
be programmed", which contradicts "[Intelligent behavior] must be
learned". Reading these two statements one after the other, DOES read
as nonsense to me. Granted, it was late and I was tired when I
originally wrote that email, but it reads the same this morning.
> > Second, you CAN program intelligent
> behavior; it's so obvious I'm not even sure where that came from. Granted,
> > for extremely complex learning tasks it becomes less wise to let monkeys
> > program machines for intelligent behavior if you expect to maintain some
> > average quality of results, but it is certainly doable. Third, any designs
> > to "mimic the reinforcement learning of human brains" seem misguided,
> > largely because ANY system that can learn has these properties (ignoring the
> > edge case of parrots); there is nothing categorically special about human
> brains in this regard, and I don't see where it buys anything, at least not as
> > a checklist item.
>
> As to your second point, a programmed rather than learned
> implementation of intelligent behavior is only slightly
> less absurd than Searle's Chinese Room. Perhaps I should
> not have used the absolute word "cannot", but in any
> practical sense what I said is true.
Intelligence is a property of the behavior, and even a strictly
programmed "intelligent" behavior can be defined as having some level of
intelligence within that context. This is not a good argument though,
as "intelligent" applied to behavior is observer dependent; intelligence
can only be objectively attributed to the intrinsic properties of the
machinery. An intelligent machine can learn ANY behavior, so whether a
given behavior is stupid or intelligent from a particular viewpoint is
immaterial. "It isn't that the bear waltzes so gracefully, it is that it
waltzes at all." From a theoretical standpoint, observer-independent
intelligence is far more interesting than observer-dependent
intelligence. This whole argument resembles the one that results from
the conflict between pedestrian and mathematical definitions of
"information".
Just to throw in the point: Searle's Chinese Room does define a system
capable of expressing measurable intelligence. It just seems absurd
because the context is narrow by definition and it violates our
intuition (which is wrong as often as not about these things). As has
been pointed out in various forums, all programs expressible on finite
state machinery can be expressed as Giant Look-Up Tables ("GLUTs").
Therefore, if we accept the premise that general intelligence is
expressible on an FSM, we must also accept that it can be implemented as
a GLUT.
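
To make that concrete, here is a minimal sketch in Python (the toy
"behavior" and its four-state domain are my own invention, purely for
illustration) showing that any deterministic function over a finite
input space can be flattened into a table that reproduces it exactly:

    from itertools import product

    # A toy "behavior": any deterministic function over a finite input space.
    def behavior(state, stimulus):
        return (state + stimulus) % 4   # next state of a tiny four-state FSM

    STATES = range(4)
    STIMULI = range(4)

    # Build the GLUT by enumerating every possible input and recording
    # the output the programmed behavior produces for it.
    glut = {(s, x): behavior(s, x) for s, x in product(STATES, STIMULI)}

    # The table now reproduces the programmed behavior exactly, by lookup alone.
    assert all(glut[(s, x)] == behavior(s, x)
               for s, x in product(STATES, STIMULI))

The table is exponentially large in the size of the machine's state,
which is exactly why the construction feels absurd without being
impossible.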
> Third, I used the phrase "mimic the reinforcement learning
> of human brains" just to make the point that intelligent
> machines have more in common with human brains than with
> current machines.
I could mostly agree with this with some qualification.
> I am following Francis Crick and Gerald Edelman in my use of
> the word "emotion". They both say that emotions are essential
> for intelligence based on the role of emotions for reinforcing
> or selecting intelligent behaviors. Of course, "emotion" is an
> overloaded term and you can find different neuroscientists who
> use it in different ways.
I don't disagree, but I do have a different perspective, mostly derived
from the observer-dependence of intelligence in the context of behavior.
From an evolutionary standpoint, you need emotion to bootstrap the
feedback loops that lead to learning advantageous behaviors. The
behaviors are learned, but I wouldn't classify them as "intelligent",
since they are merely the consequences of a biasing system selected by
evolutionary pressures. As I stated previously, if you are going to
classify a behavior as "intelligent", you have to qualify it with the
observer context. The intelligence exists in the system, and the
emotion makes it "do something", smart or stupid not being a
consideration.
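
A minimal sketch of what I mean by a biasing system, in Python with
made-up payoffs: the "emotion" here is nothing more than a scalar
reinforcement signal that biases which behaviors get selected over
time, and nothing about the outcome is intrinsically smart or stupid:

    import random

    behaviors = ["approach", "flee", "ignore"]
    value = {b: 0.0 for b in behaviors}                       # learned bias
    reward = {"approach": 1.0, "flee": -0.5, "ignore": 0.0}   # made-up payoffs

    ALPHA = 0.1     # learning rate
    EPSILON = 0.1   # occasional random exploration

    for step in range(1000):
        if random.random() < EPSILON:
            b = random.choice(behaviors)
        else:
            b = max(behaviors, key=lambda x: value[x])
        # The scalar reinforcement signal (the "emotion") nudges future selection.
        value[b] += ALPHA * (reward[b] - value[b])

    print(value)   # "approach" ends up dominating the selection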
> The difference between human and animal consciousness can be
> described in terms of whether animal minds include models of
> other animals' minds, of events tomorrow, etc. Similarly, I think
> a key difference between human and machine consciousness will
> be the machines' detailed model of billions of human minds, in
> contrast to our detailed model of about 200 human minds.
My point of contention is primarily with the idea that a
super-intelligent machine's consciousness is the result of interaction
with humans. Intelligent machines develop themselves by interacting
with other machinery, of which humans are an interesting and relatively
complex form. But the entire universe is filled with machinery that
will do the job; having a machine interact with human machinery is
mainly useful if you want it to interact well with humans (which we
presumably do want -- I'm merely asserting it isn't strictly
necessary). "Consciousness" on a machine capable of it comes from the
interaction with other machinery, but there is no requirement that the
other machinery be particularly intelligent.
> The statement "the essential property of consciousness in humans
> and animals is that it enables brains to process experiences that
> are not actually occurring" says something pretty rigorous. The
> simplest animal brains can only process events as they happen.
> But at some level of evolution, brains break free of "now".
The ability to work with models in the abstract is limited only by
resources. The human ability to do this exists solely as a consequence
of the brain having more available resources. The emergence of
something we call "consciousness" is essentially a function of the size
of the machinery, and therefore not really crucial to interesting
intelligence per se. At the very least, available memory and compute
place a clear mathematical bound on the number and size of model
abstractions a system can manipulate.
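
To put trivial numbers on that (the per-model cost is entirely made up,
purely to illustrate the scaling, and ties back to your 200-versus-
billions point about modeled minds):

    # Made-up numbers: if each detailed model of another mind costs a fixed
    # amount of memory, the number of minds a system can model in detail
    # scales linearly with available memory.
    BYTES_PER_MIND_MODEL = 50 * 10**6       # hypothetical 50 MB per model

    def minds_modeled(memory_bytes):
        return memory_bytes // BYTES_PER_MIND_MODEL

    print(minds_modeled(10 * 10**9))        # ~200 models in a 10 GB budget
    print(minds_modeled(500 * 10**15))      # ~10 billion models in 500 PB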
> And the temporal credit assignment problem is a well known
> and rigorous problem. There has been some very exciting
> neuroscience into how brains solve this problem, at least when
> delays between behaviors and rewards are short and predictable,
> in the paper:
>
> Brown, J., Bullock, D., and Grossberg, S. How the Basal Ganglia
> Use Parallel Excitatory and Inhibitory Learning Pathways to
> Selectively Respond to Unexpected Rewarding Cues. Journal of
> Neuroscience 19(23), 10502-10511. 1999.
>
> I think that the need to solve the temporal credit assignment
> problem when delays between behaviors and rewards are not
> short and predictable was the selectional force behind the
> evolution of consciousness. Any known effective solution to
> this problem requires a simulation model of the world.
Interesting paper; it is my first time seeing it. I am pleased to see
that it seems to suggest a system that works in a very similar fashion
to the model we derived from our computational theory. I don't follow
neuroscience too closely because I feel it has a tendency to pollute
research on the more theoretical side of intelligence, but I do like to
occasionally checkpoint neuroscience against the theoretical models that
I work with. Trying to derive computational theory from neuroscience
has never seemed to be a particularly productive endeavor, but that is
another topic for another day.
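
For what it's worth, the standard computational handle on temporal
credit assignment with delayed rewards is temporal-difference learning
with eligibility traces; a minimal sketch in Python (the toy chain
environment and the parameters are my own, not anything taken from the
Brown et al. paper):

    # Reward arrives only at the end of a chain of states; eligibility traces
    # propagate credit back to the earlier states that led to it.
    N_STATES = 10                         # states 0..9, reward on reaching 9
    ALPHA, GAMMA, LAMBDA = 0.1, 0.95, 0.8

    V = [0.0] * N_STATES                  # estimated value of each state

    for episode in range(500):
        trace = [0.0] * N_STATES
        state = 0
        while state < N_STATES - 1:
            next_state = state + 1
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            td_error = reward + GAMMA * V[next_state] - V[state]
            trace[state] += 1.0
            for s in range(N_STATES):
                V[s] += ALPHA * td_error * trace[s]
                trace[s] *= GAMMA * LAMBDA
            state = next_state

    print([round(v, 2) for v in V])       # earlier states acquire discounted credit

The eligibility traces are what carry credit backwards across the delay
between behavior and reward.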
Cheers,
-James Rogers
jamesr@best.com