RE: Loosemore's Proposal

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Oct 24 2005 - 20:32:33 MDT


> We have to build systems that grow their own representations, we cannot
> presuppose those representations and then, later, tack on some learning
> mechanisms that will feed those representations with new knowledge. This
> fundamental point is crucial to my argument, so make sure that you are
> absolutely clear about that before we discuss fine details about the
> environment.
>
> Richard Loosemore

I agree with you 100% that we need to build systems that "grow their own
representations". I don't see what that has to do with your earlier
comments about the fantastic tools we'll need in order to build AI systems,
but I'll let that pass for the moment...

I am not sure I understand what *you* mean by "growing your own
representation," however. This phrase is not so well-defined.

For instance, think about how the human brain represents its tactile inputs,
using a little "homunculus." Each of us grows our own homunculus, it's true,
but we each grow them in the same basic shape and proportions. The homunculi
are adaptable: if a limb is removed, that part of the homunculus goes away
and those brain cells are used for something else. But it's not the case that
our brains each individually figure out or "grow" the idea "Hey! Let's
build a homunculus to represent nerve impulses from the body!"

Rather, the homunculus grows as a result of some basic principles, such as
"If two sensory inputs often occur at the same time, then perhaps there should
be some neuron that responds to both of them." I.e., in this case the brain
contains a hard-wired principle for constructing representations ("wired" is
being used metaphorically here -- of course there is chemistry as well as
electrodynamics going on), and uses this principle to construct
representations for particular situations.
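
To make the co-occurrence principle concrete, here is a toy sketch in Python
(all names and thresholds are made up for illustration -- this is not
Novamente code, and certainly not a claim about how cortex actually does it):
count joint firings of inputs, and once a pair has fired together often
enough, allocate a new unit that responds to the conjunction.

    # Toy illustration of "if two inputs often co-occur, grow a unit for both."
    from itertools import combinations
    from collections import defaultdict

    class CoOccurrenceGrower:
        def __init__(self, threshold=5):
            self.threshold = threshold          # joint firings needed before a unit is grown
            self.pair_counts = defaultdict(int) # joint-firing counts per input pair
            self.units = {}                     # grown units: name -> pair it responds to

        def observe(self, active_inputs):
            """Record one time step of simultaneously active sensory inputs."""
            for a, b in combinations(sorted(active_inputs), 2):
                self.pair_counts[(a, b)] += 1
                if self.pair_counts[(a, b)] == self.threshold:
                    # A new representation, grown from input statistics alone
                    self.units[f"unit_{a}_{b}"] = (a, b)

        def respond(self, active_inputs):
            """Return the grown units that fire for this input pattern."""
            return [name for name, (a, b) in self.units.items()
                    if a in active_inputs and b in active_inputs]

    grower = CoOccurrenceGrower(threshold=3)
    for _ in range(4):
        grower.observe({"thumb_touch", "index_touch"})  # adjacent digits, often together
        grower.observe({"toe_touch"})                   # fires alone, never pairs up
    print(grower.units)       # a unit for the thumb/index conjunction has been grown
    print(grower.respond({"thumb_touch", "index_touch"}))

The point of the toy is only that the *principle* (count co-occurrences,
allocate a detector) is hard-wired, while the particular representations that
emerge depend entirely on the input stream the system happens to get.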

So, yeah -- a mind must grow its own representations for particular concepts,
procedures, feelings, situations, etc. But it must be given some principles
and methods and algorithms for constructing representations given various
sorts of inputs (including introspective as well as external inputs).

But I don't really understand why you think an AGI system that does this
sort of thing can't be programmed using current software tools. I agree
that it's a BIG PAIN and I wish the tools were better so it would be less of
a pain, but that doesn't add up to a fundamental impossibility. I think we're
more in the position of the Wright Brothers trying to build a plane or Ford
trying to build a Model T, than of Babbage trying to build an Analytical
Engine. And I have not heard any convincing arguments from you on this point.

-- Ben G
