Re: Control theory, signals, dynamics (was Re: Retrenchment)

From: Phil Goetz (philgoetz@yahoo.com)
Date: Mon Aug 22 2005 - 14:49:46 MDT


--- Michael Wilson <mwdestinystar@yahoo.co.uk> wrote:

> > Look at Chris Eliasmith's book, "Neural engineering".
> > This is an excellent start on constructing modular
> > systems out of neural networks.
>
> This is much closer to Loosemore's position, though I'm not
> /sure/ that he's proposing to use emergence to build an AGI
> (it just seems likely). As I've doubtless made clear, I
> consider this silly.

I don't know what to make of this answer,
since neither my statement nor the book have anything
to do with emergence.

> But this is learned complexity, that we want an AGI to
> induce from an environment or possibly some processed
> knowledge source. I assume that you're claiming that we
> need control theory and signal processing to design the
> basic substrate of the AI, before it learns anything.

Something like that - I would say that, if you want to
replicate what brains do, it would be good to think about
functions that map high-dimensional inputs into
high-dimensional outputs, rather than about arithmetic
or logic operators. Signal processing has accumulated
a large toolkit of useful functions of that kind.
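[Editor's note: to make the kind of function Phil means concrete, here is a minimal, illustrative sketch; the function name and parameters are the editor's own, not from any particular toolkit. A small Gabor-style filter bank maps a high-dimensional input (an image) to a high-dimensional output (a stack of orientation responses) — a bread-and-butter signal-processing operation of exactly this shape.]

```python
import numpy as np

def gabor_bank(image, n_orientations=4, size=9, sigma=2.0, freq=0.3):
    """Map a high-dimensional input (a 2-D image) to a high-dimensional
    output (one filtered map per orientation)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    h, w = image.shape
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        xr = xs * np.cos(theta) + ys * np.sin(theta)
        # Gabor kernel: Gaussian envelope times an oriented cosine grating
        kernel = (np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
                  * np.cos(2 * np.pi * freq * xr))
        # direct 'valid'-mode correlation, kept dependency-free
        out = np.empty((h - size + 1, w - size + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + size, j:j + size] * kernel)
        responses.append(out)
    return np.stack(responses)  # shape (n_orientations, h-size+1, w-size+1)
```

Note that nothing here is an arithmetic or logic operator in the classical-AI sense; the whole computation is a map between high-dimensional spaces.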

> I do not see the relevance of either
> of these to the design of a rational inference substrate.

If you're thinking of programming an AI by creating
a rational inference substrate - e.g., a 1970s-style
logic engine - I'm surprised. There are many
lines of evidence that such an approach is hopeless,
and I seldom use the word "hopeless". I would mention
category theory, image recognition, consistent human
failures of logic, and metaphorical language as a few
examples among many. If you still hold to that approach,
I doubt an attempt to persuade you otherwise will help
either of us; I have already banged my head against that
wall long enough.

Not that I haven't programmed plenty of logic engines.
It's where the AI funding is. It just isn't what we need.

> I can see why you'd think you needed them, if you thought
> that connectionism and emergence were a good idea.

You're still speaking as if I were the one advocating
emergence, but you're the one advocating creating a
"basic substrate" and letting learning do the rest,
which is a more emergence-friendly viewpoint.

> Ok, so we have at least two people sharing this view,
> possibly more if the AAII people are taking this view of
> pattern processing.

What is AAII?

> Clearly you would not go as far in simplification,
> but in my view you have inverted the sensible layering of
> complexity as well as introducing some pointless and
> poorly understood holdovers from human cognitive design.

I'm guessing that you think pure logic is the way to go,
and that human logic "failures" are design flaws, whereas
I think of them as necessary tradeoffs that must be
understood rather than dismissed as "pointless and poorly
understood holdovers". How is your viewpoint different
from the old AI viewpoint that we spent the 1980s and
1990s demolishing?

> > We have evidence that, at least in some cases, brains
> > use attractors, possibly chaotic ones, as memory
> > elements.
>
> I agree. I've read plenty of papers on the subject. We can
> do better. That's not even a statement of hubris, because
> the task of designing an AGI to run on a (pretty good and
> fast approximation of a) Turing machine is much easier,
> in the absolute sense, than designing one to run on
> mammal neurons.

I don't think we can do better. The "inaccurate",
highly flexible, fuzzy categorization is an advantage,
not a design flaw. I'm afraid I may even want to add
endocrinology to our list of necessary communities.
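[Editor's note: the textbook example of attractor-based memory is a Hopfield network: stored patterns become fixed points of the dynamics, and recall is relaxation from a corrupted cue into the nearest attractor. A minimal sketch follows; the function names are illustrative, not from any particular library.]

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product weights storing each +/-1 pattern
    as an attractor of the network dynamics."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=20):
    """Iterate the dynamics to a fixed point; the attractor
    reached is the retrieved 'memory'."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1  # break ties deterministically
    return s
```

Starting from a cue with a flipped bit, the state falls into the stored pattern's basin of attraction — which is exactly the "inaccurate", fuzzy categorization being described: nearby inputs are pulled to the same memory.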

> I do agree that you need fuzzy logic to do those things
> (it is, at the risk of sounding like a broken record,
> necessary but not sufficient). Fuzzy logic is of course
> a subset of Bayesian/probabilistic reasoning.

I don't mean fuzzy logic. Fuzzy logic is still logic.
Real intelligent systems don't use logic, ever,
except when humans have explicitly learned logic,
in which case the use of logic is probably implemented
using the same mechanisms as other learned tasks, such
as playing the piano.

- Phil

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT