From: ben goertzel (ben@goertzel.org)
Date: Thu Apr 18 2002 - 10:33:24 MDT
Daniel Amit's book on Neural Networks (I forget the title) is a very good
overview of the scope of work in Hopfield nets. There has been a lot of
neat stuff done using asymmetric weights, temporal learning, and so forth.
I don't think there is a standard opinion on the biological relevance of
these models. There are advocates and there are detractors. My own view
is that they are a good conceptual model of how the brain probably stores
*some types of memories*. But when you get into the storage of temporal
sequences of events and so forth, the details of attractor neural nets (the
other term for Hopfield nets) get awfully nasty, and one starts to wonder
whether there isn't a better way.
An interesting, too-little-known variation on this theme is Mikhail Zak's
work on "terminal chaos." He has built attractor neural net models using
equations with mathematical singularities in them, achieving favorable
practical results in various experiments...
One fact, from an AI engineering point of view, is that Hopfield nets are a
very inefficient way to store memories. The standard symmetric-weight
Hopfield net stores at most about 0.14n patterns worth of memory using n*n
real-valued links. Furthermore, the networks fill up with memories very
quickly and then can't learn anything more. If you put caps on the link
weights, you get a network that continually forgets old memories as its
memory fills up, so it can keep learning new things -- but then the capacity
drops to about 0.05n. Some good results can be obtained with sparse (not
fully connected) networks -- Youlian Troyanov and I did some experiments on
this in 1998 -- but the bottom line is that this is not an efficient way to
store information. So even if it is roughly analogous to how the brain
stores some kinds of memories, it is in my view only an appropriate approach
in cases where one has a HELL of a lot of memory to waste storing
information redundantly...
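For concreteness, here is a minimal sketch of the standard scheme being
discussed: Hebbian (outer-product) storage of +/-1 patterns in a symmetric
weight matrix, with asynchronous recall. The network size, random test
patterns, and noise level are illustrative assumptions on my part, not
details from the experiments mentioned above.

import numpy as np

def train(patterns):
    # Hebbian outer-product rule: symmetric weights, no self-connections.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, sweeps=5):
    # Asynchronous updates; the state settles into the nearest stored attractor.
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100
    # n*n = 10,000 real-valued weights, yet capacity is only ~0.14n = 14 patterns.
    patterns = rng.choice([-1, 1], size=(10, n))
    W = train(patterns)
    probe = patterns[0].copy()
    probe[:15] *= -1              # flip 15 bits to simulate a partial cue
    restored = recall(W, probe)
    print("bits recovered:", int(np.sum(restored == patterns[0])), "of", n)

With 10 stored patterns the corrupted cue is typically restored exactly;
push the count much past ~0.14n and recall degrades sharply, which is the
"filling up" behavior described above.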
The question with such memory schemes is always whether cognitive
processing is accelerated enough to compensate for the waste of space. Are
we just making a time-space tradeoff, and is it a good one? In the case of
Hopfield nets, however, I don't think there is any adequate compensating
cognitive advantage. One can do partial matching of memories nicely in a
Hopfield net, but there are more efficient ways to do that on a digital
computer, without so much redundancy of storage.
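As one illustration of that last point (my example, not anything from the
original post): on a digital computer you can get similar partial-matching
behavior just by storing the patterns themselves and returning the nearest
one by Hamming distance. That costs m*n bits for m patterns instead of n*n
floats, and there is no ~0.14n capacity ceiling.

import numpy as np

def nearest_pattern(patterns, probe):
    # Brute-force partial matching: return the stored +/-1 pattern
    # closest to the probe in Hamming distance.
    distances = np.sum(patterns != probe, axis=1)
    return patterns[np.argmin(distances)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(50, 100))   # 50 memories of 100 bits each
    probe = patterns[3].copy()
    probe[:20] *= -1                                 # corrupt 20 of the 100 bits
    match = nearest_pattern(patterns, probe)
    print("matched correct memory:", bool(np.all(match == patterns[3])))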
This sort of reasoning is why we wound up not using attractor-neural-net-style
knowledge representation in Novamente, although we do use NN-style dynamics
for some other purposes (e.g., adaptively allocating the system's attention).
ben g
-----Original Message-----
From: Ben Houston [SMTP:ben@exocortex.org]
Sent: Wednesday, April 17, 2002 5:23 PM
To: sl4@sysopmind.com
Subject: Hopfield networks as a model of memory?
Eliezer and I discussed how neural networks could "recognize" a
previously learned stimulus. We both sort of agreed that "resonance"
was a commonly proposed description for this computation but we sort of
left it there without getting any more specific. Anyway, I'm writing up
an assignment for a 4th-year AI class and came across Hopfield
networks -- I would suggest that these networks are probably pretty
decent models, in an abstract sense, of how our own neural networks
recognize objects.
I must admit that I only have the chapter on "Associative Memory" from
the fairly standard text "Introduction to the Theory of Neural Computation"
to work from at the moment.
What is the standard opinion in more learned circles about how closely
these networks resemble the biological processes?
Cheers,
-ben
http://www.exocortex.org