From: Nick Hay (nickjhay@hotmail.com)
Date: Sat Aug 16 2003 - 17:59:51 MDT
Paul Fidika wrote:
> Nick Hay wrote:
> >Connectionism, in the sense of building everything out of neurologically
> >inspired networks, has at least two problems. Firstly, it is a substrate
> >that may not be best suited to the kind of computational hardware we have -
> >fast, digital, serial. [snip] Secondly, it conflates the levels of
> >organisation - you don't introduce all information on the lowest level,
> >code, but on various different levels built on top of code. You don't
> >design things solely on the atomic level.
>
> I don't believe that neural-networks conflate the levels of organization;
> aren't humans living proof of that?
Human neural networks are very different from computer neural networks -
neurons are pretty complex things. But I didn't mean neural networks in
general, only the effort to create everything directly out of them. My second
problem refers only to that approach - if you build things on top of a neural
substrate but introduce information at all levels of organisation (however
that might be done), then that method doesn't conflate levels of
organisation.
> For example, you could set up 3 or more
> separate Recurrent Neural Networks, and their interactions would produce
> very chaotic behavior, churning hither and thither like a Lorenz attractor.
> Concepts (or higher levels) could be thought of as specific neural-network
> interaction patterns, which, if they were like they are in humans, would
> never produce precisely the same interaction pattern twice (see Freeman,
> 2000), the "basins of attraction" in these populations of networks
> gradually changing over time.
Concepts can be thought of as specific neural-network interaction patterns, or
as specific patterns of atoms. These levels are too low, and the description
too general, to separate concepts from general chaos. It's not enough that
they never repeat themselves (I'm not even sure that's necessary - don't you
want them to act the same way in the same context?); what matters is that they
have all the various features concepts need (acting like the useful learned
complexity LOGI describes).
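
For concreteness, here's a rough Python sketch of the kind of coupled
recurrent setup you describe - three small tanh networks feeding into one
another. The sizes, gains and couplings are arbitrary choices of mine, not
anything from LOGI or Freeman; it shows the chaotic churning, but nothing
about it is concept-like yet:

  import numpy as np

  # Three small recurrent "populations" coupled to one another.  All the
  # sizes, gains and coupling strengths are arbitrary illustrative choices.
  rng = np.random.default_rng(0)
  N, NETS = 16, 3
  W_self = [rng.normal(0, 1.5 / np.sqrt(N), (N, N)) for _ in range(NETS)]
  W_cross = [[rng.normal(0, 0.5 / np.sqrt(N), (N, N)) for _ in range(NETS)]
             for _ in range(NETS)]

  def step(states):
      # One synchronous update of all three networks.
      new = []
      for i in range(NETS):
          drive = W_self[i] @ states[i]
          for j in range(NETS):
              if j != i:
                  drive = drive + W_cross[i][j] @ states[j]
          new.append(np.tanh(drive))
      return new

  # Two trajectories from nearly identical starting states diverge (the
  # chaotic "churning"), but nothing here says these interaction patterns
  # behave like concepts.
  a = [rng.normal(0, 1, N) for _ in range(NETS)]
  b = [x + 1e-6 * rng.normal(0, 1, N) for x in a]
  for _ in range(200):
      a, b = step(a), step(b)
  print("separation after 200 steps:",
        sum(np.linalg.norm(x - y) for x, y in zip(a, b)))

Run it with different seeds and the joint state wanders differently each
time, which is the point: chaos is cheap, concepts are not.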
> The real problem with neural networks would be that (I think) a Seed AI
> would be nearly impossible. No one programs neural networks the way they
> program in C++; they set up the networks and then allow them to form
> themselves by repeatedly running some training algorithm. By examining the
> input and output of a neural network, you might get a feeling for what it's
> doing, but how do you precisely change what it's doing? There's no
> documentation, and there's virtually no way of guessing what will happen if
> you change this or that synapse strength without actually trying it; the
> very idea of neural networks is that many connections may be changed
> randomly without significantly altering the network's overall function. The
> best a Seed AI could do to reprogram itself would be to run training
> algorithms or random mutations on its underlying neural networks, which is
> not the directed sort of evolution a Seed AI requires. Neural networks are
> good for evolution, but not necessarily for directed creation.
Right, this is what I mean by conflating levels of organisation - trying to do
everything on the lowest level (code, neural networks, atoms). It's not
impossible to have multiple levels of organisation where the bottom level is
a neural network, but it seems like a pretty difficult substrate to work with.
But I have no experience with these neural networks, so I could easily be
wrong about the difficulty.
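
To make the opacity you're describing concrete, here's a toy Python sketch -
entirely illustrative, a tiny random-feature network fitted to XOR, not
anything resembling a real seed AI substrate. The weights carry no
documentation, and the only way to find out what tweaking one does is to run
the thing:

  import numpy as np

  # Toy illustration only: random hidden weights, least-squares readout.
  rng = np.random.default_rng(1)
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([0.0, 1.0, 1.0, 0.0])

  W_hid = rng.normal(0, 2.0, (2, 8))              # random hidden weights
  H = np.tanh(X @ W_hid)                          # hidden activations
  w_out, *_ = np.linalg.lstsq(H, y, rcond=None)   # "training" = fit readout

  def run(W):
      return np.tanh(X @ W) @ w_out

  print("trained outputs:        ", np.round(run(W_hid), 2))

  # Change one synapse and see what happens - there is no documentation to
  # predict the effect from, so you just have to try it.
  W_tweaked = W_hid.copy()
  W_tweaked[0, 3] += 0.5
  print("after tweaking a weight:", np.round(run(W_tweaked), 2))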
> LOGI's recipe calls for non-algorithmic, "fluid" processes at the higher
> levels of concepts and thoughts. The only way I know how to do this would
> be through the statistically emergent behavior of an underlying substrate,
> such as connectionist methods like Neural / Bayesian networks, or
> interactions of populations of agents, a la Hofstadter and Mitchell's
> Copycat (Mitchell, 1993) or perhaps Minsky's The Society of Mind (Minsky,
> 1985). Is there another method I've perhaps missed, a way to build levels
> of organization beyond mere code?
Perhaps a fluid background of heuristics like Eurisko? I don't understand LOGI
well enough to answer this question :) I imagine, however, that the answer is
"yes".
- Nick