Re: [SL4] brainstorm: a new vision for uploading

From: Paul Fidika (Fidika@new.rr.com)
Date: Sat Aug 16 2003 - 13:10:28 MDT


Nick Hay wrote:
>Connectionism, in the sense of building everything out of neurologically
>inspired networks, has at least two problems. Firstly it is a substrate
>that may not be best suited for the kind of computational hardware we
>have - fast, digital, serial. [snip] Secondly it conflates the levels of
>organisation - you don't introduce all information on the lowest level,
>code, but on various different levels built on top of code. You don't
>design things solely on the atomic level.

I don't believe that neural networks conflate the levels of organization;
aren't humans living proof of that? For example, you could set up three or
more separate recurrent neural networks whose interactions produce highly
chaotic behavior, churning hither and thither like a Lorenz attractor.
Concepts (or higher levels) could be thought of as specific patterns of
interaction between the networks; if these work as they do in humans, the
same interaction pattern would never recur in precisely the same form twice
(see Freeman, 2000), the "basins of attraction" in these populations of
networks gradually shifting over time.
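
To make that concrete, here is a quick Python sketch of my own (the sizes,
the tanh update rule, and the coupling strengths are all arbitrary choices
of mine, not anything from Freeman's model): three random recurrent
networks driving one another, with two runs started a hair apart drifting
onto completely different trajectories.

    # Toy sketch: three coupled random recurrent networks. Two copies of
    # the system start almost identically; their joint state diverges, so
    # the same interaction pattern never recurs exactly.
    import numpy as np

    rng = np.random.default_rng(0)
    N, NETS = 20, 3  # units per network, number of interacting networks
    W_self = [rng.normal(0, 1.5 / np.sqrt(N), (N, N)) for _ in range(NETS)]
    W_cross = [[rng.normal(0, 0.5 / np.sqrt(N), (N, N)) for _ in range(NETS)]
               for _ in range(NETS)]

    def step(states):
        """One synchronous update: each net is driven by its own state
        plus the states of the other two."""
        new = []
        for i in range(NETS):
            drive = W_self[i] @ states[i]
            for j in range(NETS):
                if j != i:
                    drive += W_cross[i][j] @ states[j]
            new.append(np.tanh(drive))
        return new

    a = [rng.normal(0, 1, N) for _ in range(NETS)]
    b = [s.copy() for s in a]
    b[0][0] += 1e-8  # a one-part-in-a-hundred-million perturbation

    for t in range(60):
        a, b = step(a), step(b)
        if t % 10 == 0:
            diff = sum(np.linalg.norm(x - y) for x, y in zip(a, b))
            print(f"t={t:3d}  divergence={diff:.3e}")

With coupling this strong the tiny perturbation should be amplified step
after step; the point is only that interaction patterns at this level need
never repeat exactly.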

The real problem with neural networks is that (I think) a Seed AI built on
them would be nearly impossible. No one programs neural networks the way
they program in C++; you set up the networks and then let them form
themselves through repeated application of some training algorithm. By
examining a network's inputs and outputs you might get a feel for what it
is doing, but how do you precisely change what it is doing? There is no
documentation, and there is virtually no way of guessing what will happen
if you change this or that synapse strength without actually trying it;
indeed, the very idea of neural networks is that many connections can be
changed at random without significantly altering the network's overall
function. The best a Seed AI could do to reprogram itself would be to run
training algorithms or random mutations over its underlying neural
networks, which is not the directed sort of evolution a Seed AI requires.
Neural networks are good for evolution, but not necessarily for directed
creation.
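
To illustrate the opacity point, here is a small Python toy of my own (the
architecture, learning rate, and which weights I nudge are all arbitrary
choices): train a tiny network on XOR by gradient descent, then nudge
individual weights and watch the outputs. Some nudges barely register
while others wreck the function, and nothing about a weight's value tells
you which in advance.

    # Toy demonstration (mine, not from LOGI): train a small network on
    # XOR, then perturb single weights to show how opaque the learned
    # function is to direct editing.
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

    def forward(W1, b1, W2, b2):
        h = np.tanh(X @ W1 + b1)
        return h, 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output

    for _ in range(5000):  # plain batch gradient descent on cross-entropy
        h, out = forward(W1, b1, W2, b2)
        d_out = out - y                      # gradient at the output logits
        d_h = (d_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(0)

    _, base = forward(W1, b1, W2, b2)
    print("trained outputs:", base.ravel().round(3))  # should near 0,1,1,0

    # Bump a few first-layer weights one at a time and measure the damage.
    for idx in [(0, 0), (0, 1), (1, 3)]:
        W1p = W1.copy()
        W1p[idx] += 0.5
        _, out = forward(W1p, b1, W2, b2)
        print(f"bump W1{idx} by 0.5 -> max output change"
              f" {np.abs(out - base).max():.3f}")

Notice that the only reliable way to *improve* this network is the training
loop itself, which is exactly my worry about a Seed AI built on such a
substrate.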

LOGI's recipe calls for non-algorithmic, "fluid" processes at the higher
levels of concepts and thoughts. The only way I know of to achieve this is
through the statistically emergent behavior of an underlying substrate:
connectionist methods such as neural or Bayesian networks, or interactions
among populations of agents, a la Hofstadter and Mitchell's Copycat
(Mitchell, 1993) or perhaps Minsky's The Society of Mind (Minsky, 1985). Is
there another method I have missed, some other way to build levels of
organization beyond mere code?
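
As a cartoon of what I mean by "statistically emergent," consider this
Python toy (entirely mine, and nothing like the real Copycat): two hundred
agents each follow one trivial local rule - adopt the opinion of two
randomly sampled peers whenever those peers agree - and the population
drifts toward a global consensus that no individual agent ever computes.

    # Voter-model-style toy: global consensus emerging from a dumb local
    # rule. No agent represents "consensus"; it exists only statistically.
    import random

    random.seed(3)
    opinions = [random.choice([0, 1]) for _ in range(200)]

    for step in range(1, 20001):
        i = random.randrange(len(opinions))
        j, k = random.sample(range(len(opinions)), 2)
        if opinions[j] == opinions[k]:  # copy two agreeing peers
            opinions[i] = opinions[j]
        if step % 5000 == 0:
            frac = sum(opinions) / len(opinions)
            print(f"step {step:5d}: fraction holding opinion 1 = {frac:.2f}")

The "decision" lives at a level of organization above any line of the
agents' code, which is the kind of thing I take LOGI to be asking for.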

>CFAI is one of those documents you don't understand even if you read it
>closely. I personally found I had to read it multiple times and I still
>don't think I understand it all.

Definitely! After reading CFAI or LOGI I thought that I understood them.
Only after reading some of the suggested background books, such as The
Adapted Mind or Fluid Concepts and Creative Analogies, did I realize that I
previously had only a superficial grasp of what Eliezer was getting at. In
fact, I might still have only a superficial grasp, in which case my points
above are probably moot. ;-p

~Paul Fidika
Fidika@new.rr.com


