Re: neural nets

From: Richard Loosemore (rpwl@lightlink.com)
Date: Thu Jan 12 2006 - 09:26:44 MST


Such comparisons have very little to teach us, because (among other
things) the kind of ANNs people have been playing with for the last
couple of decades are a really dumb way to go about building an
artificial mind.

It might turn out, for example, that a functional unit (say a cortical
column if you want to talk neuroscience, or a concept-instantiation
unit if you want to go the cognitive systems route) can be implemented
by either a million neurons or by a hundred lines of code that can run
as one of a thousand parallel threads on a single blade in a roomful of
hardware.
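
To make the contrast concrete, here is a toy sketch in Python of such a
unit running as one thread among many (the class, the names, and the
threshold-and-decay rule are all invented for illustration, not a real
design):

    import threading

    class ConceptUnit(threading.Thread):
        # Toy stand-in for one functional unit (a cortical column, say):
        # accumulate weighted evidence from peer units and emit a message
        # when a threshold is crossed. A few lines, not a million neurons.
        def __init__(self, name, inbox, outbox, threshold=1.0, decay=0.9):
            super().__init__(daemon=True)
            self.name, self.inbox, self.outbox = name, inbox, outbox
            self.threshold, self.decay = threshold, decay
            self.activation = 0.0

        def run(self):
            while True:
                evidence = self.inbox.get()   # weighted input from peers
                self.activation = self.activation * self.decay + evidence
                if self.activation >= self.threshold:
                    self.outbox.put((self.name, self.activation))
                    self.activation = 0.0

Give each unit a queue.Queue as its inbox and a thousand of them will run
happily as threads on one blade; whether a unit this small can really
stand in for a column is, of course, exactly the open question.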

The crucial question is *what* is the best functional unit of the human
brain to emulate: something tiny and horribly numerous, like neurons, or
something pretty huge, like a concept-instantiation module (I am making
that phrase up, but you get what I mean).

Me, I am firmly convinced that we would do best by getting inspiration
from what neurons do, but not even thinking about emulating them as if
they were the highest functional units.

Richard Loosemore.

CyTG wrote:
> SL4
>
> Hello.
> I'm trying to wrap my head around this AI thing, and in particular how
> far along we are in computational power compared to what's going on in
> the human body.
> I know many believe that there are shortcuts to be made, even
> improvements on the model nature has provided us with, the biological
> neural network.
> Still. Humor me.
> Here are my approximate assumptions, based on practical experience with
> ANNs and some wiki reading.
> Computational power of the human mind:
> 100*10^9 neurons with ~1000 connections each gives about 10^14
> connections operating _at the same time_ .. now, on average a neuron
> fires about 80 times each second, and that gives us a whopping
> ~8*10^15, call it ~10^16, operations/computations each second.
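>
> A quick back-of-the-envelope in Python (just the arithmetic above; the
> neuron/synapse/firing-rate figures are the rough ones assumed here):
>
>     neurons = 100e9       # ~10^11 neurons in the human brain
>     synapses = 1000       # ~10^3 connections per neuron
>     rate_hz = 80          # average firings per second
>
>     connections = neurons * synapses      # 1e14 simultaneous "operations"
>     ops_per_sec = connections * rate_hz   # 8e15, i.e. ~10^16
>     print(f"{ops_per_sec:.1e} ops/sec")   # prints 8.0e+15
>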
> On my machine, a 3 GHz workstation, I'm able to run a feedforward
> network at about 150,000 operations/second WITH training (backprop) ..
> take training out of the equation and we may, let's shoot high, land on
> 1 million 'touched' neurons/second .. now, from 10^6 -> 10^16 .. that's
> one hell of a big number!!
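>
> And the machine side of the same loose accounting (the 10^6 figure is
> the optimistic forward-pass-only guess above):
>
>     import math
>
>     brain_ops = 8e15      # from the estimate above
>     machine_ops = 1e6     # optimistic 'touched neurons'/second
>     print(f"gap: ~10^{round(math.log10(brain_ops / machine_ops))}")
>     # -> gap: ~10^10
>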
>
> Also .. thinking about training over several training sets (as is
> usually the case), wouldn't I be correct in making an analogy to linear
> algebra? Think of each training set as a vector, each set having its
> own direction. In essence, two identical training sets would be
> linearly 'dependent' on each other and subject to elimination?
> (Thinking there could be a mathematically sound approach here to
> eliminating semi-redundant training data!)
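>
> Something like this toy numpy sketch of the idea (treating each
> training example as a row vector and keeping only the rows that raise
> the rank of the matrix; the function name and tolerance are made up
> for illustration):
>
>     import numpy as np
>
>     def drop_dependent_rows(X, tol=1e-10):
>         # Keep only the rows that add a new direction (raise the rank).
>         kept = []
>         for row in X:
>             candidate = np.vstack(kept + [row])
>             if np.linalg.matrix_rank(candidate, tol=tol) > len(kept):
>                 kept.append(row)
>         return np.array(kept)
>
>     X = np.array([[1.0, 2.0],
>                   [2.0, 4.0],    # = 2 * the first row, linearly dependent
>                   [0.0, 1.0]])
>     print(drop_dependent_rows(X))  # the duplicated direction is dropped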
>
> Hope it's not too far off topic!
>
> Best regards


