Re: neural nets

From: CyTG (cytg.net@gmail.com)
Date: Tue Jan 24 2006 - 07:06:11 MST


Daniel -> Indeed, that's why I'm not basing the comparison on clock ticks
(MHz) but on operations actually performed (ops); the ~3 x 10^9/sec figure
by itself doesn't say much. It may be relatively easy and fast to implement
a boolean full-adder circuit, while a neural net computing the same
function would take significantly more resources and be less
time-efficient.
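To make the cost difference concrete, here's a toy sketch (Python; the
layer sizes are my own arbitrary picks and the net is left untrained, it's
only there to count operations):

    import numpy as np

    def full_adder(a, b, cin):
        # Plain boolean full adder: about 5 gate-level ops.
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    # The same 3-in/2-out function as a tiny feedforward net
    # (3 inputs -> 6 hidden -> 2 outputs; weights are random here,
    # in practice they'd come from training).
    W1, b1 = np.random.randn(6, 3), np.random.randn(6)
    W2, b2 = np.random.randn(2, 6), np.random.randn(2)

    def mlp_forward(x):
        # One forward pass: (3*6 + 6) + (6*2 + 2) = 38 multiply-adds
        # plus 8 nonlinearities -- an order of magnitude more work than
        # the 5 boolean ops above, before any training cost.
        h = np.tanh(np.dot(W1, x) + b1)
        return 1.0 / (1.0 + np.exp(-(np.dot(W2, h) + b2)))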
But we're not talking full-adders here.
I assume (wrongly, perhaps) that the only thing capable of emulating a
humanoid intellect is an emulation of the micro-structure that makes up
the macro, the whole, the brain. (I have a hard time seeing anyone beating
the universal approximation engine called nature, arrived at through
Darwinian natural selection!)
But I agree, specialized hardware could easily speed this up by a factor
of 10 or even more (see how GPUs have been (mis)used for calculations not
related to graphics). But even so, that only moves me from 10^8 to 10^7.
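In numbers (taking the ~10^14 ops/sec brain estimate and the ~10^6
'touched' neurons/sec from my first mail, quoted below, as given):

    brain_ops = 1e14        # ~10^14 ops/sec, my estimate below
    workstation_ops = 1e6   # ~10^6 'touched' neurons/sec, no training
    print(brain_ops / workstation_ops)         # 1e8: eight orders of magnitude
    print(brain_ops / (workstation_ops * 10))  # 1e7: with a 10x hardware speedup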

Richard -> You state that it's a dumb way, as a fact. Is it indeed a fact?
We'd need a complete clinical environment in which we have complete
control over, and measurement of, the I/O of such a column, to determine
whether it indeed obeys simple deterministic boolean logic or not. If such
an experiment has not been done, or is not even attainable, then I'd say
it's a pretty bold move to assume that I can emulate a cortical column
with a few lines of code (for example). ;)

- The humble student picking your brains ;)

On 1/12/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>
> You must be kidding.
>
> As far as Blue Brain is concerned, don't hold your breath.
>
> This is more like Big Fat White Elephant Designed To Suck Federal
> Dollars Into IBM.
>
> There is no point simulating something whose functional structure you
> don't have a clue about.
>
> Mark my words: the net result of the Blue Brain project will be just as
> world-shaking as Japan's Fifth Generation Project. Remember that? A
> ten-year superproject to build a complete human-level artificial
> intelligence? Net result: nowt.
>
> Richard Loosemore
>
>
> H C wrote:
> > Not to get into any actual math (too often grossly flawed by factors not
> > taken into consideration), projects like Blue Brain
> > (http://bluebrainproject.epfl.ch/) are probably the most important to
> > take into account when discussing neural network AI implementations.
> >
> > "Scientists have been accummulating knowledge on the structure and
> > function of the brain for the past 100 years. It is now time to start
> > gathering this data together in a unified model and putting it to the
> > test in simulations. We still need to learn a lot about the brain before
> > we understand it's inner workings, but building this model should help
> > organize and accelerate** this quest." Henry Markram
> >
> > This institute has BIG funding, and really 'effing big computers (which
> > are only going to get bigger). I'm not an expert, but in terms of the
> > neural modeling approach to AI, it appears they are at the top of the
> > game, and they are certainly raising the stakes immensely.
> >
> >
> > -hegem0n
> > http://smarterhippie.blogspot.com
> >
> >
> >> From: CyTG <cytg.net@gmail.com>
> >> Reply-To: sl4@sl4.org
> >> To: sl4@sl4.org
> >> Subject: neural nets
> >> Date: Thu, 12 Jan 2006 15:01:26 +0100
> >>
> >> SL4
> >>
> >> Hello.
> >> I'm trying to wrap my head around this AI thing, and in particular
> >> how far along we are in computational power compared to what's going
> >> on in the human body.
> >> I know many believe there are shortcuts to be made, even improvements
> >> on the model nature has provided us with, the biological neural
> >> network. Still. Humor me.
> >> Here are my approximate assumptions, based on practical experience
> >> with ANNs and some wiki reading.
> >> Computational power of the human mind:
> >> 100*10^9 neurons with ~1000 connections each gives about 100*10^12
> >> operations _at the same time_. Now, on average a neuron fires about
> >> 80 times each second, which gives us a whopping ~10^14
> >> operations/computations each second.
> >> On my machine, a 3 GHz workstation, I'm able to run a feedforward
> >> network at about 150,000 operations/second WITH training (backprop).
> >> Take training out of the equation and we may, let's shoot high, land
> >> on 1 million 'touched' neurons/second. Now, from 10^6 to 10^14 ..
> >> that's one hell of a big number!!
> >>
> >> Also, thinking about training over several training sets (as is
> >> usually the case), wouldn't I be correct in making an analogy to
> >> linear algebra? Think of each training set as a vector, each set
> >> having its own direction. In essence, two identical training sets
> >> would be linearly 'dependent' on each other and subject to
> >> elimination? (Thinking there could be a mathematically sound approach
> >> here towards eliminating semi-redundant training data!)
> >>
> >> Hope it's not too far off topic!
> >>
> >> Best regards
> >
> >
> >
> >
>
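P.S. On the linear-dependence idea from my first mail below: a quick numpy
sketch of the kind of check I have in mind (toy data, purely an
illustration; it treats each training sample as a row vector and uses
matrix rank / singular values to spot redundant ones):

    import numpy as np

    # Each row is one training sample (the 'vector' analogy below).
    X = np.array([
        [1.0, 2.0, 3.0],
        [2.0, 4.0, 6.0],   # exactly 2x the first row: linearly dependent
        [0.0, 1.0, 1.0],
    ])

    print(np.linalg.matrix_rank(X))  # 2 < 3 rows, so one row is redundant

    # For *semi*-redundant (nearly dependent) samples, the singular values
    # tell the story: a tiny singular value marks a direction carrying
    # almost no new information -- a candidate for elimination.
    print(np.linalg.svd(X, compute_uv=False))  # smallest value is ~0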


