**From:** CyTG (*cytg.net@gmail.com*)

**Date:** Wed Jan 25 2006 - 05:33:49 MST

**Next message:** Russell Wallace: "Re: neural nets"
**Previous message:** Maru Dubshinki: "Re: Some considerations about AGI"
**In reply to:** Rok Sibanc: "Re: neural nets"
**Next in thread:** Russell Wallace: "Re: neural nets"
**Reply:** Russell Wallace: "Re: neural nets"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

Sorry for the lack of mail history.

"Are you saying we can only build an AGI by emulating the fine structure of the brain? That would mean several people on this list are completely wasting their time, because they (including me) take a different tack"

- I am saying I have a hard time picturing it. Bear in mind I'm probably not coloring with a full set of crayons, so.

But from a functional point of view, the approach I see in pretty much all software development is: build a proof of concept first, optimize later. You're optimizing ahead of time in the hope it will yield a proof of concept in the end.

Now, remember, I'm just poking your mind! Have you read any of the papers on extracting rules from a trained neural net? Visually, you take a somewhat smooth function and chop it up into x pieces which together approximate the function.
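To make the "chop it up into pieces" picture concrete, here is a small sketch (the knot spacing and piece counts are my own choices, not from any particular paper): it approximates a sigmoid with straight-line segments between evenly spaced knots and measures how the error shrinks as you add pieces.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def piecewise_approx(x, n_pieces, lo=-6.0, hi=6.0):
    # Chop [lo, hi] into n_pieces linear segments between evenly spaced
    # knots; together the segments approximate the smooth sigmoid.
    knots = np.linspace(lo, hi, n_pieces + 1)
    return np.interp(x, knots, sigmoid(knots))

xs = np.linspace(-6, 6, 1000)
err8 = np.max(np.abs(sigmoid(xs) - piecewise_approx(xs, 8)))
err16 = np.max(np.abs(sigmoid(xs) - piecewise_approx(xs, 16)))
print(f"max error, 8 pieces:  {err8:.4f}")
print(f"max error, 16 pieces: {err16:.4f}")
```

Doubling the number of pieces roughly quarters the worst-case error, which is the usual behavior of linear interpolation on a smooth function.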

I'd even be concerned about the inherent limitations of real-number crunching on a 32/64-bit processor (cost-effectively, of course; you want to do it natively, not carry a lot of carries in code).

If you say that's not an issue, then fine :) .. it would be cool if you could visualize *why* it isn't an issue (maybe it is just an assumption so intuitively correct that it needs no proof!)

"Trouble is, we haven't the slightest clue what exactly a column does. (Slight exaggeration perhaps: clues I am sure we have! Certainty, not so much)."

- And that's where I speculate that proof of concept should come first, optimization later?

On 1/24/06, Rok Sibanc <rok.sibanc@gmail.com> wrote:

> 1 million neurons/second... because every neuron has to do a multiplication
> for every incoming input, a summation of all weighted inputs, and finally a
> sigmoid function, which is expressed as a Taylor polynomial.
>
> To sum up: neuron update cost depends on the number of inputs and the
> activation function type.
>
> Rok
>
> On 1/24/06, Russell Wallace <russell.wallace@gmail.com> wrote:
>
> > On 1/12/06, CyTG <cytg.net@gmail.com> wrote:
> > >
> > > On my machine, a 3GHz workstation, I'm able to run a feedforward
> > > network at about 150,000 operations/second WITH training (backprop).
> > > Take training out of the equation and we may, let's shoot high, land
> > > on 1 million 'touched' neurons/second.
> >
> > I'm curious: in most artificial "neural nets" the basic operation is the
> > dot product of the input and weight vectors, with some function, e.g. a
> > sigmoid, on the final output - 2 flops per connection. What are you doing
> > that's taking a thousand times longer?
> >
> > - Russell


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:55 MDT*