From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue May 16 2006 - 02:56:00 MDT
Neural networks are interesting especially because their introduction into
computing has largely failed to live up to expectations. There are some
success stories, but they tend to show up in specific components rather than
as great generalisers, which I find telling.
You might like to google for "rats flight simulators" to read about the rat
brain-in-a-vat that can fly a plane, and have a look at
http://www.livescience.com/technology/060424_ap_tongue_soldiers.html. These
articles, while not technical, give you a view of experimental brain science
that is working right now.
The maths behind neural networks is actually quite simple. My AI textbook
from Uni covers it briefly and neatly. A Bayesian network is not so complex
that it cannot be understood; nor is a neural network. The complexity comes
not from the componentry, but from the behaviour of the larger systems built
out of them.
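Just to show how little is going on in one unit (a toy of my own, not
something out of the textbook): a single artificial neuron is nothing more
than a weighted sum pushed through a squashing function. In Python, roughly:

    import math

    def neuron(inputs, weights, bias):
        # weighted sum of the inputs, plus a bias term
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        # sigmoid squashing function maps the sum into (0, 1)
        return 1.0 / (1.0 + math.exp(-total))

    # hand-picked weights make this behave like a soft AND gate
    print(neuron([1.0, 1.0], [4.0, 4.0], -6.0))  # ~0.88, near 1
    print(neuron([1.0, 0.0], [4.0, 4.0], -6.0))  # ~0.12, near 0

A network is just layers of these wired together, and the learning rules are
calculus on top of that; the interesting behaviour only appears once you have
lots of them interacting.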
I feel that the most relevant part of AGI study would be researching the
structures -- especially any self-organising structures -- that may occur
within larger learning networks.
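To make "self-organising" concrete (again, just my own toy illustration, not
anyone's research result): a Hebbian-style rule lets a unit's weights drift
toward whatever pattern of input it keeps seeing, with no external teacher.
Something like Oja's rule, sketched in Python:

    import random

    def oja_sketch(patterns, steps=1000, lr=0.01):
        # start from small positive weights (sign chosen just to avoid the
        # mirror-image solution in this toy)
        w = [random.uniform(0.05, 0.15) for _ in patterns[0]]
        for _ in range(steps):
            x = random.choice(patterns)
            y = sum(wi * xi for wi, xi in zip(w, x))  # the unit's output
            # Oja's update: Hebbian growth plus a decay term that keeps the
            # weight vector from blowing up
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
        return w

    # inputs that are strongly correlated across the two dimensions
    data = [[1.0, 0.9], [0.8, 1.0], [1.1, 1.0], [0.9, 0.8]]
    print(oja_sketch(data))  # settles near [0.7, 0.7], the dominant direction

The unit ends up tuned to the dominant regularity in its input purely through
local updates, which is the flavour of structure-formation I mean, just on a
much larger scale.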
Cheers,
-T
On 5/16/06, H C <lphege@hotmail.com> wrote:
>
> Your opinion?
>
> Assuming I learn the skill of extreme patience and endurance in working on
> technical material for long periods of time within the next two weeks,
> what
> is a specific math/cs field you would say is useful to learn for AGI?
>
> I've already compiled about 100 articles and websites on Bayesian Networks
> (probability theory, graphical representations, belief networks, expected
> utility, decision theory, optimization (including stuff on genetic
> algorithms), and many applications) which I'm working through right now...
> so another question is: what would you say are some of the most useful
> sources (websites, articles, books, authors, etc) for technical math/cs
> material specifically for AGI?
>
>
> -Hank
>