From: Adam Safron (asafron@gmail.com)
Date: Tue Feb 27 2007 - 10:29:14 MST
Wouldn't it be a horrible idea to reverse engineer the human brain as
a template for your AI? If you have an AI that's human-like, I fail
to see how you could ensure the "friendliness" of subsequent
iterations of the progressively developing intelligence. Am I
missing something?
Thanks.
-adam
On Feb 24, 2007, at 5:34 AM, Shane Legg wrote:
> On 2/23/07, José Raeiro <zeraeiro@clix.pt> wrote:
>
> And has anyone ever coded a program that can handle several
> neurons with <1 line of code per neuron?
>
> Hehe... actually, after thinking about it a little, I realised that
> basically everybody I know doing work in ANNs is below about 1 line
> of code per neuron, i.e., more than one neuron per line of code.
> Until that ratio reaches 100 or more neurons per line, it's clear
> that we don't really know how to make ANNs scale properly. Maybe
> it's not such a bad statistic for measuring progress in the field?
> The brain is obviously far above this level.
>
> Of course there are people with very large ANNs; however, all of
> those that I can think of at the moment aren't actually doing
> anything directly useful. That is, they are investigating the
> dynamics of large networks rather than trying to get the network to
> solve some problem. For example, Izhikevich, the guy who runs
> Scholarpedia:
>
> http://vesicle.nsi.edu/users/izhikevich/interest/index.htm
>
> Shane
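
To make the lines-per-neuron comparison concrete, here is a minimal
Python/NumPy sketch (an illustration added here, not something from the
thread): a two-layer network with 10,000 hidden units is defined and run
in roughly a dozen lines, so the neurons-per-line ratio is already far
above 1. All of the sizes are arbitrary assumptions; the point is only
that compact code can instantiate a large network, which says nothing
about whether the network does anything useful.

import numpy as np

# Layer sizes are illustrative assumptions, not taken from the thread.
n_in, n_hidden, n_out = 100, 10_000, 10
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, (n_in, n_hidden))   # input -> hidden weights
W2 = rng.normal(0.0, 0.01, (n_hidden, n_out))  # hidden -> output weights

def forward(x):
    h = np.tanh(x @ W1)   # 10,000 "neurons" in one line
    return h @ W2         # linear readout

y = forward(rng.normal(size=(1, n_in)))
print(y.shape)            # (1, 10)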
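
For the large-network work mentioned above, Izhikevich's simulations are
built on his "simple model" of spiking neurons: v' = 0.04v^2 + 5v + 140
- u + I and u' = a(bv - u), with the reset v <- c, u <- u + d whenever v
reaches 30 mV. The sketch below is a rough single-population version of
that model; the network size, the noisy input, and the absence of
synaptic coupling are simplifications assumed here for illustration.

import numpy as np

N = 1000                                # number of neurons (assumed)
rng = np.random.default_rng(0)
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameters
v = np.full(N, -65.0)                   # membrane potentials (mV)
u = b * v                               # recovery variables

spikes = 0
for t in range(1000):                   # 1000 ms in 1 ms steps
    I = 5.0 * rng.normal(size=N)        # noisy external drive (assumed)
    fired = v >= 30.0                   # spike condition
    spikes += int(fired.sum())
    v[fired] = c                        # reset after a spike
    u[fired] += d
    for _ in range(2):                  # two 0.5 ms Euler half-steps
        v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)

print(f"{spikes} spikes in 1 s across {N} neurons")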