From: Stephen Tattum (S.Tattum@dundee.ac.uk)
Date: Sat Jan 15 2005 - 04:20:49 MST
I was looking over the Singularity Institute page on becoming a seed AI
programmer the other day, and I couldn't help but feel that there is an
overwhelming bias towards Bayesian reasoning. I have also noticed that a
lot of contributors to SL4 hail it as all-powerful - should they?
Check out this paper by Bart Kosko (clearly a 'brilliant' individual)
and his other work -
http://sipi.usc.edu/~kosko/ProbabilityMonopoly.pdf
http://sipi.usc.edu/~kosko/
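For anyone who hasn't read Kosko, here is a minimal sketch (my own
illustration, not taken from the paper) of the distinction he presses:
probability quantifies uncertainty about whether a crisp event occurs,
while a fuzzy membership grade quantifies the degree to which a property
holds, with no randomness involved. The numbers and the 'tall' thresholds
below are arbitrary assumptions chosen purely for illustration.

def bayes_update(prior, likelihood, marginal):
    # Bayes' rule: posterior P(H|E) = P(E|H) * P(H) / P(E).
    # Answers: given the evidence, how likely is it that H holds at all?
    return likelihood * prior / marginal

def fuzzy_tall(height_cm):
    # Membership grade in the fuzzy set 'tall': ramps linearly from
    # 0 at 150 cm to 1 at 190 cm (thresholds are illustrative only).
    return min(1.0, max(0.0, (height_cm - 150.0) / 40.0))

# Probabilistic question: how likely is the hypothesis, given evidence?
print(bayes_update(prior=0.3, likelihood=0.8, marginal=0.5))  # 0.48

# Fuzzy question: to what degree is a 172 cm person tall?
# A matter of degree, not of chance.
print(fuzzy_tall(172))  # 0.55

Both return a number in [0, 1], which is exactly why the two get
conflated - Kosko's point is that they answer different questions.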
I also couldn't help noticing that there are gaps in the plan more
generally. As a philosopher, I found the omission of any philosophy of
mind - which is crucial to any AI discussion and to any 'deep
understanding' of the issues actually outlined - strange... I have also
witnessed prejudice against philosophy and philosophers here in the past
(apology already accepted, of course), and I wondered whether the project
of creating AI is being pushed forward before it is ready. Now, I believe
that the Singularity is inevitable, and I am not suggesting that the
Institute is wrong, just that creating an Artificial General Intelligence
needs more emphasis on the general. Any thoughts?