Re: [agi] A difficulty with AI reflectivity

From: Yan King Yin (y.k.y@lycos.com)
Date: Fri Oct 22 2004 - 02:41:19 MDT


Hi Eliezer and others,

While I'm not very familiar with Gödel's theorems etc, I want
to add that human reasoning is fundamentally probabilistic.
We are seldom able to *prove* that something is true or false;
rather, we perceive subjectively whether it is likely or not.
This property follows from our brain's neural basis. I'm not
entirely sure how formal logic emerged from human thinking,
but it must be an emergent property rather than a primary one.
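
To make the contrast concrete, here is a minimal Python sketch
of what I mean by graded belief versus binary proof. It assumes
nothing beyond Bayes' rule; the prior and likelihood numbers
are made up purely for illustration:

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        # Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1.0 - prior)
        return numerator / denominator

    belief = 0.5                # start agnostic about the hypothesis
    for _ in range(3):          # three pieces of weak supporting evidence
        belief = bayes_update(belief, 0.8, 0.3)
    print(belief)               # ~0.95: quite likely, but never a proof

Each observation only shifts the degree of belief; no finite
amount of such evidence ever reaches the certainty that a
logical proof would give.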

Also, I think Eliezer's questions are consistently too removed
from practical issues. It is very important for AGI groups to
collaborate on solving real problems.

YKY
