From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Mon Jan 24 2005 - 16:58:55 MST
David Clark wrote:
>
> I don't mean to sound so harsh about Eliezer's philosophical approach.
> Designs and philosophy are admirable goals, but if we are talking about
> a software system that works in the real world, then what you see is
> what matters.  Even if most of the plan were on paper but *something*
> were in software, my complaint would be retracted.  I also haven't
> started to actually write my AI code, so this same criticism applies to
> me as well.
Even as we speak, plenty of running AI code is trading commodities, playing
chess, filtering spam, etc.  I could easily implement some standard AI
algorithm, and lo, I would have running code.  There are lots of people with
running code out there.  Creating AGI is harder than that.  Judging by
history, running code in a limited AI domain is not a good reason to think
that the project can go the rest of the way.  And even if one had assurance
that project X would create AGI, FAI is an entirely different order of
problem, one which subsumes the AGI problem and piles additional
requirements on top.
And this problem I shall solve as swiftly as I can, avoiding detours, even
detours for the sake of pride.  I, an individual blessed with an
individual's abilities, distinct from you and yours, currently judge that I
can think faster than I can program.  Your advice has been noted, and you
do not need to repeat it.  Thank you.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence