Re: [SL4] AI

From: Eliezer S. Yudkowsky
Date: Tue May 09 2000 - 16:44:32 MDT
> Professor Hugo de Garis, physicist, lately of Melbourne and now of Kyoto
> in Japan, fears that his experiments may ultimately lead to the
> extermination of the human race. What do you think?

Hugo de Garis claims to favor the cause of AI, yet goes around making
fear-inducing statements about how "we need to consider where it will
all end up". He doesn't say, for example, that Foresight has been
busily thinking about where it will end up for the last fifteen years.
That's hardly professional behavior for someone knowingly meddling in human destiny.

With respect to the AI: I don't really think you can build an AI
without knowing what you're doing, but if seed AI never takes off, then
self-evolving neural-net FPGAs might be the next best shot.
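To make the "self-evolving neural-net" idea concrete, here is a minimal sketch of neuro-evolution: mutate a population of weight vectors and keep the fittest, rather than hand-designing the network. The network shape, the XOR task, and every parameter below are illustrative assumptions for the sketch, not de Garis's actual hardware setup.

```python
# Minimal neuro-evolution sketch: evolve the weights of a tiny 2-2-1
# feedforward net to compute XOR via mutate-and-select. All choices here
# (task, topology, parameters) are assumptions for illustration only.
import math
import random

random.seed(0)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def forward(w, x):
    """Tiny 2-2-1 feedforward net with tanh units."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def error(w):
    """Mean squared error over the XOR truth table."""
    return sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

def evolve(pop_size=50, generations=200, sigma=0.5):
    """Keep the best half each generation; refill with mutated copies."""
    pop = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        elite = pop[: pop_size // 2]
        pop = elite + [
            [g + random.gauss(0, sigma) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=error)

best = evolve()
print("final XOR error:", error(best))
```

The same loop applied to FPGA configuration bitstrings instead of floating-point weights is, roughly, the evolvable-hardware approach this passage alludes to.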

-- 
Eliezer S. Yudkowsky
Member, Extropy Institute
Senior Associate, Foresight Institute

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:08 MDT