Re: Two draft papers: AI and existential risk; heuristics and biases

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Jun 06 2006 - 18:13:18 MDT


On Sun, Jun 04, 2006 at 09:33:26AM -0700, Eliezer S. Yudkowsky wrote:
>
> _Artificial Intelligence and Global Risk_
> http://intelligence.org/AIRisk.pdf
> The new standard introductory material on Friendly AI. Any links to
> _Creating Friendly AI_ should be redirected here.

Googlewhack!

http://www.google.com/search?q=lumpenfuturistic&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official

This implies to me that either you made the word up, in which case
you should explain it, or you misspelled it.

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/

