Two draft papers: AI and existential risk; heuristics and biases

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 04 2006 - 10:33:26 MDT


These are drafts of my chapters for Nick Bostrom's forthcoming edited
volume _Global Catastrophic Risks_. I may not have much time for
further editing, but if anyone discovers any gross mistakes, there's
still time for me to submit changes.

The chapters are:

_Cognitive biases potentially affecting judgment of global risks_
   http://intelligence.org/Biases.pdf
An introduction to the field of heuristics and biases - the experimental
psychology of reproducible errors of human judgment - with a special
focus on global catastrophic risks. However, this paper should be
generally useful to anyone who hasn't previously looked into the
experimental results on human error. If you're going to read both
chapters, I recommend that you read this one first.

_Artificial Intelligence and Global Risk_
   http://intelligence.org/AIRisk.pdf
The new standard introductory material on Friendly AI. Any links to
_Creating Friendly AI_ should be redirected here.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT