Re: Worldcon report

From: Eliezer S. Yudkowsky (
Date: Mon Sep 09 2002 - 13:07:37 MDT
> There were routine questions, usually involving anthropomorphic
> assumptions about AIs. Eventually someone asked about the potential
> to give an AI some human-like emotion, implying that this would reduce
> the risk of unwanted behavior. Vinge sidestepped the assumption and
> directly addressed the idea of making an AI safe. He responded with
> something like "Well, of course there are some people, like Eliezer
> Yudkowsky, who think that all we need to do is make AIs friendly to
> humans!" eliciting a sizable chuckle from the audience-- as much from
> his lightheartedly sarcastic tone and dramatic hand-waving as anything
> else. His body language cued them to laugh. And it felt as though it
> was a simple social animal urge to win back status by pointing out
> someone else as being more fringe than he. It strengthens one's case,
> right?

This doesn't particularly bother me, actually. Anyone who hears the
phrase "Friendly AI" without bothering to look into it in depth is likely
to think that it means Asimov Laws. As misunderstandings go, this one is
inevitable and not really all that bad. It just means that people who
already know that Asimov Laws are bad look up Friendly AI and find out
it's not Asimov Laws, and people who don't know that Asimov Laws are bad
look up Friendly AI and find out Asimov Laws are bad. I think in this
case I'm just glad to have been mentioned, regardless of whether matters
were oversimplified.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence
