From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Thu Feb 17 2005 - 17:13:50 MST
J. Andrew Rogers wrote:
> Robin Lee Powell <rlpowell@digitalkingdom.org>:
>
>>Not that it's at all unreasonable that someone would think of these
>>things, but just that it's *so* stereotypical a conversation. :-)
>
> That is a nice anecdote, but what is it doing on the SL4 mailing list?
>
> People saying or thinking crazy things about AI is not SL4 material, nor
> is noting an instance of a phenomenon so well-known to this list.
Hm... methinks your standards are set too high. Words I never thought I'd
utter; but still, I was briefly amused by Robin's brief story.
Occasionally it's all right to discuss things that are just funny, or
evidence that prosaically confirms an existing hypothesis. Failure modes
for thinking about AI are SL4-relevant. We already know about the failure
modes Robin described, but we don't have *so* many recorded cases that one
more is wholly useless.
I like to think of SL4 as a mailing list reserved for posts that could not
have been made anywhere else. There's no other mailing list where Robin's
concluding sentence would have been funny.
--
Eliezer S. Yudkowsky                        http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence