From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Feb 04 2002 - 13:09:38 MST
Gordon Worley wrote:
>
> Looks pretty good. I really like how it keeps pounding in certain
> ideas. By the end of this I think that the reader will have read some
> things enough to start taking them as fact. This is just the kind of
> writing needed for an FAQ like this.
The way you convince people is not through repetition of flat statements;
the way you convince people is through the repeated novel variation of
structured, rational arguments until one of them sinks in.

This is actually my main problem with Gordon's current Sysop pages: the
concepts are stated, but not argued. Even if your target audience is core
Singularitarians, I still don't think this is good enough. Knowledge that
doesn't come with an explanation is not real knowledge. I'm not saying
that you have to repeat everything that's said in "Creating Friendly AI",
but something being presented as an FAQ should at least contain the
outlines of the arguments for each position. At the very least, if you
want to say that the Sysop does not have an anthropomorphic power-corrupts
module, you have to spend a few sentences arguing it and include a link to
a more detailed reference like CFAI 2.2.whatever. You can't just say:
"Don't worry! The Sysop is totally immune to the tendency to be corrupted
by power!" That does not reassure people.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT