From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Apr 05 2001 - 23:11:58 MDT
James Higgins wrote:
>
> At 02:46 PM 4/5/2001 -0400, Brian Atkins wrote:
> >
> >It's unhealthy for those few haters 'cause they don't even have the chance
> >to blow anyone else up? Darn. Too bad. Get some backbone Samantha and draw
> >a line in the sand. Certain things should not be allowed. I think the vast
> >vast majority of Citizens will be 100% happy with the new world.
>
> Yup, especially if they're programmed to be.
You know, a few more comments like this and I really will blow my stack.
(In my personal rather than my moderator's capacity, however, because I
KNOW DAMN WELL WHAT ABUSE OF POWER IS AND I DON'T DO IT. Ahem.) Anyway,
this last comment crosses the thin line between skepticism and living in
your own private reality. For the love of Nebraska, what the hell have we
been discussing on this list for the last three months? Basketball?
NOBODY is proposing reprogramming ANYONE. This is some kind of sick
Orwellian fantasy that has not one damn thing to do with Friendly AI in
any form.
Now, if someone hears about the Sysop Scenario and induces that we are
proposing an Orwellian mind-reprogramming scenario because we think it's
for the best, then they have made a mistaken induction and are now
"misinformed". If they spend a few months on this list and STILL haven't
caught on to the fact that we are not proposing anything remotely like
this, they are now guilty of "aggravated cluelessness", where "aggravated"
indicates that the initial misapprehension is now being supported by
emotional hostility.
I now understand the fact that induction of an Orwellian scenario tends to
lead to perseverant hostility, and that it is my duty as an evangelist to
avoid triggering this chain of causality in the future. However, the fact
remains that you appear to have decided that we are advancing some
proposal TOTALLY UNRELATED to any proposal which we are, in fact,
advancing, and you are offering criticism on that basis; that is to say,
you are advancing criticism of a proposal which exists only in your
imagination and the paranoid fantasies of Hollywood writers. The
probability that your criticism will be useful or relevant to our ACTUAL
PROPOSAL thus effectively approaches zero.
If you want to read through Friendly AI, look at the ACTUAL ACTIONS WE ARE
PROPOSING, and then explain how they will backfire in some specific way,
that's one thing. Right now, you are just making stuff up and saying we
plan to do it.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence