Re: SI definition of Friendliness

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Apr 05 2001 - 23:45:53 MDT


At 01:11 AM 4/6/2001 -0400, Eliezer S. Yudkowsky wrote:
>James Higgins wrote:
> > At 02:46 PM 4/5/2001 -0400, Brian Atkins wrote:
> > >It's unhealthy for those few haters 'cause they don't even have the chance
> > >to blow anyone else up? Darn. Too bad. Get some backbone Samantha and draw
> > >a line in the sand. Certain things should not be allowed. I think the vast
> > >vast majority of Citizens will be 100% happy with the new world.
> > Yup, especially if they're programmed to be.
>
>You know, a few more comments like this and I really will blow my stack.
>(In my personal rather than my moderator's capacity, however, because I
>KNOW DAMN WELL WHAT ABUSE OF POWER IS AND I DON'T DO IT. Ahem.) Anyway,
>this last comment crosses the thin line between skepticism and living in
>your own private reality. For the love of Nebraska, what the hell have we
>been discussing on this list for the last three months? Basketball?
>NOBODY is proposing reprogramming ANYONE. This is some kind of sick
>Orwellian fantasy that has not one damn thing to do with Friendly AI in
>any form.

Sorry, Eliezer. I completely understand that this is NOT what Friendly AI
is supposed to be, in any form. However, can you guarantee me, with any
degree of certainty whatsoever, that the version of Friendly AI you
advocate is the one that will ultimately come to exist?

As long as the Friendly AI people keep making it sound like everything is
going to go like clockwork and be perfect, I'm going to continue to point
out the opposite viewpoint. I do not share your faith in Friendly AI. I'm
not even certain that I share your goals. However, you are trying to do
what you feel is best for mankind; that I understand. And we can debate
the fine details of that for the next several years, I'm sure.

But you do not have this under control. You can't even reasonably predict
how this will turn out. Even if you design the perfect seed AI, it may
still turn out completely differently than you imagine. What I am asking
is that you please stop trying to sound so damn confident about all of this.

>I now understand the fact that induction of an Orwellian scenario tends to
>lead to perseverant hostility, and that it is my duty as an evangelist to
>avoid triggering this chain of causality in the future. However, the fact
>remains that you appear to have decided that we are advancing some
>proposal TOTALLY UNRELATED to any proposal which we are, in fact,
>advancing, and you are offering criticism on that basis; that is to say,
>you are advancing criticism of a proposal which exists only in your
>imagination and the paranoid fantasies of Hollywood writers. The
>probability that your criticism will be useful or relevant to our ACTUAL
>PROPOSAL thus effectively approaches zero.

Just to make certain that you understand my point: at no time have I ever
thought that YOU wanted such "Orwellian" scenarios to occur. You have good
intentions; that I do not question. But, as they say, the road to hell is
paved with good intentions.

We are playing with incredibly dangerous technology here. Not once have I
seen the powers that be on this list stop and ask, "Should we do this?" You
seem to have a conviction that you MUST do this and that it MUST be done
some certain way. Please admit that you may be totally wrong or that, even
if you are not, there is still a good chance that you will fail to produce
what you have planned.

(NOTE: For the record, I am convinced that either AI or human uploads will
happen. My present thoughts are focused on which would be better and why.)

>If you want to read through Friendly AI, look at the ACTUAL ACTIONS WE ARE
>PROPOSING, and then explain how they will backfire in some specific way,
>that's one thing. Right now, you are just making stuff up and saying we
>plan to do it.

I sincerely apologize that you took this the wrong way. Sometimes it is
hard to make your point without screwing it up in cyberspace. If only we
had discussed this in person...
