Re: Why is Friendliness sacrosanct?

From: Michael Roy Ames (michaelroyames@hotmail.com)
Date: Sun Aug 25 2002 - 11:55:04 MDT


Alden Streeter <astreeter@msn.com> wrote:

>
> But if the Sysop changed your goals, you might afterwards have a different
> opinion of whether that change empowered instead of overpowered you. It
> seems irrational that your present goals should be considered superior to
> the new, better goals that the vastly more intelligent AI would choose for
> you.
>

It *is* irrational if my position were that my present goals are superior to
the new, better goals that a vastly more intelligent AI would choose for me.
But that is not what I said... and it misses the point I was attempting to
make.

I don't consider my goals, at this moment in time, to be superior to
hypothetically better goals in the future. But at this time, in this state,
my current goals are the best I can come up with. Now, you have argued that
there may be a set of better goals, determined by a SuperIntelligence (SI),
that I cannot currently understand. Okay, that sounds perfectly possible.
You then go on to say that, if the SI were to change me, I might then be
able to understand this set of better goals - and even agree with them.
Okay, that is possible also.

Now I will attempt to make my point again: I am unwilling to give carte
blanche over my mental configuration to an SI. Prior to giving permission
for a change to my mental processes, I want an understandable explanation of
that change. I want to be 'in the loop' on making decisions about myself. I don't
want my volition violated. Period!

If this stubborn opinion strands me on a lower plateau of mentality,
compared to others... that is how it will have to be. My mentality is
already not as intelligent/perceptive/creative/whatever-measure as many
others... and I don't have a problem with that. *However* (BIG However), I
really don't think that explanations of better goals will be beyond my
ability to understand, and I have yet to see *any* evidence to show
otherwise. While it is quite possible that the difference between my goals
and the goals-an-SI-would-want-for-me *will be* beyond my current reasoning
powers, there will certainly be many intermediate mental improvements that I
would agree with, and would adopt, and that would eventually advance me to a
level where I could understand the goals-an-SI-would-want-for-me.

Now this might sound like a long journey, and a lot of hassle to you - to go
through many intermediate stages in order to get to the same destination I
could have got to in one step - but I *want* the journey. I *want* the
choices. If you want to 'put your faith' in an SI that you cannot
understand... then you may have that opportunity. I do not want to do that.

>
> Is "gives a damn" a technical term in this field? How is it defined? ;-)
>

He he! Deep concept, shallow language. We do the best we can, eh?

>
> Why should the AI be hampered by having to cater to the possibly irrational
> demands of those of lower intelligence?
>

Because we want it that way, and will build it that way. There is no
knowing the splendiferous levels of intelligence and creativity an
exponentially improving AI will reach. It'll certainly leave us biological
humans in the dust... unless it is given a reason to pay attention to the
grubby little apes waving their arms beneath it. That reason will be
articulated in the Friendly AI protocols: be good to these humans, help them
out, and do it in a way they will consider friendly.

> How do you know that what you
> consider friendly at our lower level of intelligence you would still
> consider friendly if your intelligence were enhanced?

Answered above.

> Isn't part of the
> principle of the Friendly AI that the AI should be able to decide, and
> actively change its system of deciding if it decides to, what is friendly or
> not? (I seem to recall reading that somewhere.)

Yes, absolutely.

> Then it seems to me that the
> AI, being more intelligent than you will ever be, should be more qualified
> to decide what is friendly.

Yes, we are working toward that.

>
> Again, the Sysop could just change you so that you didn't mind having your
> freedom taken away. And you only can say that would be a bad thing now,
> because the Sysop hasn't changed you yet.
>

This is not a paradox. This is life. You cannot grow up before you grow
up. As an analogy: I have made many decisions in my past life that I would
make differently if I had my current knowledge/understanding... but my
17-year-old self would definitely not want his brain instantly upgraded to my
37-year-old self. I would have missed 20 years of growing and learning...
how sad, how empty! Some others may feel differently :) That's okay.

> The only two ways I can think of out of this paradox are to:
> 1. Turn the AI loose without restrictions, including the one prohibiting the
> destruction of humans.
> 2. Arbitrarily forbid the AI from ever altering human goal systems.
>

1. Is doable, but many here consider it a horrible, very-last-ditch,
gray-goo-is-coming option.

2. Is not possible.

There is another option:

3. Turn the AI loose *with* restrictions, self-imposed restrictions that it
has been given and agrees with.

To paraphrase Eliezer: We are building minds, not tools. If the AI stops
wanting to be Friendly, then we have already lost.

Michael Roy Ames


