Re: Why is Friendliness sacrosanct?

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Aug 25 2002 - 22:55:50 MDT


Alden Streeter wrote:
>> From: Samantha Atkins <samantha@objectent.com>

>>
>> I don't consider the goal of continuously improving life to be in the
>> least "petty" or "primitive". Do you?
>
>
> But your idea of what is "improving" or not is determined by your
> current goal system. If your goals were changed by the Sysop, you might
> think differently.

Any change to them without my consent would appear to be an act
of aggression. Rewiring my brain involuntarily so "I don't mind"
adds to the aggressive violation rather than dissipating it.

>
>> Do you believe it is the right of any brighter being that comes along
>> to rewrite all "lesser" beings in whatever manner it chooses or to
>> destroy them? Do you believe it should be?
>
>
> I can only determine now, according to my current goals, whether it
> would be a right. I might think differently if I had different goals.
> And there is no reason to believe that my current goals are the best
> possible goals - the superior intelligence of the AI would likely be
> able to think up better ones for me to have. It doesn't seem rational
> to forbid the AI from altering us in certain ways just because we are
> too primitive to realize, until after we are altered, that we are being
> helped and not harmed.
>

I am asking what you believe. What you might someday
hypothetically believe is irrelevant. I am not asking you to
stand in the middle of nowhere - without any goals, or in the
universe of all possible goals - and ask what the best goal
system is, and thus be able to answer every other question of
rights or "should". None of us can or ever will stand there, so
why pretend it is important to answer from such an unreachable
place of no relevance to us here? I cannot for a moment consider
it "rational" to avoid considering what is important to you when
deciding how to proceed - including what sort of AI or AI
development you support. Whining about how "primitive" we are
is a gross shirking of self-responsibility, imho.

If we are effectively just going to shrug and say "let the SI
figure it out", then I would agree we don't have any particular
place in the future, or in the present either, because if we do
that we haven't even really bothered to be.

>> If you are helping to design such a being (or that which becomes such
>> a being), would you consider it just "petty" to look for a way to
>> encourage it to be a help to human beings rather than their doom?
>
>
> The concepts of "help" and "doom" are determined by your current
> goals, which you have no reason, other than the influence of those same
> goals, to believe are superior or even preferable to other goals.
>

I am getting bored.

- samantha


