Re: Threats to the Singularity.

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jun 17 2002 - 17:15:13 MDT


Eliezer S. Yudkowsky wrote:

> Gordon Worley wrote:
>
>>
>> On Monday, June 17, 2002, at 05:31 AM, Samantha Atkins wrote:
>>
>>> Do I read you correctly? If I do, then why do you hold this
>>> position? If I read you correctly then how can you expect the
>>> majority of human beings, if they really understood you, to consider
>>> you as other than a monster?
>>
>
> Shouldn't you be trying to figure out what's right before discussing its
> PR value? Or are you arguing that the "yuck factor" reaction of many
> humans is representative of an actual moral wrong? If so, why not argue
> the moral wrong itself, rather than arguing from the agreement of a
> large number of people who have not actually been consulted?
>

What do you think that I am doing other than attempting to
figure out what is being said and why? Killing other sentients
without very dire reasons is in my book a moral wrong. It has
nothing to do with "yuck factor" and I am quite disappointed to
see you taking this line. I thought I remembered you taking
the view that an FAI will safeguard the lives of humans,
uploaded or not, as much as possible, and that this is part of
what it means to be "Friendly". I certainly remember you, on
several occasions, using the final deaths of so many humans
today and the possibility of giga-death destruction just
around the corner as a strong motivator for the Work. So I
hardly see why concern for the continued existence of our
fellow sentients can now be dismissed as simply an
evolution-programmed "yuck" factor.

>
> Exactly. Morality, like rationality, is never on anyone's side. The
> most you can try to do is end up being on the side of morality. The
> price of seeing the morality of a situation clearly is that you start
> out by asking which side you should be on, rather than looking for a way
> to rationalize one side. Sometimes, just as in rationality, evidence
> (or valid moral argument) is weighted very heavily on one side of the
> scales and judgement is easy, but it doesn't mean that judgement can be
> replaced with prejudgement.
>

What is the basis of your morality? Who is talking of
prejudgement? A prejudgement that humans are expendable if
need be (to be determined strictly by the "Friendly" AI) seems
to be exactly what is being made here.

> It goes back to that same principle of building something eternal. This
> isn't a contest to see who can say the nicest things about humanity.

Who in the hell ever said it was? I am concerned with the
well-being and the freeing of humanity, not with saying nice
things about it.

  
> The decision that a universe with humanity or human-derived minds in it
> is what we want to see lasting through eternity is not a decision for
> either a Friendly AI or a human philosopher to make lightly, whether
> "eternity" is taken to mean a few billion years or an actual infinity.

Well, I never said the minds have to be strictly human. I
never said that human-derived minds have to exist forever,
either. I said that sentients, and their continued well-being
to the maximum extent possible, should be a top priority if I
am to believe that what is proposed is "Friendly" or even
remotely palatable. I have no problem with the forms of the
sentients changing to quite non-human modalities as they find
useful and convenient, or with their joining with an SI, and
so on. I have no problem with certain lines eventually falling
behind and even becoming extinct. I have a very large problem
with saying that it is all right for us to build something
that holds open the option of exterminating all human life by
choice. I also have a large problem if the upliftment of human
beings (volitionally, of course) is not a high priority.

> Either way that's a hell of a long time. Isn't it worth an hour to
> think about it today? Even if the moral question is "trivial", in the
> mathematical sense of being a trivial consequence of the basic rules of
> moral reasoning, then this itself needs to be established.
>

Since it argues against a strawman, this paragraph is without
much meaning.

> There are also penalties to intelligence if you stop thinking too early.
> What if humanity's survival was morally worthwhile given a certain
> easily achievable enabling condition, but a snap judgement caused you to
> miss it? I can't think of any concrete scenario matching this
> description, but I think that growing into a strong thinker involves
> thinking through every possibility. The conclusions may be obvious but
> you still have to do the math to arrive at the obvious conclusions.

I don't need to do math to prove something so fundamental to me
is in fact fundamental to me! The well-being and freeing of
humankind and other sentients is a supergoal for me. I will
judge your work in terms of its utility for that supergoal.

> Otherwise you *don't know* the math! Maybe this doesn't matter much if
> you're willing to go through your life on autopilot, but it sure as heck
> matters for building AI. And the only way you can know the math is by

You have to start somewhere. How will you prove the supergoal
of Friendliness to the SI?

> being willing to emotionally accept either outcome when you start
> thinking. You can't pretend to be able to accept either outcome in
> order to find the math. You have to be able to *actually* accept the
> moral outcome whatever it is. This is why "attachment", even to good
> things that really turn out to be good, is a bad thing.
>

You seem to be arguing that ethics can be done in a complete
vacuum with no starting basis whatsoever. I do not believe this
can be done.

- samantha


