From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Mon Jun 17 2002 - 09:13:36 MDT

Gordon Worley wrote:
> On Monday, June 17, 2002, at 05:31 AM, Samantha Atkins wrote:
>> Do I read you correctly? If I do, then why do you hold this
>> position? If I read you correctly then how can you expect the
>> majority of human beings, if they really understood you, to consider
>> you as other than a monster?

Shouldn't you be trying to figure out what's right before discussing its PR
value? Or are you arguing that the "yuck factor" reaction of many humans is
representative of an actual moral wrong? If so, why not argue the moral
wrong itself, rather than arguing from the agreement of a large number of
people who have not actually been consulted?

> If an SI said it needed to kill a bunch of humans, I would seriously
> start questioning its motives. Killing intelligent life is not
> something to be taken lightly and done on a whim. However, if we had a
> FAI that was really Friendly and it said "Gordon, believe me, the only
> way is to kill this person", I would trust in the much wiser SI.
>
> This is the kind of reaction I expect and, while I'm a bit disappointed
> to get so much of it on SL4, therefore avoid pointing this view out. I
> never go out of my way to say that human life is not the most important
> thing to me in the universe, but sometimes it is worth talking about.

Exactly. Morality, like rationality, is never on anyone's side. The most
you can try to do is end up being on the side of morality. The price of
seeing the morality of a situation clearly is that you start out by asking
which side you should be on, rather than looking for a way to rationalize
one side. Sometimes, just as in rationality, evidence (or valid moral
argument) is weighted very heavily on one side of the scales and judgement
is easy, but that doesn't mean judgement can be replaced with prejudgement.

It goes back to that same principle of building something eternal. This
isn't a contest to see who can say the nicest things about humanity. The
decision that a universe with humanity or human-derived minds in it is what
we want to see lasting through eternity is not a decision for either a
Friendly AI or a human philosopher to make lightly, whether "eternity" is
taken to mean a few billion years or an actual infinity. Either way that's
a hell of a long time. Isn't it worth an hour to think about it today?

Even if the moral question is "trivial", in the mathematical sense of being
a trivial consequence of the basic rules of moral reasoning, then this
itself needs to be established.

There are also penalties to intelligence if you stop thinking too early.
What if humanity's survival were morally worthwhile given a certain easily
achievable enabling condition, but a snap judgement caused you to miss it?
I can't think of any concrete scenario matching this description, but I
think that growing into a strong thinker involves thinking through every
possibility. The conclusions may be obvious but you still have to do the
math to arrive at the obvious conclusions. Otherwise you *don't know* the
math! Maybe this doesn't matter much if you're willing to go through your
life on autopilot, but it sure as heck matters for building AI. And the
only way you can know the math is by being willing to emotionally accept
either outcome when you start thinking. You can't pretend to be able to
accept either outcome in order to find the math. You have to be able to
*actually* accept the moral outcome whatever it is. This is why
"attachment", even to good things that really turn out to be good, is a bad
idea.

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT