From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun Sep 26 2004 - 18:05:28 MDT
Sebastian Hagen wrote:
>
> I wasn't certain that it would be; eudaemonia just lacked justification,
> imo. I regarded it as a possible answer, but didn't consider it any more
> likely than, say, 'maximize the number of paperclips in the universe'.
> But considering that 'objective morality' is ill-defined, I suppose
> expecting any objective justification was unreasonable.
> Thank you for clearing that up.
The problem word is "objective". There's a very deep problem here, a place
where the mind processes the world in such a way as to create the
appearance of an impossible question. Sort of like similar questions asked
by dualists through the ages: How could mere matter give rise to thought?
How could mere matter give rise to intentionality? How could 'is' ever
imply 'ought'? Considering the amount of philosophical argument that has
gone into this sort of thing, I hope it is not too implausible when I say
that my preferred method for actually untangling the confusion is a bit
difficult to explain. But if you're willing to leave the confusion tangled
and ask after my moral output, then my answer starts with the me that
exists in this moment, and asks what changes I would make to myself if I
had the power to do so. "Human is what we are. Humaneness is renormalized
humanity, that which, being human, we wish we were." Etc.
I don't want to say that you can *safely* ignore the philosophical
confusion. That sort of thing is never safe. I do allege that the
philosophical confusion is just that, a confusion, and after it gets
resolved everything is all right again. The apparent lack of any possible
objective justification doesn't mean that life is meaningless; it means
that you're looking at the question in a confused way. When the confusion
goes away you'll get back most of the common sense you started with, only
this time you'll know why you're keeping it.
The root mistake of the TMOL ("The Meaning of Life") FAQ was in attempting
to use clever-seeming logic to manipulate a quantity, "objective morality",
which I confessedly
did not understand at the time I wrote the FAQ. It isn't possible to
reason over mysterious quantities and get a good answer, or even a
well-formed answer; you have to demystify the quantity first. Nor is it
possible to construct an AI to accomplish an end for which you do not
possess a well-specified abstract description.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence