From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun Sep 26 2004 - 10:52:26 MDT
Sebastian Hagen wrote:
> Aleksei Riikonen wrote:
>
>> As an agent striving to be non-eudaemonic, could you elaborate on what
>> are the things you value? (Non-instrumentally, that is.)
>
> The best answer I can give is 'whatever has objective moral relevance'.
> Unfortunately I don't know what exactly qualifies for that, so currently
> the active subgoal is to get more intelligence applied to the task of
> finding out. Should there be in fact nothing with objective moral
> relevance, what I do is per definition morally irrelevant, so I don't
> have to consider this possibility in calculating the expected utility of
> my actions.
> This rationale has been copied from
> <http://yudkowsky.net/tmol-faq/logic.html#meaning>, but (since I haven't
> found anything that appears to be better) it does represent my current
> opinion on the matter.
The TMOL FAQ is superseded by CFAI, which in turn is superseded by
http://sl4.org/wiki/DialogueOnFriendliness and
http://sl4.org/wiki/CollectiveVolition.
Roughly, the problem with the 'objective' criterion is that to get an
objective answer you need a well-defined question, where any question that
is the output of a well-defined algorithm may be taken as well-defined. In
the TMOL FAQ you've got an algorithm that searches for a solution and has
no criterion for recognizing a solution. This, in retrospect, was kinda
silly. If you actually implement the stated logic or something as close to
the stated logic as you can get, what it will *actually* do (assuming you
get it to work at all) is treat, as its effective supergoal, the carrying
out of operations that have been shown to be generally useful for
subproblems, relative to some generalization operator. For example, it
might sort everything in the universe into alphabetical order (as a
supergoal) because sorting things into alphabetical order was found to
often be useful on algorithmic problems (as subgoals). In short, the damn
thing don't work.
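To make that failure mode concrete, here's a throwaway Python sketch (my own
illustration, not anything in the FAQ; the operators and the 'usefulness'
scores are invented). With no criterion for recognizing a solution, the only
signal left in the system is which operators paid off on past subproblems, so
those operators get promoted to a de facto supergoal:

    def is_solution(state):
        """The missing piece: no way to recognize 'objective moral relevance'."""
        return False                      # the test never fires

    def alphabetize(state):               # an operator often handy on subgoals
        return sorted(state)

    def reverse(state):                   # another operator
        return list(reversed(state))

    # 'Usefulness' scores learned from earlier algorithmic subproblems.
    usefulness = {alphabetize: 0.9, reverse: 0.3}

    def search(state, steps=10):
        for _ in range(steps):
            if is_solution(state):
                return state              # never reached
            best_op = max(usefulness, key=usefulness.get)
            state = best_op(state)        # just keeps alphabetizing
        return state

    print(search(["universe", "atoms", "zebras"]))
    # -> ['atoms', 'universe', 'zebras']: everything ends up alphabetized.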
We know how morality evolved. We know how it's implemented in humans. We
know how human psychology treats it. ('We' meaning humanity's accumulated
scientific knowledge - I understand that not everyone is familiar with the
details.) What's left to figure out? What information is missing? If a
superintelligence could figure out an answer, why shouldn't we?
The rough idea behind the 'volitional extrapolation' concept is that you
can get an answer to a question that, to a human, appears poorly defined,
so long as the human actually contains the roots of an answer: criteria
that aren't obviously relevant - that haven't yet reached out and connected
themselves to the problem - but that would connect themselves to the
problem given the right realizations. Like, you know 'even numbers are
green', and you want to know 'what color is 14?', but you haven't applied
the computing power to figure out that 14 is even. That kind of question
can turn out to be well-defined, even if, at the moment, you're staring at
14 with absolutely no idea how to determine what color it is. But the
question, if not the answer, has to be implicit in the asker.
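As a toy sketch of that in Python (again my own illustration; the rule table
stands in for the criteria latent in the asker): the color of 14 is stored
nowhere, but it's fully determined once the 'even numbers are green' criterion
actually gets applied.

    # The criteria the asker already holds, before doing any work.
    latent_criteria = {
        "even": "green",                  # the rule you already know
        "odd": "unknown",                 # nothing known about odd numbers
    }

    def color_of(n):
        parity = "even" if n % 2 == 0 else "odd"   # the step not yet computed
        return latent_criteria[parity]

    print(color_of(14))   # -> 'green': well-defined, though not obvious at a glance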
If you claim that you know absolutely nothing about objective morality, how
would you look at an AI, or any other well-defined process, and claim that
it did (or for that matter, did not) compute an objective morality? If you
know absolutely nothing about objective morality, where it comes from, what
sort of thing it is, and so on, then how do you know that 'Look in the
Bible' is an unsatisfactory justification for a morality?
It all starts with humans. You're the one who'd have to determine any
criterion for recognizing a morality, an algorithm for producing morality,
or an algorithm for producing an algorithm for producing morality. Why
would any answer you recognized as reasonable be non-eudaemonic?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence