Re: The Future of Human Evolution

From: Jef Allbright (jef@jefallbright.net)
Date: Sun Sep 26 2004 - 13:07:35 MDT


Eliezer Yudkowsky wrote:

> Sebastian Hagen wrote:
>
>> Aleksei Riikonen wrote:
>>
>>> As an agent striving to be non-eudaemonic, could you elaborate on
>>> what are the things you value? (Non-instrumentally, that is.)
>>
>>
>> The best answer I can give is 'whatever has objective moral relevance'.
>> Unfortunately I don't know what exactly qualifies for that, so currently
>> the active subgoal is to get more intelligence applied to the task of
>> finding out. Should there be in fact nothing with objective moral
>> relevance, what I do is by definition morally irrelevant, so I don't
>> have to consider this possibility in calculating the expected utility of
>> my actions.
>> This rationale has been copied from
>> <http://yudkowsky.net/tmol-faq/logic.html#meaning>, but (since I haven't
>> found anything that appears to be better) it does represent my current
>> opinion on the matter.
>
>
> TMOL is superseded by CFAI, which in turn is superseded by
> http://sl4.org/wiki/DialogueOnFriendliness and
> http://sl4.org/wiki/CollectiveVolition.

Which will likely be superseded by something along the lines of
"Rational Actualization".

Eliezer, your current stumbling blocks are these: (1) extrapolation
fails to predict the future environment, including our own place in
it, due to combinatorial explosion, cumulative error, and
fundamentally bounded knowledge; and (2) while humanity is the
standard by which we judge morality, that standard is encompassed by
the bigger-picture "morality", which is simply this: what works,
survives and grows.

The next step in the journey will indeed involve increasing awareness
of the multi-vectored "collective volition" of humanity, but in the
context of what works: a focus on effective principles for
actualizing our collective vision and driving toward an inherently
unknowable future.

While humanity is the measure of our morality, current humanity is
only a milepost along the way. Our current values will be seen as
rational only within their limited context, and should not be applied
outside it. We cannot successfully extrapolate from limited data if
we want solutions that apply to a larger context.

We can influence the direction, but not the destination, of our path
through our "moral" choices. We do this by applying increasing
awareness of ourselves, our environment, and the principles that
describe growth, but the process is open-ended, not something
amenable to extrapolation and control.

Why is this important? Because we can significantly improve the quality
of the journey by preparing the right tools now.

- Jef
http://www.jefallbright.net

>
> Roughly, the problem with the 'objective' criterion is that to get an
> objective answer you need a well-defined question, where any question
> that is the output of a well-defined algorithm may be taken as
> well-defined. In the TMOL FAQ you've got an algorithm that searches
> for a solution and has no criterion for recognizing a solution. This,
> in retrospect, was kinda silly. If you actually implement the stated
> logic or something as close to the stated logic as you can get, what
> it will *actually* do (assuming you get it to work at all) is treat,
> as its effective supergoal, the carrying out of operations that have
> been shown to be generally useful for subproblems, relative to some
> generalization operator. For example, it might sort everything in the
> universe into alphabetical order (as a supergoal) because sorting
> things into alphabetical order was found to often be useful on
> algorithmic problems (as subgoals). In short, the damn thing don't work.
>
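[To make that failure mode concrete, here is a minimal sketch in
Python. It is my own illustration, not code or pseudocode from the
TMOL FAQ: a search loop whose operators are scored by how useful they
were on past subproblems, but which has no predicate for recognizing
a solution, so applying the "generally useful" operator becomes the
de facto supergoal.]

    # Hypothetical sketch, not from the TMOL FAQ: a "solver" given operators
    # scored by past usefulness on subproblems, but no test for recognizing
    # a solution. With no success criterion, all it can do is keep applying
    # whichever operator scored best -- e.g. alphabetical sorting.

    def solve(world, operators, past_usefulness, max_steps=10):
        # pick the operator most often useful on earlier subproblems
        best = max(operators, key=lambda op: past_usefulness[op.__name__])
        for _ in range(max_steps):   # bounded here only for the demo;
            world = best(world)      # the useful op has become the supergoal
        return world                 # never "done" -- there is no success test

    alphabetize = sorted
    print(solve(["zebra", "apple"], [alphabetize], {"sorted": 3}))

[The point of the sketch is only that, absent a criterion for what
counts as an answer, the loop optimizes the proxy it does have.]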
> We know how morality evolved. We know how it's implemented in
> humans. We know how human psychology treats it. ('We' meaning
> humanity's accumulated scientific knowledge - I understand that not
> everyone is familiar with the details.) What's left to figure out?
> What information is missing? If a superintelligence could figure out
> an answer, why shouldn't we?
>
> The rough idea behind the 'volitional extrapolation' concept is that
> you can get an answer to a question that, to a human, appears poorly
> defined, so long as the human actually contains the roots of an
> answer; criteria that aren't obviously relevant - that haven't yet
> reached out and connected themselves to the problem - but that would
> connect themselves to the problem given the right realizations. Like,
> you know 'even numbers are green', and you want to know 'what color is
> 14?', but you haven't applied the computing power to figure out that
> 14 is even. That kind of question can turn out to be well-defined,
> even if, at the moment, you're staring at 14 with absolutely no idea
> how to determine what color it is. But the question, if not the
> answer, has to be implicit in the asker.
>
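[A toy rendering of that "implicit question" point, again my own
Python illustration rather than anything from the post: the asker
already holds the criterion "even numbers are green", and the answer
to "what color is 14?" becomes well-defined once the unapplied
computation (14 is even) is actually run.]

    # Toy illustration (mine, not Eliezer's): the criterion the asker
    # already holds, plus a computation not yet performed, jointly pin
    # down the answer to a question that looked under-defined.

    def is_even(n):
        return n % 2 == 0

    def color_of(n):
        # "even numbers are green" -- the criterion implicit in the asker
        return "green" if is_even(n) else "not determined by this criterion"

    print(color_of(14))  # -> green, once the missing step (14 is even) is done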
> If you claim that you know absolutely nothing about objective
> morality, how would you look at an AI, or any other well-defined
> process, and claim that it did (or for that matter, did not) compute
> an objective morality? If you know absolutely nothing about objective
> morality, where it comes from, what sort of thing it is, and so on,
> then how do you know that 'Look in the Bible' is an unsatisfactory
> justification for a morality?
>
> It all starts with humans. You're the one who'd have to determine any
> criterion for recognizing a morality, an algorithm for producing
> morality, or an algorithm for producing an algorithm for producing
> morality. Why would any answer you recognized as reasonable be
> non-eudaemonic?
>


