Re: The Future of Human Evolution

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sun Sep 26 2004 - 06:47:18 MDT


Aleksei Riikonen wrote:
> As an agent striving to be non-eudaemonic, could you elaborate on what
> the things you value are? (Non-instrumentally, that is.)
The best answer I can give is 'whatever has objective moral relevance'.
Unfortunately I don't know what exactly qualifies, so my current active
subgoal is to get more intelligence applied to the task of finding out.
Should there in fact be nothing with objective moral relevance, then
whatever I do is by definition morally irrelevant, so I don't have to
consider that possibility when calculating the expected utility of my
actions (see the sketch below).
This rationale has been copied from
<http://yudkowsky.net/tmol-faq/logic.html#meaning>, but (since I haven't
found anything that appears to be better) it does represent my current
opinion on the matter.
This opinion may well be horribly flawed; correctly solving a complex
problem requires getting everything (important) right, while disproving
a proposed solution only requires finding one critical mistake.
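
To make the expected-utility reasoning above explicit (a minimal sketch
in my own notation; H stands for "something has objective moral
relevance", a ranges over possible actions, and I'm assuming P(H) > 0):

   EU(a) = P(H) * U(a|H) + P(~H) * U(a|~H)

Under ~H every action is equally morally irrelevant, so U(a|~H) is some
constant c that doesn't depend on a. Therefore

   argmax_a EU(a) = argmax_a [ P(H) * U(a|H) + P(~H) * c ]
                  = argmax_a U(a|H)

i.e. the ~H branch drops out of any comparison between actions.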

> Note that in Bostrom's essay, even consciousness itself was classified as
> eudaemonic. (At least in the case where the supposition that consciousness
> isn't necessary for maximizing the efficiency of any optimizing or problem-
> solving process is true.) Assuming that we are all using the same
> terminology here, it would seem that consciousness is morally irrelevant to
> you.
I don't understand consciousness well enough to refer to it
meaningfully when talking about morality. I don't think that what I
would (intuitively) call 'consciousness' is by definition eudaemonic,
but since I don't have any clear ideas about the concept, that's a moot
point.

> The only somewhat relevant drawback with regard to his suggestions that
> comes to my mind as of now, is that we would indeed be sacrificing some
> problem-solving efficiency by being eudaemonic. This might pose a problem
> in some scenarios in which we are competing with external agents presently
> unknown to us (e.g. extraterrestrial post-singularity civilizations). It
> would seem like quite the non-trivial question whether the probability
> that we are in fact situated in such a scenario is non-infinitesimal.
Additionally, even if that were the case, it would be questionable
whether the difference in efficiency matters for the outcome of the
conflict, assuming there is one. On cosmic timescales it seems highly
unlikely that the two civilizations would reach superintelligence at
roughly the same time (say, within a few hundred years of each other).
Since one of them would likely have far more time to establish
infrastructure and perform research before the SI-complexes encounter
each other, the loss of efficiency caused by preferring eudaemonic
agents may well be completely irrelevant to the outcome.
There are other relevant aspects, such as the possibility of enforcing
a stand-off once a minimum technology level is reached, but this has
been discussed on sl4 before and isn't really the topic of this thread.

> So let's strive to build something FAIish and find out ;)
Sure - though considering the differences in world-models and goals
between present-day humans, we may not find anything we can all agree
qualifies as FAIish.

Sebastian Hagen


