Re: The Future of Human Evolution

From: Aleksei Riikonen (aleksei@iki.fi)
Date: Sat Sep 25 2004 - 15:48:21 MDT


Sebastian Hagen wrote:
> From the perspective of an agent, that is striving to be
> non-eudaemonic,(me) the proposed implementation looks like something
> that could destroy a lot of efficiency at problem-solving. If a
> (renormalized) Collective Volition came to the conclusion that this
> is a good idea I'd respect it (since it would have been made by
> transhuman minds that originally would have been human ones), but
> human-level minds forcing this kind of two-class-society on transhuman
> ones appears like a very bad idea to me.

As an agent striving to be non-eudaemonic, could you elaborate on what it is
that you value? (Non-instrumentally, that is.)

Note that in Bostrom's essay, even consciousness itself was classified as
eudaemonic. (At least if the supposition that consciousness isn't necessary
for maximizing the efficiency of any optimizing or problem-solving process is
true.) Assuming that we are all using the same terminology here, it would
seem that consciousness is morally irrelevant to you. What is it, then, that
you assign value to in a world devoid of qualia?

I do agree with you on the point that there seems to be no objective
morality. The qualities I value, however, more or less resemble those
mentioned by Bostrom (humor, love, game-playing etc.), and I find his
argumentation in The Future of Human Evolution quite forceful.

The only somewhat relevant drawback to his suggestions that comes to my mind
as of now is that we would indeed be sacrificing some problem-solving
efficiency by being eudaemonic. This might pose a problem in scenarios where
we are competing with external agents presently unknown to us (e.g.
extraterrestrial post-singularity civilizations). Whether the probability
that we are in fact situated in such a scenario is non-infinitesimal seems
like quite a non-trivial question.

So let's strive to build something FAIish and find out ;)

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei
Student of math and CS, University of Helsinki
SIAI supporter - http://intelligence.org
[ Operating by Crocker's Rules (http://sl4.org/crocker.html) ]


This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:46 MST