From: Metaqualia (email@example.com)
Date: Fri Jun 18 2004 - 10:05:08 MDT
> Why not just write some kind of happiness-maximization algorithm?
Any of the principles that have been mentioned on the list can be connected
back to qualia. Nobody has proposed a single mechanism of morality that (if
implemented/enforced correctly) will have any chance of producing more
negative qualia than positive ones.
Let's sum up:
There are some important things we wish to see through and after the
Singularity:
- survival (continuing to function as conscious beings rather than not)
- freedom (we don't want to be forced to act against our will)
- equality (we don't want small segments of the population to be less free)
- happiness (laughter, joy, reaching the goals determined by our free will)
- growth (not being stuck in the same configuration endlessly)
- according to some, a correct interpretation of the Quran
and so forth.
I think everyone agrees on these, and yet Eliezer pointed out that
hardcoding them would be dangerous. I think that is true: the system
should be as simple as possible, but not simpler than that. Eliezer also
points out there may be other things in the future we may consider to be
even more important. I agree.
All of the items above, in my view, can be traced back to achieving a good
balance between positive and negative qualia.
Survival is essential to continue to have qualia.
Freedom has always been associated with the ability to carry out one's
wishes, which are supposed to increase positive qualia and decrease
negative ones.
Equality comes in where we start considering the qualia produced by other
people's brains and not just our own.
Happiness is positive qualia, no more no less.
Growth is seen by us as positive because we get used to things. We don't
want to get bored; we feel that being stuck in one place is evil because
traditionally this meant no further progress could be made. Additionally,
positive qualia soon fade after we have achieved something. But this is
evolutionary. A positive quale does not need to fade. We could sustain it
indefinitely, and it would be no less wonderful after you have been in it
for a million years! YET, we can strike a compromise here and say that the
_variety_ of positive qualia is also important, and thereby we account for
growth: more intelligence, bigger brains, more complex and interesting
qualia.
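The compromise above (positive balance plus a variety term) can be sketched as a toy scoring function. Everything here, the names, the weighting, and the numbers, is an illustrative assumption of mine, not a proposal for a real FAI objective:

```python
# Toy sketch: score a stream of experiences by the balance of positive
# over negative qualia, plus a bonus for the variety of positive quale
# types. The variety_weight value is an arbitrary illustrative choice.

def qualia_score(experiences, variety_weight=0.5):
    """experiences: list of (quale_type, valence) pairs,
    where valence > 0 is positive and valence < 0 is negative."""
    balance = sum(valence for _, valence in experiences)
    positive_types = {q for q, v in experiences if v > 0}
    return balance + variety_weight * len(positive_types)

# A varied happy life outscores one repeated pleasure of equal total valence.
varied = [("joy", 1.0), ("laughter", 1.0), ("discovery", 1.0)]
repetitive = [("joy", 1.0), ("joy", 1.0), ("joy", 1.0)]
assert qualia_score(varied) > qualia_score(repetitive)
```

Under this kind of objective, an eternity locked in a single quale still scores well on balance but gains nothing from the variety term, which is exactly the compromise argued for above.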
Religious convictions and the kind of social order they impose are seen as
positive by those who hold them only because they stimulate positive qualia
in these people, and they are wired in such a way that removing these
convictions would create emotional distress (negative qualia).
Using qualia as a measuring stick gets around fudley's argument about
evolution (since qualia exist due to physical law, evolved wetware merely
stimulates these patterns).
Using qualia as a measuring stick, we reconcile all our individual moral
assessments: why Hitler was evil, why we are at times justified in limiting
our children's freedom by forcing them not to jump from the window, why a
paperclip universe sucks, and so forth.
About Eliezer's argument (do not hardcode):
If in the future we discover element X which becomes, in our opinion, more
important than freedom, more important than happiness, and so forth, it will
be because it stimulates positive qualia with greater strength or because it
avoids suffering where freedom and happiness do not. Or, IF in the future we
discover some _other_ state of existence which is more intense than human
qualia, then we can dump qualia and pass on (so we must allow for such a
discovery in the FAI). Keep in mind that we now call a "quale" anything that
gives us a flash of experience with a certain flavour. So anything _like_
that, even of a completely different magnitude, will still be a quale; you
can use the same word, no matter how much more complex it is.
"What if there's something _better_ than qualia and how to plan for it" is
still an open topic for me (meaning that I will be giving it thought), but I
think that a favorable variety and balance of qualia is the most important
thing and there is no doubt in my mind about this now. I'd take a Biggest
Gamble, for 10 billion years of happiness of all sorts. If you screw it, you
miss out on the other stuff that doesn't make you happy, but you're still
there in a sort of transhuman heaven! I wouldn't take a Biggest Gamble on
some mind extrapolating machine though.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT