Re: qualia, once and for all

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sat Jun 19 2004 - 03:36:05 MDT


Metaqualia wrote:
>[...]
> Any of the principles that have been mentioned on the list can be connected
> back to qualia. Nobody has proposed a single mechanism of morality that (if
> implemented/enforced correctly) will have any chance of producing more
> negative qualia than positive ones.
This has been suggested before. There are quite a few sl4 wiki pages on the topic, e.g.
http://sl4.org/bin/wiki.pl?action=browse&id=QualiaBasedAltruisticUtilitarianism
which describes the basic idea, and there is some more speculation on
http://sl4.org/bin/wiki.pl?ControversialPages/QualiaAndVolition
http://sl4.org/bin/wiki.pl?ControversialPages/QualiaPartTwo
http://sl4.org/bin/wiki.pl?ControversialPages/QualiaPartThree
To quote the last of these:
'
One obvious example of the difference between "pure QBAU" and volitional
morality is that an AI whose goal system was based on "pure QBAU" would very
likely immediately start converting all of the matter in the universe into
computronium on which to run "happy minds", painlessly killing all biological
life in the process, possibly without even waiting to upload any humans.
'
> All of the items above, in my view, can be traced back to achieving a good
> balance between positive and negative qualia.
>
> Survival is essential to continue to have qualia.
Please define "survival". The continued existence of any currently active human
minds is almost certainly not necessary for an SI to generate positive qualia.
As a matter of fact, there are probably much more efficient implementations than
even uploaded human minds. The SI could modify uploaded human minds until they
became efficient qualia generators, but why bother? If qualia generation is the
goal, it would be easier to just run the code for the most efficient qualia
generator known to the SI at the time.
> Freedom has always been associated with the ability of carrying out one's
> wishes which are supposed to increase positive qualia and decrease negative
> ones.
Imho, it applies equally to the ability to carry out wishes that are meant to
achieve other ends.
> Equality comes in where we start considering the qualia produced by other
> people's brains and not just our own.
Hmmm. But in the end, for a mind following pure QBAU it wouldn't really matter
where the qualia came from (from which mind, if that distinction is still
relevant). If an asymmetric distribution led to a higher total sum of qualia,
this morality would call for implementing it.
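To make that indifference concrete, here is a minimal toy sketch in Python (the
function name, the numbers, and the whole model are my own invention for
illustration, not anything taken from the wiki pages): a "pure QBAU" utility
that simply sums valences over all qualia instances contains no term for which
mind hosts them.

  # Toy model, purely for illustration: each mind is a list of
  # qualia valences (positive = pleasant, negative = unpleasant).
  def qbau_utility(minds):
      # Pure QBAU scores a world by the plain sum of all valences;
      # the grouping of qualia into minds never enters the result.
      return sum(valence for mind in minds for valence in mind)

  symmetric = [[1.0, 1.0], [1.0, 1.0]]  # qualia spread evenly across two minds
  asymmetric = [[4.0], []]              # one mind gets everything
  assert qbau_utility(symmetric) == qbau_utility(asymmetric) == 4.0

Under such a model, equality only matters insofar as it happens to maximize the
sum; it carries no weight of its own.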
> YET, we can strike a compromise here and say that the
> _Variety_ of positive qualia is also important, therefore we account for
> growth. More intelligence, bigger brains, more complex and interesting
> positive qualia.
Why would we want to do that if the overall positiveness of qualia were really
all we cared about?
> Using qualia as a measuring stick we reconcile all our individual morality
> assessments including why Hitler was evil, why we are justified in forcing
> our children not to jump from the window thereby limiting their freedom at
> times, why a paperclip universe sucks, and so forth.
I don't think that justifies an assumption as strong as the claim that qualia
represent objective morality.
> About Eliezer's argument (do not hardcode):
> If in the future we discover element X which becomes in our opinion more
> important than freedom, more important than happiness, and so forth, it will
> be because it stimulated positive qualia with greater strength or because it
> avoids suffering where freedom and happiness do not.
Hmmm. If I understand this correctly, you assume that any sufficiently
intelligent mind would see "perceiving qualia with as positive a sum as
possible" as a justified highest-level goal. Can you offer any proof of that?
I'm merely a human, but this is not consistent with my current goal system.
> "What if there's something _better_ than qualia and how to plan for it" is
> still an open topic for me (meaning that I will be giving it thought), but I
> think that a favorable variety and balance of qualia is the most important
> thing and there is no doubt in my mind about this now. I'd take a Biggest
> Gamble, for 10 billion years of happiness of all sorts. If you screw it, you
> miss out on the other stuff that doesn't make you happy, but you're still
> there in a sort of transhuman heaven! I wouldn't take a Biggest Gamble on
> some mind extrapolating machine though.
"the other stuff that doesn't make [me] happy" is in my opinion likely to be
much more important than positive qualia. I don't know whether I'll even exist
long enough to experience positive qualia if a mind with an QBAU-based morality
reaches SI level (it seems more likely to me that the SI will instantly discard
my mind in favor of efficient qualia-generating code), but that doesn't really
matter - in the end, me and everyone else spending practically all of their time
perceiving positive qualia in wireheaded-mode is not a future I deem desirable
according to my current goal system.
Until I see some very strong supporting evidence that qualia-based morality is
objectively a good idea, I'll continue to consider hardwiring a qualia-based
morality into any AI as something that is very likely to cause a lot of
negative utility.

Sebastian Hagen


