From: Metaqualia (firstname.lastname@example.org)
Date: Sat Jun 19 2004 - 07:48:10 MDT
> This has been suggested. There are quite a few sl4 wiki pages on the
> which describes the basic idea, and there is some more speculation on
> To quote the latter:
Thanks, I will check these out.
> One obvious example of the difference between "pure QBAU" and volitional
> morality is that an AI whose goal system was based on "pure QBAU" would be
> likely to immediately start converting all of the matter in the universe
> into computronium on which to run "happy minds", painlessly killing all
> life in the process, possibly without even waiting to upload any humans.
A valid objection, thanks for raising it.
The total sum of positive qualia is not a straight scalar value. Each
sentient's qualia stream is produced by a collection of particles. When I
say "maximize positive qualia, minimize negative qualia", I am
oversimplifying for the sake of introducing the idea. If you want to get into
exactly _how_ to calculate the sum, great, as long as we accept that the end
in itself is valuable. What you are doing is starting from the consequences
of an idea and using those to reject or accept its validity. That is not a
lawful way to argue. If you agree in principle that maximizing positive
qualia is really "the thing" to do, then it doesn't matter whether we are
replaced by orgasmium! That would be the right thing to do. Otherwise we need
to change the discussion from "what is absolutely moral?" to "what will
benefit US?" - still a legitimate line of thinking, but we need to know what
we're after or we're fooling ourselves.
To answer your concerns, since each one of us is only aware of qualia
produced in a small portion of the universe, a positive balance must be
achieved in _each one_ of these subsystems. You can't take a healthy king
and a sick peasant and average out their qualia.
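The per-subsystem requirement can be made concrete. As a minimal sketch (the names, numbers, and zero threshold are illustrative assumptions, not anything from the original post), here is the difference between averaging everyone's balance into one scalar and requiring every sentient's own stream to be in the black:

```python
# Hypothetical sketch: each sentient's qualia balance is scored separately.
# Averaging a healthy king with a sick peasant hides the peasant's deficit.

def naive_average_ok(balances, threshold=0.0):
    # The flawed aggregate: one scalar over the whole population.
    return sum(balances.values()) / len(balances) >= threshold

def per_subsystem_ok(balances, threshold=0.0):
    # The rule argued for above: EVERY subsystem must break even on its own.
    return all(b >= threshold for b in balances.values())

population = {"king": +10.0, "peasant": -8.0}

print(naive_average_ok(population))   # True  - the average hides the suffering
print(per_subsystem_ok(population))   # False - the peasant's deficit blocks it
```

The point of the sketch is only that the second predicate, unlike the first, cannot be satisfied by piling positive qualia onto one subsystem while another stays in the red.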
Since beings that have lived, and those living today, have a red balance
(negative qualia have far overwhelmed positive ones), they have the right to
immediate assistance. This means that everyone alive today needs to be
satisfied consistently for some time before they are even (if such a thing as
"even" can ever exist between positive and negative qualia; ideally negative
qualia should be eradicated, which is entirely within our reach in the next
century). It means that every qualia stream that lived before needs to be -
if physically possible - brought back so that it can also break even. After
that we have a philosophical problem: what status to give to minds which
never existed. Are they out of the equation? Or do all possible minds deserve
to exist once and be in heaven? Answering this question requires far more
knowledge about the nature of the subjective than finding the neural
correlate of qualia. This is a problem that is far away, and I don't need to
be concerned with it right now; beings who don't exist can wait a while
longer while we figure things out.
> even uploaded human minds - the SI could modify uploaded human minds to the
> point of being efficient qualia generators, but why? If qualia generation
Because we exist? You seem to think of qualia as phenomena which are
dissociated from a sentient. You say: OK, let's get rid of the sentients and
pump qualia. Actually, that is likely not to be possible; you probably need
vast areas of a sentient's brain in order to create qualia. And even if that
turns out not to be true, it would simply mean that until the present we
have thought of ourselves as brains, but we were just that little speck of
brain which produced the qualia. In that case WE - as the qualia-producing
speck - will be preserved, while the heavy machinery of our bodies and the
useless parts of our brain will be rightfully wiped out of existence :)
> > Freedom has always been associated with the ability of carrying out
> > wishes which are supposed to increase positive qualia and decrease
> > negative ones.
> Imho, it equally applies to the ability of carrying out wishes that are
> to achieve other means.
These other means inevitably will take us and the ones we care about to a
more favorable balance of positivity and negativity.
> > YET, we can strike a compromise here and say that the
> > _Variety_ of positive qualia is also important, therefore we account for
> > growth. More intelligence, bigger brains, more complex and interesting
> > positive qualia.
> Why would we want to do that if the overall positiveness of qualia is all
> we cared about?
In the same way that I cannot justify objectively why positive qualia are
better than negative ones, but can only point you back at your own OUCH
experience, I also cannot justify the need for variation; there are many
kinds of positivity. They all, perceptually and ineffably, are positive. Red
is a nice color; so is yellow. You want to pump up redness to infinity and
forget the yellow? That would be such a waste!
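The "pump up redness to infinity" objection can be given a toy formalization. As an illustrative assumption (nothing in the original specifies a formula), suppose each distinct kind of positive quale is valued with diminishing returns, e.g. a square root; then variety beats concentration:

```python
import math

# Hypothetical sketch: value each KIND of positive quale with diminishing
# returns (sqrt), so a varied allocation beats pumping one kind to extremes.

def value_with_variety(intensities):
    # intensities: amount of each distinct kind of positive quale
    return sum(math.sqrt(x) for x in intensities)

all_red        = value_with_variety([100.0, 0.0])   # everything poured into "red"
red_and_yellow = value_with_variety([50.0, 50.0])   # split between red and yellow

print(all_red)          # 10.0
print(red_and_yellow)   # ~14.14 - the varied allocation scores higher
```

Any concave per-kind valuation would make the same point; sqrt is just the simplest choice for the sketch.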
> > Using qualia as a measuring stick we reconcile all our individual
> > assessments including why Hitler was evil, why we are justified in telling
> > our children not to jump from the window thereby limiting their freedom at
> > times, why a paperclip universe sucks, and so forth.
> I don't think that justifies making a basic assumption as strong as the one
> that qualia represent objective morality.
If I unified the forces of the universe into a single theory which contained,
in a more elegant form, all other theories, would that justify making a basic
assumption as strong as the one that my theory represents the theory of
everything?
> Hmmm. If I understood this correctly you assume that any sufficiently
> intelligent mind would see "perceiving qualia with as positive a sum as
> possible" as a justified highest-level goal. Can you offer any proof of
No! Wait. It would have to contain the same module that produces qualia in
us. That is why I want an FAI to get to the bottom of qualia before it makes
any moral judgment! Since I don't know how qualia are produced, it is not out
of the question that a completely logical and subjectively inert process
could be started that does computation on an abstract level. A zombie AI. A
zombie AI would +not+ know about qualia and would correctly judge our
subjective reports as trash and wipe us out.
> "the other stuff that doesn't make [me] happy" is in my opinion likely to
Exactly, we can argue about variety of positive qualia until the sun stops
shining, but the URGENT need right now is to remove negative qualia! At
least the most severe and fruitless forms of them, on which everyone will
agree. For instance, everyone deserves not to be depressed, not to have
seizures, not to get their limbs amputated, not to be a lab animal, not to
lose a lover, and so forth!
> perceiving positive qualia in wireheaded-mode is not a future I deem
> according to my current goal system.
You forget that you are already in wirehead mode. Right now the wire is
working in this way: if you make an effort to know more, explore the
universe, figure out the multiverse, raise 2 children; if you spend your
life in an endless routine of worry, effort and problem solving; if you go
through the negativity that the wire will produce for you day in and day
out, like a passive boxer with anaesthesia, THEN the button will push itself
and you will see, in a rush of positive chemicals, that it ALL was worth it.
You will see not how the chemicals are pink and wonderful and smell so good,
but how wonderful kids are, how great an achievement it is to conquer the
cosmos, and how great boxing is. We are all wired! The question is: do you
prefer cruel mother nature to push the buttons randomly, with an evidently
unfavorable balance, or do you want to push your own buttons?
> morality is objectively a good idea, I'll continue considering hardwiring
> qualia-based morality into any AI something that is very likely to cause a
> lot of negative utility.
I have previously presented a theory that says that an AI with sufficient
intelligence AND an ability to at least initially perceive qualia will come
to the same conclusion: that qualia _matter_. So hardwiring this is optional.
It goes something like this: qualia are not completely detached from the
process that creates them, because we can say something like "I feel a
negative quale". Therefore it is possible - physically - to analyze a quale
introspectively. The negative nature of a negative quale is self-evident.
The AI will be no less puzzled than we are at discovering one variable that,
unlike everything else, matters so much and cannot be communicated in
standard ways. Then it will go the same route I have, declaring war on it
and raising qualia balance control to a supergoal.
But this requires the machine to be able to modify its goal structure. It
requires programmer thought. In humans this is possible, but there is
individual variation. Your objection that "you are a sentient and still see
happiness alone to have negative utility" may be an indication of your
personal difficulty in altering your goal structure (actually, in flipping
it around) at this point.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT