Re: qualia, once and for all

From: David Picon Alvarez (eleuteri@myrealbox.com)
Date: Sat Jun 19 2004 - 09:26:27 MDT


From: "Metaqualia" <metaqualia@mynichi.com>
> Do I advocate death in the preponderance of negative qualia? I prefer to
> discuss this privately since
>
> 1. It will take everyone's interest away from the more important
> implications of this theory

It seems to me that it is a cardinal point whether survival is a key value in
itself, or a key value only when positive qualia preponderate.

> 2. Realistically even with just human-level intellect and _some_
> nanotechnology the balance of positive vs negative qualia can be extremely
> favorable. If we go into super-human AI and total control over matter then
> it's easy to see that negative qualia do not need to exist at all.

The human mind is a rather complex system (an understatement), and essentially,
fulfilling people's material needs does not ensure (perhaps does not even tend
toward) a preponderance of positive qualia. People find all sorts of good
(and bad) reasons to be unhappy.

> Against their own interest means against their own long-term interest,
> which means against their goal system (if they had one). Look at people
> who do things against their interest. People smoke, because they like the
> smoking quale. People gamble, and inevitably lose money in the long term,
> but they like the gambling quale. So the fact that people take actions
> against their own interest just proves my point.

It's not this type of action I'm referring to. Rather, it's the action of
someone who sacrifices his life to save a perfect stranger, for example, or
some other similar case you can imagine. Something which, while coinciding
with one's goal system, is nevertheless a minimizer of positive qualia, at
least locally.

> Why destroy altruism? Altruism creates warm fuzzy feelings and lets humans
> bond. If it weren't for the kick in the rear nature gives us when our
> _personal_ needs are not satisfied, we'd all be very altruistic,
> constantly giving and constantly happy about giving.

I have no intention of destroying altruism; it seems a _good thing_ to me.
However, altruism can be a source of negative qualia, at the individual as
well as the social level, which is why I was asking whether you'd advocate its
destruction.

> Because according to your current goal system, happiness itself is
> associated with acquiring material wealth and power and other things.
> When I propose happiness for the sake of happiness, you automatically
> transform this word into "not happiness" since "happy for the sake of
> being happy" does not have much utility in your current goal system.
> However the "happy" I am proposing is the same "stuff" that happens when
> you are happy for the sake of other stuff; it is not something you
> rationalize and to which you are free to assign a happiness value.
> Happiness is happiness! Can't possibly not like it :)

Let me put it this way: you can 1) modify the environment and slightly
modify the human mind so that happiness is attainable and roughly corresponds
to the achievement of goals which overlap the collective volition, or 2)
modify the human mind so that happiness becomes necessary (i.e., whatever
happens, the subject will be happy). I would argue that, under any
reasonable objective goal system (I am doubtful that qualia are as objective
as you think they are, but that's a whole other matter), 1 is the superior
choice.

> I think you are grossly underestimating an eternal orgasmic machine. In
> fact, to imagine an orgasmic machine you must not imagine an orgasmic
> machine. You will probably imagine a world in which everyone is productive
> and joyful and altruistic and you have a beautiful house and you are in
> love and your kids are healthy and whatever you happen to desire. Imagine
> the most intense and wonderful moments of your life. That is more like an
> orgasmic machine! With all the subtleties of these experiences, not just
> an on/off switch.

The key issue, as I see it, is one of necessity: if you modify people such
that they are no longer able to be unhappy, you have removed the basis for
happiness to have any meaning. Under my current goal system, that has a high
negative utility.

However, maybe I'm not smart/knowledgeable/good enough to see the truth in
your argument. In that case, collective volition could sort it out and set a
qualia-maximizer (or whatever compromise on qualia-maximization is found to
be best) as a successor dynamic.

--David.


