Re: qualia, once and for all

From: Metaqualia (metaqualia@mynichi.com)
Date: Mon Jun 21 2004 - 16:19:44 MDT


> > Still the third person perspective, which does not create an arrow of
> > morality but only bits floating around in trivially existing space.
> You still haven't shown that there is anything wrong with this approach.

The knowledge argument: Mary is born in a room with no colored objects. She
studies every physical phenomenon connected with colors and light. Then she
steps out of the room and sees actual colors. Will she learn something new?
If so, the third person interpretation of color is not sufficient to cover
all of the phenomena concerning color.

> Even if you can show this to be the case, I will still not understand how you
> arrived at your 5-element list for making estimations on the universal goodness
> balance of qualia. There seem to be significantly more assumptions than the one
> you stated above contained in those statements, which don't appear correct to me.

Since you asked, I made a first attempt at writing down some guidelines; I
don't expect you to accept them if you don't agree with the basic assertion
that positive and negative qualia are the most important thing. And I
haven't thought about those guidelines for long either, though I think they
are roughly correct.

> I don't think there is anything gained from using a first person perspective,
> and I see the danger of jumping to a lot of wrong conclusions.

It's not that there isn't anything to be gained; the first person perspective
is what constitutes your experience, and denying it takes your own self out of
the equation, which makes no sense as far as I am concerned. Subjectivity is
important, even though it's a strange animal in an otherwise peaceful and
pretty third-person model of the universe.

> What's wrong with analyzing anything from the outside? Afaik that's usually the [...]

What's wrong is that when you analyze something from outside, the subjective
world ceases to exist and you lose part of the information available to you.

> To quote an earlier post: "[...]the URGENT need right now is to remove negative
> qualia!" - I maintain that the most urgent thing to do right now is to not make
> things any worse; and I think that trying to implement qualia-based morality [...]

The most urgent thing right now depends on what you think is important. So
if we cannot agree on what is important, we won't agree on what is urgent.

> > I just think I'm right that's all.
> Most people probably believe that when they argue honestly; or at least they
> believe that they are probably more right than what they are arguing against.

Yes I am aware of that ;-)

> models we are defending would have significant effects on both of us if it were
> to be implemented in an AI that reaches SI status, agreeing to disagree doesn't
> work here unless we both accept that the other person doesn't have any realistic [...]

True.

> > hunger in the past, a million more or so won't make such a big difference.
> Not really. According to my world model, there isn't really anything left of
> their mind, so how many have died in the past wouldn't be relevant for
> evaluating how bad X people dying now is.

I don't follow your reasoning.
There isn't anything left of yesterday's dinner, so how long it took to cook
isn't relevant for evaluating how long it will take today?

> For this argument to work, you need something of them persisting in reality,
> like the "qualia streams" you suggested. My world model doesn't suggest that
> this is the case.

The negative/positive value of a qualia stream doesn't get reset to zero once
the stream comes to an end. Is someone who lived a miserable life 1000
years ago any less unfortunate than someone who is living a miserable life
today?

> other AIs close to takeoff around that have significantly different goal systems,
> the first SI will likely either destroy them outright, or limit their
> development so as to prevent them from becoming a serious threat.

For the first SI to destroy others would require a very straightforward
implementation of 'utility' and a very low level of friendliness. I would
hope that no matter how primitive the original moral system embedded in
these AIs, they would still think twice about interfering with other (less
advanced but still massive) AIs.

> In any event, it seems likely that the only AI whose morality will have any
> long-term effects is that of the first SI.

If it doesn't have even rudimentary hard-coded limitations, yes, that is
likely.

> If it's the one built on earth, we are back to it dominating practically all of
> space. If it's the other one, what we do is of very little relevance in the long
> term in any event; planning for it is ineffective.

Unless there is some kind of physical limit on the power of transhuman AIs,
which could be given by an upper bound on computational speed/power, an
upper bound on memory density, or an upper bound on the kind of matter
control that is possible. Think of the game of tic tac toe: a game between
two good players always ends in a draw, because there is a limit to the
moves that can be made. The set of actions in the 3x3 system is the
limiting factor, not our actual IQ, which is good enough to encode a
never-lose strategy. What if, with ultimate control over matter, one could
build an impenetrable shield? Then no AI could take over another AI above a
certain level, because no amount of added intelligence could penetrate the
barrier. For instance, think of travelling at the speed of light, or creating
new universes which end up being completely autonomous.
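
To make the tic tac toe point concrete, here is a minimal sketch (Python is
my arbitrary choice of language; the code is mine, not anything from this
thread): exhaustive minimax over the 3x3 board evaluates the opening
position as 0, i.e. a draw under optimal play, showing that the outcome is
fixed by the bounded action space rather than by either player's IQ.

# Exhaustive minimax for tic tac toe: value of the opening position is 0 (draw).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # Return 'X' or 'O' if someone has three in a row, else None.
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Game value from X's point of view: +1 X wins, 0 draw, -1 O wins.
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # full board, no winner: draw
    values = []
    for i in moves:
        board[i] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None
    return max(values) if player == 'X' else min(values)

print(minimax([None] * 9, 'X'))  # prints 0: two good players always draw

No increase in either player's intelligence changes that 0; only enlarging
the action space could.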

> > That's what Chalmers wants to do. We will find physical differences in the
> > processes that create different qualia, hopefully.
> I'd really like to see that kind of result. If qualia can be empirically shown
> to be based on anything other than ordinary, known physics, we would have some
> more common ground to discuss your suggestions on morality (never mind that my
> world model would require significant modification first).

How are you using the word "based"? Qualia are evidently supplemental to
ordinary, known physics, since ordinary physics does not predict redness,
although there is likely to be a correlation between ordinary, known (or
soon to be known) physics and the details of qualia. The kind of result
expected:
"We found that painful sensations are associated with a cascade reaction
involving progressive inhibition of useful world-knowledge; whenever
knowledge previously available to introspection is suddenly put away, the
quality of the subjective experience arising from that process as reported
by the subject is negative. On the other hand, a cascade reaction
involving sudden positive reinforcement of very many interconnected ideas is
perceived as positive".

So in this case we'd have a correlation, which would be useful for detecting
qualia, estimating their quality, etc.
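
As a toy illustration of how such a correlation might be cashed out as a
detector (everything below is hypothetical: the scalar "accessible
knowledge" signal K(t), the threshold, and the magnitude-as-quality reading
are my assumptions, not established results):

# Hypothetical sketch: classify a quale from a trace of introspectively
# accessible world-knowledge K(t), per the correlation described above.
def classify_quale(k_series, threshold=0.2):
    # Net drop in accessible knowledge -> negative quale; net rise -> positive;
    # the magnitude of the change serves as a crude quality estimate.
    delta = k_series[-1] - k_series[0]
    if delta <= -threshold:
        return ("negative", -delta)
    if delta >= threshold:
        return ("positive", delta)
    return ("neutral", 0.0)

print(classify_quale([1.0, 0.8, 0.5, 0.3]))  # ('negative', 0.7): progressive inhibition
print(classify_quale([0.3, 0.5, 0.9, 1.0]))  # ('positive', 0.7): cascading reinforcement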

mq


