Re: qualia, once and for all

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Mon Jun 21 2004 - 08:13:52 MDT


Metaqualia wrote:

> Still the third person perspective, which does not create an arrow of
> morality but only bits floating around in trivially existing space.
You still haven't shown that there is anything wrong with this approach.

> I am not saying I have complete understanding of qualia and the "I", no one
> does. The only thing I am saying is that the existence of subjective states,
> whatever they are, creates an arrow of morality.
This statement may seem obvious to you, but I can't accept it unless you can
show that this hypothesis allows you to make more accurate predictions about
reality than any alternative; or, failing that, predictions exactly as accurate
as those of the best alternative while being simpler.

Even if you can show this to be the case, I still won't understand how you
arrived at your 5-element list for estimating the universal goodness balance of
qualia. Those statements seem to contain significantly more assumptions than the
one you stated above, and those additional assumptions don't appear correct to me.

> As for the rest of the ideas that you have presented, I think you are
> basically looking at the problem from a third person perspective; I can see
> it from that perspective too and then all you are saying is correct. Just
> qualia are subjective that's the whole point.
I don't think anything is gained by using a first-person perspective, and I see
a danger of jumping to a lot of wrong conclusions.

> Existence is only observed directly. Existence manifests itself as pure
> existence, it is not something you analyze from the outside. Existence
> occurs when you perceive something consciously and it creates a quale.
That looks like a "mythical answer" to me. It doesn't really explain anything
about "existence". From that description, I wouldn't know how to tell it apart
from anything else, let alone predict whether it would be present in a given
situation.
What's wrong with analyzing anything from the outside? Afaik that's usually the
only effective and reliable way of gaining information about it.
I don't see any reason to treat "existence" in this unexplained, ill-defined
form as anything relevant.

> Who is rushing? We are discussing things here. We're all looking for a way
> not to screw up.
To quote an earlier post: "[...]the URGENT need right now is to remove negative
qualia!" - I maintain that the most urgent thing to do right now is to not make
things any worse; and I think that trying to implement qualia-based morality
would likely make things significantly worse. Treating it as something very
urgent increases the chance of not spending enough time on verifying whether
implementing this solution is a good idea at all.

> I just think I'm right that's all.
Most people probably believe that when they argue honestly; or at least they
believe that they are more right than the position they are arguing against.

> I think that the
> difference in basic stance between us (I accept and embrace the first person
> perspective while you want to reduce it to 3rd person data) will lead to two
> different currents of thought. And if you keep your stance and I keep mine
> that's the end of it.
If this were a purely theoretical and philosophical discussion, I'd probably
agree and move on (or not even have started discussing). But since each of the
models we are defending would have significant effects on both of us if it were
implemented in an AI that reaches SI status, agreeing to disagree doesn't work
here unless we both accept that the other person has no realistic chance of
getting their idea successfully implemented first (successful as in 'the AI
takes off', not as in 'the morality then actually works as intended').
Having several different ideas of a good method of implementing morality in an
AI spread out among leading research teams is furthermore a bad idea in itself,
because it increases the chance that each team will view its competitors as a
threat to its goals, giving it justification to rush its research on morality
and increasing the chance that whoever does win the race has taken too many
shortcuts.

> A lot of _independent_ qualia streams are going to be created during that
> time. Your argument is equivalent to : since so many people have died of
> hunger in the past, a million more or so won't make such a big difference.
Not really. According to my world model, nothing of their minds remains, so how
many people have died in the past isn't relevant for evaluating how bad X people
dying now is.
For this argument to work, you need something of them to persist in reality,
like the "qualia streams" you suggested. My world model doesn't suggest that
this is the case.

> Don't TOTALLY agree on the permanent, there may end up being one big (U)FAI
> or there may be many, the known cosmos may end up being dominated by one
> monolithic AI or there may be a diverse fauna of agents.
Perhaps, but it seems very likely to me that the first one that takes off and
reaches SI status will determine how things go on from there. If there are any
other AIs close to takeoff around that have significantly different goal
systems, the first SI will likely either destroy them outright, or limit their
development so as to prevent them from becoming a serious threat.
If it didn't, they would continue becoming more intelligent, and might
eventually pose a threat to the first SI; and since they would continue
following their own, differing goals, that would be a strongly negatively valued
future according to the first SI's goal system.
Then again, perhaps the first SI would be content with taking control of most
matter and being the first that can begin to spread out into space. If whatever
matter it leaves to the followers isn't sufficient to allow them to become a
threat to it, it may not need to limit them artificially.
In any event, it seems likely that the only AI whose morality will have any
long-term effects is that of the first SI.

This leaves out the possibility of another SI already being out there, and
expanding quickly towards us. This scenario is of relatively little practical
relevance; when the SIs meet, one of them will most likely be far more powerful.
If it's the one built on earth, we are back to it dominating practically all of
space. If it's the other one, what we do is of very little relevance in the long
term in any event; planning for that case is ineffective.

> That's what Chalmers wants to do. We will find physical differences in the
> processes that create different qualia, hopefully.
I'd really like to see that kind of result. If qualia can be empirically shown
to be based on anything other than ordinary, known physics, we would have some
more common ground on which to discuss your suggestions on morality (never mind
that my world model would require significant modification first).

Sebastian Hagen


