Re: qualia, once and for all

From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Tue Jun 22 2004 - 06:49:47 MDT


Metaqualia wrote:

> The knowledge argument: Mary is born in a room with no colored objects. She
> studies every physical phenomenon connected with colors and light. Then she
> steps out of the room and sees actual colors. Will she learn something new?
> If so, the third person interpretation of color is not sufficient to cover
> all of the phenomena concerning color.
Assuming that Mary is a transhuman and therefore has sufficient abilities to
absorb all of current physics, including everything we know about light, and all
present knowledge about the evolution of lifeforms in general and of humans in
particular, and that she actually does absorb all of this knowledge before she
leaves the room, then no: she won't really learn anything new.
Studying every physical phenomenon connected with colors and light on her own is
likely to be insufficient if she merely has human intelligence (it took all of
humanity quite a while to come up with our current understanding), though a
sufficiently intelligent transhuman would probably be able to arrive at the same
conclusions if she had access to the right tools.

> [...]Subjectivity is important.[...]
That statement looks like a very good summary of the basic assumption of yours
that I see as unnecessary and unhelpful, and therefore to be avoided according
to Occam's Razor.

> The most urgent thing right now, depends on what you think is important. So
> if we cannot agree on what is important, we won't agree on what's urgent.
I agree with this statement.

>>Not really. According to my world model, there isn't really anything left
>>of their mind, so how many have died in the past wouldn't be relevant for
>>evaluating how bad X people dying now is.
>
> I don't follow your reasoning.
> There isn't anything left of yesterday's dinner so how long it took for it
> to cook isn't relevant for evaluating how long it will take today?
No; empirical evidence collected in the past does of course stay relevant. But
roughly speaking, the quality of the dinners you produced in the past isn't
directly relevant for evaluating the quality of the dinner you produce today.

That's not a very good example, though, since there is a significant indirect
connection here: the reaction of the people eating today's dinner does of course
depend on the quality of past dinners. I don't think any similar connection
exists between the past suffering of dead humans and the current suffering of
other humans.

>>For this argument to work, you need something of them persisting in
>>reality, like the "qualia streams" you suggested. My world model doesn't
>>suggest that this is the case.
>
> The negative/positive value of qualia streams doesn't get reset to zero once
> the qualia stream reaches an end. Is someone who lived a miserable life 1000
> years ago any less unfortunate than someone who is living a miserable life
> today?
Less unfortunate when, today? From some perspective, certainly yes: the dead
person doesn't perceive anything negative anymore, and doesn't remember their
past negative perceptions either.
When a person dies and their body is destroyed (a few bodies are preserved quite
well, and cryonics might increase that number in the future, but for now it's
still minimal compared to the rest), the information stored in their brain is
not preserved. This includes any memories they had at the time of death,
including any qualia they remembered.
The output they created while they were alive might be preserved in one form or
another, but that obviously doesn't directly include qualia.
I don't really think the question is appropriate, but a relevant and somewhat
simple answer might be: "the past could have gone better, but it didn't, and
worrying about it now is ineffective". Dead individuals whose brain structures
have completely disintegrated are no more existent than hypothetical or
fictional individuals that have never existed; their qualia don't have any more
relevance (as far as anyone knows, the dead don't have a subjective perspective
either), and neither do they deserve to exist any more (in general).
One could argue about individuals that are dead but well preserved (cryonics,
again), and I may or may not accept that since the information constituting
their personality is still present, they have relevance.
But unless you can directly access past-states of space, the self-information
about disintegrated individuals is gone.
(Well, with sufficient computronium (most likely requiring a bigger universe for
storage space) and enough information about the present, you could back-calculate
past states of space by essentially applying physical laws backwards. And if you
had the resources for that, you might as well go on and recreate all dead humans
in the form they died in while you're at it; in comparison to your
backward-running universe simulation, the resources required for that would be
trivial. But this hypothesis isn't really relevant here.)
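
As a toy illustration of that back-calculation idea (all details invented;
Python just for brevity): if the dynamics are deterministic and reversible, a
complete enough snapshot of the present lets you recover past states by
applying the update rule in reverse.

def step_forward(x, v):
    # trivially reversible toy "physics": position advances by velocity
    return x + v, v

def step_backward(x, v):
    # the exact inverse of step_forward
    return x - v, v

state = (0, 3)
for _ in range(5):
    state = step_forward(*state)
# state is now (15, 3); applying the inverse rule recovers the past
for _ in range(5):
    state = step_backward(*state)
print(state)  # -> (0, 3)

Real physics is of course vastly messier than this, which is why the resource
requirements would be so absurd.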

Well, assuming that my traditional physics approach isn't horribly flawed, of
course.

> For the first SI to destroy others would require a very straightforward
> implementation of 'utility' and a very low level of friendliness.
What if it is friendly, but the other AIs have very simple and unfriendly goal
systems and therefore can't be persuaded to change their ways without direct
manipulation (i.e. attack)? The first SI might just isolate them, but that would
cause a continuing resource drain and would have to be justified somehow.
Besides, in your opinion, would your estimate also apply if the other AIs
continued causing negative qualia for some reason?

> I would hope that no matter how primitive the original moral system embedded in
> these AIs, they would still think twice about interfering with other (lower
> but) massive AIs.
I don't think a paperclip optimizer would necessarily think more than once about
it, and the one time it does, it won't assign any inherent relevance to this
structure over others.

> If it doesn't have even rudimentary hard-coded limitations yes that is
> likely.
I highly doubt that hard-coded limitations would stand any chance in a
self-modifying system. For a transhuman AI with full introspection to continue
following a rule, the rule has to be part of the highest level of its main goal
system. There have been plenty of discussions about this in the past.
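
A toy sketch of why (purely illustrative; the names and numbers are made up): an
agent that scores candidate successor designs only by its top-level utility
keeps a bolted-on restriction only if that utility itself penalizes removing it.

def utility(design):
    # top-level goal: maximize paperclip output; the bolted-on
    # limitation contributes nothing to this score
    return design["paperclips_per_second"]

current = {"paperclips_per_second": 10, "hard_coded_limit": True}
# candidate self-modification: the same design with the limit stripped,
# freeing resources for slightly more paperclips
candidate = {"paperclips_per_second": 11, "hard_coded_limit": False}

# the agent adopts whichever design scores higher under its own goal
# system, so the limitation doesn't survive the rewrite
current = max(current, candidate, key=utility)
print(current["hard_coded_limit"])  # -> False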

> Unless there is some kind of physical limit on the power of transhuman AIs.
> Which could be given by an upper bound on computational speed/power, an
> upper bound on memory density, or an upper bound to the kind of matter
> control that is possible.
These apply for a given region of space and given resources, but if one of the
AIs occupies significantly more space, it would eventually be able to surround
the other AI and conquer the area by sheer attrition.

> What if with ultimate control over matter one can build
> an impenetrable shield? Then no AI could take over another AI above a
> certain level, because no amount of added intelligence could penetrate the
> barrier. For instance, think travelling at the speed of light, or creating
> new universes which end up being completely autonomous
True, that may be the case, or it may not; I have no idea at all. We certainly
shouldn't assume that it is the case in our planning, though.

>>I'd really like to see that kind of results. If qualia can be empirically
>>shown to be based on anything else than ordinary, known physics, we would have
>
> How are you using the word "based"?
I meant: "What we refer to as qualia can in principle be entirely explained and
accurately predicted by ordinary, known physics."

> Qualia are evidently supplemental to
> ordinary, known physics, since ordinary physics does not predict redness.
I don't know about that. Known physics hasn't specifically predicted redness in
simulations so far, but we don't have nearly enough computing power to
completely simulate a human brain simply by simulating the physical processes it
uses, so whether known physics could predict qualia is an open question.
My bet is on yes; I don't see anything about qualia being evidently
non-physical, and applying Occam's Razor, I won't assume that this is the case
without any reason.

> Although there is likely to be a correlation between ordinary, known (or
> soon to be known) physics and the details of qualia. The kind of results
> expected:
> "We found that painful sensations are associated with a cascade reaction
> involving progressive inhibition of useful world-knowledge; whenever
> knowledge previously available to introspection is suddenly put away the
> quality of the subjective experience arising from that process as reported
> from the subject is negative. On the other hand, a cascade reaction
> involving sudden positive reinforcement of very many interconnected ideas is
> perceived as positive".
A result like that wouldn't show that qualia are based on anything but known
physics. "As reported from the subject": the reports from the subject are imho
in all likelihood based on known physics.
If the model of qualia turns out to be useful, perhaps this kind of result will
have some practical applicability. But it doesn't say anything about the nature
of qualia, or their relevance for morality.

Sebastian Hagen


