From: David Picon Alvarez (firstname.lastname@example.org)
Date: Sat Jun 19 2004 - 23:56:26 MDT
From: "Metaqualia" <email@example.com>
> I can't really define qualia as being or not being part of objective
> reality. They do have a strong connection with objective reality otherwise
> we wouldn't be able to talk about them. And yet something unique about them
> prevents us from analyzing them in the 3rd person. My argument was simply
> that if you have a purely bayesian logical inference machine, which does
> not have the same kind of architecture as the human brain, and IF this
> architecture is required in order to experience qualia, then the machine
> would _not_ experience them, therefore it would be detached from their
> existence as a person without a visual cortex is detached from everything
> regarding visual perception and cannot understand visual concepts. When this
> happens, we would have no logical moral standing for the machine, and it
> would be rationally perfectly justified to nuke us away into oblivion.
As any blind person can tell you, just because one lacks a certain modality
doesn't mean one must refuse the information it offers.
What I mean is the following: if someone builds a CV machine and the
extrapolation works properly (otherwise all bets are off) it will
(correctly) find out that we value subjective experience, and that whether
qualia exist or not as primitives is not so very important, but what matters
is our volition about them. This is why CV is compatible with
positive-qualia-maximization, assuming that's what we really want, which I
suspect it is.
> Allowing the machine to understand qualia may be the only way to prevent it
> from eventually rearranging matter in the universe for more useful patterns.
Making it non-sentient (thus presumably not self-centered) and making it
follow the CV of humans is a safer bet, since I doubt very much humans want
their matter rearranged into "more useful" patterns for now.
> We? can probably tolerate? Who? Is this you typing at the end of the
> keyboard? or kids in the street with dirty feet and disease trying to
> survive until tomorrow? or is it laboratory animals who are cut up alive?
> is it people with chronic depression or other mental problems who
> live in hell? There is still a food chain out there. We fortunately got
> out of it when we developed the neocortex, but animals still live with the
> reality of having their flesh torn apart and eaten. The world is a nasty,
> nasty place unless you are an upper class human with good mental health.
As far as I'm concerned, we have no reason to believe that animal suffering
(if that's not a misnomer) is at all important in the big scheme of things,
certainly not compared to, say, survival under a runaway self-centered SAI.
> Another century of the biosphere creating massive amounts of negative
> qualia? Utterly immoral.
An end to consciousness except by a self-centered SAI, or even an
unacceptable chance of such? Utterly immoral too.
> But it is really easy to switch them off: just suicide. As for the negative
> aspects of life, once you stop being aware of them, who cares??
I do care. I must say I quite agree.
> Positive qualia are not a knob. They are multidimensional fractal type
> borealis-looking multicolor states of reality with a complex and beautiful
> structure :)
OK, we've seen your capacity for mystic-sounding statements. We are now
awed. You must really know what you're talking about.
> you are talking about the third person interpretation. The first person
> interpretation is what I am concerned with.
And it is substantially less important.
> not sure CV can produce what I am suggesting, not without understanding and
> experiencing qualia itself, which Eliezer wants to avoid (since he claims to
> want to create a non-sentient AI).
A blind person doesn't need to see in order to understand electromagnetism,
or how vision works. A CV optimizing-process doesn't need to experience
qualia in order to understand their place in the collective volition it
extrapolates.
> As has been said before, it's a gamble anyway. But if I screw up we are
> still left with paradise, it's a good plan B :)
For those who find that kind of thing acceptable.
> Some qualia are mostly neutral or only take on a value if associated with
> other qualia. For example you can have very pleasant sensations of color
> when you look at something beautiful. But yellow in itself... slightly
> positive I guess, not _that_ positive.
You guess. Interesting... I can't imagine an algorithm that could
appropriately correlate qualia to external reality and to
positivity/negativity, and you'd need to code that. If we go the CV route I
don't need to be smart enough to work that out by myself.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT