From: Metaqualia (email@example.com)
Date: Mon Jan 12 2004 - 00:01:55 MST
> Ben, I want to take the other side to this. Technical competence in at
> least some fields and at least a little effort to familiarize oneself
> with the background to sl1-4 is presumed on this list.
I did not know there was a specific background I had to familiarize
myself with; I think I am well versed in a few of the key areas usually
discussed here, but if you can be more specific I will make sure I get
up to speed on that background knowledge. Especially if there is
some kind of doctrine, for instance something like "we intend Friendliness
to be Friendliness to humans; this is a settled argument, so we do not need
any more feedback or alternative ideas".
> should be), but pointing out atrocious arguments about moral philosophy
> without mincing words is not a bad thing.
A valid argument, if and when it can be agreed that my argument was atrocious.
Actually, it wasn't atrocious, and it wasn't my argument. An extreme
ramification of an argument was singled out and attacked, but the theory
itself has hardly been touched.
You do make some claims below that I think concern the actual theory, so I
will answer those. Keep in mind that I am proposing a theory. I am not here
to convince anyone, and I am not here to say this is the only way it could
possibly be. I simply chose to propose it on the mailing list where there
should be the least possible anthropocentrism and the greatest ability to
operate outside human evolutionary programming and tackle ideas as purely
abstract entities. In other words, I chose to see what the smart people
thought about the theory (since I had mostly "huh"s and "don't know"s from
other lists). If it's crap, and everyone agrees on that, then that will
surely make me think twice about it, just to check whether, God forbid, I am
the one who is wrong.
If after this reply we want to just do a show of hands (who thinks the
theory is valid, who thinks it's crap) and not talk about it again, I think
that would be one positive outcome, since I have had time to put down all
the main aspects and ramifications of my proposal.
And maybe sl4 was never the place to have such a discussion; maybe this
topic should have its own list.
> First, it completely begs the question it set out to answer. Instead of
> showing how moral arguments can be objectively justified, it offered a
> moral theory that makes judgments by counting purportedly objective
Your point being: maximize positive qualia / minimize negative ones is just
as good a theory as maximize pebbles, since the choice of positive/negative
qualia is arbitrary. I see where your reasoning is coming from. The only way
I can justify the choice of qualia over pebbles is that I have a particular
interface to the universe, one that goes beyond information and pattern and
yet is strictly related to them; this interface lets me experience the world
in a direct way, in which some things appear self-evident and completely
outside the realm of true/false: they just ARE, they exist. These are qualia.
This direct experience of existing tells me what is right and what is wrong,
because the way it feels IS, beyond any reasoning, right or wrong. Outside
the realm of qualia, it is as you say. Pebbles, furry balls, maximize
what you will: that is not morality, it is just a goal that was programmed
in and is now trying to be fulfilled. To me the only real morality is coupled
with qualia. But of course, do we know what qualia are? No. And you ask me to
know about them before I propose my argument.
Therefore I respond: I cannot give a justification for why qualia are better
than pebbles in a way that will satisfy the rigorous reader. I know what
pebbles are, and they aren't worth much. I don't know what qualia are, but
they feel like they are worth a lot. I hope that in the future we will be
able to find out what qualia are and how they behave, and THEN I will give
you a very good reason why they are more important than pebbles. In the
meanwhile I must appeal to _your_ direct interface to reality: decrease the
number of pebbles in the universe by one, then decrease the number of
positive qualia you will be able to experience by one, and tell me which
loss is greater. There is no way to write this argument down so that all
the necessary information is included; it still requires your brain to
produce your qualia and see that the argument is right. This is why I was
suggesting that the AI should have qualia before it worries about developing
its own morality: you can't write it down, you have to experience it.
> Second, his theory does not achieve objectivity of even the lesser sort
> because "positive" and "negative" are not objective terms.
Granted, positive and negative are subjective, but a law that minimizes
subjective negativity and maximizes subjective positivity for every possible
observer is objectively positive, no? At least that is what it is trying to
do; there may be situations in which "every possible observer" is not an
option. Those are the challenging situations, the hard decisions, and
exactly when it would be helpful to have a theory that is not merely trying
to satisfy the judge's personal interest.
> Unfortunately, when pressed, Metaqualia wrote that he did not have a
> more elaborate account of what makes a qualia positive.
Behavioristically, you can say that positive qualia create in the being a
need to get more of the same by repeating the physical circumstances that
caused the qualia. I don't think "what is a positive qualia, what is a
negative one" is really under discussion; hopefully we all know from our own
experience, and we are also able to look at a different life form and guess
whether something will feel good or bad based on what its usual response is.
> of the stream of consciousness as a qualia stream is also far outside
> the standard usage, because the stream of consciousness includes a lot
> of things other than sense impressions.
The two are interconnected: there is a qualia for seeing the red spot, one
for thinking "hey, that's an apple", one for thinking this very sentence,
and so on.
> secondary effects aside, killing the most unhappy could improve the
> ratio. Utilitarians refined their theory to avoid such arguments, and
> Metaqualia presumably will too, if he is pressed.
I have absolutely no personal stake in whether the theory is accepted or
not, so I won't revise it for political reasons or to please some individual
or group. Of course it is open to revision for good reasons.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT