From: Metaqualia (firstname.lastname@example.org)
Date: Sat Jan 10 2004 - 00:06:59 MST
> Be very careful here! The easiest way to reduce undesirable qualia is to
> kill off everyone who has the potential for experiencing them.
I want someone who is superintelligent, who takes my basic premises as
temporary truths, who recursively improves himself, and who understands
qualia inside and out, to decide whether everyone should be killed. If you
consider this eventuality (global extermination) and rule it out based on
your current beliefs and intelligence, you are not being modest in the face
of massive superintelligence. I do not rule out that killing everyone off
could be a good idea. Death is morally neutral; only suffering is evil. Of
course a transhuman AI could do better than that by keeping everyone alive
and happy, which would reduce negative qualia and also create huge positive
ones, so I do have good hopes that we won't be killed. What if the universe was
really an evil machine and there was no way of reversing this truth? What
if, in every process you care to imagine, all interpretations of the process
in which conscious observers were contained, were real to these observers
just like the physical world is real to us? What if there existed infinite
hells where ultrasentient, ultrasensitive beings were kept enslaved without
the possibility of dying? Is this not a universe that can be simulated, and
by virtue of this interpreted out of any sufficiently complex process (or
even simpler ones: read Moravec's "Simulation, Consciousness, Existence")?
I take the moral law I have chosen to its logical extreme, and won't take it
back when it starts feeling uncomfortable. If the universe is evil overall
and unfixable, it must be destroyed together with everything it contains.
I'd need very good proof of this, obviously, but I do not discount the
possibility.
> It seems to me that a person's method for determining the desireable
> morality is based partially on instincts, partially on training, and
We are talking about different things; I have answered this previously.
> ... are you sure about that? Just how heavily do you want the AI to
> weigh its self-interest? Do you want it to be able to justify
Its self-interest? At zero, obviously, other than the fact that the universe
is likely to contain many more positive qualia than negative ones if the
moral transhuman AI stays alive; so in the end its own survival would be
more important than the survival of humans, if you consider the millions of
worlds with biologically evolved beings that may be out there and in need of
salvation. So at a certain point, the morally best thing it could do to work
toward the goals we have agreed on could be exactly to exterminate humans.
> >Remember, friendliness isn't Friendliness. The former would involve something
> >like making an AI friend; the latter is nothing like it. Where he says
> >"Friendliness should be the supergoal" it means something more like "what
> >is really right should be the supergoal". Friendliness is an external
Is Friendliness creating a machine that wouldn't do something we wouldn't
like? Or is Friendliness creating a machine that wouldn't do something we
wouldn't like if we were as intelligent and altruistic as it is?
> This is assuming that "right" has some absolute meaning, but this is
> only true in the context of a certain set of axioms (call them
I am proposing qualia as universal parameters to which every sentient being
(at least evolved ones) can relate. That was the whole purpose: so we don't
get into this "relativity" argument, which seems to justify things that I am
not ready to accept because they just feel very wrong, at a level of
introspection that is as close as it can be to reality and cannot be
further decomposed (negative qualia).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT