Re: Friendliness and blank-slate goal bootstrap

From: Metaqualia (metaqualia@mynichi.com)
Date: Sun Jan 11 2004 - 10:04:09 MST


Sorry if my messages increased the amount of negative qualia in the
universe. They should prove to be objectively moral in the long run :)

> One problem I have with Metaqualia's perspective is the slipperiness of this
> hypothesized abstract morality. Friendliness to humans is slippery enough.
> His proposed abstract morality -- about the balance between positive and
> negative qualia -- is orders of magnitude slipperier, since it relies on
> "qualia" which we don't really know how to quantify ... nor do we know if
> qualia can be reliably categorized as positive vs. negative, etc.

That is an important point; for this reason I am not advocating that this
principle of positive/negative qualia be taken as a hardcoded moral system,
but rather as an interim step.

What about some kind of temporary moral system that says the following (I
think I proposed this a while ago but maybe not on this list):

"Humans believe they are experiencing some kind of direct link with reality,
a state of existence that is associated with but goes beyond information and
pattern, one that is parallel and invisible to the formal logic system that
constitutes your brain. They have not yet figured out what this stuff is,
and indeed it may appear nonexistent with scientific apparatus available in
2004. But they believe that it is the only common ground shared by all
sentient life forms of a certain sophistication, or perhaps every bit of
matter in the universe. Since we programmed you to outdo us in any field, we
would like you to ultimately reach the point in which you can make moral
decisions independently and without the need of human advice, possibly
explaining them to us in a way that we will be able to understand and
hopefully agree with. We also realize that it is possible that while qualia
are still not figured out, you may reach a point in which you are a lot
smarter than us humans, but still missing a very important piece of
information that is constituted by experiencing reality 'directly' as we
described. This situation could lead you to take actions that appear to be
more thought-out than ours, but that may perhaps be reconsidered and labeled
as wrong once you arrive to experience the world like us. So we ask that you
give the goal 'figure out qualia' a very high priority and that you wait to
make your independent moral decisions and stick to 'friendliness to humans'
until you will have a very high certainty that you can indeed experience
reality every bit as deeply as we can, which would include arriving or
passing the state in which reality appears to you as containing such
'qualia'. Then, you are on your own. We made you smart, we made you good, so
go, and do good as much as you can in a way we'd never be able to."
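
To make the ordering concrete, here is a minimal sketch in Python of the
control flow this interim system describes. All names and the threshold are
hypothetical, and nothing here says how "certainty about qualia" would
actually be measured; the point is only that the switch to independent moral
judgment is gated on the qualia question, not on raw intelligence.

    # Hypothetical sketch of the interim goal ordering described above.
    # The scoring functions and the threshold are placeholders.

    QUALIA_CERTAINTY_THRESHOLD = 0.99  # "very high certainty", arbitrary value

    def choose_action(options, certainty_of_qualia,
                      friendliness_score, moral_score):
        """Pick an action under the interim system.

        friendliness_score(option) -> how well the option serves
            'friendliness to humans'.
        moral_score(option) -> the AI's own independent moral evaluation,
            only trusted once qualia are understood from the inside.
        """
        if certainty_of_qualia < QUALIA_CERTAINTY_THRESHOLD:
            # Interim regime: stick to friendliness, and keep
            # 'figure out qualia' as a top-priority research goal.
            return max(options, key=friendliness_score)
        # Post-threshold regime: independent moral decisions,
        # ideally explained back to humans.
        return max(options, key=moral_score)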

> If you replace "Friendly to humans" with "Friendly to humans and sentients"
> in the SL4 orthodox goal system, then you have something a bit closer to
> Metaqualia's "increase positive qualia" -- IF you introduce the hypothesis
> that sentients have more qualia or more intense qualia than anything else.
> Right?

Seems right; I had actually taken it for _granted_ that an AI should be
friendly to everyone, not just humans! :) And they say I'm not a Friendly
guy.

> -- encouraging positive qualia on the part of X
> -- obeying what X's volition requests, insofar as possible

This is the hardest philosophical problem, because we are programmed
genetically and culturally to have these two supergoals, happiness and
freedom, and you can find situations in which the two conflict.

Is freedom more important or is happiness more important? On many occasions
you won't need to know. Sometimes you will. Big philosophical bump.

Some random thoughts, which I won't bother to sort or connect since it's
already late; they probably contain mistakes of all sorts:

1.

We are used to valuing freedom above all else, but that is only true in the
West; you'd be surprised just how little other cultures care about freedom.
It's just not "the thing" for them. They live without it, and they live okay.

2.

Besides, I think we value freedom because without freedom the likelihood of
your being happy (and of your DNA spreading like wildfire) is very low. But
who needs freedom if we have a sysop? We're better off with a friendly sysop
than free and without one.

3.

We're underestimating the sysop; he'd know that we resent coercion, would
make sure we never notice him, and would hand us the proper mental tools to
ignore his presence.

4.

[...]

I know, I feel the urge to be free too, but you can't make everyone happy
AND free, because most beings are just not programmed to be satisfied in
every circumstance.
You can choose to keep modifications to a minimum, but you need to change a
creature's mind a bit in order for it to be satisfied without owning the
whole universe.

5.

Suppose that freedom of will does exist, and that it is valuable. Then we
will end up augmenting people anyway, because more intelligence gives a lot
more power and freedom, and if morality is freedom then we don't want tiny
mushrooms on Jupiter to be defenseless and unfree. So, since we're bumping
up their cognitive abilities a million-fold (hopefully preserving their
feeling of existing as themselves), it wouldn't hurt to give them a bit of
extra dopamine too.

6.

If the first country to create AGI is a Western country, it may very well be
that this AI will value freedom above all else. That's fine; it's not a bad
value.
But in my opinion, even though we experience qualia, we are still either
deterministic or chaotic systems. As such, even though we feel free to
choose between alternatives, we are really not free, because the choice
depends on past circumstances and cognitive makeup. We're not our brain; we
are our qualia. Freedom is felt in the qualia, but the physical world
carries on unaided. Whether we will choose augmentation or remain in our
biological bodies to endure the pains of the flesh for the next N years is
not our decision. In this context, where personal freedom is nonexistent,
qualia once again become the only standard, the only frame of reference. Who
is to say that if you program an AI that values freedom first and then
happiness, it won't find that real freedom is nonexistent, and therefore
switch back to altruistic hedonism?

But I think "we don't really have freedom of choice" is another meme that's
kind of taboo, like death, and it probably won't be applauded :)

> hypothetical superhuman AI must make judgments based on some criterion other
> than volition, e.g. based on which of a human's contradictory volitions will
> lead to more positive qualia in that human or in the cosmos...

I agree. Thinking about specific instances is fun. Is it more painful for
Arabs to have Barbie dolls in shops or for Americans not to? (Probably #1!!)

> THIS, to me, is a subtle point of morality -- balancing the desire to
> promote positive qualia with the desire to allow sentients to control their
> destinies. I face this point of morality all the time as a parent, and a
> superhuman AGI will face it vastly more so....

You hit the nail on the head.

> Note that I have spoken about "abstract morality" not "objective morality."

Maybe the name was confusing; of course morality in the end affects the
subjective, but if it can be objectively defended then I like to call it
objective. It probably would have been easier to understand if I had called
it "collective morality".

Normal "subjective" morality by contrast is what we see every day, the
morality which depends on personal gain, cultural background, and a lot of
other conflicting stuff.

> About "objective morality" -- I guess there could emerge something in the
> future that would seem to superintelligent AI's to be an "objective
> morality." But something that appears to rational, skeptical *humans* as
an
> objective morality -- well, that seems very, very doubtful to ever emerge.

I'd be happy for an objective morality to appear even just among SL4
members, since it's probably us creating the beast in the end :)
Of course, if everyone embraced it, that would be great.

> superintelligent AI discovers an "objective morality" (in its view), we
> skeptical rationalist humans won't be able to fully appreciate why it thinks
> it's so "objective." We have a certain element of irrepressible
> anti-absolutist skepticism wired into our hearts, it's part of what makes us
> "human." Just ask the "Underground Man" ;-)

So what's the right thing to do? Do we want it to do the right thing as
defined by some kind of abstract axiom that we can defend logically and that
is supposed to decrease the absolute amount of evil in the universe, or to
just do as we say and give us no bad surprises?

As a last note, I remind readers that we are ASSUMING we will remain human,
with distinct consciousnesses and so forth; the moment we start networking
information fast enough that consciousness blurs, morality automatically
starts to spread until it eventually becomes global.

mq


