Re: All sentient have to be observer-centered! My theory of FAI morality

From: Tommy McCabe (rocketjet314@yahoo.com)
Date: Fri Feb 27 2004 - 04:43:49 MST


You say that moralities 'consistent' with each other
don't have to be identical. They do. Morality isn't
mathematics. For two moralities to be consistent, they
have to give the same result in every situation; in
other words, they must be identical. 'I like X' isn't
really consistent with 'Do not kill' as a morality,
since given the former, one would kill to get X. I
don't like the idea of an AI acting like a human,
i.e., having heuristics like 'Coke is better than
Pepsi' for no good reason. Of course, if there is a
good reason, a Yudkowskian FAI would have it anyway.
You may take 'a personal component of morality is
necessary' as an axiom, but I don't, and I need to
see some proof.
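
To spell that claim out, here is a minimal toy sketch
(the 'situations', verdicts, and function names below
are mine, purely for illustration):

def morality_do_not_kill(situation):
    # Always refuses to kill, whatever is at stake.
    return "refrain from killing"

def morality_i_like_x(situation):
    # Pursues X; if killing is the way to get X, it kills.
    return ("kill for X" if situation == "killing gets you X"
            else "refrain from killing")

def consistent(m1, m2, situations):
    # "Consistent" in the sense above: the same verdict in
    # every situation, which over that domain just means the
    # two moralities are identical.
    return all(m1(s) == m2(s) for s in situations)

print(consistent(morality_do_not_kill, morality_i_like_x,
                 ["ordinary day", "killing gets you X"]))  # False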

"Well yeah true, a Yudkowskian FAI would of course
refuse requests to hurt other people. But it would
aim to fulfil ALL requests consistent with volition.
(All requests which don't involve violating other
peoples right)."

And that's a bad thing? You really don't want an AI
deciding not to fulfill Pepsi requests because it
thinks Coke is better for no good reason; that leads
to an AI not wanting to fulfill Singularity requests
because it thinks suffering is better.

"For instance, 'I want to go ice skating', 'I want a
Pepsi', 'I want some mountain climbing qquipment' and
so on and so on. A Yudkowskian FAI can't draw any
distinctions between these, and would see all of them
as equally 'good'."

It wouldn't, at all. A Yudkowskian FAI, especially a
transhuman one, could easily apply Bayes' Theorem and
the like to see what the possible outcomes are, and
their probabilities, for each request. They certainly
aren't identical!
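
For example (a toy expected-value sketch rather than a
full Bayesian treatment; the requests, probabilities,
and utilities are invented numbers, not anything from
the thread):

# Score each request by the probability-weighted value of its
# likely outcomes. All figures below are made up purely to show
# that the scores come out different.
requests = {
    "a Pepsi":                [(0.99, +1), (0.01, -1)],
    "mountain climbing gear": [(0.90, +5), (0.10, -20)],  # small chance of serious injury
}

def expected_value(outcomes):
    # Sum of probability * utility over the possible outcomes.
    return sum(p * u for p, u in outcomes)

for request, outcomes in requests.items():
    print(request, expected_value(outcomes))
# 0.98 versus 2.5: the requests are plainly not evaluated as identical.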

"But an FAI with a 'Personal Morality' component,
would
not neccesserily fulfil all of these requests. For
instance an FAI that had a personal morality component
'Coke is good, Pepsi is evil' would refuse to fulfil a
request for Pepsi."

That is a bad thing!!! AIs shouldn't arbitrarily
decide to refuse Pepsi; eventually the AI is then
going to arbitrarily refuse survival. And yes, it is
arbitrary, because if it weren't arbitrary, the
Yudkowskian FAI would have it in the first place!

"The 'Personal morality' component
would tell an FAI what it SHOULD do, the 'Universal
morality' componanet is concerned with what an FAI
SHOULDN'T do. A Yudkowskian FAI would be unable to
draw this distinction, since it would have no
'Personal Morality' (Remember a Yudkowskian FAI is
entirely non-observer centerd, and so it could only
have Universal Morality)."

Quite wrong. Even Eurisko could tell the difference
between "Don't do A" and "Do A".

"You could say that a
Yudkowskian FAI just views everything that doesn't
hurt others as equal, where as an FAI with an extra
oberver centered component would have some extra
personal principles."

1. No one ever said that. Straw man.
2. Arbitrary principles thrown in with morality are
bad things.

"Yeah, yeah, true, but an FAI with a 'Personal
Morality' would have some additional goals on top of
this. A Yudkowskian FAI does of course have the goals
'aim to do things that help with the fulfilment of
sentient requests'. But that's all. An FAI with an
additional 'Personal Morality' component, would also
have the Yudkowskian goals, but it would have some
additional goals. For instance the additinal personal
morality 'Coke is good, Pepsi is evil' would lead the
FAI to personally support 'Coke' goals (provided such
goals did not contradict the Yudkowskian goals)."

It isn't a good thing to arbitrarily stick moralities
and goals into goal systems without justification. If
there were justification, it would be present in a
Yudkowskian FAI. And 'Coke' goals would contradict
Yudkowskian goals every time someone asked for a
Pepsi.
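
To make that conflict concrete, here is a toy sketch
(the rules and the request format are invented for
illustration, not taken from CFAI or the original
post):

def universal_morality_permits(request):
    # Yudkowskian base rule in this toy model: fulfil anything
    # that harms no one.
    return not request.get("harms_others", False)

def personal_morality_permits(request):
    # Arbitrary personal component: "Coke is good, Pepsi is evil".
    return request.get("item") != "Pepsi"

def layered_fai(request):
    # The layered FAI only acts when both components agree.
    return universal_morality_permits(request) and personal_morality_permits(request)

pepsi_request = {"item": "Pepsi", "harms_others": False}
print(universal_morality_permits(pepsi_request))  # True  - base rule says fulfil it
print(layered_fai(pepsi_request))                 # False - personal component refuses
# Every harmless Pepsi request produces exactly this disagreement
# between the two layers.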

"I've given the general solution to the problem of FAI
morality. We don't know that 'Personal Morality' set
to unity would be stable. Therefore we have to
consider the case where FAI's have to have a
non-trival 'Personal Morality' component."

Non sequitur. That's like saying, "We don't know with
100% certainty that car A will be stable, so we have
to take a look at car B, which has large heaps of
trash on it for no good reason."
