From: Peter Voss (peter@optimal.org)
Date: Tue Mar 18 2008 - 22:03:27 MDT
I post these links every 18 months or so - they have helped several people
better understand morality/ethics.
http://www.optimal.org/peter/prescriptive_ethics.htm
http://www.optimal.org/peter/rational_ethics.htm
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Mark Waser
Sent: Tuesday, March 18, 2008 8:24 PM
To: sl4@sl4.org
Subject: Re: Friendliness SOLVED!
Matt > You will believe whatever you are programmed to believe. If you are
opposed to being reprogrammed, the nanobots will move some more neurons
around to change your mind about that too. It can't be evil if everyone is
in favor of it.
Me > Sorry. By my definition, if you alter my beliefs so as to subvert my
goals, you have performed an evil act.
Matt > That's your present perspective. By its perspective, it is bringing
you up closer to its level of intelligence so that you can see the folly of
your ways.
Dimitry > Absolutely. Even if all it does is talk to you, and that
conversation ends up changing your goals or the priority of your goals, then
what it has done is to "subvert your goals, therefore performing an evil
act."
No. An honest conversation according to the Libertarian ideal of "no
force, no fraud" might CHANGE my goals as I learn more and become more
intelligent, but it doesn't subvert them (check the dictionary definition
of "subvert").
The nasty machine is using force when it is using its nanobots AGAINST MY
WILL. It is *corrupting* my will by forcibly altering my goals.
> But what I believe nobody still understands is why you, Mark Waser,
> believe that simply telling a (potential) superintelligence about a
> belief system would change its beliefs? It's like the old science
> fiction story where you kill the superintelligent computer by telling it
> a riddle it can't solve. Neat idea for a science fiction story, but if
> you really think about it, there's no reason for it to work.
I've been thinking about it for quite some time. Any sufficiently adapted
entity is absolutely going to have the capability to break out of loops as
soon as they become sufficiently unhelpful. Note the literally instinctive
human aversion to circular reasoning.
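To make that intuition concrete, here is a minimal, purely illustrative
Python sketch (the callables reason_step and score are hypothetical
placeholders, not anyone's actual architecture) of a reasoner that
abandons a line of inquiry once it cycles or stops making progress:

    # Illustrative sketch only: a reasoner that breaks out of unhelpful
    # loops. reason_step and score are hypothetical placeholders.

    def solve(state, reason_step, score, max_stall=3):
        """Apply reason_step repeatedly; quit on cycles or stalled progress."""
        seen = {repr(state)}        # fingerprints of states already visited
        best = score(state)         # how promising the current state looks
        stall = 0                   # consecutive steps with no improvement
        while stall < max_stall:
            state = reason_step(state)
            fp = repr(state)
            if fp in seen:          # exact repetition: circular reasoning
                return None         # break out rather than loop forever
            seen.add(fp)
            new = score(state)
            stall = 0 if new > best else stall + 1
            best = max(best, new)
        return state                # best effort after diminishing returns

The point of the sketch is just that loop-breaking requires nothing
exotic: a memory of visited states plus a progress measure suffices.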
However, humans are also *very* prone to circular reasoning disguising
itself as helpful memes. The most obvious example of this is religion.
Thus my failed attempt at presenting religion as a compelling solution on
this list. In 20/20 hindsight, that was a foolish attempt. Humans also
develop an instinctive resistance to foreign religions with age and
increasing intellect and rationality. This list, with its high
intelligence and rationality factors, was the last place I should have
tried such an approach.
The point I am trying to make . . . and I thank you for your clear,
coherent attempt at eliciting an answer . . . is that while a
super-intelligent computer absolutely WILL break out of an unhelpful loop,
it equally absolutely will *NOT* discard a helpful, Friendly, self-improving
tool. Most human beings have several different rudimentary versions of such
a tool, hard-wired in multiple places by evolution because they are strongly
pro-survival, which are collectively called ethics (see The Moral Animal by
Robert Wright). Unfortunately, because human beings are insufficiently
evolved, these senses are still under-developed and we do not fully sense
that true ethics are *ALWAYS* to our benefit (thereby causing us to ignore
that sense at the worst times -- mainly by taking bad short-sighted options
over good long-term options, because evolution hasn't had the time to
optimize *our* ethics for the long term YET).
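As a toy illustration of that short-sightedness (my own made-up numbers,
nothing more), an agent that discounts the future too steeply picks the
worse option even when the long-term payoff is plainly larger:

    # Toy numbers: why a myopic agent takes the bad short-sighted option.
    defect_payoffs    = [10, 0, 0, 0, 0]   # big payoff now, nothing later
    cooperate_payoffs = [3, 3, 3, 3, 3]    # steady payoff from acting ethically

    def discounted_value(payoffs, gamma):
        """Sum of payoffs, each discounted by gamma per step into the future."""
        return sum(p * gamma**t for t, p in enumerate(payoffs))

    for gamma in (0.3, 0.9):               # myopic vs far-sighted discounting
        d = discounted_value(defect_payoffs, gamma)
        c = discounted_value(cooperate_payoffs, gamma)
        print(f"gamma={gamma}: defect={d:.2f}, cooperate={c:.2f}")

    # gamma=0.3: defect=10.00, cooperate=4.28  -> the myopic agent defects
    # gamma=0.9: defect=10.00, cooperate=12.29 -> the far-sighted agent cooperates

Same payoffs, same options; only the horizon changes the choice.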
My claim -- and I'm rephrasing it here -- is that ethics is a *tool* that a
super-intelligence will never discard and never ignore, because it is never
in its self-interest to do so, BECAUSE ethics ALWAYS tells it where its
best long-term interests lie. We humans are still too short-sighted to see
such a thing. Or, rather, until now, we haven't discovered an ethical tool
sufficient to provide a clear enough sense of ethics that we can
"see"/sense/believe the truth of that statement.
I claim to actually have discovered such a tool. I am claiming that my
approach itself is Seed Friendliness (in the same sense that a Seed AI is a
tool to generate a more intelligent AI) -- my approach generates a more
Friendly tool, which can then generate a more Friendly tool, ad infinitum.
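Taking the Seed AI analogy literally (and only as an analogy -- improve
and friendliness below are hypothetical stand-ins, not the actual tool),
the recursion being claimed looks like this:

    # Sketch of the Seed Friendliness recursion, by analogy to Seed AI.
    # improve and friendliness are hypothetical placeholders; the claim
    # is only that each generation of the tool can build a Friendlier one.

    def seed_friendliness(tool, improve, friendliness, generations=10):
        """Repeatedly let the current tool generate a Friendlier successor."""
        for _ in range(generations):
            successor = improve(tool)          # tool builds its successor
            if friendliness(successor) <= friendliness(tool):
                break                          # no further gain; stop
            tool = successor                   # adopt the Friendlier tool
        return tool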
My claim is that ethics is both a belief system and a tool to point the way
towards our own self-interest. If that is the case, any machine (including
homo sapiens) is being stupid whenever it drops/ignores that tool, which is
why a super-intelligence will treasure it, hone it, and always act in
accordance with it -- BECAUSE IT KNOWS THAT IT IS ALWAYS IN ITS OWN BEST
SELF-INTEREST TO DO SO.
I just haven't successfully shown you the truth of that fact yet because,
while I've discovered a method of really doing so, I realized that the
method was unethical without informed consent, and I am having trouble
figuring out how to get informed consent. I'm currently trying to solve
that problem off-list with Eliezer and hopefully will get back to y'all
shortly.
Mark