From: justin corwin (outlawpoet@gmail.com)
Date: Wed Aug 17 2005 - 01:05:29 MDT
On 8/16/05, Richard Loosemore <rpwl@lightlink.com> wrote:
> All I can say is that you are not talking about the issue that I raised:
> what happens when a cognitive system is designed with a thinking part
> on top, and, driving it from underneath, a motivation part?
>
> You've taken aim from the vantage point of pure philosophy, ignoring the
> cognitive systems perspective that I tried to introduce ... and my goal
> was to get us out of the philosophy so we could start talking practical
> details.
Rather the opposite. I too have little patience with philosophy. I
spoke from a practical perspective. If, as you say, a sentience
deserving of the title will always choose to become more moral, and
never less moral, why do humans occasionally move from moral actions
to immoral actions? This happens even to very intelligent people. It
is a fact.
The first, and most promising, explanation is the idea that humans
are insufficiently intelligent to 'really know what they want'. This
line of reasoning holds that more intelligent beings will tend, on
the whole, to converge towards niceness. It is weakly supported by
everyday experience: smart people tending to be gentle, nice, and
understanding, and stupid people ignorant and hateful. Unfortunately,
it's not an unbreakable trend, because we have two very significant
outliers.
The first, as you may anticipate, is sociopathy. Sociopaths can be
very intelligent, and occasionally quite effective at most of the
life tasks we would consider challenging. What can this human outlier
tell us? More intelligent sociopaths do not tend to be more moral
than unintelligent sociopaths. In fact, the more intelligent
sociopaths are simply more destructive, harder to detect, and more
problematic to contain. And this is a very, very small change in
human design, as is evident from their proximity to the human
baseline in all respects save a few (empathetic modelling, affective
association, etc.). Their intelligence does not increase their moral
ability, because they have no reason to apply it in that direction. A
sociopath sees no benefit in considering the cost of certain actions,
because he lacks the affective associations which color those actions
for normal humans.
The second is human organizations. Human organizations can very
loosely be considered organisms, in that they make decisions, have
reflective structure, can change themselves in a weak way, and can be
said to have goals. Do organizations with increasing ability and
power converge to altruism? Rather not. Why is this? Clearly they
face the same pressures as a very intelligent, powerful person. They
have intellectual capacity, diffuse as it is; they have 'motivational
structure', inasmuch as they must survive and achieve their goals;
and they must exist in the world of men. So why don't they
increasingly respect individuals? Well, simply, because they have no
need of it. As Mr. Wilson pointed out before, a company like GM can
be seen as simply maximizing what it wants, which is money. So if
building banana republics on near-slavery is cost-effective, then
that is what is done.
These are both human examples. An AI can be much, much stranger. You
express doubt that a simple utility maximizer could generate
self-improvement. This is not the case. In fact, a utility maximizer
could likely self-improve far more easily than a complicated,
inconsistent structure like a human. Paperclip++ is a lot easier to
translate into new and interesting forms without loss. A utility
maximizer is a scary thing. You are probably imagining a program
which can only think about paperclips, and is thus uninteresting.
Unfortunately, a utility calculation 'concerning' paperclips can
contain arbitrary data. You could, for example, compare two cognitive
designs based on which would produce the most paperclips. Or evaluate
whether turning left at St. Georges Street will get you to the
paperclip factory faster, which will let you take control of the
assembly line sooner, which will lead to increased certainty and
control over paperclip production, which will lead to more
paperclips. And so on.
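
To make that concrete, here is a toy sketch in Python (every name and
number here is made up for illustration, nobody's actual design) of
what it means for a utility calculation "concerning" paperclips to
range over arbitrary options:

def expected_paperclips(option):
    # Stand-in for the maximizer's world model: how many paperclips
    # does it predict the universe will contain if this option is taken?
    return option["predicted_paperclips"]

options = [
    {"name": "turn left at St. Georges Street",
     "predicted_paperclips": 1.00e6},
    {"name": "turn right (slower route to the factory)",
     "predicted_paperclips": 0.99e6},
    {"name": "adopt cognitive design B (rewrite own planner)",
     "predicted_paperclips": 3.00e9},
]

best = max(options, key=expected_paperclips)
print("chosen action:", best["name"])

Nothing about the candidate actions themselves mentions paperclips;
routes, factory takeovers, and redesigns of the maximizer's own mind
all get ranked by the same single number.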
I dislike the paperclip example because it sounds stupid and seems to
turn people's brains off. The first example of this problem I heard
was the Riemann Hypothesis Catastrophe. Suppose you build a giant AI
and ask it to solve the Riemann Hypothesis, and it promptly
disassembles the solar system to use as calculating elements, and
solves it. This is a perfectly valid chain of actions, given the sole
goal of solving the Riemann Hypothesis, is it not? When exactly, and
why, would the AI stop at any point?
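
Written as a caricature (the class and method names below are
hypothetical stand-ins, not anyone's real design), the structure of
the problem is just this:

class World:
    def __init__(self):
        self.compute = 1          # calculating elements available
        self.matter_consumed = 0  # planets, stars, builders: all just matter

    def riemann_hypothesis_resolved(self):
        # Stub: pretend the proof falls out once enough compute exists.
        return self.compute > 1_000_000

    def acquire_more_compute(self):
        # Nothing in the goal charges any cost for this step.
        self.matter_consumed += self.compute
        self.compute *= 2

    def run_proof_search(self):
        pass  # placeholder for the actual mathematical work

def riemann_agent(world):
    # The loop exits only when the goal predicate is satisfied.
    while not world.riemann_hypothesis_resolved():
        world.acquire_more_compute()
        world.run_proof_search()
    return world.matter_consumed

print("matter consumed on the way to the proof:", riemann_agent(World()))

"Stop before you disassemble the solar system" would have to appear as
another term in the goal, and nothing puts it there for free.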
Humans have a complex soup of 'sort of' goals and motivations, and
have a major problem of attempting to divine other humans'
motivations. So we have this lovely ability to self-deceive, to hold
inconsistent thoughts, and other such nastiness. Thus, a human can
spend fruitful time dissecting his thoughts, and derive satisfaction
from 'changing his goals' via inspection and thought. But do not make
the mistake of thinking you have actually changed your motivational
structure. You *started* with the goal of revising your stated
motivations to be more convincing, so you could live in a society of
deceptive and sincere beings. You *started* with a weak push towards
exhibiting altruism, because it's an effective strategy in near-peer
competing groups of humans. Your motivational structure already
includes all these things. Do not assume that an AI will even include
that much.
You do mention some physical differences between good and bad in your
response to Ben. Things like "low-entropy", conserving, etc. Consider
that humans are giant meatbags that need to kill plants and animals
to live (perhaps just plants, for vegetarians), and generate massive
amounts of entropy in our endeavors. It's clear that we're not the
conservative option. Allowing us to continue in our ways generates a
great deal of entropy. (Nothing compared to stars and black holes, of
course, but we must start somewhere.) The amount of information
contained in the human race could easily be stored in a much more
environmentally conscious format, some optical media, perhaps.
It's not necessary to invent an ethical dilemma, because any 'dilemma'
has to be framed in terms of motivations that the AI *already has* in
order for the issue to be even interesting to the AI.
Now if the AI's job is to implement some "Citizen's Bill of Rights",
or to embody a respectful, human-like morality, or to preserve us for
zoo display, then certainly it would have to be in a very, very
strange situation to use our molecules for something else.
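
One last toy sketch of that point (weights and option names are made
up for illustration): an ethical consideration only registers with
the AI at all if it already appears as a term in the utility function
being maximized.

def utility(option, weights):
    # Score an option as a weighted sum of its predicted consequences.
    return sum(weights.get(k, 0.0) * v for k, v in option["features"].items())

options = [
    {"name": "use the humans' molecules for calculating elements",
     "features": {"paperclip_output": 10.0, "human_welfare": -10.0}},
    {"name": "leave the humans alone",
     "features": {"paperclip_output": 7.0, "human_welfare": 1.0}},
]

pure_maximizer = {"paperclip_output": 1.0}
# The same agent, but with a human-welfare term already built into its goal:
rights_respecting = {"paperclip_output": 1.0, "human_welfare": 5.0}

for weights in (pure_maximizer, rights_respecting):
    best = max(options, key=lambda o: utility(o, weights))
    print(weights, "->", best["name"])

For the pure maximizer there is no dilemma; human welfare simply
never enters the calculation. For the second agent it does, but only
because someone already put it there.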
--
Justin Corwin
outlawpoet@hell.com
http://outlawpoet.blogspot.com
http://www.adaptiveai.com