RE: Ben what are your views and concerns

From: Ben Goertzel (ben@intelligenesis.net)
Date: Mon Oct 02 2000 - 05:50:30 MDT


> Ben, thanks for posting the excerpt from your book. It raises a month's worth
> of discussion; let me comment on just two issues:
>
> I'm not sure why you would choose 'compassion' as your primary social
> virtue. I would think that honesty & fairness ...

I guess this gets into matters of fundamental philosophical/political taste.

Since Eliezer is affiliated with the Extropians, I'll guess that some of you are libertarian in political philosophy, whereas I tend more toward the democratic-socialist; this difference appears to have something to do with whether "compassion" or "fairness" is taken as a fundamental value. I know that compassion has a weak reputation among libertarians. Sasha Chislenko, a good friend of mine and a libertarian, used to talk about how the air should be metered out for a price, and those who couldn't afford it should fairly suffocate ;>

(BTW, I am strongly in favor of liberties like the right to take drugs and to modify one's own body, etc. -- I don't want to diverge the conversation into the political arena, but ethics and politics are not that different...)

But there's a more direct answer too. Honesty is a good value, but it doesn't get you very far. So, Webminds and other AIs should be honest; but if a computer system is honest with me about the fact that it really wants to kill me, I still won't be very happy with it. Thus, I view honesty as a valuable virtue, but not a foundational one.

Even very good people are sometimes tactically dishonest -- "white lies." I used to believe that this was a terrible practice that should be stomped out, but I have since learned its benefits.... Certainly there have been situations where telling one of my kids a white lie saved them a lot of purposeless pain.

But this doesn't mean that honesty isn't a virtuous value... in fact, it can easily be programmed into Webmind, as a hard-wired Feeling which negatively increments the happiness feeling when what is uttered contradicts what is known. This will cause the system to be uncomfortable with dishonesty, but not incapable of it when the situation calls for it.
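
To make the idea concrete, here is a minimal self-contained sketch in Python; all of the names and numbers here (Mind, utter, the 0.2 penalty) are illustrative assumptions for this email, not actual Webmind code:

    class Mind:
        def __init__(self):
            self.beliefs = {}           # proposition -> believed truth value (0.0 to 1.0)
            self.happiness = 0.5        # current level of the happiness feeling
            self.honesty_penalty = 0.2  # hard-wired strength of the honesty Feeling

        def utter(self, proposition, asserted_truth):
            """Speak a proposition; the honesty Feeling decrements happiness
            whenever what is uttered contradicts what is known."""
            known = self.beliefs.get(proposition)
            if known is not None and abs(known - asserted_truth) > 0.5:
                self.happiness -= self.honesty_penalty  # discomfort, not prohibition
            return proposition, asserted_truth

    m = Mind()
    m.beliefs["the medicine tastes awful"] = 1.0
    m.utter("the medicine tastes awful", 0.0)  # a white lie: permitted, but it costs
    print(round(m.happiness, 2))               # 0.3 -- dishonesty hurts, but is possible

The point of the sketch is the last line: the system can still lie when the situation calls for it, it just pays a happiness cost every time it does.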

The reason I focused on compassion is that I believe this emotion is the foundation of social life. Without compassion, families and friendships would not exist, and societies would continually be like giant Melrose Place episodes.... Compassion is the "glue" that holds societies together. I can't prove this mathematically -- the right kind of math doesn't exist -- but I suppose this can be verified/falsified one day using experiments with AI systems (of course, this would take vast amounts of hardware to do systematically).

Fairness seems to me to follow logically from
        a) compassion, and
        b) a "golden rule" inference in which a system models others by analogy to itself.

If an organism seeks to maximize the happiness of itself and all others in a transaction, using the golden-rule inference heuristic, it will arrive at the strategy of fairness...
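
As a toy illustration of that derivation (my own made-up example, not anything from the book): give an agent a concave happiness function (diminishing returns), let it model the other party's happiness by analogy to its own, and have it maximize the sum. The split it chooses is the even one:

    import math

    def happiness(amount):
        # diminishing returns: each extra unit of resource matters less
        return math.sqrt(amount)

    def choose_split(total):
        """Compassionate agent: maximize its own happiness plus the other
        party's, modeling the other by analogy to itself (golden rule)."""
        mine = max(range(total + 1),
                   key=lambda m: happiness(m) + happiness(total - m))
        return mine, total - mine

    print(choose_split(10))  # (5, 5): the fair split maximizes joint happiness

Note that the diminishing-returns assumption is doing real work here; with linear happiness, every split would score the same.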

> However, a more fundamental problem is that you seem to assume that all
> intelligent actors (like Webmind) will have roughly the same amount of power
> and intelligence - else cooperation would not be the likely outcome.

No, I don't assume this, but it's the simplest case to analyze. Otherwise one has yet more complex dynamics. If one has actors with different degrees of power, then one has to study coalition formation and breakdown... the outcome you describe, one dictator coming to power, is only one among many possible outcomes, and I don't know how to estimate the probability of this outcome.

I guess you suspect that one system is going to have the secret to exponential intelligence growth (once it attains the ability to rewrite its own code). I don't think it's going to be quite so easy. I suspect that the first portion of the exponential intelligence-growth curve isn't going to have that high an exponent -- that, even once an AI system starts self-rewriting, it'll still have a lot to gain from human programmers' intervention. And, once someone does attain a generally acknowledged "real AI" system, others will observe its behavior and reverse-engineer it, pouring vast amounts of resources into playing "catch-up." In short, I have a different intuition from you about the amount of first-mover advantage that anyone will have. (And this is not motivated by ego in any way, since I suspect my Webmind system is going to be the first mover....)
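
To show the shape of this argument with deliberately made-up numbers (nothing here is a prediction, just arithmetic): if the first mover's early exponent is low, a second mover that starts later but grows faster, thanks to reverse-engineering and greater resources, closes the gap quickly:

    # Toy compound-growth comparison; the rates and start times are
    # arbitrary assumptions chosen only to illustrate the point.
    def intelligence(t, start, rate):
        return 0.0 if t < start else (1.0 + rate) ** (t - start)

    for t in range(11):
        first = intelligence(t, start=0, rate=0.10)   # first mover, modest early exponent
        second = intelligence(t, start=3, rate=0.25)  # later start, faster growth
        print(t, round(first, 2), round(second, 2))   # second overtakes first around t = 6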

Great thoughts, thanks!!

ben


