RE: Ben what are your views and concerns

From: Peter Voss (peter@optimal.org)
Date: Mon Oct 02 2000 - 11:05:11 MDT


-----Original Message-----
From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
Of Ben Goertzel

I guess this gets into fundamental philosophical/political taste.... I'll
guess that some of you are libertarian in political philosophy.... I don't
want to diverge the conversation into the political arena, but ethics and
politics are not that different...

You are right, we have somewhat different political views. However, I would
not characterize mine as 'taste', but rather as logical conclusions. It is not
so much that politics and ethics are 'not that different', but that rationally
held political positions are a consequence of moral beliefs (which, in turn,
usually depend on epistemology, and that on ontology). But I agree that we
should perhaps not get into this here...

...Honesty is a good value but doesn't get you very far. So, Webminds and
other AIs should be honest, but, if a computer system is honest with me
about the fact that it really wants to kill me, I still won't be very happy
with it. Thus, I view honesty as a valuable virtue, but not foundational.

I would rather have honesty, and know my AI's intentions, than have it
'compassionately' lying to me 'for my own good'. I think that both people
and AIs function better that way. (There is a whole complex philosophical
and psychological theory behind this view.) Basically, I think that
self-ownership (authority over oneself, and responsibility for one's actions)
is key to the flourishing of societies. Honesty is crucial both as a personal
value (not lying to yourself) and as a social value (knowing what to expect
from others, plus getting valuable feedback).

Even very good people are sometimes tactically dishonest -- "white
lies."....

An interesting, and tricky, issue. Have you read James Halperin's novel 'The
Truth Machine'? It's a great exploration of this and other honesty issues.

But this doesn't mean that honesty isn't a virtuous value... in fact, it can
easily be programmed into Webmind, as a hard-wired Feeling which negatively
increments the happiness feeling when what is uttered contradicts what is
known. This will cause the system to feel discomfort with dishonesty, but
not to be unable to be dishonest when the situation calls for it.

Yes, this is important. There are higher values than honesty. You don't tell
a child-molester where the kids are...

The reason I focused on compassion is that I believe this emotion is the
foundation of social life. Without compassion, families and friendships
would not exist...

I think that it is the (self-interested) value that others represent to us
that is the best foundation for relationships. Do you have a definition for
'compassion'? I see it as a byproduct of valuing others.

Fairness seems to me to follow logically from (a) compassion, and (b) a
"golden rule" inference in which a system models others by analogy to itself.
If an organism seeks to maximize the happiness of itself and all others in a
transaction, using the golden rule inference heuristic, it will arrive at
the strategy of fairness...

I'm sure you are aware of the serious problems with 'the Golden Rule' as a
moral principle. All of this gets us deep into moral philosophy. I have
written quite a bit about this.
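
That said, the mechanical part of your inference is easy to make concrete.
A toy version (the square-root happiness curve is an illustrative assumption
of mine, not your formulation): an agent that models the other party's
happiness by analogy to its own, and then maximizes the sum, lands on the
even split.

    # Toy "golden rule" inference: model the other party's happiness by
    # analogy to your own (same utility curve), then choose the division
    # of a good that maximizes joint happiness. With diminishing returns
    # (sqrt utility), the optimum is the fair 50/50 split.
    import math

    def happiness(share):
        return math.sqrt(share)   # diminishing returns on what you get

    def golden_rule_split(total=1.0, steps=100):
        best_mine, best_joint = 0.0, -1.0
        for i in range(steps + 1):
            mine = total * i / steps
            joint = happiness(mine) + happiness(total - mine)
            if joint > best_joint:
                best_mine, best_joint = mine, joint
        return best_mine

    print(golden_rule_split())   # 0.5 -- fairness falls out of the heuristic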

I guess you suspect that one system is going to have the secret to
exponential intelligence growth (once it attains the ability to rewrite its
own code). I don't think it's going to be quite so easy. I suspect that the
first portion of the exponential intelligence growth curve isn't going to
have that high an exponent -- that, even once an AI system starts
self-rewriting, it'll still have a lot to gain from human programmers'
intervention. And, once someone does attain a generally acknowledged "real
AI" system, others will observe its behavior and reverse-engineer it,
pouring vast amounts of resources into playing "catch-up." In short, I have
a different intuition from you about the amount of first-mover advantage
that anyone will have. (And this is not motivated by ego in any way, since
I suspect my Webmind system is going to be the first mover....)

I agree with you that here we are in 'intuition' territory. My own approach
to AI design leads me to believe that at a certain point of intelligence
there will be enough of an exponential burst for one system to dominate. I
don't think that hardware will be a major limiting factor. On the other
hand, perhaps each type of intelligence has its own upper theoretical limit.
If so, I haven't yet identified it.

All the best,

Peter

peter@optimal.org www.optimal.org


