From: Samantha Atkins (samantha@objectent.com)
Date: Mon Jan 29 2001 - 20:23:09 MST
"Eliezer S. Yudkowsky" wrote:
>
> Ben Goertzel wrote:
> >
> > > Right. So I reified the warmth, love & compassion into a philosophy of
> > > symmetrical moral valuation of sentient entities, used the philosophy to
> > > take cognitive potshots at all the emotions that didn't look
> > > sentient-symmetrical, and it worked. How is this different from a
> > > Friendly AI maintaining Friendship in the face of any
> > > sentient-asymmetrical emergent forces that may pop up?
> >
> > It's different in two ways
> >
> > 1) Humans are fighting more negative emotions and intrinsic aggression, etc.
> > than AIs will (as you've shown me)
> >
> > 2) Humans have more intrinsic warmth, compassion & passion toward other AIs
> > than AIs will
> >
> > So, compared to an AI, where friendliness is concerned you've got things
> > going for you & things going against you...
>
> My point is that, without benefit of self-modification, I routinely
> maintain my declarative cognitive supergoals against evolutionary tensions
> that run *far* higher in a human than they would in a Friendly AI.
>
But these are declarative cognitive supergoals that you yourself chose after
a lot of consideration. Since you seem to be saying you will establish
Friendliness as the supergoal in the SI by fiat, it is not the same
situation. In order to be fully intelligent, an AI has to be able to
question its own goals and adjust them as necessary. I don't see how you can
establish Friendliness, or any other supergoal, and hope to keep it for all
time without crippling the intellectual and self-evaluative power of the AI.
- samantha