RE: Beyond evolution

From: Ben Goertzel (ben@webmind.com)
Date: Sun Jan 28 2001 - 13:51:30 MST


> Right. So I reified the warmth, love & compassion into a philosophy of
> symmetrical moral valuation of sentient entities, used the philosophy to
> take cognitive potshots at all the emotions that didn't look
> sentient-symmetrical, and it worked. How is this different from a
> Friendly AI maintaining Friendship in the face of any
> sentient-asymmetrical emergent forces that may pop up?

It's different in two ways:

1) Humans are fighting more negative emotions and intrinsic aggression, etc.,
than AIs will
        (as you've shown me)

2) Humans have more intrinsic warmth, compassion & passion toward other AIs
than AIs will

So, compared to an AI, where friendliness is concerned, you've got things
going for you & things going against you...

> I prefer to
> think of humans-turned-transhuman and their boon-drinking-companion AIs
> doing all the interesting things Out There within the embrace/OS/API of a
> Friendly Sysop.
>

Well, hey. I would hate to deprive you of your pleasant delusion. Dream on
;-D

ben


