Re: Seed AI (was: How hard a Singularity?)

From: Lee Corbin (lcorbin@tsoft.com)
Date: Mon Jul 01 2002 - 21:21:24 MDT


I wrote on Saturday:

>> What about first
>> getting the program to the point that it appears to be able
>> to understand literature, and then having it come to its
>> own conclusions after having read 200,000 books and plays
>> from every culture that touch on issues of morality? I
>> would think that if the machine is able to hold decent
>> conversations with us at that point, that it will have
>> become a veritable fountainhead of knowledge about what
>> is and isn't moral.

Ben answered:

> Different people can read the world's literature (or big samples
> thereof) and come to very different conclusions, based on their
> "initial conditions" (their initial belief systems) and their
> cognitive biases.

But I agree instead with what Gordon just wrote:

> Empathy is hard wired in the sense that the potential for empathy to
> arise is hard wired...

> That said, a mind in general would not by default be predisposed to
> empathize. With training, though, I think it would be possible to
> teach empathy content and then empathy skills. Short of that, empathy
> could always be programmed in (and in fact empathy may arise in a
> self-improving AI out of its ability to understand itself in order
> to improve itself, but that may depend on the specifics of the
> architecture).

Now *that* I understand! So wouldn't it make sense to *first* have
an AI *understand* what many people have believed to be "moral" and
"ethical" behavior---by reading the world's extensive literature---
before attempting to engage in moral behavior, or even to try to
empathize?

Lee

P.S. Sorry that I have not been able to keep up at all with
the numerous messages here lately.


