From: Lee Corbin (lcorbin@tsoft.com)
Date: Sat Jun 29 2002 - 18:32:50 MDT
Ben wrote:
> But I am not sure how easy it will be to define a generic transhumanist
> ethics, either. For instance, we have seen that among the different
> transhumanists on this list, there are very different attitudes toward
> warfare. My strong preference is to have the AGI believe killing is wrong
> in almost all circumstances. Evidently some transhumanists disagree.
> Should I then refrain from teaching the AGI that killing is wrong in almost
> all circumstances, because by doing so, I am teaching it Ben Goertzel ethics
> instead of generic transhumanist ethics?
>
> I am not sure it's possible to teach a system ethics only as a set of
> abstract principles. Maybe most of the teaching has to be teaching by
> example. And if it is teaching by example, there's going to be a fair bit
> of individual bias in the selection of examples....
I put out on this list some time ago (or on Extropians) what
must be a very old idea to you folks: what about first
getting the program to the point where it appears able to
understand literature, and then having it come to its own
conclusions after reading 200,000 books and plays from every
culture that touch on issues of morality? I would think
that if the machine can hold decent conversations with us
at that point, it will have become a veritable fountainhead
of knowledge about what is and isn't moral.
So I agree with you that ethics probably can't be taught
solely as a set of abstract principles (though I have not
studied Eliezer's proposals), but the suggestion of having
the AI read everything overcomes your point about bias in
the selection of examples.
Lee