From: Joshua Fox (firstname.lastname@example.org)
Date: Thu Nov 22 2007 - 08:04:23 MST
2007/11/19, Eliezer S. Yudkowsky <email@example.com>:
> Joshua Fox wrote:
> > I don't have one. CEV seems to be an attempt [at defining morality]
> It's not, any more than a camera is an attempted painting.
Yes, CEV, as I understand it, is a meta-definition of morality, a tool for
achieving an implicit definition of morality. Ben Goertzel's "Growth,
Choice, and Joy" is another attempt at a definition of morality.
It seems that there is a lot of discussion of utility functions in general
and little on morality. I appreciate that the first problem in FAI is ensuring
that it does not destroy humanity while achieving its utility function, but still,
the distinction between utility functions like "crush your enemies, see them
driven before you, and hear the lamentations of the women" and "everybody
get together now, smile on your brother" seems worth investigating.
Nonetheless, my proposal for a morality simulator is not immediately
AGI-focused, nor is it specialized to any one morality-based utility
function. It's just a tool for playing with the concept of morality using the
technologies at hand.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT