From: Martin Striz (email@example.com)
Date: Fri Jul 02 2004 - 14:33:26 MDT
--- Sebastian Hagen <firstname.lastname@example.org> wrote:
> > If something doesn't feel like something, then it is irrelevant.
> That's one of the assumptions of the kind of morality you personally believe
> to be correct (one based on qualia). This assumption shouldn't be made for
> discussions of moralities in general since it is not shared between all
But it IS in the nature of human beings to live out the happiness principle.
You make every decision based on whether it makes you more happy or less
unhappy. You fabricate (ethical) philosophical rationalizations for why you do
it, but in the end the solution is simple: understand your nature, understand
what things make you HAPPY, and maximize those. However, I agree with you that
this only applies to humans, and the possibilities for AI ethics are wider.
But as long as we're talking about what AIs should do FOR humans, how they
should fulfill our volition, etc., to that extent their morality should be the
same.
P.S. I tend not to get into the qualia debate, as the jury is still out for me.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT