From: Sebastian Hagen (sebastian_hagen@gmx.de)
Date: Sun Sep 26 2004 - 16:06:31 MDT
Eliezer Yudkowsky wrote:
> In the TMOL FAQ you've got an algorithm that searches for
> a solution and has no criterion for recognizing a solution.
I see.
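For what it's worth, here is a minimal sketch of the gap you're pointing
at, as I understand it (hypothetical names, not anything actually in the
TMOL FAQ):

    # A generic search procedure needs a predicate that recognizes a
    # solution; without one it cannot meaningfully terminate or report
    # success.
    def search(candidates, is_solution):
        for candidate in candidates:
            if is_solution(candidate):  # the missing criterion
                return candidate
        return None

Without the is_solution predicate, the loop degenerates into bare
enumeration, with no way to distinguish success from failure.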
> If you claim that you know absolutely nothing about objective morality,
> how would you look at an AI, or any other well-defined process, and
> claim that it did (or for that matter, did not) compute an objective
> morality?
I can't, really. More intelligence isn't likely to help in finding an
answer if my question is entirely meaningless.
> Why would any answer you recognized as reasonable be non-eudaemonic?
I wasn't certain that it would be; eudaemonia simply lacked justification,
in my opinion. I regarded it as a possible answer, but didn't consider it
any more likely than, say, 'maximize the number of paperclips in the
universe'.
But considering that 'objective morality' is ill-defined, I suppose
expecting any objective justification was unreasonable.
Thank you for clearing that up.
Sebastian Hagen