Re: ethics

From: Samantha Atkins (samantha@objectent.com)
Date: Thu May 20 2004 - 00:27:52 MDT


On May 19, 2004, at 3:56 PM, Eliezer S. Yudkowsky wrote:
> Similarly, FAI doesn't require that I understand an existing
> biological system, or that I understand an arbitrarily selected
> nonhuman system, but that I build a system with the property of
> understandability. Or to be more precise, that I build an
> understandable system with the property of predictable
> niceness/Friendliness, for a well-specified abstract predicate
> thereof. Just *any* system that's understandable wouldn't be enough.
>

You propose to give this system, constrained to be understandable by
yourself, the power to control the immediate space-time area in service
of its understandable goals? That is a lot of power to hand to
something that is not really a mind, or particularly self-aware or
reflective. If I understand you correctly, I am not at all sure I can
support such a project. It smacks of glorified, all-powerful,
mindless coercion for "our own good".

- samantha
