From: Vladimir Nesov (robotact@gmail.com)
Date: Tue Jun 10 2008 - 22:26:48 MDT
On Wed, Jun 11, 2008 at 4:54 AM, Stathis Papaioannou <stathisp@gmail.com> wrote:
> 2008/6/11 Vladimir Nesov <robotact@gmail.com>:
>
>> Such axioms are too crude, and will break down when situations become
>> more complex. Asking an AI to decide which actions are ethical is a
>> complex wish ( http://www.overcomingbias.com/2007/11/complex-wishes.html
>> ), and it's easy to run into a situation where a simple set of
>> "ethical axioms" breaks down.
>
> If you're trying to simulate human ethics, that's true, mainly because
> ethical axioms, such as they are, are vague and subject to constant
> revision. But it's easy enough to model an ethical system with
> well-defined, fixed axioms which are then applied as a judge applies
> statute law. Of course, the law may have consequences that were not
> foreseen by the original legislators.
>
Laws kinda work because you have a judge in the loop who can
interpret them within limits, and lawmakers who fix the bugs over
time. If ethics were that simple, you'd at least expect laws to be
simple, and you wouldn't need a judge.
-- Vladimir Nesov robotact@gmail.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT