Re: [sl4] Weakening morality

From: Vladimir Nesov (robotact@gmail.com)
Date: Tue Feb 10 2009 - 07:09:54 MST


On Tue, Feb 10, 2009 at 4:32 PM, Petter Wingren-Rasmussen
<petterwr@gmail.com> wrote:
> On 2/10/09, Johnicholas Hines <johnicholas.hines@gmail.com> wrote:
>
>> If I'm parsing the various speakers correctly, Petter
>> Wingren-Rasmussen made a positive statement something like: "Any such
>> AI will suffer such and so."
>
> At the very least you understand me correctly ;)
>
> I didn't intend to sound nihilistic about the whole thing, Vladimir.
> My intention was to show the weaknesses of hardcoded laws, and I
> intend to show an alternative that's more reliable in the long term and
> more efficient.
> I'll get back to this in a few days.
>

Now this only reinforces my original interpretation. Any hardcoded
laws that can't be unrolled are part of the AI's morals: you can't
substitute anything for them, there is nothing better from the AI's
perspective, and by definition there are no weaknesses. Saying that
there is something better assumes an external evaluation, in which
case the AI should be nearly perfectly optimal with no eternal
crutches, or you are paperclipped.

-- 
Vladimir Nesov
http://causalityrelay.wordpress.com/
