Re: [sl4] Potential CEV Problem

From: Mike Dougherty (msd001@gmail.com)
Date: Thu Oct 23 2008 - 20:22:48 MDT


On Thu, Oct 23, 2008 at 7:52 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> --- On Thu, 10/23/08, Toby Weston <lordlobster@yahoo.com> wrote:
>> Just in case we do, deep down, want to kill all humans.
>> Perhaps we should add a hardcoded caveat to the friendliness
>> function, that puts all baseline, pre-posthuman, homo sapiens
>> off limits to the AGI god's meddling. Let the Amish live
>> whatever happens.
>
> Wouldn't it be easier (or at least, have a higher probability of getting the expected result) if we just ban AI?

To clarify - is the "expected result" to kill all humans or not? I
thought we wanted AI to be smart enough to protect us from other
eventual AI as well as the myriad non-AI ways humanity could wipe
itself out.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT