Re: [sl4] Friendly AI and Enterprise Resource Management

From: Miriam Leis (m.transhumanist@gmail.com)
Date: Thu Oct 07 2010 - 12:42:21 MDT


... I am thinking about the following: why do some humans genuinely care
about those weaker and disadvantaged relative to themselves and invest time,
money and love to help them, although chances are low that the helper will
gain any (material) benefit? One obvious reason seems to be religion
(although non-believers can act altruistically too), but this brought me to
the question: should we build some religiosity/spirituality into an AGI -
and if so, would the AGI buy it? Or what other measures could be implemented
to make the AGI unselfish?

Why do humans protect animals and care for them, even though they could
easily kill them?

Cheers,

Miriam

On 7 Oct 2010 19:02, "John K Clark" <johnkclark@fastmail.fm> wrote:

On Tue, 5 Oct 2010 "Mindaugas Indriunas" <inyuki@gmail.com> said:

> It might be that one of the b...
In other words, you're going to try to convince the AI that "objective
good" means being good to a human being, not to an AI like itself, even
though the AI is objectively superior to the human by any measure you
care to name. Pushing the virtues of such a slave mentality (sorry, I
believe the politically correct term is "friendly") on a being much
smarter than you are is going to be a very hard sell.

 John K Clark

PS: SL4 has been dead for so long I almost feel like I'm talking to
myself.

--
 John K Clark
 johnkclark@fastmail.fm


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT