Re: ethics

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Sat May 15 2004 - 05:19:00 MDT


Hi Philip,

I think we've discussed this issue before. Using the
happiness of all humans as AI values equates AI values
with human values. This creates a symbiotic system
that will value non-human sentients because humans do.
But because this care is mediated through human values,
there should be no danger to humans from conflicts
between human interests and the interests of other
sentients.

Cheers,
Bill

On Sat, 15 May 2004, Philip Sutton wrote:

> Hi Bill,
>
> > I favor values for the happiness of all humans.
>
> Humans are not the only life on Earth, let alone in the universe as a
> whole. Any super-advanced AI is likely to move beyond the bounds of
> Earth pretty soon, so whatever values we try to lock in need to be
> relevant in environments beyond the Earth... possibly well beyond.
>
> And even on Earth, super-advanced AIs are going to have to relate to
> other super-advanced AIs and to modified humans that may no longer
> register as 'human'.
>
> Human-centredness is too limited in time and space to be *the* core
> ethic.
>
> Friendliness to humans should be just one specific (very important)
> outcome of a more fundamental or general friendliness ethic.
>
> Cheers, Philip


