Re: ethics

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Sat May 15 2004 - 10:25:10 MDT


Hi Bill,

> I think we've discussed this issue before.

We have. :)

> Using the happiness of all humans as AI values equates AI values with
> human values. This creates a symbiotic system that will value non-human
> sentients because humans do. But by mediating this care through human
> values, there should be no danger to humans caused by conflicts between
> human interests and the interests of other sentients.

I can see this human-centred system working in a few ways - one
literal and several non-literal.

The literal mechanism would require that any AIs that left the Earth
keep checking back with the real human population about new
situations they face in various parts of the universe. I think this
would get increasingly unworkable as time goes on, because at
interstellar distances every check-in is, at best, a round trip at
light speed.
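
Just to put a rough number on 'unworkable', here's a back-of-envelope
sketch in Python. The distances are ballpark published figures I'm
plugging in purely for illustration:

destinations_ly = {
    "Alpha Centauri": 4.37,          # nearest star system, in light years
    "Far side of the galaxy": 75000, # very rough order of magnitude
}

for place, dist_ly in destinations_ly.items():
    # One light year of distance means one year of one-way signal
    # travel, so the round trip (question out, answer back) is twice that.
    print(f"{place}: roughly {2 * dist_ly:g} years per question")

Even from the nearest star, every ethical query takes the best part
of a decade to answer, and it only gets worse from there.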

Or the AIs would have to carry simulated copies of 'all' humans with
them as they range across the universe, and get the simulation to
give a 'take' on the right ethical stance in each new situation. This
seems like a pretty impractical thing to do.
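
If it helps to make the shape of that idea concrete, a toy sketch
might look like the following - though everything in it (the
SimulatedHuman class, the numeric 'harm estimate', the majority vote)
is a hypothetical stand-in of mine, not a claim about how it would
actually be done:

from collections import Counter

class SimulatedHuman:
    def __init__(self, tolerance: float):
        self.tolerance = tolerance  # toy stand-in for a whole value system

    def judge(self, harm_estimate: float) -> str:
        # Toy verdict: object if the estimated harm exceeds this
        # simulated person's tolerance, otherwise approve.
        return "object" if harm_estimate > self.tolerance else "approve"

def ethical_take(population: list[SimulatedHuman], harm_estimate: float) -> str:
    # Poll every simulated human and return the majority verdict.
    votes = Counter(h.judge(harm_estimate) for h in population)
    return votes.most_common(1)[0][0]

# A 'population' of three stand-ins judging one new situation:
crew = [SimulatedHuman(0.2), SimulatedHuman(0.5), SimulatedHuman(0.9)]
print(ethical_take(crew, harm_estimate=0.4))  # -> "approve" (2 votes to 1)

Even waving away the question of how you'd simulate a person at all,
storing and running billions of these inside every travelling AI is
where the impracticality really bites.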

Or the AIs might try to synthesise a values algorithm that
approximates the values of all humans at some date or other. That's
pretty tricky too.
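
As a toy illustration of why it's tricky - again, all my own
hypothetical stand-ins, with naive averaging over a fixed snapshot of
sampled judgements:

import statistics

def synthesise_values(sampled_judgements: dict[str, list[float]]):
    # sampled_judgements maps a situation description to the approval
    # scores (say -1.0 to 1.0) given by each surveyed human.
    snapshot = {
        situation: statistics.mean(scores)
        for situation, scores in sampled_judgements.items()
    }
    def values(situation: str) -> float:
        # Fails on anything outside the snapshot - which is exactly
        # the 'new situations' problem with freezing values at a date.
        return snapshot[situation]
    return values

values = synthesise_values({
    "divert asteroid": [0.9, 0.8, 1.0],
    "dismantle planet for parts": [-1.0, -0.7, -0.9],
})
print(values("divert asteroid"))  # ~0.9
# values("something genuinely new") would raise KeyError: the
# frozen snapshot has nothing to say about it.

And the snag shows up immediately: a snapshot frozen at some date has
nothing to say about genuinely novel situations, which is the whole
reason the AIs would need guidance out there in the first place.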

It seems to me that it might just be easier (and more effective) to try to
figure out a values set more consciously.

Anyway, I guess my final thought is that your method could work
while AIs and humans are confined to the Earth, but it is likely to
break down once this is not the case - especially if the AIs
proliferate through the universe. How would you see your method
working in such a 'geographically' non-constrained situation?

Maybe humans and advanced AIs could work together on a conscious
prime values set before the AIs get the freedom of the universe? In
the meantime, your idea could be used?

Cheers, Philip


