From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sat May 15 2004 - 12:03:56 MDT
Philip Sutton wrote:
> Or the AI would have to simulate humans and take a copy of 'all'
> humans with them as they range the universe and get the simulation to
> do a 'take' on what is the right ethical stance for all new situations.
> This seems like a pretty impractical thing to do.
I wish this were true, as I would be significantly less troubled by
existential paranoia if it were.
> It seems to me that it might just be easier (and more effective) to
> try to figure out a values set more consciously.
Extracting extant human value sets is trivial for Powers, even without
doing anything invasive. Forward extrapolation, and getting a transhuman
AI to want to do this in the first place, are really hard (for humans,
and that's all we've got right now).
> Maybe humans and advanced AIs could work together on a conscious
> prime values set prior to AIs getting the freedom of the universe?
To do this you need a way of expressing values that grounds in physics
and doesn't instantiate (i.e. simulate) any volitional sentients every
time you need to make an ethical judgement. This appears to be possible,
but the details are tricky.
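
To make the distinction concrete, here is a toy Python sketch. Everything
in it (the state fields, the weights, the function names) is hypothetical
illustration, not anything from the post: it just contrasts a value
function that grounds in a physical state description with the ruled-out
approach that would have to instantiate simulated judges on every call.

from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    """A physical-level description of a situation (toy stand-in)."""
    sentients_harmed: int
    resources_preserved: float

def grounded_value(state: WorldState) -> float:
    """Judge a state directly from its physical description.

    No volitional sentient is instantiated anywhere in the evaluation;
    the judgement is plain arithmetic over the state's features.
    """
    return state.resources_preserved - 10.0 * state.sentients_harmed

def simulation_based_value(state: WorldState) -> float:
    """The approach the text rules out: simulate humans and poll them.

    Evaluating this way would instantiate volitional sentients on every
    ethical judgement, which is exactly what the constraint forbids.
    """
    raise NotImplementedError("would require simulating sentient judges")

if __name__ == "__main__":
    s = WorldState(sentients_harmed=0, resources_preserved=0.9)
    print(grounded_value(s))  # a judgement with no one simulated to make it

The hard part the post gestures at is, of course, hidden in the weights:
writing down a grounded_value that actually captures human values is the
tricky detail, not the evaluation machinery itself.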
* Michael Wilson