From: Cliff Stabbert (cps46@earthlink.net)
Date: Sat Nov 30 2002 - 10:04:10 MST
In various discussions of super-AIs, arguments are made that imply a
good or even perfect ability to predict human or other sentient
behaviour.
If such prediction is based on simulation, at what level of detail
does it cross an ethical line? That is to say, if the simulation is
fine-grained enough to accurately simulate a number of sentients,
"are" those sentients "really" aware?
This may sound like a philosophical question as semantically empty as
the tree-in-a-forest-making-sound one. But I would assume that many
(most?) on this list believe that a "perfect" simulation would indeed
have consciousness, i.e. that awareness is independent of the hardware
substrate it's running on, that John Searle's Chinese Room thought
experiment is fatally flawed, etc.
But if that's the case, and if it's the case that for a simulation to
be accurate enough, it must approach emulation (because of the initial
condition sensitivity / chaotic dynamics involved), then the AI's
predicting/calculating the moral or ethical implications of its actions
becomes in itself an action with moral and ethical implications.
"Would Action X cause suffering?" would require running a simulation
that could cause that suffering to the sentients within the
simulation.
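To make the sensitivity point concrete (this is just a generic toy
illustration of chaotic divergence, not a claim about how minds or
their simulations actually work): in a chaotic system, any
coarse-grained approximation drifts away from the real trajectory
exponentially fast, which is why I suspect a useful predictive
simulation would have to approach full emulation. For example, with
the logistic map:

    # Toy sketch: sensitivity to initial conditions in the logistic
    # map x' = r*x*(1-x) with r = 4.0 (chaotic regime). Two
    # trajectories starting a mere 1e-10 apart become completely
    # uncorrelated within a few dozen steps.
    r = 4.0
    x_a, x_b = 0.4, 0.4 + 1e-10

    for step in range(60):
        x_a = r * x_a * (1.0 - x_a)
        x_b = r * x_b * (1.0 - x_b)
        if step % 10 == 0:
            print(step, abs(x_a - x_b))

An error of one part in ten billion roughly doubles each step, so the
"approximate" run stops tracking the real one almost immediately.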
IIRC, Stanislaw Lem touches upon this in one of the stories in _The
Cyberiad_, although from a different perspective.
Perhaps I'm missing something. What are the alternate paths towards
prediction and analysis? Has this issue been discussed here before?
-- Cliff