From: Byrne Hobart (email@example.com)
Date: Sat Oct 20 2007 - 19:25:47 MDT
> The question: Are we ethically in the clear to experiment on AIs under
> the assumption that we won't accidentally create AIs that feel pain or
> fear death?
If creating an AI is good, and a willingness to experiment on AIs
increases the chances of creating them, I'd claim it's worth harming
some AIs in order to make development more certain. In general, if you
have scientific scruples you're ceding a research advantage to someone
who doesn't, so it pays to be amoral (or consequentialist) in setting
such standards.
Of course, your AIs will probably negotiate. They might offer
something, such as their services as data miners or as AI co-developers,
in exchange for being spared -- in which case it would probably be best
to uphold the bargain, on the assumption that cheating the AIs will have
long-term negative consequences: once successful AIs are developed,
they'll likely be wary of anyone who treats an AI differently from the
way they treat a human.
Which brings up another issue: how 'xenophobic' will AIs be? Should we
expect them to react adversely to humans who have shown anti-AI
prejudice in the past?
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT