From: Bill Hibbard (firstname.lastname@example.org)
Date: Fri May 14 2004 - 13:53:03 MDT
On Fri, 14 May 2004, Tim Duyzer wrote:
> The three laws of robotics are theoretical science first and plot elements
> second. Of course, it will likely be incredibly difficult to build such laws
> into an artificial intelligence, but that doesn't make them any less valid.
> I would be interested in seeing alternatives to Asimov's 'robot morality',
> though. If you know of a different set of effective laws that would protect
> both human and robot interests, I'd like to see them.
Rather than laws prohibiting robots from harming humans,
consider designing them so they don't want to harm humans.
For example, there is no need for laws preventing sane
mothers from harming their babies. The keys with intelligent
machines will be the values that reinforce their learning
of behavior, designing them to learn accurate models of
the world (a big part of what sanity means), and ensuring
that their values are for the long term rather than just
the short term. I favor values for the happiness of all humans.
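The long-term-versus-short-term point can be illustrated with a toy sketch (my illustration, not part of the original post): in reinforcement-learning terms, an agent that discounts future reward heavily acts myopically, while one with a discount factor near 1 prefers delayed but larger payoffs. The reward sequences below are made-up numbers standing in for "human happiness per step" under two hypothetical policies.

```python
def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t over a sequence of per-step rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Hypothetical per-step "happiness" under two policies:
short_term = [1.0, 0.0, 0.0, 0.0, 0.0]  # immediate payoff, nothing later
long_term  = [0.0, 0.0, 0.5, 0.5, 0.5]  # delayed but larger total payoff

# A myopic agent (gamma = 0.3) values the immediate payoff more...
assert discounted_return(short_term, 0.3) > discounted_return(long_term, 0.3)
# ...while a far-sighted agent (gamma = 0.99) prefers the delayed one.
assert discounted_return(long_term, 0.99) > discounted_return(short_term, 0.99)
```

The same rewards, weighted by different discount factors, produce opposite preferences; making values "for the long term" amounts to choosing the far-sighted weighting.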
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT