From: justin corwin (outlawpoet@gmail.com)
Date: Tue Aug 16 2005 - 17:58:16 MDT
'Dangerous!' cried Gandalf. 'And so am I, very dangerous: more
dangerous than anything you will ever meet, unless you are brought
alive before the seat of the Dark Lord. And Aragorn is dangerous, and
Legolas is dangerous. You are beset with Dangers, Gimli son of Gloin;
for you are dangerous yourself, in your own fashion....'
Dangerous is a very poor indicator of desirability. Anything of any
capability whatsoever is dangerous, often in proportion to its
capability in some respects.
It's tempting to imagine that the space of things an Intelligence
might do is limited merely to human-relative moral decisions.
Unfortunately, this isn't true, even for humans. We occasionally
make decisions far removed from any moral choice that nonetheless have
consequences we might call moral, and make many moral decisions which
have no such consequences.
This is probably the point where someone comes in and mentions
paperclips and the conversation devolves.
AIs need not have human-like goals to make human-relative moral
mistakes. It helps to have nasty things like xenophobic discounting of
the value of those different from you, and other nice human mental
corruptions, but they are not necessary to do bad things.
Ben also makes a good point here on the complexity of moral decisions.
Any standard moral puzzler could be posed here in its place as well.
Your concept of a one-way filter is naive. Many people have fallen
into depravity when moral context was removed or changed. The choice
is not between 'being good' and 'being bad', but rather at every
junction, a complex evaluation of the state of the world, the actions
you may take, and the goals you have (assuming a singleton action
model, of course). One of these goals may well be 'be good', but that
makes the evaluation no simpler. Actions must be chosen, and sadly, we
cannot simply filter out everything that does not fall into the 'good' bin.
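To make the shape of that evaluation concrete, here is a minimal sketch
in Python. It is entirely a toy of my own, not anyone's proposed
architecture, and every name in it is illustrative: 'be good' appears as
just one weighted goal among several scoring candidate actions, so the
choice is a trade-off across goals rather than a one-way filter.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    name: str
    effects: Dict[str, float]  # predicted changes to world-state features


# Each goal scores an action's predicted effects; 'be_good' is only one of them.
goals: Dict[str, Callable[[Action], float]] = {
    "be_good": lambda a: -a.effects.get("harm", 0.0),
    "acquire_resources": lambda a: a.effects.get("resources", 0.0),
    "preserve_self": lambda a: -a.effects.get("risk", 0.0),
}

weights = {"be_good": 1.0, "acquire_resources": 0.5, "preserve_self": 0.8}


def evaluate(actions: List[Action]) -> Action:
    """Pick the action with the best weighted score across all goals."""
    return max(actions, key=lambda a: sum(w * goals[g](a) for g, w in weights.items()))


actions = [
    Action("expand_factory", {"resources": 3.0, "harm": 1.0, "risk": 0.2}),
    Action("do_nothing", {"resources": 0.0, "harm": 0.0, "risk": 0.0}),
]

# A one-way filter would simply drop anything with harm > 0 and stop there.
# The weighted evaluation instead trades harm off against every other goal,
# which is exactly where human-relative moral mistakes can creep in.
print(evaluate(actions).name)  # -> 'expand_factory', despite the harm term
```

Note that nothing xenophobic or malicious is encoded anywhere above; the
'mistake' falls out of ordinary goal weighting, which was the point.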
I suspect there is some theory of behavior, objectively extractable
from the semantics of goal-oriented agents interacting that we might
call a moral calculus. But short of demonstrating such a system and
its direct benefits, no one can appeal to the universality of morals,
any more than an angry libertarian can use the Constitution to 'prove'
government troops will not go to unpleasant places with unpleasant
agendas.
--
Justin Corwin
outlawpoet@hell.com
http://outlawpoet.blogspot.com
http://www.adaptiveai.com