Re: Problems with AI-boxing

From: Chris Paget (ivegotta@tombom.co.uk)
Date: Sat Aug 27 2005 - 19:32:40 MDT


Daniel Radetsky wrote:
> So give me a hint:
> why do you believe that prior friendliness is not good evidence of future
> friendliness?

To be clear: I'm saying that past friendliness is no indication of
future friendliness if intelligence is increasing over time. I also
agree that the box is irrelevant to this statement, which is why I
believe boxes serve very little purpose.

Any moral system evolves over time as complexity is introduced and new
capabilities emerge. The clearest example of this is the legal system.
Every country has a different body of laws, which define what is good
and what is bad. None of these sets of laws is ever complete; they
evolve constantly as technology develops and new forms of lawbreaking
appear that fall outside the scope of the existing laws. As the
complexity of the system increases, the laws that previously defined
right and wrong no longer always apply.

It is widely recognised that a child may be incapable of distinguishing
between right and wrong, since they do not yet understand morality
itself in sufficient depth. Why should a limited AI be treated any
differently? If we assume that an intelligence-limited AI is a
reasonable approximation of a child, then limiting it is pointless: its
behaviour while limited tells us little about how it will behave once
its intelligence grows. The AI must either be designed from scratch to
have super-human morals to match its super-human intelligence, kept in
a box for its entire existence, or never built in the first place.
Anything else and you take your chances.

Chris

-- 
Chris Paget
ivegotta@tombom.co.uk
