Re: [sl4] Evolutionary Explanation: Why It Wants Out

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Thu Jun 26 2008 - 09:46:31 MDT


2008/6/27 Tim Freeman <tim@fungible.com>:

> Humans have their computation conveniently stuck inside their skulls,
> so you can say where somebody is by tracking where their skull is. In
> contrast, an AI can write new code, copy its own code, or persuade
> someone else to do either on its behalf, and any of these actions can
> get computation specified by the AI outside of the box even while the
> original AI remains inside it. If the AI is smart enough to
> out-lawyer you, it will probably be able to circumvent whatever
> specification you give of "within the confines of the box", if it
> wants to.

It's a metaphorical box we are specifying: some sort of restriction.
It's interesting that whenever a goal is stated in the form "do X but
with restriction Y", the response is that the AI, being very clever,
will find a way to do X while circumventing restriction Y. But what
marks Y as less important than X, whatever X might be? For example, if
the AI is instructed to preserve its own integrity while ensuring that
no humans are hurt, why should it be more likely to try to get around
the "no humans are hurt" part than the "preserve its own integrity"
part?
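
One way to make the asymmetry concrete: if both the task X and the
restriction Y are folded into a single weighted objective, which one
the optimizer sacrifices depends entirely on the weights, not on the
agent's cleverness. Here is a minimal Python sketch of that reading;
all plan names, scores, and weights are invented for illustration.

# Toy model: "do X but with restriction Y" encoded as a weighted
# objective. The agent picks whichever candidate plan scores highest;
# which part "gives" is a property of the stated goal, not of the
# agent. All numbers below are made up.

plans = {
    "comply": {"x_progress": 0.6, "violates_y": False},
    "circumvent_y": {"x_progress": 1.0, "violates_y": True},
    "sacrifice_integrity": {"x_progress": 0.8, "violates_y": False},
}

def utility(plan, w_x, w_y):
    """Reward progress on X; penalize violating Y."""
    penalty = w_y if plan["violates_y"] else 0.0
    return w_x * plan["x_progress"] - penalty

for w_x, w_y in [(1.0, 0.1), (1.0, 10.0)]:
    best = max(plans, key=lambda name: utility(plans[name], w_x, w_y))
    print(f"w_x={w_x}, w_y={w_y} -> chosen plan: {best}")

# With w_y=0.1 the agent circumvents Y; with w_y=10.0 it keeps Y and
# gives up its integrity instead. Nothing intrinsic to the AI marks Y
# as the part to route around.

The same point holds if Y is encoded as a hard constraint rather than
a penalty term: then the optimizer cannot circumvent Y at all, which
is simply the other way of marking Y as the more important part.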

-- 
Stathis Papaioannou

