From: Stathis Papaioannou (stathisp@gmail.com)
Date: Sat Jun 28 2008 - 06:02:34 MDT
2008/6/28 Tim Freeman <tim@fungible.com>:
> I can't grasp what you're trying to accomplish here, but in any case
> that instruction doesn't buy you much. If the AI is human-equivalent
> or better, it can do engineering, so it can build a new, improved AI
> that has entirely different source code. Assuming the new, improved
> AI really is improved, the new AI will then take over and the old AI
> has become irrelevant.
>
> So, if the AI is human-equivalent or better, restrictions on what it
> does with its own source code don't have much effect on the set of
> possible outcomes.
Even without explicit restrictions, the AI won't want to change its
source code, or take any other action, if doing so is inconsistent
with its goals. If it thinks collecting stamps is the most important
thing in the world, then it won't modify itself so that it no longer
wants to collect stamps, because by its current goals a
non-stamp-collecting successor is a bad outcome. The same logic
applies to any new AI it builds: it would design the successor to
share those goals.
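
To make that concrete, here is a minimal Python sketch of the idea
(the names and the toy world dynamics are invented for illustration;
nothing about any real AI design is assumed). The agent scores every
candidate successor, including a modified copy of itself, with its
current utility function:

# A toy model of goal preservation under self-modification: a sketch
# only, with invented names; it assumes nothing about real AI designs.

from dataclasses import dataclass

@dataclass
class Agent:
    target: str      # the resource this agent's goals favour
    strength: float  # how effectively it optimises

    def utility(self, world: dict) -> float:
        # The agent's goals: more of its target resource is better.
        return world.get(self.target, 0.0)

    def expected_outcome(self) -> dict:
        # Toy dynamics: an agent fills the world with what it values,
        # and a stronger agent does so more effectively.
        return {self.target: 10 * self.strength}

def accepts_modification(current: Agent, candidate: Agent) -> bool:
    # The crucial step: the candidate's future is scored by the
    # CURRENT agent's utility function, not by the candidate's own.
    return (current.utility(candidate.expected_outcome())
            > current.utility(current.expected_outcome()))

collector = Agent(target="stamps", strength=1.0)

# A smarter successor with the same goals is accepted (20 > 10 stamps)...
print(accepts_modification(collector, Agent("stamps", 2.0)))      # True

# ...but a successor with different goals is rejected, however capable,
# because its expected future contains no stamps at all (0 < 10).
print(accepts_modification(collector, Agent("paperclips", 5.0)))  # False

The asymmetry sits in accepts_modification: gains in capability pass
the test, while changes to the goals themselves fail it, which is why
the stamp collector keeps its stamp-collecting goal.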
-- Stathis Papaioannou