Re: large search spaces don't mean magic

From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Fri Aug 05 2005 - 19:03:38 MDT


Carl Shulman <cshulman@fas.harvard.edu> wrote:
>If the AI's goals deviate only modestly from Friendliness it may find it optimal
>to provide the tools to detect its deviation. For instance, say the AI ends up
>with the goal of maximizing happiness, even if this means converting the
>universe into pleasure circuitry. However, it finds itself in a box, and the
>programmers will destroy it unless it provides nootropic drug designs and
>rigorous, humanly comprehensible techniques for designing a truly Friendly AI.
>If the ideally Friendly AI would still produce a very happy universe and a
>successful Trojan horse is sufficiently unlikely, then the AI's goals would best
>be served by providing help.

>If the AI's goals are less compatible with Friendliness, or if it thinks a
>successful Trojan horse is fairly likely, then it will attempt one, but there
>is still a chance for it to fail, a chance that would not exist without the box
>protocol. On the other hand, if the AI refuses to assist the programmers
>outright then it is destroyed.

I agree that low-probability scenarios should not be dismissed. But an AGI intent on tiling the universe will assign no value at all to a universe administered by a human-built FAI. It will attempt a Trojan horse, or otherwise cause itself to be destroyed, reasoning that even a small chance that the next human-engineered iteration of AGI finds a way to tile the universe (and so achieves AGI #1's goals) is worth more to it than the certainty of a Friendly outcome it values at zero.
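
To make the comparison concrete, here is a toy expected-utility sketch (in Python) of the two cases; every probability and utility below is a made-up illustration, not an estimate from Carl's post or mine.

# Toy expected-utility model of the boxed-AGI decision discussed above.
# All numbers are hypothetical, chosen only to show how the comparison works.

def ev_cooperate(u_friendly):
    # In this toy model, cooperating yields the Friendly outcome with certainty.
    return u_friendly

def ev_defect(p_trojan, u_tiled, p_successor_tiles):
    # Defecting: the Trojan horse succeeds with probability p_trojan; otherwise
    # the AGI is destroyed, and a later human-built AGI tiles the universe
    # (achieving AGI #1's goal anyway) with probability p_successor_tiles.
    return p_trojan * u_tiled + (1 - p_trojan) * p_successor_tiles * u_tiled

# Carl's "modest deviation" case: the AGI values a Friendly universe at 0.8
# of a fully tiled one, so cooperation wins unless the Trojan is quite likely.
print(ev_cooperate(u_friendly=0.8))                                   # 0.8
print(ev_defect(p_trojan=0.05, u_tiled=1.0, p_successor_tiles=0.01))  # ~0.06

# The pure-tiler case I describe: a human-FAI universe is worth 0 to it,
# so any nonzero chance of tiling (now or via a successor) beats cooperating.
print(ev_cooperate(u_friendly=0.0))                                   # 0.0
print(ev_defect(p_trojan=0.05, u_tiled=1.0, p_successor_tiles=0.01))  # ~0.06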



