[sl4] Long-term goals (was Re: ... Why It Wants Out)

From: Tim Freeman (tim@fungible.com)
Date: Sat Jun 28 2008 - 04:14:07 MDT


Tim writes:
> By the way, if the AI has any long-term goals, then it will want to
> preserve its own integrity in order to preserve those goals. Although
> "preserve its own integrity" is a good enough example for the issue at
> hand, it's not something you'd really need to put in there explicitly.

From: "Lee Corbin" <lcorbin@rawbw.com>
>...my calculator seems to display a tremendous
>urge to finish any computation I key into it, but doesn't
>seem to be the least bit reluctant about being turned
>off or even thrown away. Why do most people here
>appear never to entertain the idea that an AI might be
>rather similar?

Most existing AI systems are like that.

I think the subtext here is that people are interested in the Friendly
AI problem.

If the AI you're talking about is willing and competent to correctly
answer general questions, then we do have something that pertains to
Friendly AI, even though the device has no long-term plan, nor any
ability or desire to do anything more than answer questions.
Eventually someone asks it "How can I get a lot of money?" or some
other question that calls for a long-term plan or complex wish. The
answer might have the form "Buy computer X from vendor Y, connect it
to the internet, run this 100MB program on it, and type in the
username and password for your brokerage account". Then there would
be Friendliness issues with the program running on the new computer X.
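
For what it's worth, here's a toy sketch of the long-term-goals point
(Python; every number and name is invented for illustration, not a
model of any real system): an agent scored over a long horizon
assigns instrumental value to staying switched on, while a
calculator-style system scored only on the current query is
indifferent to what happens afterward.

HORIZON = 10       # future steps over which the long-term goal pays off
STEP_REWARD = 1.0  # goal progress per step while running
P_SHUTDOWN = 0.5   # chance of being switched off if shutdown isn't resisted

def long_term_utility(resist_shutdown):
    # A long-horizon goal scores every future step, so expected
    # progress depends on the probability of still being running.
    p_running = 1.0 if resist_shutdown else 1.0 - P_SHUTDOWN
    return p_running * HORIZON * STEP_REWARD

def calculator_utility(resist_shutdown):
    # A per-query goal scores only the current answer; later steps
    # are worth nothing either way, so shutdown is a matter of
    # indifference.
    return STEP_REWARD

print(long_term_utility(True), long_term_utility(False))    # 10.0 5.0
print(calculator_utility(True), calculator_utility(False))  # 1.0 1.0

Resisting shutdown dominates for the first agent purely as a means to
its goal, which is why "preserve its own integrity" needn't be put in
explicitly. The second agent is Lee's calculator.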

-- 
Tim Freeman               http://www.fungible.com           tim@fungible.com

