Re: Building a friendly AI from a "just do what I tell you" AI

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Nov 20 2007 - 02:40:04 MST


On 20/11/2007, Robin Lee Powell <rlpowell@digitalkingdom.org> wrote:

> Well, high up the list, anyways. Let us know when you have a nice
> mathematical proof that your AI will continue to "understand you"
> even in the face of self-improvement.

I was going to say that the AI won't deliberately rewrite its
supergoal, but I can see this is getting into basic questions that
have been thrashed out in detail many times on this list, so I'll
stop here and look through the archive and Eliezer's writings.

-- 
Stathis Papaioannou
