Re: SIAI's flawed friendliness analysis

From: Michael Roy Ames
Date: Fri May 09 2003 - 18:25:57 MDT

Ben wrote:

> However, Bill is correct that Eliezer's plans do not give much detail
> on the crucial early stages of AI-moral-instruction. Without more
> explicit detail in this regard, one is left relying on the FAI
> programmer/teacher's judgment, and Bill's point is that he doesn't
> have that much faith in anyone's personal judgment, so he would
> rather see a much more explicit moral-education programme spelled out.

As would I, of course. I am hopeful that the FAI specification will be
filled out in more detail; the greater the detail, the greater my
comfort level.

I should point out, though, that a large part of the FAI specification
is to enable a given AI to reach a good moral result despite the
programmer/teacher's judgement - good or bad. It is okay to 'not know
all the answers' if your design can find out the right answers itself
when it comes to
Michael Roy Ames

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT