Re: [agi] Re: SIAI's flawed friendliness analysis

From: Eliezer S. Yudkowsky
Date: Sat May 17 2003 - 07:46:32 MDT

Ben Goertzel wrote:
> However, I agree that if the teacher/programmer of the seed AI is actively
> hostile to the notion of Friendliness, the odds of getting an FAI are
> significantly lower. And, even though you raised it mainly as a rhetorical
> point, I think that creating advanced lie detector tests to filter out truly
> hostile and blatantly deceitful teacher/programmers, is a pretty interesting
> and potentially good idea.

No, I did not raise it as a rhetorical point. That is something I've
thought seriously about, and I think it's a pretty interesting and
potentially good idea.

Good intentions fall a few thousand lightyears short of being sufficient. But they are necessary.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT