RE: [agi] Re: SIAI's flawed friendliness analysis

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 17 2003 - 07:15:12 MDT


Yes, I understood the intent of your response.

I agree that solving the enforcement problem given a hostile programmer is
difficult.

However, the point I was trying to make -- too elliptically -- was that
solving the enforcement problem given a programmer who says or thinks he's
"Friendliness-friendly" is also difficult. Because how do you know the
programmer doesn't have some subtle twist in his psychology that is going to
come out in his testing/teaching/coding and inadvertently cause the end of
life as we know it? We don't have the knowledge to create the right
psychological test to prevent this, any more than we have the knowledge to
create a guaranteed-Friendly self-modifying AI whose end-state is guaranteed
not to depend sensitively on the psychology of its teachers/trainers.

However, I agree that if the teacher/programmer of the seed AI is actively
hostile to the notion of Friendliness, the odds of getting an FAI are
significantly lower. And, even though you raised it mainly as a rhetorical
point, I think that creating advanced lie detector tests to filter out truly
hostile and blatantly deceitful teacher/programmers is a pretty interesting
and potentially good idea. Unfortunately though, as you know, this doesn't
solve the real problem of FAI ... which you have not solved, and Hibbard has
not solved, and I have not solved, and even George W. Bush has not solved
... yet ...

-- Ben G

> Ben Goertzel wrote:
> >
> >> I will be ecstatic if the problem of Friendliness is solvable by
> >> someone who genuinely wants to solve it. The problem given a hostile
> >> programmer is... how can I put this... INSANE. If you want
> >> technological advancement on the problem of trust, build a better lie
> >> detector using advances in neuroimaging, signal processing, and
> >> pattern recognition, and run the programmers through it. I hereby
> >> volunteer.
> >
> > But Eliezer, what questions are you going to ask the programmer under
> > the lie detector? To know what questions to ask you need to have your
> > values precisely formulated. Your suggestion is a good one, but it
> > simply pushes the issue of value-articulation into the process of
> > programmer-quizzing....
>
> That wasn't an attempted answer to the issue of value-articulation; it
> was an attempted answer to Bill Hibbard's problem of "Friendliness
> enforcement" on hostile programmers. I don't know how to solve the
> enforcement problem given a hostile programmer. Presently, it looks
> impossible. That's the only answer I can give.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


