Re: SIAI's flawed friendliness analysis

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu May 29 2003 - 18:46:47 MDT


Philip Sutton wrote:
> Dear Eliezer,
>
>> Nature is not obligated to make her problems easy enough for
>> intelligent, informed non-mathematicians to understand them. But of
>> course people get quite indignant when told that a problem may be too
>> difficult for them.
>
> The world is not going to be inhabited by large percentages of
> super-intelligent mathematicians and is not going to be run by them
> anytime this side of the singularity.
>
> So if you want to ensure that people generally don't do stupid things
> re the development of AGIs - including applying inappropriate
> regulation - then you or someone is going to have to explain things to
> the public and the regulators in a non-arrogant way so that they
> grapple with the issue intelligently and effectively.

As near as I can tell, this is unlikely to the point of being impossible.
We can probably take Barkley Vowk as representative of the majority opinion.

Sure, I have ideas. Sure, I try. But if Earth were not my home planet
and I were looking it over and dispassionately estimating the odds, the
odds would be pretty damn slim.

> There are many ways to do this even if the issue involved requires at
> least someone to possess some arcane knowledge or understanding.
>
> People regularly rely on experts to advise them on things that are
> beyond their generalist knowledge or understanding. And the successful
> advisors are the ones that go to the greatest lengths to help the
> advised to understand the issue maximally and then the advisors
> establish a state of trust so that the very particular bits of the
> argument that the advised cannot understand for themselves are accepted
> on the basis of that trust. Then the advised can go on and make lots
> of good decisions.

No, I think the most successful "advisors" are the ones who willingly sell
the illusion of understanding.

> It seems to me that this document shows that with a little bit of
> thought and reflection and advice from experts even AGI programmers can
> reduce the chances that an AGI will go into idiot savant mode and
> destroy the planet/universe or whatever.

Showing several clear problem-solution pairs doesn't demonstrate that
"with a little bit of thought and reflection", AGI programmers can
substantially improve their chances. It shows that *those particular*
problems, the ones in *that particular* document, may be foreseeable and
perhaps preventable.

If I wanted to be a really successful advisor, I could write the
problem-solution pairs for simple cases. People would read it and say:
"Gee! I understand Friendly AI!" And then they would have an illusion of
understanding that would lead them to listen to me, the expert advisor who
led them to so easily understand this wonderful complex thing. This I
regard as unethical. But it does, I think, form the basis of most
successful "advice".

> It seems to me that a two-way constructive dialogue between you and Bill
> Hibbard on this subject could get to this point of mutual
> understanding fairly fast (relative to the present onrush to the
> singularity) - taking even a week or two of patient discussion wouldn't
> be too long in this context.

We could perhaps get to an understanding of why some given schema fails.
Explaining how to do it *right*, sufficiently well that Hibbard could
build his own AI without getting killed, is something that cannot be done
in a week or two of patient discussion.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence