From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Thu May 29 2003 - 18:10:06 MDT
Dear Eliezer,
> Nature is not obligated to make her problems easy enough for
> intelligent, informed non-mathematicians to understand them.  But of
> course people get quite indignant when told that a problem may be too
> difficult for them. 
The world is not going to be inhabited by any large percentage of 
superintelligent mathematicians, nor is it going to be run by them, 
any time this side of the Singularity.
So if you want to ensure that people generally don't do stupid things 
regarding the development of AGIs - including applying inappropriate 
regulation - then you, or someone, is going to have to explain things 
to the public and the regulators in a non-arrogant way, so that they 
can grapple with the issue intelligently and effectively.
There are many ways to do this, even when the issue requires at least 
some people to possess arcane knowledge or understanding.
People regularly rely on experts to advise them on things that are 
beyond their generalist knowledge or understanding.  And the 
successful advisors are the ones who go to the greatest lengths to 
help the advised understand the issue as fully as possible, and who 
then establish a state of trust, so that the particular parts of the 
argument that the advised cannot verify for themselves are accepted 
on the basis of that trust.  The advised can then go on and make lots 
of good decisions.
So the existence of a problem with elements that only ultra-experts 
can understand is no basis for abandoning democracy or real dialogue. 
Such problems come up all the time, in almost every aspect of modern, 
complex human society.
> Now, bearing that in mind, you might start at:
> http://intelligence.org/CFAI/design/structure/why.html 
Thanks for this reference.  I've just read it.
It seems to me that this document shows that, with a little thought 
and reflection and some advice from experts, even AGI programmers can 
reduce the chances that an AGI will go into idiot-savant mode and 
destroy the planet/universe or whatever.
It seems to me that a two-way constructive dialogue between you and 
Bill Hibbard on this subject could reach this point of mutual 
understanding fairly fast (relative to the present onrush towards the 
Singularity) - even a week or two of patient discussion wouldn't be 
too long in this context.
And if that is so, then people like Bill - who I suspect would have a 
better chance of talking constructively with the public and potential 
regulators - could in turn ensure that the public and the potential 
regulators are brought up to speed and able to make reasonably 
sensible decisions.
I don't think it's very hard to come up with a two-pronged strategy, 
something like:
-   self-regulation by AGI development teams, carried out in a mature
    and responsible way, is the first line of defence against an
    AGI-led disaster;
-   but bearing in mind that some AGI development teams might not
    self-regulate responsibly, there needs to be a second layer of
    regulation, this time imposed from outside for the collective good.
    And if responsible AGI teams do exist, then the highly expert
    members of those teams - having striven mightily themselves to
    come up with effective means of generating friendly AGI - will be
    well placed to act as expert advisors to the regulators on how to
    go about the regulation task in an intelligent and effective way.
Cheers, Philip