From: Samantha Atkins (firstname.lastname@example.org)
Date: Tue Jun 03 2003 - 02:36:46 MDT
On Thursday 29 May 2003 05:10 pm, Philip Sutton wrote:
> Dear Eliezer,
> > Nature is not obligated to make her problems easy enough for
> > intelligent, informed non-mathematicians to understand them. But
> > of course people get quite indignant when told that a problem may
> > be too difficult for them.
> The world is not going to be inhabited by large percentages of
> super intelligent mathematicians and is not going to be run by the
> same anytime this side of the singularity.
So one possible conclusion is that arriving at a singularity as
safely (hopefully) as possible will not be achieved by "the world"
doing business as usual with its majority of non-super-intelligent
brains. If the goal can be achieved by only a very few, then it
seems obvious that they will be the ones who achieve it, if it is
done at all, and that the job of the world is to not kill them or
their progress, and hopefully to help them.
The task will not become easier just because it would be more
palatable to the majority if it were easier to understand and
follow. So obviously, the trick is to make the task doable at all
without depending on the help of the normal world's political and
business mechanisms more than is really (vs. by fiat) necessary.
> So if you want to ensure that people generally don't do stupid
> things re the development of AGIs - including applying
> inappropriate regulation - then you or someone is going to have to
> explain things to the public and the regulators in a non-arrogant
> way so that they grapple with the issue intelligently and
Or the regulators and those who support them will need simply to
stand down in this area. Of course these parties never admit the
level of their incompetence, so I am not terribly hopeful.
Why assume that those capable of building a FAI should give a rat's
ass whether the politicians understand what they are doing or not?
Why should they have to? I am overstating this point as
counterbalance to the bland assumption that everyone must be
involved, informed, and in control.
> There are many ways to do this even if the issue involved requires
> at least someone to possess some arcane knowledge or understanding.
> People regularly rely on experts to advise them on things that are
> beyond their generalist knowledge or understanding. And the
> successful advisors are the ones that go to the greatest lengths to
> help the advised to understand the issue maximally and then the
> advisors establish a state of trust so that the very particular
> bits of the argument that the advised cannot understand for
> themselves are accepted on the basis of that trust. Then the
> advised can go on and make lots of good decisions.
Nope. The process is actually that the adviser receives a level of
trust/respect such that they can get on with the work, funded and
[relatively] unmolested. In real life the actual work is complex
enough that the advised certainly will not know the day-to-day
detail, or even much of the detail of the major challenges. The
advisers need the advised only to the extent of getting/keeping
funding/resources and not being overly interfered with. Only those
actually capable are involved or courted beyond that.
> So the existence of a problem that has elements that can only be
> understood by ultra-experts is not a basis for abandoning democracy
> or real dialogue. This is because such problems come up all the
> time in almost every aspect of modern complex human society.
Most of the dialogue in democracies today about far simpler
technologies is not remotely "real". Or hadn't you noticed?
Processes that can't even deal with stem cell research will surely
be utterly inadequate for AGI, much less FAI. Ironically, their
very inadequacies are one of the motivations for building the
AGI/FAI that some are attempting to persuade us these inadequate
processes should oversee.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT