RE: SIAI's flawed friendliness analysis

From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Fri May 23 2003 - 10:18:00 MDT


On Wed, 21 May 2003, Rafal Smigrodzki wrote:

> . . .
> I have a feeling, though, that you are less interested in
> using the government's carrot than in relying on its stick.

You shouldn't speculate about my motives.

> You mention making it more difficult for those who want to
> develop unsafe AI. Some methods come first to the mind of
> government-oriented people - burly men with guns, codes,
> statutes, secrecy, and prisons. They seem easy, but their
> long-term effects are complex and frequently
> counterproductive, which is why I want to use them only if
> I have absolutely no inkling of any better ideas.

Most people are well intentioned. For them, the carrot
is fine. But there are people with a strong commitment
to carrying out their bad intentions. For them, the burly
men (and women) with guns are necessary. I don't envy
the police their jobs, but I would not want to live in a
society without them.

> What exact means do you want to use for the purpose of "making things
> difficult", without interfering with the efforts I described above?

Computing power matters to intelligence, so to develop
an unsafe AI that is competitive with the smartest safe
AIs, the developers will need to obtain significant
computing hardware. This will make them easier to detect.
They will need a group of highly talented programmers,
and might be detected through them (outlaw communities
are riddled with informers). To educate their AI, they
may need significant access to the network and the world
in general, and may be detected through that access. They
may have other resource needs that expose them. The need
to keep their resource use hidden will weaken their
efforts.

On the other hand, developers of safe AIs will be able
to obtain resources openly. If regulations governing
intelligent machines are aggressively enforced, then
most wealthy institutions (corporations, universities,
and governments) will cooperate (key words: aggressive
enforcement), and their large resources will go into
safe AIs.

Of course, these forces are not foolproof, but they
will generally favor safe AI development.

> An approach that slows FAI more than UnFAI should *not* be
> tried, no matter how easy it appears.
>
> --------------------------------------
>
> >
> > It really comes down to whom you trust.
>
> ### Yes, it comes down to whether you trust the stick, or the carrot. I
> prefer the latter, with only very limited uses for the former, and not when
> applied to FAI.

I've learned a lot about the effectiveness of the carrot
as the author of several widely used open source
visualization systems. The carrot is very effective with
well-intentioned people, which is most people. It is less
effective with badly intentioned people. We'll need the
stick for the few people who see the singularity as their
tool for world domination. History offers many examples of
people with bad intentions who gained control of large
resources. Even mildly selfish intentions may cause some
powerful people to fatally compromise the safety of their
AI projects.

Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@demedici.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html


