Re: SIAI's flawed friendliness analysis

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed May 21 2003 - 12:15:29 MDT


Rafal Smigrodzki wrote:
>
> ### Here we definitely agree - a huge government-funded AI program would be
> a great idea. Since the FAI may be interpreted as the ultimate public good,
> a boon for all, yet profit for nobody, a good case can be made for public
> funding of this endeavor, just like the basic science that gave us modern
> medicine. This program, if open to all competent researchers, with results
> directly available to all humans, could vastly accelerate the building of
> the FAI.

How can you get FAI by throwing money at the problem? *Unfortunately* I
can see how throwing sufficiently vast amounts of money at AI might result
in advances in AI. But how do you get advances in Friendliness? It seems
to me that the optimal political scenario for ensuring FAI over UFAI calls
for all researchers to have *exactly* equal resources and funding, so that
the smartest researchers have an advantage. Why is this the optimal
political scenario? Because there is absolutely no way in heaven or earth
that the political process is capable of distinguishing competent
Friendliness researchers from incompetent ones. Any non-maxentropy
*political* resource distribution will probably be pragmatically worse
than an even distribution. Furthermore, you don't want absurd quantities
of resources, either, as otherwise you may push research into the
territory where brute-forcing AI becomes possible.

The more you look into the problem, the more you realize how hard it is to
find forces that genuinely *improve* our prospects, rather than making
things even worse.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
