From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Tue Oct 04 2005 - 06:06:53 MDT
On Mon, 3 Oct 2005, David Massoglia wrote:
> I am a huge fan of Ray Kurzweil and believe him to be extraordinarily
> gifted. I would be curious to hear from the many intelligent people on
> this board if they have any "significant" differences from Kurzweil's
> opinions and projections on the Singularity or other significant matters
> and why. Again, I am only interested in "significant" differences and
> will leave that to your judgment.
Here's a review I posted on Amazon:
4 out of a possible 5 stars
title: A Good Book, but Fails to Adequately Address the Dangers of AI
In "The Singularity is Near" Ray Kurzweil continues his
role as the primary advocate and educator for the coming
technological revolution in intelligence. As he describes,
this and other related new technologies promise enormous
benefits to humans, such as indefinite life span and
greatly increased intelligence. However, artificial
intelligence (AI) also poses serious threats, and Kurzweil
does not adequately address those threats and the possible
ways to defend against them. This is in contrast to his
detailed descriptions of threats from genetic engineering
and nanotechnology, and possible defenses against them.
On page 374 Kurzweil says one of his fundamental principles
is "respect for human consciousness". But if AI develops
without any regulation it will simply extend human
competition (military, economic, etc.), and will continue
to amplify the gap between winners and losers, as the
technological revolution is already doing. If human society
is to evolve to a state in which the intelligence gaps
between humans are greater than the gaps between current
humans and their pets, that decision should be made
consciously by an informed public. As the primary educator
on intelligence technology, Kurzweil has a responsibility
to explain this.
On page 470 he quotes Leon Fuerth, former National Security
Advisor to Vice President Gore, as saying that Americans
will not 'simply sit still' for the AI revolution. It is
encouraging that people in such powerful positions are aware
of the issues.
In his section on "... and Dangers", pages 397-400, Kurzweil
discusses dangerous scenarios for genetic engineering and
nanobots but not for AI, which is puzzling.
He does later address the issue briefly for AI, on page 420,
where he says "As such, it [AI] will reflect our values
because it will be us." But which of us? The wealthy and
powerful initially, and the rest of us later if they
allow it. You could make the same argument against
regulating nuclear bombs: they are built and controlled
by "us", and hence control over them reflects
our values. But nuclear bombs are subject to collectively
agreed values, at least in democratic countries, and with
some effort to extend that internationally. AI will be more
dangerous than nuclear bombs and should also be subject to
collectively agreed regulation.
On page 424, speaking of efforts "to deal with the danger
from pathological R (strong AI)", Kurzweil says "But there is no
purely technical strategy that is workable in this area,
because greater intelligence will always find a way to
circumvent measures that are the product of a lesser
intelligence." This is not true if our strategy is to
design greater intelligence to not want to circumvent
protective measures. I discuss this at length at:
http://www.ssec.wisc.edu/~billh/g/mi.html
There will be those who design machines with values that
don't comply with regulation, but this threat is best met
by putting resources into the development of compliant
AIs that can help detect and eliminate non-compliant AIs.
This is very similar to Kurzweil's own prescription for
accelerating development of defensive technologies for
genetic engineering and nanotechnology. He makes clear
that such defenses are very difficult but that the
problem must be solved to avoid a catastrophe. The same
logic applies to defenses against pathological AI: very
difficult but necessary.
In his "Response to Critics" chapter, Kurzweil addresses
the issue of government regulation, but only whether it
will slow down or stop technological progress. There he
does not address the question of whether AI should be
regulated.
Kurzweil ends (except for the notes and other back matter)
on a very good note in "Human Centrality". He rebuts
Stephen Jay Gould's claim that all scientific revolutions
reduce the stature of humans in the universe by asserting
that human brains and the successors they create are the
main drivers of the universe.