RE: Spontaneously emerging intelligence

From: sl4ghu@mattbamberger.com
Date: Tue Nov 01 2005 - 09:12:45 MST


At AC 2005, Peter Norvig (of Google) commented in passing that Google
doesn't currently have an AGI project. They do have a "20% time" policy
under which employees can spend a fifth of their time on whatever interests
them, and he mentioned that a few people are doing AGI with their 20%.

My guess is that in the quote below, the speaker meant AI as in "one of our
clever natural language processors", not as in a strong AGI.

        -mattb

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Greg Yardley
Sent: Tuesday, November 01, 2005 7:27 AM
To: sl4@sl4.org
Subject: Re: Spontaneously emerging intelligence

On Nov 1, 2005, at 9:44 AM, Eliezer S. Yudkowsky wrote:

> In the words of Pat Cadigan: what you are describing is akin to
> putting a dirty shirt and a pile of straw in a wooden box for the
> spontaneous generation of mice.

I'm not worried about intelligence spontaneously emerging from the
Google File System. I'm much more concerned about Google's capacity
to work on AI in secret, while undervaluing the idea of friendly AI
and rushing things in order to gain a crushing first-mover advantage
in the marketplace. They have been on a hiring binge of late, have an
extensive R&D budget, are notoriously secretive, and already use
special-purpose machine learning in some of their projects.

From the George Dyson article on Google
(http://www.edge.org/documents/archive/edge171.html#tc):

"We are not scanning all those books to be read by people," explained
one of my hosts after my talk. "We are scanning them to be read by an
AI."

- Greg



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:18 MST