From: Eugen Leitl (eugen@leitl.org)
Date: Mon Jul 19 2004 - 12:11:34 MDT
----- Forwarded message from Fred Hapgood <fhapgood@pobox.com> -----
From: "Fred Hapgood" <fhapgood@pobox.com>
Date: Mon, 19 Jul 2004 13:29:07 -0400
To: nsg@polymathy.org
Cc:
Subject: [nsg] Meeting Announcement
X-Mailer: MIME::Lite 1.4 (F2.72; T1.001; A1.62; B3.01; Q3.01)
Meeting notice: The 04.July.20 meeting will be held at 7:30 P.M. at
the Royal East (782 Main St., Cambridge), a block down from the corner
of Main St. and Mass Ave. If you're new and can't recognize us, ask
the manager. He'll probably know where we are. More details below.
Suggested topic of the week: Strong AI -- where is it?
Half of what we talk about -- at least when we get within shouting
distance of our raison d'être -- depends on the development of
something called 'strong AI'.
The term was invented by the philosopher John Searle to refer to
computers with minds or consciousnesses. The meaning we intend when
we use the term was (I think) articulated by Ray Solomonoff: the
ability to do science, math, and engineering as well as a reasonably
competent human. (At least I first heard this usage from Ray.)
Note that within this definition a strong AI doesn't have to perform
better than humans or even as well as the best human. For Solomonoff
the test is generality: a strong AI has to be able to find its
bearings and operate competently in a very wide range of fields, from
physics to engineering, from biotech to electrical engineering, from
protein folding to biosynthesis. Once trained in a field, it has to be
able
to handle the same range of problems over the same variety of contexts
as a professional engineer does during his or her work day, without
taking any more time to do so than that human would.
I think a reasonable but not definitive argument can be made that over
the past thirty years progress on this issue -- generality -- has been
zero to negative. (You get negative progress when people give up and
leave the field.) What progress has been achieved on the AI agenda has
come from evading the problem. Chess programs compensate for their
inability to recognize strong and weak positions as well as even
middling players can, by doing much better at calculating lookaheads.
Speech recognizers work by compiling huge databases of phoneme
patterns instead of recognizing semantics. Machine vision is making
some progress (with very simple applications, like license plate
recognition) because installations can be lit with LEDs, thus
eliminating a source of variation that humans are mostly unaware of
but which was killing the technology.
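The chess point above can be made concrete with a bare minimax search: the program has no positional judgment at all, only a crude static scorer, and compensates by looking several plies ahead. This is a minimal sketch, not any particular engine's code; the game interface (`moves`, `apply_move`, `evaluate`) is hypothetical and supplied by the caller.

```python
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Look `depth` plies ahead and return the best achievable score.

    `evaluate` is a crude static scorer (the stand-in for positional
    understanding); `moves` lists legal moves; `apply_move` returns the
    successor state. All three are hypothetical placeholders.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # fall back on the shallow scorer
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           evaluate, moves, apply_move) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       evaluate, moves, apply_move) for m in legal)
```

The "progress" in such programs has come almost entirely from making `depth` larger and the search faster, not from making `evaluate` understand positions the way a middling club player does.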
Agenda items whose variability cannot be evaded, like Go or natural
language recognition or, for that matter, lots of machine vision apps,
like counting the number of people in a crowd or extracting spatial
measurements from a photo, are going nowhere, or nowhere fast,
assuming the standard of reasonably competent human performance. Not
that that standard can't be lowered. Train a pigeon on a leaf and it
will recognize leaves of that kind in all kinds of orientations, light
levels, distances, life cycle stages, and even disrepair. Even
defined down like this, such accomplishments are still way out of
reach for our machines.
Any solution to the generality problem might cascade overnight into an
immensely powerful technology. I see no clear way of distinguishing
between the variability that must be mastered in decoding a scene
visually, handling objects, and solving low-level problems like
tolerances and structural integrity and manufacturability, and
managing the high-level variations involved with moving from
electrical to mechanical engineering or from engineering to physics. A
solution to the variation problem on one level might work on all of
them.
On the other hand, perhaps our failure to date is pointing to
something deep about the difference between digital and analog
systems, though Lord knows what.
In any event, if we can't build machines that can solve reasonably
hard engineering problems on their own, unattended, all manner of
complex engineering systems will be out of reach for a long time, from
the "Scientist's Assistant" (an automated first-year grad student) to
most of the high-end nanotech apps, with the assembler first on the
list.
<-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><->
In twenty years half the population of Europe will have visited the
moon.
-- Jules Verne, 1865
<-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><->
Announcement Archive: http://www.pobox.com/~fhapgood/nsgpage.html.
<-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><-><->
Legend:
"NSG" expands to Nanotechnology Study Group. The Group meets on the
first and third Tuesdays of each month at the above address, which
refers to a restaurant located in Cambridge, Massachusetts.
The NSG mailing list carries announcements of these meetings and little
else. If you wish to subscribe to this list (perhaps having received a
sample via a forward) send the string 'subscribe nsg' to
majordomo@polymathy.org. Unsubs follow the same model.
Comments, petitions, and suggestions re list management to:
nsg@pobox.com.
www.pobox.com/~fhapgood
_______________________________________________
Nsg mailing list
Nsg@polymathy.org
http://mail.polymathy.org/mailman/listinfo/nsg_polymathy.org
----- End forwarded message -----
--
Eugen* Leitl leitl
ICBM: 48.07078, 11.61144 http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:48 MDT