From: Ben Goertzel (ben@goertzel.org)
Date: Fri Jan 18 2002 - 19:38:54 MST
> I find this extremely doubtful. I have several
> friends who have gotten their hard-earned Ph.D's in AI
> and every single one of them says that creating an AI
> through experimental software methods alone, not
> aided by a thorough knowledge of brain science, is
> doomed to failure.
Well, I got my hard-earned PhD in math, but I've been an
AI researcher for many years, and I teach AI at the university
(see the syllabus for my current course at
www.goertzel.org/unm/ai/). I even supervise PhD students
getting their hard-earned PhDs in AI.
So while my opinion differs from that of many other
experts, this difference does not automatically render me
a non-expert...
It is true that the current mainstream approaches to AI are not really
aimed at artificial general intelligence (AGI). For a summary of
various contemporary approaches to AGI, go to
www.goertzel.org/realaibook/ and follow the link to the Prospectus.
If genuine AGI is achieved within the next decade, it will not
be the first time in the history of science and technology that
a radical breakthrough has proved the majority of experts
shortsighted and narrow-minded.
What would most experts have said about quantum theory in 1895, or
about airplanes 10 years before the Wright Brothers? Beware of
forming strong opinions based on the ideas of "experts."
Brain science of course can be highly inspirational for AI. But
since our current computer hardware is so un-brain-like, I really
doubt that closely emulating the brain is going to be a workable
approach to implementing AI for a good long while.
> In addition they also all agree
> that unlimited computer speed alone will do nothing to
> advance the state of AI research.
Well, actually, *unlimited* computer speed would allow true
AI with a very simple program (this result is known as
Solomonoff Induction). But mere orders-of-magnitude
improvements won't do the trick. What they will do, however,
is make it easier to implement and experiment with powerful
approaches to AI.
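To make the Solomonoff idea concrete, here is a toy Python sketch
(mine, not part of the original argument). True Solomonoff induction
enumerates every program for a universal Turing machine, which needs
unbounded compute; this sketch shrinks the hypothesis space to
repeating bit patterns so that it actually runs, but keeps the
essential scheme: weight every program consistent with the data by
2^-length, and read a prediction off the weighted hypotheses.

    from itertools import product

    def run(program, n):
        """Interpret a 'program' (a finite bit pattern) as:
        repeat the pattern forever and emit its first n bits."""
        return [program[i % len(program)] for i in range(n)]

    def predict_next(observed, max_len=8):
        """Weight every pattern of up to max_len bits by 2**-length,
        keep those that reproduce the observed prefix, and return
        the posterior probability that the next bit is 1."""
        weight_1 = weight_total = 0.0
        for length in range(1, max_len + 1):
            for bits in product((0, 1), repeat=length):
                program = list(bits)
                if run(program, len(observed)) != observed:
                    continue                  # inconsistent with the data
                w = 2.0 ** -length            # shorter program, higher prior
                weight_total += w
                if run(program, len(observed) + 1)[-1] == 1:
                    weight_1 += w
        return weight_1 / weight_total

    # ~0.04: short "01" patterns dominate, so the next bit is almost surely 0
    print(predict_next([0, 1, 0, 1, 0, 1]))

With a genuinely universal program space the same loop never
terminates, which is exactly why unlimited speed carries the whole
argument.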
> One of my friends
> recently told me that the biggest flaw with several AI
> pathways is their overestimation of their scalability.
> He compares them to a person who, climbing a tree and
> seeing that they are moving ever upward, concludes
> that by continuing to climb they will eventually reach
> the moon. Truer words have never been spoken.
Yes, the majority of AI approaches have severe scalability
problems. This fact does not imply that severe scalability
problems necessarily plague all non-brain-based approaches to
AI. AI approaches tend to get optimized for small problems,
because this is how researchers can get results quickly and
easily, and hence publish papers or make saleable products.
Optimizing an AI approach for large problems is a much harder
and more time-consuming project, with fewer interim rewards.
For instance, neural net algorithms come in more and less
scalable varieties. Hebbian learning scales fairly well but
works poorly with small networks; backpropagation is terribly
unscalable (especially in its recurrent variety) but works
well with small networks. Which one gets all the attention?
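For readers who haven't seen the two update rules side by side,
here is a minimal numpy sketch (illustrative names and layer sizes,
not from the original post). The key contrast is locality: the
Hebbian rule adjusts each weight using only the activity of the two
neurons it connects, while backpropagation must carry an error
signal backward through every layer, which is one root of its
scaling trouble.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10)              # pre-synaptic activity (input)
    W = 0.1 * rng.standard_normal((5, 10))   # input -> hidden weights
    lr = 0.01

    # Hebbian update: purely local. Each weight w_ij changes using only
    # the activity of the two neurons it connects (y_i and x_j), so the
    # rule parallelizes and scales naturally, with no global error signal.
    y = np.tanh(W @ x)
    W += lr * np.outer(y, x)

    # Backpropagation update (one hidden layer): needs a full forward
    # pass and then an error signal carried backward through every layer.
    V = 0.1 * rng.standard_normal((1, 5))    # hidden -> output weights
    target = np.array([1.0])
    h = np.tanh(W @ x)                       # forward pass
    out = V @ h
    err = out - target                       # global error at the output
    grad_V = np.outer(err, h)
    grad_W = np.outer((V.T @ err) * (1.0 - h ** 2), x)  # backward pass
    V -= lr * grad_V
    W -= lr * grad_W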
I think some AI researchers are so bullish on brain science partly
because they know the difficulties of AI so well, and are less
aware of the difficulties of brain science. The grass is
always greener... In my view one will need a pretty sophisticated
AI just to make sense of all the data that advanced brain scanners
generate. Making sense of EEG, MEG, PET and fMRI data is in many
cases already too hard for feeble human brains... though some
interesting results have of course been derived.
-- Ben G