From: Ben Goertzel (ben@goertzel.org)
Date: Tue Jun 25 2002 - 11:23:42 MDT
> > I didn't work on the moonshot or the Genome Project, but I think AI will
> > take substantially less money and substantially more intelligence
> > to solve.
Actually I'm not sure what you mean when you say "AGI will require
substantially more intelligence than any prior breakthrough in science."
These past scientific breakthroughs involved the collaboration of many
genius-level humans over long periods of time. We don't have access to
smarter people than were involved in these breakthroughs. As smart as you
or I might be, we're obviously in the same ballpark as loads of other
scientists in the past....
Are you referring to
A) the intelligence that will be required by the team that actually builds
the AGI directly?
or
B) the *total* collective intelligence of all people whose work will
contribute directly or indirectly to AGI?
If A, then your statement implies that AGI engineering is almost surely
unsolvable by humans... I don't think any human AGI team can deploy
"significantly more intelligence" than the teams behind the great
scientific/engineering achievements of the past.
If B, then you could be right, and AGI could still be achievable. There are
more & more scientists on Earth, and better and better technologies for
information dissemination. Modern scientific discoveries generally deploy
far more *collective* intelligence than scientific discoveries from decades
ago did.
In terms of the amount of $$ spent on AGI, if AGI can be achieved on a low
budget, that will largely be because of the huge amount of $$ already put
into related commercial technologies like computer hardware.... One of the
nice things about AGI work is that it can be done on commodity equipment
(computers), without requiring rare and costly specialized equipment. This
makes it easier to deploy broadly based collective intelligence toward
solving the problem.
In any event, as I said before, I think we humans tend to overestimate the
difficulty of the AGI problem due to an emotional attachment to our own
intelligence. It is a very, very hard problem, but I doubt it is so much
harder than other very, very hard problems humans have cracked in the past.
But only time will tell ;)
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT