From: Ben Goertzel (ben@goertzel.org)
Date: Fri Mar 05 2004 - 19:49:53 MST
Hi,
It's certainly not true that experts are always overoptimistic about
progress.
Rather, what history shows is that progress is damn hard to predict,
even for experts. Some things go much faster than projected, other
things go much slower.
Marc, you seem to make the assumption that we need to understand 100%
about mind/brain in order to create an AGI. This seems like a poor
assumption. We have made many useful devices utilizing quantum
phenomena, yet we certainly do not have a good understanding of these
phenomena yet. Creating an AGI requires a particular sort of partial
understanding.
My own feeling is that I agree with you: visions of a Singularity in 5
years are basically fantasy. 5 years is close enough that it's
reasonable to make projections based on current technology. We could
potentially have a roughly human-level AGI in 5 years, if research goes
very well -- but even if we get there that fast, solving the issues of
Friendliness and reasonably-predictable self-modification is going to
take some time.
But regarding the state of the world 20 years out or even 10 years out,
I think it's much harder to say anything definite.
Kurzweil's projections of a Singularity in 2030-2060 seem reasonable and
conservative to me. A breakthrough could make things happen sooner ...
though almost certainly not within 5 years ... and a turn for the worse
in the world political/economic situation could make things happen
later...
I'll make a more controversial statement, though: I think that if the
world WANTED a Singularity in, say, 5-8 years, it could quite possibly
get it. A Manhattan-project-style attack on the AGI problem could
potentially yield dramatic results in a 2-3 year period.... However,
the powers that be do not seem to have a belief-system oriented in this
direction at all, so this statement of mine is basically irrelevant in a
practical sense.
-- Ben G
Marc wrote:
> >Some thoughts on the future: If you're committed to
> >the task of creating FAI, you need to realize that
> >you're in it for the long haul. Utopian visions of a
> >Singularity in only 5-20 years are almost certainly
> >fantasy (although I'd love to be proved wrong).
> >People at the forefront of research are almost always
> >far too overoptimistic about how long it will take
> >to achieve major breakthroughs. Fundamental knowledge
> >about general intelligence is still missing. It is
> >estimated that only about 2% of everything there is to
> >know about cognitive science is known. Even with
> >exponential progress, 100% knowledge wouldn't be
> >reached for another 40-50 years.
Keith Henson wrote:
> I am reminded of Xanadu hypertext, a project I was around for more than a
> decade and involved with at the programming details level at the very end.
>
> In spite of involving some really smart guys (including Eric Drexler) for a
> number of years, they never did solve all the problems of building a
> perfect hypertext engine. It is possible that the full set of problems as
> they stated them were impossible instead of just being very, very hard.
>
> But, as it turned out, it didn't matter. The Internet and the World Wide
> Web came along. With URL links and search engines they did most of what
> Xanadu was trying to do *without* solving the hard problems.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT