Re: Which first: Molecular nanotechnology or artificial intelligence?

From: Slawomir Paliwoda (velvethum@hotmail.com)
Date: Sat Feb 19 2005 - 21:43:00 MST


> Yeah, I know this question sucks.

No, it doesn't. It's fun to speculate.

>But I'm curious. Which do you think has the higher probability of coming
>into being first - real molecular nanotechnology or real artificial
>intelligence?

Robert Freitas and Ralph Merkle have proposed a $5 million 5-year project
whose "goal is to describe a complete set of molecular tools/reactions,
validated using appropriate computational chemistry software," which "would
provide a powerful basis for NNI and other funders to consider molecular
factories as worthy of mainstream funding." From what I understand, this
type of project is required to precede any future project centered around
creating real nanomachines.

Even if a project of the type proposed by Freitas and Merkle starts in 5
years, they estimate it will take 5 more years to complete. That's ten
years from now before real funding makes advanced MNT research possible,
and that research might take another 5-25 years. In total, real nanotech
could happen 15-35 years from now. BTW, I'm pulling the first 5 years and
the last 5-25 years almost exclusively out of thin air.
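For what it's worth, the arithmetic behind that 15-35 year range (with my
thin-air guesses labeled as such) is just:

```python
# Back-of-the-envelope MNT timeline from the figures above.
# Only the 5-year project duration comes from the Freitas/Merkle
# proposal; the start delay and the research phase are my guesses.
start_delay = 5            # years until such a project begins (guess)
project_duration = 5       # their estimated project length
research_phase = (5, 25)   # advanced MNT research after funding (guess)

earliest = start_delay + project_duration + research_phase[0]
latest = start_delay + project_duration + research_phase[1]
print(f"real MNT in {earliest}-{latest} years")
```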

And there's Chris Phoenix from CRN who says that, given sufficient funding,
real MNT could happen in 10 years.

Weighing all this in, plus other things I didn't mention, my optimistic
guess at this time is that we should see real nanotech around 2027.

I used to think that MNT was AI's enabling technology. The argument was that
*only* nanomachines could provide researchers with a sufficiently detailed
structure of the human brain, which was *absolutely necessary* for computer
scientists to later emulate intelligent behavior inside a computer. I didn't
have much faith in cognitive science then.

What enables AI is knowledge about how intelligence works. Derivation of
that knowledge doesn't necessarily depend on a single technology. In theory,
then, it is possible to derive knowledge about intelligence without
resorting to MNT, which means that AI could happen well before MNT. It is
possible that a combination of direct and indirect techniques from cognitive
science could provide sufficient knowledge to learn how intelligence works.
Incidentally, Eliezer goes much further along this path by claiming that
there's already more than enough existing evidence in the field to infer the
correct structure of intelligence. I don't know how much confidence he
assigns to that statement, but when I asked him earlier this week if he
really thought intelligence was just an "engineering" problem for him, he
chose not to answer, which suggests to me that he may not be as confident
about his knowledge of how intelligence works as I thought he was. If you
contribute, or plan to contribute, any money to his project, better hope he
was merely too bored to answer. :)

Also, throwing massive amounts of computing power at the problem might
produce artificial intelligence. Since blind evolution created intelligence,
in theory some evolutionary algorithm, powered by massive amounts of
computing power, could replicate a similar feat inside a computer
(assuming, as I do, that a mind is a type of computer).
Considering that such an evolutionary algorithm will almost certainly be
simpler to write than any intelligence-producing algorithm inspired by
cognitive science, it is possible that an evolved-AI approach might
succeed well before other attempts at building real AI. One might argue that
the evolutionary approach is currently scheduled to succeed sooner or later,
given the inexorable growth in computing power. In fact, I would like to
see this evolutionary approach to building AI officially declared the most
dangerous existential threat to humankind today, just after the threats
posed by biological and nuclear weapons.
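To make the evolutionary idea concrete, here is a toy genetic algorithm.
It only illustrates the select/crossover/mutate loop on a trivial fitness
function (counting 1-bits in a bitstring); every name and parameter here is
my own invention for illustration, and evolving anything mind-like would of
course require an unimaginably richer genome and fitness measure:

```python
import random

def evolve(genome_len=32, pop_size=50, generations=200,
           mutation_rate=0.02, seed=0):
    """Toy genetic algorithm: evolve bitstrings toward all-ones (OneMax).
    Shows only the basic loop structure of an evolutionary search."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # number of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == genome_len:
            break                              # perfect genome found
        parents = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mutation_rate
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children               # elitism: parents survive
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # fitness of the best individual found
```

The point of the sketch is how little domain knowledge the loop needs:
nothing in it "understands" the problem, which is exactly why this approach
is simpler to write, and arguably more dangerous, than a theory-driven one.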

Even though it is impossible to predict when AI and MNT will happen, we have
the ability to at least track which technologies *could* happen first or, in
other words, to estimate potential of a particular technology to happen
first in relation to other technologies. In my estimation, the potential of
AI to happen before MNT is greater than MNT's potential to happen before AI,
and I would like to illustrate this with an example:

In order for MNT to happen, researchers need tools to build more
complicated nanomachines. By the time that happens, science will have
gained the ability to investigate molecular structures. At that point, even
though MNT would still have much more growing up to do before it could have
any impact on our world, cognitive science should immediately benefit from
techniques allowing the study of matter at the molecular level. Cognitive
scientists could apply these techniques to copy, emulate, and study
structures and mechanisms found in the human brain in software, with the
purpose of filling in the gaps in the maturing science of intelligence. AGI
researchers could then use even partial, but growing, knowledge about
intelligence to build real AI well before cognitive scientists learn
*everything* about how human minds work.

This is the most conservative scenario for when AI should happen, i.e.,
when AI should happen at the latest. It completely ignores advances in
evolutionary approaches to AI, imaging technologies, cognitive science, and
further progress in AGI research, all of which could accelerate the advent
of real AI. Considering all this, technically, AI should happen before MNT.
However, a lack of funding for AI and for research into safe AI frameworks
might delay real AI well past the MNT revolution.

Slawomir



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT