From: Richard Loosemore (rpwl@lightlink.com)
Date: Sat Jun 24 2006 - 09:07:43 MDT
Michael Vassar wrote:
>
> Pretty much.
>
>
>> Okay, Michael, I'll take the bait: what is the message we should
>> derive from these parables?
>>
>> That someone is using intuitions to conclude that certain facets of
>> AGI [which ones, exactly?] are impossible, whereas someone else is
>> trying to insist that such intuitions [of impossibility] usually turn
>> out to be unreliable?
>>
>> I am not sure which aspect of the debate you mean this to apply to.
>>
>> Richard Loosemore
>
>
>
Well, in that case you (and Scott Aaronson) seem to be harboring an
irrational bias of precisely the sort the Rationalists so often
complain of: a selection bias.
*Most* claims of impossibility usually turn out to be unreliable?
What about the impossibility of finding tractable, analytic,
non-experimental solutions to nonlinear systems and (by extension)
self-organizing systems?
In his book "Sync," Steven Strogatz says:
"The mathematician Stanislaw Ulam once said that calling a problem
nonlinear was like going to the zoo and talking about all the
interesting non-elephant animals you see there. His point was that most
animals are not elephants, and most equations are not linear. [...] In
any situation where the whole is not equal to the sum of the parts,
where things are cooperating or competing, not just adding up their
separate contributions, you can be sure that nonlinearity is present.
Biology uses it everywhere. Our nervous system is built from nonlinear
components. [...] And human psychology is absolutely nonlinear. [...]
Every major unsolved problem in science, from consciousness to cancer to
the collective craziness of the economy, is nonlinear."
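(To make this concrete, here is a minimal Python sketch - a toy of my
own, with parameter values I have simply assumed - of the Kuramoto
model of coupled oscillators, exactly the kind of self-organizing
system Strogatz studies in "Sync". No general closed-form solution is
known, so the only way to watch the synchrony emerge is to simulate
it:

    import numpy as np

    # Kuramoto model: N oscillators, each with its own natural
    # frequency, each pulled toward the phases of all the others.
    # All parameter values below are assumptions for illustration.
    rng = np.random.default_rng(0)
    N = 100                      # number of oscillators
    K = 4.0                      # coupling strength
    dt = 0.01                    # Euler integration step
    omega = rng.normal(0.0, 1.0, N)         # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)  # initial phases

    for step in range(5000):
        # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + (omega + coupling) * dt

    # Order parameter r in [0, 1]: near 0 means incoherence, near 1
    # means the population has spontaneously synchronized.
    r = abs(np.exp(1j * theta).mean())
    print(f"order parameter r = {r:.3f}")

Run it and r climbs far above the near-zero value of an incoherent
population: cooperation you can observe in simulation, but cannot, in
general, derive analytically.)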
The space of systems (by which I mean equations, systems of equations,
and the larger algorithmic entities that are usually just called
"systems") is vast, and in one tiny corner of that space you will find
an embarrassingly small pile of analytically tractable systems - and
that little pile is virtually ALL of mathematics. Outside of that
corner there is nothing but darkness: we conjecture (we have
intuitions) that there will never be tractable analytic solutions to
*most* of the contents of that Outer Darkness. We have conjectured
this for decades, if not centuries, and so far the conjecture has
proved rock-solid, because progress in converting those intractable
problems into tractable ones has been negligible.
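To see the contrast in miniature (again a toy example of my own): the
linear equation dx/dt = -x has the exact solution x(t) = x0 * exp(-t),
while the driven damped pendulum

    theta'' + b*theta' + sin(theta) = F*cos(w*t)

- one sine term away from linearity - has no known closed-form
solution. For the linear case, numerical integration merely confirms
the formula; for the nonlinear case, it is essentially all we have
(the damping and drive values below are assumed for illustration):

    import numpy as np

    def rk4_step(f, t, y, dt):
        # One classical fourth-order Runge-Kutta step for y' = f(t, y).
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    dt = 0.01

    # Linear case: numerics just reproduce the known analytic answer.
    x, t = 1.0, 0.0
    for _ in range(100):
        x = rk4_step(lambda t, y: -y, t, x, dt)
        t += dt
    print(f"numeric x(1) = {x:.6f}, analytic exp(-1) = {np.exp(-1.0):.6f}")

    # Nonlinear case: driven damped pendulum. No formula to check
    # against - simulation is the only general tool available.
    b, F, w = 0.5, 1.2, 2.0 / 3.0   # assumed damping, drive, frequency

    def pendulum(t, s):
        theta, v = s
        return np.array([v, -b * v - np.sin(theta) + F * np.cos(w * t)])

    s, t = np.array([0.1, 0.0]), 0.0
    for _ in range(10000):
        s = rk4_step(pendulum, t, s, dt)
        t += dt
    print(f"pendulum angle at t = {t:.0f}: {s[0]:.3f} rad")

One sine term, and we are reduced to watching the system run.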
If you believe, and if Eliezer believes, that "most claims of
impossibility usually turn out to be unreliable," then you have to
include in your count this enormous space of unsolvable systems
(which, as Strogatz points out, includes self-organizing systems whose
elements compete and cooperate ... such as intelligent systems), and
by that count you are completely wrong: most of those systems are
still not analytically soluble.
You are so flagrantly wrong, in fact, that it is difficult to
understand how any rational person could make that statement, unless
they were the victim of a monstrous form of selection bias: choosing
the evidence that supports their prior belief, and ignoring the
massive body of evidence that disconfirms it.
I believe this self-delusion is precisely what is happening.
There is a large community of people who collectively agree not to
look at that embarrassing fact about the prevalence of nonlinearity,
and when a community pulls the wool over its own eyes like that, it
eventually convinces itself that black is white and white is black,
and that it is telling itself the truth. Collective delusion.
I wouldn't care so much, except that these people have a stranglehold on
Artificial Intelligence research. This is the real reason why AI
research has been dead in the water these last few decades.
Richard Loosemore.