Re: Reason, intuition, and AI (was: Metarationality)

From: James Rogers (jamesr@best.com)
Date: Sat Aug 24 2002 - 19:14:26 MDT


On 8/24/02 3:25 PM, "Cliff Stabbert" <cps46@earthlink.net> wrote:
> Saturday, August 24, 2002, 4:40:26 PM, James Rogers wrote:
> JR> My objection to this is primarily that most intuition
> JR> that is worth anything CAN be resolved through
> JR> introspection and thought. People can usually piece
> JR> together the reasons for their intuition if they think
> JR> about it hard enough.
>
> Nonsense. If this were even close to the truth, creating AI would be
> dead simple.

<shrug> You are projecting your own limitations onto me. I do this type of
thing all the time as a matter of course in algorithm design: decomposing
human reflex and intuition into atomic, generalized computational steps.
The hard part is that nobody thinks they are doing what they are actually
doing when their mind is at work, at least not at first glance. There is a
level of abstraction that seems to escape many people.

Quite frankly, this has little to do with the problem of creating AI. It is
relevant, but not particularly important. The problem of AI rests in the
architecture of the computing machinery (in an abstract theoretical sense),
not in the arbitrary algorithm that runs on it. Algorithms are expressible on
any old piece of computing machinery, but many problems may be intractable
depending on the specific machine used to run a given algorithm. Hence
figuring out the computing architecture is arguably more important than
reverse-engineering a specific algorithm (which is where a lot of AI research
effort is actually spent).
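
To make that distinction concrete, here is a deliberately toy sketch in
Python (my own illustration, borrowed from the data-structure world rather
than from any actual AI architecture): the same abstract algorithm, "check
whether each query is in the collection", run against two different pieces
of machinery. The steps are identical; the substrate determines whether the
work is cheap or expensive, and at scale whether it is tractable at all.

    import random
    import time

    N = 200_000
    QUERIES = 500

    as_list = list(range(N))   # "machinery" 1: a linear scan over memory
    as_set = set(as_list)      # "machinery" 2: a hash-based lookup

    targets = [random.randrange(N) for _ in range(QUERIES)]

    start = time.perf_counter()
    hits_list = sum(t in as_list for t in targets)   # O(N) per query
    t_list = time.perf_counter() - start

    start = time.perf_counter()
    hits_set = sum(t in as_set for t in targets)     # O(1) per query, on average
    t_set = time.perf_counter() - start

    assert hits_list == hits_set   # identical answers, wildly different cost
    print(f"list scan: {t_list:.3f}s   hash set: {t_set:.6f}s")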

 
> For a nice example of an intuitive process that /was/ eventually broken
> down through introspection and thought, see
> http://www.gladwell.com/2002/2002_08_05_a_face.htm
>
> Note the incredible amount of work involved with such analysis.

The point being that it was deconstructed into an algorithm that anyone
could have uncovered had they spent the time analyzing it. Some systems are
inherently more difficult to analyze than others (interactions between
humans being a moderately complex example), but they are all solvable. In
the case above, the man with the ability apparently had a good
implementation of an algorithm running in his brain that he used without
understanding it. But then most people drive cars without knowing how the
engine actually works; not caring doesn't mean that the average person
couldn't figure it out with a little effort.

I actually have a similar, quite unusual "autistic" type of intuition.
When there is a subtle bug in a system that the programmers can't locate,
they will give it to me to scan over, because even if I have never seen the
code before I can intuit the flaw down to a code block of a dozen or two
lines merely by glancing over the reams of code. It's weird, because it may
then take me another fifteen minutes to locate the actual bug within that
dozen or two lines; I can only intuit that the flaw is there, not the
specifics of it. It is an anomaly that people find useful. Over the years,
I've observed myself as I scan a couple thousand lines of foreign code in a
minute, and I've figured out what I'm actually doing. My mental process may
be unusual, but with careful analysis I didn't have much trouble figuring
out what that process was.
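
For what it's worth, a deliberately crude caricature of that flagging step,
with surface heuristics invented purely for illustration (this is the flavor
of the thing, not a transcript of what my head is doing): score each block
of a dozen or two lines by features that loosely correlate with trouble, and
point at the highest-scoring block for a closer look.

    import re

    def suspicion_score(block: str) -> int:
        # Made-up surface cues that loosely correlate with buggy code.
        score = 0
        score += 3 * len(re.findall(r"except\s*:", block))         # swallowed exceptions
        score += 2 * (block.count("TODO") + block.count("FIXME"))  # known loose ends
        score += len(re.findall(r"\b\d{4,}\b", block))             # magic numbers
        indents = [len(line) - len(line.lstrip())
                   for line in block.splitlines() if line.strip()]
        if indents and max(indents) >= 16:                         # deeply nested logic
            score += 2
        return score

    def flag_block(source: str, block_size: int = 20) -> int:
        # Return the 1-indexed starting line of the most suspicious
        # block of `block_size` lines ("a dozen or two").
        lines = source.splitlines()
        blocks = [(i, "\n".join(lines[i:i + block_size]))
                  for i in range(0, max(len(lines), 1), block_size)]
        start, _ = max(blocks, key=lambda b: suspicion_score(b[1]))
        return start + 1

The particular cues don't matter; the point is that once you watch yourself
doing it, the "intuition" decomposes into enumerable steps like these.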

I self-observe (and analyze) my thought processes almost constantly; it
happens naturally. It makes even my most obscure intuitions fairly
transparent, eventually if not right away. I assume that most other people
do something analogous, or at least have the ability to.

-James Rogers
 jamesr@best.com


