From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Feb 06 2005 - 23:22:51 MST
> Don't underestimate pure self-reflection.
AI is fundamentally an engineering problem, though some of the
subtasks of working out exactly what we want to achieve and
dispelling the persistent misconceptions that block progress may
be classified as philosophy. Useful AI design is extremely
demanding on the ability to compose, manipulate, link and ground
formal systems, as well as the ability to move up and down
abstraction hierarchies (you must do this effortlessly and
correctly to have any chance of viewing the problem reflectively
and seeing how abstraction and reference work). A
successful AGI designer must be able to abstract themes from
details, merge them while maintaining strict information-theoretic
validity, and then translate the result back into a valid
engineering specification that will really achieve what the
designer wants it to. In this respect, AI as a discipline combines
the demands of programming, architecture, mathematics, cognitive
science, and novel writing.
> I think ancient philosophers *could in principle* have come up with
> virtually all of modern transhumanist philosophy.
The human mind cannot reliably sustain that many unsupported
inferential steps, even before accounting for the need to
independently dismiss numerous plausible-seeming fallacies
along the way.
> Using intense self-reflection alone, I'm confident I've
> managed to 'punch my way' well past the current empirical data.
You've been playing word games for a long time. Over time the
words have acquired particular sympathies and antipathies in your
mind that cause them to fit together in certain structures. But
those structures were not built by careful combination of
objective axioms and empirical evidence, and they show no
(apparent) concern for plausibility as a predictive, descriptive
model of a real-world
intelligent agent. Your attempts to formalise them amount to
rationalisation of the wild guess that managed to gather enough
cognitive support for itself over repeated mental shufflings.
This isn't a problem you can solve by connecting a few formal
models with an overarching grid of fuzzy, ungrounded concepts
that forms a cool-sounding pattern. The problem must be solved
by large-scale combination of those formal models into a vast,
intricate, multilayered yet consistent pattern. There is no a
priori requirement that the pattern look plausible or
comprehensible to untrained perception; indeed, given the field's
history, we should expect that it won't be. The solution /does/
have a kind of beauty to it, but what legions of AI researchers missed
is that you can't produce a rose without massive amounts of
intertwined biochemical complexity (on top of the basic
principles of molecular physics and natural selection).
> P.S I wouldn't be so sure that Bayesian Reasoning is
> the ultimate epistemology if I were you.
Probabilistic reasoning is a formal model created in a space
very different from reality. Applying it requires considerable
human cognitive effort to dynamically construct maps that are
both descriptive of the target problem and compliant with the
logical requirements of probability theory, followed by
further effort to manipulate the model in useful ways. Clearly
any constructive and complete theory of AI based on Bayesian
principles must account for how this activity is accomplished.
Epistemologists tend to forget that reality does not contain
'knowledge' any more than it contains 'flowers'; indeed, in
practice 'knowledge' is a much more difficult class of
regularity to define.
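To make that concrete, here is a minimal sketch (in Python, with
made-up numbers) of a single Bayesian update for a hypothetical
diagnostic test. All of the cognitive work lies in carving out the
hypothesis space and choosing the prior and likelihoods; the
'Bayesian' step itself is one line of arithmetic.

    # Toy illustration with assumed numbers; only the mapping from
    # problem to model carries any real information.
    prior = 0.01             # assumed base rate, P(condition)
    p_pos_given_cond = 0.95  # assumed sensitivity, P(positive | condition)
    p_pos_given_none = 0.05  # assumed false-positive rate, P(positive | no condition)

    # Bayes' theorem: P(condition | positive) =
    #   P(positive | condition) * P(condition) / P(positive)
    evidence = p_pos_given_cond * prior + p_pos_given_none * (1 - prior)
    posterior = p_pos_given_cond * prior / evidence

    print("P(condition | positive test) = %.3f" % posterior)  # ~0.161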
> You should have realized that by looking at my 8-level
> intelligence schematic. It would be nonsense if Bayes
> really was the last word)
I will keep it in mind in case I ever need to license your
Gibberish Matrix Technology (TM) for use in producing
meaningless yet superficially impressive press statements.
Meanwhile I have to wonder if your real goal is to cement
yourself as /the/ definitive, canonical crackpot that will
be remembered as such for the rest of human history.
Seriously, you've used just about the most indirectly
grounded and hence fuzzily defined concepts in the human
cognitive repertoire, both in the table and the supporting
text. It's a standard human tendency to assume that the
most abstract concepts have the most descriptive power, but
as several posters have pointed out, /engineering, and AI
in particular, does not work like that/. Your output to
date hardly constrains the design space of implementations
at all; I suspect it would take very little additional
rationalisation to characterise any sufficiently complex,
superficially plausible AGI architecture as obeying these
principles (should one so desire). Nor does it make any
detailed predictions about the behaviour of AGI or proto-AGI
systems, and as such it is worthless as a constructive or
predictive theory.
> All this and I haven't even really bothered to 'hit
> the books' yet.
I suspect that your preconceptions will prevent you from
extracting anything of value; you need a delicate
combination of open-mindedness, rigorous filtering for clue,
and creativity within logical constraints to elicit
a constructive, predictive understanding of AGI from the
AI literature.
> It's my philosophical intuition versus Sing Inst's
> super-geniuses. I love it ;)
Entertainment value aside, I suppose we can both hope that
it will serve as an object lesson in the value of philosophical
intuition in the future, albeit from perfectly opposed
motives.
* Michael Wilson
http://www.sl4.org/wiki/Starglider