From: justin corwin (firstname.lastname@example.org)
Date: Tue Jun 27 2006 - 21:01:19 MDT
I know you dislike being quoted, but this is a little too much to let pass.
On 6/27/06, Richard Loosemore <email@example.com> wrote:
> If you cannot think of anything constructive to say on a topic, or if
> you cannot understand the argument, the honest thing to do is to keep
> your peace and resist the temptation to make calculated, gratuitous insults.
Your post that caught my attention, which Mr. Vassar is probably
referring to, contained a great deal of inflammatory language, which
is difficult to rationalize as constructive.
In it, you claim that Michael is "...so flagrantly wrong, in fact,
that it is difficult to understand how any rational person could make
that statement", and that the AI community at large is in a
situation where "...a community of people pull the wool over their
own eyes like that, they eventually convince themselves that black is
white and white is black and they are telling the truth to themselves."
I personally have no problem with strong language, and allowed this to
set the tone of my emails as well. It is tiresome, though, to be
chided on this point, given the situation.
If you want to call an entire community deluded and irrational, and
make very sweeping statements about science, mathematics, AI, and
other large topics, it rings very hollow when you then complain about
your opponents saying negative things about you.
This is a pattern I see: you make very strong statements, and then,
when threatened or challenged, focus on the speaker, the terms of the
argument, some irrelevant subpoint of the speaker's response...
anything, in fact, but the main thrust of the argument.
Predictably, you chose to ignore my questions, and in an aside to
someone else characterized my response as a "mindless, kneejerk
response that copies my entire post and sprinkles it with 'This is all
just stupid, fuzzy thinking'." I'm sorry that I attempted to address
your missive in detail. If you read my comments, you'd note that I
actually did ask a few questions, and used a few more arguments than that.
My annoyance at this avoidance can only be described as 'expected',
given our past exchanges. But I am curious: what is so difficult about
answering questions, or even just outlining your specific objections,
or listing the grand problems you reference as 'obvious' and 'clear as
day'? You say your theory predicts the failures of AI research. How?
I differ from Vassar and Eliezer in not believing that falsifiable
predictions are necessary in order to engage in scientific discussion,
but I can't seem to get you to provide any grounds for a discussion.
1. Your opening points were about nonlinearity; why is this an
indictment of any current AI theory? No current theory since the time
of SHRDLU has proposed that an AI could use a world-complete logical
system. The majority of AI technology currently in use is built on the
assumption of approximate solutions.
2. You claim a theory or group has a stranglehold on AI research. Who
is this group, or what is that theory? As far as I can tell, AI
research occurs in an extremely heterogeneous fashion. In fact, I'm
hard-pressed, sometimes, to find ANY theoretical shared ground when
speaking to DARPA researchers I know. Finally and most interestingly,
how is this 'stranglehold' implemented? Why haven't I encountered it?
3. Finally, your comments constantly imply that AI research is doomed
to failure if it doesn't recognize some essential fact. This fact has
seemingly changed as you've discussed it on SL4, or at least you are
describing it differently. Is there a positive component to your
thoughts, or are they largely analytic of extant theory? Is there some
capsule description of what or how AI research would have to change?
I find myself constantly responding to your messages when I had
previously resolved not to, precisely because I've had no luck, and
seemingly neither has anyone else, at getting you to define what
you're actually talking about. It's frustrating, because on first
read, so many of your messages seem like you're referencing something
with significant internal complexity, but I don't know what it is.
--
Justin Corwin
firstname.lastname@example.org
http://outlawpoet.blogspot.com
http://www.adaptiveai.com