Re: analytical rigor

From: justin corwin
Date: Sat Jun 24 2006 - 10:32:38 MDT

Your email is full of characterizations and rather low on specific
claims. You say there is a vast "Outer Darkness" of "insoluble
problems". That progress has been "negligible". That people have
conjectured the impossibility of solving 'most' of them (for "decades",
no less). That there is a large group of people convinced that there
isn't a prevalence of nonlinear systems, and that these people ignore
'massive' evidence to the contrary.

I don't like fuzzy characterizations, and I especially don't like
anonymized attacks. Are you claiming that the SL4 list imagines the
world is a linear place? Did you think the statement "most claims
of impossibility usually turn out to be unreliable" applied largely to
mathematical claims of intractability? It doesn't, as far as I can
tell; it refers to specific technological achievements, deriving from
the older quote "If a respected elder scientist says something is
possible, he is very likely right; if he says something is impossible,
he is very likely wrong", which in turn derives from older embarrassing
anecdotes about Lord Kelvin, who was very prone to declarative statements.

In short, your email is very passionate, but it fails to persuade
because it contains no facts and no specific claims.

And this last:

On 6/24/06, Richard Loosemore <> wrote:
> I wouldn't care so much, except that these people have a stranglehold on
> Artificial Intelligence research. This is the real reason why AI
> research has been dead in the water these last few decades.

This is an example of reasoning that many have about science which is
absolutely wrong. There is no such thing as a conspiracy of scientists
keeping new science or technology down. They don't care about you,
what you do, or what you think. The vast majority of scientists
believe, in an abstract way, that diversity of research is a good
thing, and they might even applaud you, while privately thinking your
research is doomed to failure. What they won't do is be convinced or
give you money.

That does not constitute a stranglehold. You are still free to do
whatever you want. In fact, the majority of interesting AI work in the
last few years has been outside of academia anyway (with a few shining
exceptions, like AIXI), so that particularly speaks against your ideas
of "strangleholds" and consensus opinion.

The opinion of other scientists does not affect how your experiments
turn out. I'm sorry you don't like what most scientists are doing, so

Justin Corwin

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT