From: Mark Waser (MWaser@cox.net)
Date: Wed Aug 30 2006 - 17:55:55 MDT

> The single worst sin on John Baez's Crackpot Index: "50 points
> for claiming you have a revolutionary theory but giving no concrete
> testable predictions." Also, "20 points for talking about how great
> your theory is, but never actually explaining it"

Funny. I had absolutely no problem getting details and a highly rational workplan from Richard a while back by asking nicely. Maybe it had something to do with the fact that every time he tried to get started, you successfully suckered him into derailing onto trivialities.

His ideas had a lot of the flavor of Hofstadter and were isomorphic with a number of the higher-level details of Novamente (which I don't think he's seen), with a re-framing that suggested solutions to several of what I perceive to be Novamente's shortcomings (though I suspect this is going over your head, since you've never had the courtesy to really dig into Novamente either, despite debating it all the time). His workplan is certainly an approach that I haven't seen in the literature. His biggest shortcoming is that he feels he needs to defend his dismissal of the current approaches before proposing his own, and that's how you suckered him in.

> Your butt is banninated from SL4.

And so, in my opinion, burns the last shreds of credibility for Eliezer, this mailing list, and the Singularity Institute -- in the fires of Eliezer's emotions. I'll be hanging out over at the AGIRI mailing list (http://www.agiri.org/email/ to sign up). You Singularity Institute folk will also notice that one of your regular anonymous donors is about to disappear (maybe you should consider reining in the public behavior of your chief spokesman).

I've previously had issues with the ruthless suppression of ideas contrary to the edicts of The Great Eliezer (including any argument that Friendliness isn't so difficult that it is beyond the comprehension of, much less solution by, anyone less than The Great Eliezer) but now you've made it quite clear that no one (who might be a threat) who doesn't bow down to your edicts (or, at least, pay careful homage to your greatness) will be tolerated.

Eliezer, maybe you should take a hint when
  1. the almost-only, but certainly most noteworthy, AGI researcher (i.e. Ben Goertzel) indicates clearly that you're going too far,
  2. *numerous* other people also gently speak up,
  3. NO ONE except a couple of your Singularity Institute sycophants seems willing to support you, and
  4. even your List Snipers won't touch it.
I'm outta here . . . .

I'd consider returning and resuming my support if Richard is re-instated and treated kindly but until then . . . .

    Good luck to all,
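
P.S. For what it's worth, the conjunction rule everyone is arguing about below says only that P(A and B) can never exceed P(B). A quick numeric sketch (with made-up probabilities -- nothing here comes from the actual studies) shows both the rule itself and why Richard's two readings of the Linda question come apart:

```python
import math

# Toy joint distribution over Linda's attributes -- the numbers are
# invented purely for illustration.
p_teller_and_feminist = 0.05
p_teller_and_not_feminist = 0.02
p_teller = p_teller_and_feminist + p_teller_and_not_feminist

assert math.isclose(p_teller, 0.07)

# Conjunction rule: a conjunction can never be more probable than
# either of its conjuncts.
assert p_teller_and_feminist <= p_teller

# But under the "re-read" question -- teller-and-feminist versus
# teller-and-NOT-feminist -- ranking the conjunction higher than
# the other option is perfectly coherent:
assert p_teller_and_feminist > p_teller_and_not_feminist
```

The point is just that the last assertion is consistent with the one before it: preferring the conjunction is only a fallacy under the strict logical reading of the question.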


----- Original Message -----
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
To: <sl4@sl4.org>
Sent: Tuesday, August 29, 2006 10:38 PM
Subject: Cutting Loosemore: bye bye Richard

> Richard Loosemore wrote:
> >
> > This is important, because I *knew* this perfectly well when I wrote
> > what I wrote: I knew it because I am a cognitive scientist steeped in
> > the experimental and theoretical results of this field, and I have known
> > it for a long time. When I wrote my words in that first message, I was
> > generous enough to give Eliezer credit for also knowing the field, and
> > so I did not hand-hold him through all the tedious details, assuming
> > that he was smart enough to know them already.
> Richard Loosemore previously wrote:
>> Human minds are designed for immensely sophisticated forms of cognitive
>> processing, and one of these is the ability to interpret questions that
>> do not contain enough information to be fully defined (pragmatics). One
>> aspect of this process is the use of collected information about the
>> kinds of questions that are asked, including the particular kinds of
>> information left out in certain situations. Thus, in common-or-garden
>> nontechnical discourse, the question:
>> Which of the following is more probable:
>> 1) Linda is a bank teller and is active in the feminist movement.
>> 2) Linda is a bank teller.
>> Would quite likely be interpreted as
>> Which of the following is more probable:
>> 1) Linda is a bank teller and is active in the feminist movement.
>> 2) Linda is a bank teller and NOT active in the feminist movement.
>> It just so happens that this question-form is more likely than the form
>> that follows the strict logical conjunction. In fact, the strict
>> logical form is quite bizarre in normal discourse, and if we actually
>> intended to ask it, we would probably qualify our question in the
>> following way:
>> Which of the following is more probable:
>> 1) Linda is a bank teller and is active in the feminist movement.
>> 2) Linda is a bank teller, and she might be active in the feminist
>> movement or she might not be - we don't know either way.
>> We would make this qualification precisely because we do not want the
>> questioner to bring in the big guns of their cognitive machinery to do a
>> reading-between-the-lines job on our question.
>> It might seem that this analysis of the Tversky and Kahneman studies
>> does not apply to one of your other examples:
>>> Please rate the probability that the following event will occur in
>>> 1983...
>>> [Version 1]: A massive flood somewhere in North America in 1983,
>>> in which more than 1,000 people drown.
>>> [Version 2]: An earthquake in California sometime in 1983,
>>> causing a flood in which more than 1,000 people drown.
>>> Two independent groups of UBC undergraduates were respectively asked
>>> to rate the probability of Version 1 and Version 2 of the event. The
>>> group asked to rate Version 2 responded with significantly higher
>>> probabilities.
>> These are two independent groups, so neither sees the other question and
>> therefore they cannot read between the lines and infer that the
>> questioner might be leaving out some information. On the face of it,
>> this seems like good evidence of the Conjunction Fallacy.
>> But is it? The two groups have to separately visualize the scenarios.
>> What are the detailed scenarios that they visualize? It seems prima
>> facie quite reasonable that it never even occurred to the first group
>> that a flood could be a side effect of a massive earthquake: they just
>> tried to judge the likelihood of a flood for other reasons, and in their
>> experience they did not recall any "ordinary" floods that caused that
>> many fatalities, so they respond with low probability.
>> The other group, however, have had the idea put into their head that an
>> earthquake might occur, and that (by the way) this might lead to flood
>> fatalities. They have no idea whether this connection (earthquake leads
>> to flood) is valid, but that issue seems not to be the question (indeed,
>> it is NOT the question) so they take it as something of a given, and
>> then simply fall back on their estimate of an earthquake probability
>> with many fatalities. Whether they are correct in their estimate or
>> not, they rate *that* probability quite highly. Higher than "flood but
>> no earthquake".
>> So, once again, the experimental design is effectively comparing
>> incompatible processes going on inside these people's heads. Or, to be
>> more precise, it *could* be doing this (my suggested interpretation
>> would have to be tested: I am only giving an existence proof for an
>> alternative explanation -- I could do the experiment, or maybe somebody
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> already did do the experiment.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> In other words, at the time Richard Loosemore wrote this post, he did
> not know someone had already done the experiment. And if he was aware
> that the "misreading" explanation of the experiment had already been
> thoroughly refuted - why, it sure as hell doesn't sound like it, given
> what he wrote. He sounds just like someone encountering the field for
> the first time. If not, it certainly is a violation of academic ethics
> to claim for your very own 'my suggested interpretation' what was
> suggested years ago, without any hint that someone else might have
> suggested it first.
> But now - *after*, please note, I post a link to a research paper that
> describes various experimental tests that refuted "misreading" as an
> alternative hypothesis - up jumps Richard and says: "Oh, well, of
> course 'I am a cognitive scientist who is thoroughly steeped in the
> experimental and theoretical results of this field', who was familiar
> years and years ago with these results refuting 'my suggested
> interpretation', and I just meant it was a weak effect that contributed
> to the conjunction fallacy without being solely accountable for it -"
> You really needn't bother at this point.
> Your butt is banninated from SL4. You may post one final response.
> After that, goodbye.
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT