From: Michael Wilson (firstname.lastname@example.org)
Date: Fri Sep 16 2005 - 00:43:56 MDT
Ben Goertzel wrote:
> OK Michael, so now you're going to argue that because I sometimes
> make mistakes I'm incompetent to create AGI?
Not directly. Of course everyone makes mistakes, large and small, and
as long as they're caught and corrected it's not a problem. The question
is one of attitude. Most people react to evidence that they have made a
mistake by trying to downplay its significance. For scientists and
engineers this is most often a claim that they just got some trivial
implementation detail wrong, and that their basic ideas are sound. This
isn't surprising; it's instinctual because it saves face and it's
superficially rational because it preserves the perceived investment of
time and energy in the 'big ideas'. Generously the person may be so
passionate about developing/realising their concepts that they don't
want to get sidetracked by fixing 'implementation bugs'; less generously
they may be blind to the progressive accumulation of individually minor
evidence that their theories are broken.
In AI it is so easy to be subtly and irretrievably wrong, and there are
so many traps to fall into, that I do not believe it is possible to be
successful without strongly checking this tendency. This is a question
of attitude, the way in which the problem is approached, not one of any
specific design choice. Unfortunately I don't think it's likely that
your project will produce an AGI, Ben, not so much because of what little
I know about your design, but because your research methodology does not
seem to be rigorous enough to consistently cut through the dross and
misunderstandings in search of the Right Thing. Unfortunately while your
actual design has changed, this meta-level approach appears to have
remained constant over the time that you have been publishing your ideas.
I don't think that constitutes an insult, and I certainly don't take it
as such when people call my approach hopeless; no one has managed to
produce an AGI yet, so if that's your standard of competence then no one
is competent yet. I would think that your past successes in narrow AI
and publishing record easily establish a high level of competence
relative to other researchers as a whole, which is certainly more than
I or Eliezer have done.
> I could more easily argue that anyone who would design a programming
> language as misguided as Flare is incompetent to have anything to do
> with software...!
Eliezer's actual mistake would not be designing a broken language, which
is something most people who take a class on compiler theory do. The
actual mistake would be seriously attempting to design a major language
without having much experience of major application development,
compiler development or indeed a wide range of existing languages. I
agree that back in 1999 Eliezer's weighting on raw intelligence versus
experience was broken.
> Anyway, I don't want this to turn into a flame war in which I waste
> my time posting a long list of every major conceptual error Eli or
> other SIAI people have ever posted on this list.
Please do continue to point out fresh ones.
> There have certainly been plenty, as indicated by the frequency with
> which Eli has shifted his perspective on important issues.
No, this is a /good/ thing. Each perspective shift occurred when Eliezer
realised that his model was conceptually flawed, threw it away and went
in search of another one. It takes courage and self-discipline to do
that; too many people would just keep trying to patch up their broken
model and consequently drift quietly into an isolated dead-end. Yes, in
principle it would've been better to start from first principles, build
carefully and avoid the adoption of such flawed models in the first
place, but unfortunately humans just don't come equipped to do that out
of the box.
> I suggest we put these "ad hominem" attacks aside and move on...
I never intend to attack people, only ideas. Sometimes I misinterpret
people's ideas, use overly strong wording, or people just get too
attached to their ideas and take criticism of them personally. If this
is an instance of the first or second cases, I apologise. However I
draw your attention to the SL4 FAQ, which states that flamewars are
fine as long as they are intelligent and on-topic. Personally I'd
prefer an intelligent flamewar to a paucity of critical analysis.
* Michael Wilson
This archive was generated by hypermail 2.1.5 : Tue May 21 2013 - 04:00:48 MDT