From: Ben Goertzel (ben@goertzel.org)
Date: Thu Sep 15 2005 - 21:25:06 MDT
> You seem to be complacent with making 'small
> mistakes' here and there because you believe that it's better to
> concentrate on getting the 'big idea' right, but I think that in AGI
> those kinds of small mistakes are indicative of reasoning flaws that have
> a /cumulative/ effect on the probability of building something that
> works. Which is to say, it doesn't take many 'small mistakes' before your
> success probability goes to effectively zero. I suppose the point is that
> the philosophical and psychological legacy surrounding AI can make it
> seem like social science or even art, but it turns out that it's actually
> more like physics; AGI demands conceptual and mathematical rigour.
>
> * Michael Wilson
OK Michael, so now you're going to argue that because I sometimes make
mistakes, I'm incompetent to create AGI?
I could more easily argue that anyone who would design a programming
language as misguided as Flare is incompetent to have anything to do
with software...!
But this kind of argument is foolish. It indicates a lack of understanding
of human psychology. Just because someone makes a lot of errors when they
are in a certain state of mind or a certain situation doesn't mean they
always make that many errors. Humans are complex systems with complex
behaviors.
Anyway, I don't want this to turn into a flame war in which I waste my
time posting a long list of every major conceptual error Eli or other
SIAI people have ever posted on this list. There have certainly been
plenty, as indicated by the frequency with which Eli has shifted his
perspective on important issues.
I suggest we put these "ad hominem" attacks aside and move on...
-- Ben