From: Ben Goertzel (ben@goertzel.org)
Date: Thu Mar 17 2005 - 12:00:47 MST
I believe what Marc really wants to say here is NOT that Bayes theorem is
"broken" (clearly it's correct math), but rather that explicitly applying
Bayesian inference is not a computationally feasible strategy in most cases.
So it's the idea that "intelligence should be achieved primarily via
explicit application of Bayes Theorem" that is broken.
IMO, explicit application of Bayes Theorem can play a role in intelligence,
but it certainly can't be the ONLY tool an intelligence uses to figure
out how to achieve its goals (because of the aforementioned computational
feasibility problems).
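To make the scaling issue concrete, here's a rough back-of-the-envelope
sketch in Python (just a toy illustration, not anyone's proposed method):
an exact joint distribution over n binary variables has 2^n entries, so
every explicit Bayesian update has to touch all of them.

    from itertools import product

    def exact_posterior(prior, likelihood, evidence):
        # Exact Bayes: P(h|e) = P(e|h) P(h) / sum over h' of P(e|h') P(h')
        unnormalized = {h: likelihood(evidence, h) * p for h, p in prior.items()}
        z = sum(unnormalized.values())
        return {h: p / z for h, p in unnormalized.items()}

    n = 16  # 2**16 = 65,536 joint states; at n = 100 the table has ~1e30 entries
    hypotheses = list(product([0, 1], repeat=n))
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

    def likelihood(evidence, h):
        # Toy likelihood: evidence reports the first variable's value, 90% reliably
        return 0.9 if h[0] == evidence else 0.1

    posterior = exact_posterior(prior, likelihood, 1)  # one update touches every entry

The math is trivially correct; the point is that the table doubles with
every new variable, which is why any real reasoner has to lean on
approximations rather than explicit updates over the full joint.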
-- Ben G
>
> Marc Geddes wrote:
> > Any AGI worth its salt would be absorbing knowledge
> > far faster than it could apply Bayes' theorem to it. It
> > would quickly run into computational intractability
> > and have to apply ever greater ingenuity to find
> > approximations and short-cuts in order to carry
> > on reasoning.
> > ...
> > It is not human reasoning that is broken. It is Bayes
> > theorem that is broken.
>
>
> What the hell are you talking about? These are not real arguments or
> technical points, for the nth time, and piping /dev/random through
> Babelfish and sending it to sl4 does not help the signal-to-noise ratio.
>
> If there are compelling reasons for making these assertions -- and
> intuition is not a compelling reason -- then please remember to attach
> them next time. Throw us a bone here; few of us can hope to share your
> natural insight into these matters.
>
>
> j. andrew rogers
>