From: Ben Goertzel (ben@goertzel.org)
Date: Sun Aug 25 2002 - 13:29:01 MDT
I'd like to enlarge a little on a point I made earlier.
At the end of this e-mail is an example of a mathematical theorem from
probabilistic inference. This is one of my own, which I've proved myself,
which will appear in a technical appendix of our forthcoming book on
Novamente. It expresses a rule for doing probabilistic deduction, using
Bayes' Theorem among other probability-theoretic rules. The key thing I
want to point out about this theorem is that it assumes one starts out with
a set of probabilistic estimates that are relative to a given universal set
U. U is basically a set containing "everything that is." All other
theorems in probability theory and probabilistic inference share this
property. [It's for this reason that my colleague Pei Wang believes
probability theory is an unsound foundation for modeling human or
computational inference. He has proposed a different kind of uncertain
reasoning, which does not require the positing of a universal set U;
unfortunately, I think his inference rules are not correct ;) ]
The point I tried to make to Eli is that when a mind wants to apply
probabilistic reasoning, nothing tells it a priori how to set this
particular parameter (U), which makes a big difference in the results that
reasoning gives.
Using probabilistic reasoning to set the parameter simply leads to a
regress...
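To make this concrete, here is a toy numerical sketch of my own (the set
sizes are made up; deduce() simply applies the formula from the theorem at
the end of this e-mail): holding the premise strengths sAB and sBC fixed,
merely enlarging U changes sB and sC, and hence changes the deduced value
of P(C|A).

def deduce(sAB, sBC, sB, sC):
    # the deduction formula from the theorem at the end of this e-mail
    return sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)

nB, nC = 100, 80         # absolute sizes of the sets B and C
sAB, sBC = 0.6, 0.5      # conditional probabilities, independent of U

for U_size in (200, 1000):              # two candidate universal sets
    sB, sC = nB / U_size, nC / U_size   # marginals are relative to U
    print(U_size, deduce(sAB, sBC, sB, sC))
# prints 0.42 for |U| = 200, but about 0.313 for |U| = 1000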
It could be that there is some basic heuristic for setting U, wired into a
mind biologically. But I don't think so; I think a mind actually adapts its
U as time goes on. It basically has to, because it must add the new things it
discovers into U. I think that processes that are not "rational" in any
typical sense are involved in setting the contexts used in probabilistic
reasoning.
The tricky issue is that a mind has to add, not only things it directly
experiences, but also things it *just hears about*, to U. For instance, if
it hears about Beijing it has to add Beijing to U even though it never
experienced Beijing. Eli suggested that probabilistic inference is used to
mediate the process of adding Beijing to U. But *that very probabilistic
inference* must itself take place within some universal set U, so this move
merely restarts the regress rather than solving the problem.
All this is about probabilistic inference within a given mind, however
(carried out explicitly or implicitly).
Stepping outside the mind, and taking an objectivist perspective, one can
ask whether a mind acts approximately rationally relative to the universal
set U of the actual universe. This is a different question. What you'll
find is that minds act *more* approximately rationally relative to the U's
they construct based on their experience than relative to the universe's U.
This is because our knowledge is finite, and this finitude affects our
probabilistic inferences in concrete and quantitative ways...
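One way to see the quantitative effect (a sketch of my own, with made-up
numbers): treat the mind's experienced universe E as a finite random sample
of the objective universe U. The frequency the mind assigns to a set B,
measured over E, then deviates from the U-relative frequency by an amount
that probability theory itself lets us estimate.

import random

# Sketch (my own construction, illustrative numbers): E is a finite
# sample of U, so experienced frequencies deviate from U-level ones.
random.seed(1)
U_size = 1_000_000
B = set(random.sample(range(U_size), 300_000))  # true sB = 0.3 over U

E = random.sample(range(U_size), 500)           # finite experience
sB_experienced = sum(x in B for x in E) / len(E)
print(sB_experienced)  # fluctuates around 0.3, std. dev. ~ 0.02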
-- Ben
***
Theorem (Probabilistic Term Logic Deduction Formula)
Let U denote a set with |U| elements. Let Sub(m) denote the set of subsets
of U containing exactly m elements.
Let sA, sB, sC, sAB and sBC be numbers in [0,1], all chosen so that the set
sizes they imply (|U| sA, |U| sB, |U| sC, and the overlaps |U| sA sAB and
|U| sB sBC) are integers.
Let f(x) = P( P(C|A) = x | A in Sub(|U| sA), B in Sub(|U| sB),
C in Sub(|U| sC), P(B|A) = sAB, P(C|B) = sBC )
Then, where E() denotes the expected value (mean),
E( f(x) ) = sAC = sAB sBC + (1 - sAB) ( sC - sB sBC ) / (1 - sB )

(Here sAC denotes the resulting mean value of P(C|A).)
***
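For the curious, here is a rough Monte Carlo sanity check that I sketched
(it is not part of the proof; the parameter values are arbitrary, chosen so
that all the set sizes come out as integers). It rejection-samples triples
(A, B, C) of the stated sizes, keeps only the draws satisfying the premises
exactly, and compares the empirical mean of P(C|A) against the formula.

import random

# Monte Carlo sanity check (a sketch, not part of the proof).
random.seed(0)
N = 20                        # |U|
sA, sB, sC = 0.25, 0.5, 0.4   # chosen so all set sizes are integers
sAB, sBC = 0.6, 0.5
nA, nB, nC = round(N * sA), round(N * sB), round(N * sC)

samples = []
while len(samples) < 5000:
    A = set(random.sample(range(N), nA))
    B = set(random.sample(range(N), nB))
    C = set(random.sample(range(N), nC))
    # keep only draws consistent with the premises
    if len(A & B) == round(sAB * nA) and len(B & C) == round(sBC * nB):
        samples.append(len(A & C) / nA)  # this draw's P(C|A)

empirical = sum(samples) / len(samples)
predicted = sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)
print(empirical, predicted)  # these should agree to within ~0.01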