RE: Metarationality (was: JOIN: Alden Streeter)

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Aug 24 2002 - 22:00:11 MDT


>
> Everything that works is a form of rationality; works because it is
> rational; and is rational because it works.
>
> To be precise, everything that works noncoincidentally, with a
> probability
> greater than sheer random chance would predict, is a form of rationality;
> works because it is rational; and is rational because it works.

Well, that is one definition of the term "rationality."

However, it's a very general definition. It certainly does not match the
standard dictionary definition, and I don't think it matches the
common-language meaning either.

If we accept your definition, we then need another word for the process of
"conscious rational thought", which is only one kind of "rationality"
according to your definition.

Anyway, it seems to me the really deep question is: **In what circumstances
do processes that in themselves appear to have nothing to do with reasoning
serve as a valuable or even critical part of "rationality" in the overall
context of an intelligent system?**

My conjectural answer is:

-- Definitely, in circumstances involving badly limited computational
resources (e.g. the human brain)
-- Maybe, in all circumstances

>
> The visual cortex is a form of rationality.

If so, it's a badly flawed form, due to errors like the ones I mentioned
(misestimating the distance to faraway objects in the desert...).

>
> That's what the Bayesian Probability Theorem *is* - a *universal*
> description of the way in which things can be evidence about other things.

This seems to me to be a large overstatement.

Bayes' Theorem is a powerful tool (and one of the central parts of the
Novamente reasoning module).

However, like the rest of elementary probability theory, it pertains to a
set of probabilities defined over a fixed "universal set."
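
To make the dependence explicit: Bayes' theorem says

   P(H|E) = P(E|H) P(H) / P(E)

and every one of those probabilities is a measure over the same fixed
universal set U. Change U and, in general, all four numbers change.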

Defining the universal set is a serious issue in any practical situation.
You can say "define it as the set of all entities ever observed by the mind
doing the reasoning". But this doesn't really work, because we *hear* about
entities via linguistic communication, including many entities we haven't
seen. I may want to include Beijing in my universal set for my internal
probabilistic inference, even though I have never been there or seen it.

So then you can't rationally posit a mind that operates using elementary
probability theory alone. A mind must have at least two processes (see the
toy sketch after this list):

-- probabilistic inference (implemented in one of a huge number of possible
ways)
-- a process for defining the universal set (either once and for all, or
contextually)
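
Here is a toy sketch of that two-process structure (in Python, with all
the particulars -- the contexts, the city sets, the uniform prior --
invented purely for illustration):

    from fractions import Fraction

    def define_universal_set(context):
        # Noninferential process: choose the universal set for this
        # context. A table lookup stands in for whatever perceptual,
        # linguistic, or heuristic machinery a real mind would use.
        return {
            "cities_seen":     {"Boston", "New York", "Washington"},
            "cities_heard_of": {"Boston", "New York", "Washington",
                                "Beijing", "Tokyo"},
        }[context]

    def posterior(hypothesis, evidence, universe):
        # Probabilistic inference: Bayes' theorem with a uniform prior
        # over the chosen universal set: P(H|E) = |H & E & U| / |E & U|.
        e = evidence & universe
        return Fraction(len(hypothesis & e), len(e))

    over_5_million   = {"New York", "Beijing", "Tokyo"}    # hypothesis H
    national_capital = {"Washington", "Beijing", "Tokyo"}  # evidence E

    print(posterior(over_5_million, national_capital,
                    define_universal_set("cities_seen")))     # -> 0
    print(posterior(over_5_million, national_capital,
                    define_universal_set("cities_heard_of"))) # -> 2/3

The inference step is identical in both runs; only the noninferential
choice of universal set changed, and the "rational" conclusion changed
with it.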

But then how does this second process take place? If you say it takes place
by probabilistic inference, you get a regress. But if not, you must posit
some other process, and you must admit that the conclusions of the
probabilistic inference (logical reasoning) will be relative to the
operation of this other process, rather than "objectively rational" (if
rationality, to you, is defined by the laws of probability theory).

Probability theory is a tool minds use. If a mind is inept at using
*noninferential* methods to appropriately define the universal set for
itself (and in reality, the appropriate universal set is often
*context-dependent*), it may use this tool accurately but uselessly (i.e.
unintelligently).

Probability theory is an important component of intelligence, but not the
be-all and end-all of intelligence. Whether it's viewed as the "core" of
intelligence or not is largely a matter of taste... mind seems in a way to
be "multi-cored" ;)

-- Ben G


