From: Ben Goertzel (ben@goertzel.org)
Date: Mon Aug 26 2002 - 00:57:40 MDT
> In what way does this have nothing to do with reasoning? It is a product
> of evolution's version of induction. Furthermore, the fact that
> acting in
> this way will increase the probability of reproduction is quite knowable
> to a Bayesian reasoner. A perfect Bayesian reasoner might use a more
> optimal method of reproduction, but an imperfect Bayesian reasoner might
> end up employing exactly this method. Because evolution *is*
> that kind of
> imperfect reasoner.
It is true that any process of learning or adaptation can be viewed as
"imperfect probabilistic reasoning."
The question is, in what cases is this view a useful one...
> > I generally have been using the word "rational" to mean
> something narrower.
> > Basically, to mean "following a process of logical inference", meaning a
> > process inside some mind in which premises are represented,
> conclusions are
> > represented, and processes for getting from the premises to the
> conclusions
> > are carried out. Note that this sense of "following a process
> of logical
> > inference" is implementation-independent; it applies to neural
> nets and so
> > forth as easily as to minds embodying some sort of explicit
> > symbol-manipulation component.
>
> Yes. And because you use "rational" to denote only this kind of thought,
> you can get away with telling yourself that "both 'rationality' and
> 'irrationality' are necessary to thought", thereby avoiding the necessity
> of getting rid of comforting irrational thoughts.
I think it's kind of absurd for you to assume so much about my personal
psychology. The "more rational than thou" attitude that you and Gordon
display sometimes is really getting tiring. So I'll move on to other
issues...
> Which is why it's so important to realize that verbal thought
> is verbal
> thought, intuition is intuition, and that both can be either
> "rational" or
> "irrational".
What you're saying is that it's important for everyone to define terms
exactly the way YOU do, instead of the way the dictionary does.
I think you really push the envelope of the definition of "rational" by
saying that a bird acting on instinct is acting "rationally"....
I don't dispute your right to use the word that way, but I do maintain my
right NOT to use the word that way...
I can see that the bird acts somewhat as it would if it were a reasoning
being, calculating its actions... but the fact is, the bird is acting on
instinct, not based on any of its own reasoning...
I'd be more OK with saying that evolution was "rational" to "design" the bird
that way, because evolution is what contributed the intelligence to the
bird's behavior, not the bird itself...
> It's very easy to find a good name, other than
> "rationality", for verbal reasoning or deliberative reasoning or
> conscious
> reasoning... in fact, I tossed out three right there.
I talk about "explicit probabilistic logical reasoning" and "conscious
reasoning" sometimes. These are different things, as some conscious
reasoning may not be explicitly prob-logic based, and some explicitly
prob-logic based reasoning may be unconscious.
I agree that rationality extends beyond explicit prob. logical reasoning.
But I don't agree that it extends to a bird or a human acting on instinct.
And I think there's a key point of difference between our views, which I've
been struggling to express in a way you'll understand.
Maybe a good phrasing is this: "Within systems or processes that are OVERALL
rational (in your sense of being decent approximations to prob. inference),
there are subcomponents and subprocesses with the property that, if you just
look at them in isolation, they don't look rational at all."
It is these subcomponents and subprocesses that I refer to as nonrational
components of the mind.
You may say that these nonrational subcomponents and subprocesses don't need
to be there.
I believe that they need to be there to SOME extent.
For instance, as I've argued, there needs to be, in any finite system within
the universe, some non-prob-inference process for adapting the universal set
U used in the system's prob-inference processes.
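To make the distinction concrete, here is a minimal sketch (my own illustration, not a formalism from this thread): a Bayesian updater operates over a fixed hypothesis set U, but *enlarging* U itself is not a Bayes'-rule step, and the choice of what to add and how much prior mass to give it has to come from some process outside the probabilistic inference. The function and variable names are hypothetical.

```python
def bayes_update(priors, likelihoods):
    """Standard Bayesian update over a fixed hypothesis set U."""
    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

def add_hypothesis(priors, new_h, new_mass):
    """Enlarge U by granting `new_mass` prior probability to a new
    hypothesis. How new_h is generated and how much mass it gets is
    NOT dictated by Bayes' rule -- this is the 'locally nonrational'
    subprocess the text refers to."""
    scaled = {h: p * (1.0 - new_mass) for h, p in priors.items()}
    scaled[new_h] = new_mass
    return scaled

# Usage: update over U = {A, B}, then extend U with a new hypothesis C.
p = {"A": 0.5, "B": 0.5}
p = bayes_update(p, {"A": 0.8, "B": 0.2})  # evidence favors A
p = add_hypothesis(p, "C", 0.1)            # step outside Bayes' rule
print(p)
```

The point of the sketch is that `bayes_update` is fully determined by the math, while `add_hypothesis` needs its arguments supplied by some other adaptive process.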
How large a percentage of a rational mind is optimally composed of these
"locally nonrational" subprocesses, I'm not sure.
In the human mind it seems to be a LARGE percentage, but for future AI minds
it may be less...
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT