Re: Re[4]: Metarationality (was: JOIN: Alden Streeter)

From: Gordon Worley
Date: Sun Aug 25 2002 - 12:55:57 MDT

On Sunday, August 25, 2002, at 02:33 PM, Cliff Stabbert wrote:

> Sunday, August 25, 2002, 10:29:09 AM, Gordon Worley wrote:
> GW> In other words, everything follows roughly the same decision
> GW> making process and it's how you assign the weights and what
> GW> decisions you try to assign weights to that makes all the
> GW> difference.
> OK, this is at least a new approach. Without addressing the question
> of whether everything can be reduced to the BPT, I'll point out that
> all you've done is shift the burden of the problem: now "how we assign
> the weights" is apparently what makes for more or less intelligence.
> Which means that there is *some* sort of intelligence involved in
> those decisions. Ben Goertzel (jump in if I'm wrong) believes this
> rests on sub-/non-/pre-rational substrates. And he points out that if
> you want this, too, to be a rational process you're stuck in an
> infinite regress.

This will not happen. A mind has priors that it uses, and the result of
each decision is to improve the quality of those priors. You start with
a set of a priori priors (built into humans, and into AIs if you design
them in a way that is initially compatible with rationality) and work
from there. One aspect of rationality is making use of the best priors
rather than inaccurate, evolved ones like emotional priors.
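The claim above, that each decision improves the priors rather than requiring an infinite regress of rational justification, is just ordinary Bayesian updating. As a minimal illustration (my own sketch, not from the original post), a Beta prior over a Bernoulli outcome is updated by each observed result and converges without any meta-level machinery:

```python
from fractions import Fraction

def update_beta(alpha, beta, outcome):
    """Conjugate update of a Beta(alpha, beta) prior after one
    Bernoulli observation: a success bumps alpha, a failure bumps beta."""
    return (alpha + 1, beta) if outcome else (alpha, beta + 1)

# Start from an uninformative a priori prior, Beta(1, 1).
alpha, beta = 1, 1
for outcome in [True, True, False, True]:  # hypothetical decision results
    alpha, beta = update_beta(alpha, beta, outcome)

# Posterior mean estimate of the success probability.
estimate = Fraction(alpha, alpha + beta)
print(estimate)  # 2/3 after 3 successes and 1 failure: (1+3)/(2+4)
```

No regress appears because the update rule itself is fixed; only the weights (the prior's parameters) change with experience.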

Gordon Worley
"`When I use a word,' Humpty Dumpty said, `it means just what I choose
it to mean--neither more nor less.'" --Lewis Carroll
PGP: 0xBBD3B003

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT