Re: "Boy with Incredible Brain"

From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Mar 22 2006 - 18:59:46 MST


Philip Goetz wrote:
> On 3/22/06, Richard Loosemore <rpwl@lightlink.com> wrote:
>> Michael Vassar wrote:
>>> It would be SL4 if it was somehow possible to teach a high power savant
>>> to do applied Bayesian reasoning and decision theory in everyday life in
>>> a useful way. Maybe it would be worth the time for someone in the
>
>> Good grief, I have to say that I couldn't disagree more!
>>
>> What exactly would be the result of teaching him to do applied Bayesian
>> reasoning? (And how do you justify your answer?)
>
> The answers to these questions are so obvious that I fear
> stating them would do no good here. Richard, you seem to
> have an emotional reaction. Why? It seems you think that teaching
> a savant how to reason would be abusing him.
>
> The purpose is not to build Searle's Chinese room, just to see what
> the fellow could accomplish. Whether, for instance, thorough-going
> Bayesian decisions could help him achieve his goals, in a way that
> super-geniuses tend not to.
>
> - Phil

Oh come now, it wasn't an emotional reaction, and the answers are not in
the least bit obvious!

Michael's suggestion did not seem to be about "I wonder what would
happen if he learned reasoning?" He said "it would be SL4 if..." and
took it from there. I'm not sure how else to interpret that comment
except the way I did; Michael could clarify, perhaps. It seemed as
though he was saying Daniel could be valuable to SL4 causes if he could
be enhanced with Bayesian powers.

On reflection, and being as self-deprecating as I am able (;-)), I
guess I might have completely misread Michael as intending to "use"
Daniel to make a supergenius who could help solve some SL4 issues ... if
so, sorry Michael!

But that would still leave intact my criticisms of the usefulness (to
Daniel) of making him into a super Bayesian reasoner.

If someone is going to do lots of Bayesian reasoning and, as a result,
push back the frontiers of knowledge, they need the following (a toy
sketch of where each item bites is given after the list):

1) solid ways to relate concepts to actual things in the world, without
someone helping them with the mapping (the semantics of the tokens);

2) good techniques for creating new concepts about the world (some of
which are going to be very subtle) as a result of gathering empirical
experience of the world;

3) solid data about the prior probabilities of at least something (and
justification for why the numbers *are* solid, of course);

4) ways to represent subtle questions and statements about real-world
situations in a format that a Bayesian reasoning system can actually do
something sensible with (for example, answering questions about
abstract analogies).
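
To make it concrete where each of these bites, here is a minimal sketch
of a single Bayesian update (Python; the hypotheses, the evidence, and
all of the numbers are invented purely for illustration):

# Toy Bayesian update: P(H|E) = P(E|H) * P(H) / P(E).
# Hypotheses, evidence and numbers are invented for illustration.

# Item 3: the priors have to come from somewhere, with justification.
priors = {"H1": 0.7, "H2": 0.3}

# Items 1, 2 and 4: someone had to decide what "H1", "H2" and the
# evidence *mean*, carve them out as usable concepts, and encode the
# real situation as likelihoods. The arithmetic never checks any of
# that.
likelihoods = {"H1": 0.2, "H2": 0.9}   # P(E | H)

# The update itself is the trivial part.
p_evidence = sum(priors[h] * likelihoods[h] for h in priors)
posterior = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

print(posterior)   # {'H1': 0.341..., 'H2': 0.658...}

The update rule occupies two lines; everything else in the sketch is an
answer to items 1-4, supplied by hand.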

Inasmuch as Michael was suggesting that Daniel's existing genius could
be combined with a fabulous Bayesian reasoning ability to yield
something truly superlative and useful (and if that was not what Michael
intended, I stand corrected, of course), I am disagreeing and saying
that:

a) Daniel would not necessarily be any better than the rest of us in
these respects, and

b) In fact, given our understanding of other savants, we might expect
him to be somewhat worse (I hesitate to generalize, because all cases
are unique), and

c) The performance of a Bayesian reasoner is crucially dependent on
these peripheral factors.

My shopping list of issues is, as you know, the same shopping list of
unsolved problems that can be laid at the door of anyone trying to build
an AGI that is predominantly based on Bayesian reasoning.

And my critique is quite general in one other respect: it can be
applied just as easily to the frequent comments on this list by people
who imply that Bayesian reasoning is in some sense the highest form of
human thought, something more than just a useful tool under some
circumstances, at least for people who are already modestly competent at
general-purpose reasoning.

Ordinary brain + fabulous competence in Bayesian mathematics = something
that depends on how that brain does all the work of building new
symbols, mapping symbols to real things in the world, interpreting real
issues into representations that mean something to the Bayesian module,
and so on (the shopping list). If all this apparatus is just feeding the
Bayesian module heaps of low quality data, then heaps of low quality
conclusions are what you get out the other end.
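
As a toy numeric illustration of that last point (again with invented
numbers, runnable on its own), keep the update rule immaculate and feed
it low quality inputs:

# Same two-hypothesis update as above; only the inputs change.
# Invented numbers: suppose the brain's mapping from world to model is
# poor, so the prior is unjustified and the evidence is encoded so
# crudely that it cannot discriminate between the hypotheses.
bad_priors = {"H1": 0.99, "H2": 0.01}
bad_likelihoods = {"H1": 0.5, "H2": 0.5}   # P(E | H), uninformative

z = sum(bad_priors[h] * bad_likelihoods[h] for h in bad_priors)
bad_posterior = {h: bad_priors[h] * bad_likelihoods[h] / z
                 for h in bad_priors}

print(bad_posterior)   # {'H1': 0.99, 'H2': 0.01}

The Bayesian step is flawless here, and the conclusion is still
worthless, because the prior and the encoding of the evidence were.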

In summary, I don't think you can say that it is *obvious* how the above
problems would be addressed by someone who went in there to teach Daniel
Bayesian reasoning.

Richard Loosemore.


