From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sat Sep 18 2004 - 22:54:19 MDT
Emil Gilliam wrote:
>> (c) a Bayesian strives to attain the best possible calibration of
>> prior probabilities just as one strives for the best possible
>> calibration of posterior probabilities, and the fact that
>> mathematicians haven't *yet* all agreed on which formal, universal
>> process to pick for doing this doesn't change the goal or its
>> importance;
>
> But you don't *know* that there's a formal, universal process that
> mathematicians will eventually agree on. Obviously they should keep
> trying, but in the meantime you shouldn't assume there is one just
> because the Bayesian religion requires it.
I don't know if they'll agree, but I do know that (a computable
approximation of) Kolmogorov complexity looks like a good answer to me.
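To make that concrete, here is a minimal sketch (my illustration, not part of the original exchange) in which zlib-compressed length stands in for Kolmogorov complexity and each hypothesis gets a prior weight of 2^(-K) before normalization. The compressor, the function name, and the toy hypotheses are all assumptions for the example, not anything agreed-on.

    import zlib

    def complexity_prior(hypotheses):
        # Assign prior weights ~ 2^(-K(h)), with zlib-compressed length
        # as a crude, computable stand-in for Kolmogorov complexity.
        weights = {}
        for h in hypotheses:
            k = len(zlib.compress(h.encode("utf-8")))  # approximate description length in bytes
            weights[h] = 2.0 ** (-k)
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()}  # normalize to a proper prior

    # The regular (more compressible) string ends up with the larger prior.
    print(complexity_prior(["ababababababababab", "qzj3kx9vp2mwl8ty4r"]))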
> What happens if your Bayesian-to-the-utter-core FAI fails to discover a
> formal, universal process?
A Bayesian AI *is* a formal answer to the question of how to assign
probabilities. So is a human brain, though its formal answer is so
complicated that it looks informal. And as for universality, a Bayesian
to the utter core cares not whether the answer is something of which some
other mind may be persuaded, only whether the answer is well-calibrated.
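For a concrete reading of "well-calibrated" (again my illustration, not from the original post): among the claims to which a predictor assigns probability p, roughly a fraction p should turn out true. A small sketch that checks this against a made-up set of predictions:

    from collections import defaultdict

    def calibration_report(predictions):
        # predictions: list of (assigned_probability, outcome) pairs,
        # where outcome is True or False.
        bins = defaultdict(list)
        for p, outcome in predictions:
            bins[round(p, 1)].append(outcome)  # bin by stated probability, one decimal place
        for p in sorted(bins):
            observed = sum(bins[p]) / len(bins[p])
            print(f"stated {p:.1f}   observed {observed:.2f}   (n={len(bins[p])})")

    # A calibrated predictor: of the claims it marks 0.8, about 80% come true.
    calibration_report([(0.8, True)] * 8 + [(0.8, False)] * 2 +
                       [(0.5, True)] * 5 + [(0.5, False)] * 5)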
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence