From: Eliezer Yudkowsky (email@example.com)
Date: Sat Jan 15 2005 - 09:47:14 MST
Ben Goertzel wrote:
> About philosophy of mind: I agree somewhat with your criticism. My own
> approach to AI is founded on years of thinking I did about the philosophy of
> mind, as well as more scientific considerations. I think that Eliezer's
> work could use a little more depth in this area.
I also did years of thinking on philosophy of mind when I was a teenager.
It's a good thing I didn't spend much time embarrassing myself by writing it
up, or at least that I didn't publish. Though I suppose my recent works on
Bayes contain "philosophy of mind"; it just happens to be philosophy of
mind that can be applied to perform quantitative calculations.
> About probability theory: I agree with Eliezer that in principle,
> brain-minds act as if they were obeying an approximation to probability
> theory. Now, whether this is explicit or implicit in the structure and
> dynamics of a given brain-mind is a totally different question. In the
> brain I believe it's implicit, and
> I have made some arguments as to how Hebbian learning in the brain might
> give rise to probabilistic inference on the emergent level. In an AI system
> it may be implicit or explicit depending on the design. In my own Novamente
> AI design it's explicit, and I think Eliezer is proposing to make it
> explicit in his AI design as well.
> As to the foundation for the claim that probabilistic reasoning is
> foundational, Cox's mathematical arguments are pretty convincing. Cox shows
> that any measure of plausibility that obeys certain very sensible axioms
> *must* be probability: for a discussion see e.g.
> I don't fully understand the fuss about "Bayesian reasoning" -- to me, Bayes
> rule is just one among many useful mathematical rules derivable from the
> axioms of elementary probability theory, which IMO are the correct axioms to
> use for making intelligent judgments in the face of uncertainty.
"Bayesian reasoning" nowadays appears to be used for reasoning that obeys
Cox's laws in general, including the inverse probability theorem a.k.a.
Bayes's Rule. The distinction is that Bayesian statistical techniques tend
to be theorems, relating posterior probabilities to explicitly stated
assumptions about prior probabilities and likelihood distributions, rather
than ad hoc methods like "gee, let's calculate the squared error, cuz like
it's fun". A Bayesian would see that trying to minimize the squared error
presumes a uniform prior and a Gaussian error distribution, or some similar
assumption that ought to be stated explicitly. At least that is the
tradition of Saint Jaynes.
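The inverse probability theorem is easy to illustrate numerically. A
minimal sketch in Python, with hypothetical numbers for a diagnostic test
(the 1% base rate, 99% sensitivity, and 5% false-positive rate are made up
for illustration):

```python
# Bayes's Rule: P(H|E) = P(E|H) * P(H) / P(E).
# Hypothetical diagnostic test for a condition with a 1% base rate.
p_h = 0.01             # prior: P(condition)
p_e_given_h = 0.99     # sensitivity: P(positive | condition)
p_e_given_not_h = 0.05 # false-positive rate: P(positive | no condition)

# Total probability of a positive result.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior probability of the condition given a positive result.
p_h_given_e = p_e_given_h * p_h / p_e
print(p_h_given_e)
```

Even with a highly sensitive test, the posterior stays modest because the
low prior dominates: exactly the kind of explicit prior-plus-likelihood
bookkeeping described above.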
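The squared-error point can be checked directly: under a uniform prior and
Gaussian noise of known width, the maximum-a-posteriori estimate coincides
with the least-squares estimate. A sketch with simulated data (the true
value 3.0, the noise scale 1.0, and the grid are assumptions for the demo):

```python
import numpy as np

# Hypothetical data: noisy observations of an unknown constant mu.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=50)

# Candidate values of mu on a grid with 0.01 spacing.
mu_grid = np.linspace(0.0, 6.0, 601)

# Gaussian log-likelihood (known sigma = 1) with a uniform prior:
# log posterior = const - 0.5 * sum((x - mu)^2)
log_post = np.array([-0.5 * np.sum((data - mu) ** 2) for mu in mu_grid])
mu_map = mu_grid[np.argmax(log_post)]

# Least-squares estimate: minimizing sum((x - mu)^2) gives the sample mean.
mu_lsq = data.mean()

# The two agree up to the grid resolution.
print(mu_map, mu_lsq)
```

Minimizing squared error and maximizing the Gaussian-uniform posterior pick
out the same estimate; the "assumption that ought to be stated explicitly"
is the Gaussian noise model and flat prior baked into the log posterior.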
--
Eliezer S. Yudkowsky                        http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT