**From:** Tennessee Leeuwenburg (*tennessee@tennessee.id.au*)

**Date:** Mon Feb 05 2007 - 23:20:40 MST


Bayesian reasoning seems straightforward enough. Assuming it's the best method of reasoning, perhaps each agent doesn't need tailored reasoning abilities, but rather a tailored ontology.

Cheers,

-T

On 2/6/07, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:

> Mitchell Porter wrote:
> >
> > If you the programmer ('you' being an AI, I assume) already have the
> > concept of probability, and you can prove that a possible program will
> > estimate probabilities more accurately than you do, you should be able
> > to prove that it would provide an increase in utility, to a degree
> > depending on the superiority of its estimates and the structure of
> > your utility function. (A trivial observation, but that's usually where
> > you have to start.)
>
> Mitch, I haven't found that problem to be trivial if one seeks a precise
> demonstration. I say "precise demonstration", rather than "formal
> proof", because formal proof often carries the connotation of
> first-order logic, which is not necessarily what I'm looking for. But a
> line of reasoning that an AI itself carries out will have some exact
> particular representation and this is what I mean by "precise". What
> exactly does it mean for an AI to believe that a program, a collection
> of ones and zeroes, "estimates probabilities" "more accurately" than
> does the AI? And how does the AI use this belief to choose that the
> expected utility of running its program is ordinally greater than the
> expected utility of the AI exerting direct control? For simple cases -
> where the statistical structure of the environment is known, so that you
> could calculate the probabilities yourself given the same sensory
> observations as the program - this can be argued precisely by summing
> over all probable observations. What if you can't do the exact sum?
> How would you make the demonstration precise enough for an AI to walk
> through it, let alone independently discover it?
>
> *Intuitively* the argument is clear enough, I agree.
>
> --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
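For the "simple case" in the quoted message, where the statistical structure of the environment is known, the comparison really can be carried out as a literal sum over observations. The following Python sketch illustrates only that sum; the toy prior, likelihoods, payoffs, and the deliberately crude stand-in for the AI's own estimator are all illustrative assumptions, not anything specified in the thread.

```python
# A minimal sketch of the "simple case": the environment's statistics are
# known, so the expected utility of acting on a program's probability
# estimates versus one's own can be computed exactly by summing over all
# states and observations. All numbers below are illustrative assumptions.

PRIOR = {0: 0.5, 1: 0.5}                      # P(hidden state)
LIKELIHOOD = {                                 # P(observation | state)
    0: {"a": 0.8, "b": 0.2},
    1: {"a": 0.3, "b": 0.7},
}
UTILITY = {                                    # U(state, action)
    (0, "act0"): 1.0, (0, "act1"): 0.0,
    (1, "act0"): 0.0, (1, "act1"): 1.0,
}

def posterior(obs):
    """Exact P(state=1 | obs), computable because the environment is known.
    Stands in for the program that 'estimates probabilities accurately'."""
    joint = {h: PRIOR[h] * LIKELIHOOD[h][obs] for h in PRIOR}
    return joint[1] / (joint[0] + joint[1])

def crude_estimate(obs):
    """Hypothetical stand-in for the AI's own, less accurate estimator:
    it ignores the observation entirely."""
    return 0.5

def best_action(p1):
    """Action maximizing expected utility under the estimate P(state=1)=p1."""
    def eu(action):
        return (1 - p1) * UTILITY[(0, action)] + p1 * UTILITY[(1, action)]
    return max(("act0", "act1"), key=eu)

def expected_utility(estimator):
    """Sum over states h and observations o of P(h) P(o|h) U(h, action),
    where the action is chosen by trusting the given estimator."""
    total = 0.0
    for h in PRIOR:
        for obs in LIKELIHOOD[h]:
            action = best_action(estimator(obs))
            total += PRIOR[h] * LIKELIHOOD[h][obs] * UTILITY[(h, action)]
    return total

print("EU acting on exact posteriors:", expected_utility(posterior))       # 0.75
print("EU acting on crude estimates: ", expected_utility(crude_estimate))  # 0.50
```

With these made-up numbers the exact posterior yields expected utility 0.75 against 0.50 for the crude estimator, which is the precise sense in which delegating to the "more accurate" program wins. The open question in the quoted message is what replaces this enumeration when the sum over observations is intractable or the environment's statistics are unknown.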

