From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Oct 22 2004 - 06:55:28 MDT
Wei Dai wrote:
> On Thu, Oct 21, 2004 at 04:10:57AM -0400, Eliezer Yudkowsky wrote:
>
>>One way that might succeed in salvaging the Godel-Machine formalism would
>>be to isolate the rewriting of the theorem verifier from the rewriting of
>>the rest of the Godel Machine. [...]
>
> I don't think this is necessary. Everything you describe can be
> accomplished by carefully choosing the axioms built into the initial
> theorem verifier, without changing the formalism itself. The utility
> function would be built into the axioms. The verifier will only accept a
> proposed rewrite if it comes with a proof that expected utility is
> increased, and the axioms could be chosen so that the only way to prove
> that a rewrite of the verifier increases expected utility is to prove that
> the new verifier accepts exactly the same set of proofs that the old
> verifier does, but faster.
How do you do this? Expected utility runs on chained conditional
probabilities: p(B|A), p(C|AB), p(D|ABC). I can imagine proving theorems
about an axiomatically specified environment's chained probabilities,
theorems which avoid the need to evaluate each and every joint outcome
individually - deterministic theorems, akin to proving that the outcome of
alpha-beta search is exactly the same as that of full minimax search.
Proving the *probable* utility of a computational shortcut - i.e., proving
something useful about a shortcut for guessing expected utility that may
work some of the time but not all of the time - strikes me as a whole new
class of problem, distinct from proving deterministic theorems. If anyone
has any literature on it, I'd love to see it. In effect, you would be
taking axioms about expected utility and
probability and the environment and formally proving that something was
*probably* true. As far as I know, that's a new idea, which Schmidhuber
doesn't explicitly point out as new. Of course, it may not be a new idea.
The problem with this planet is that you can't read even a millionth of
the relevant literature, which is why I'm often annoyed when someone says
"Science doesn't know X". How can any one human being know what science
does or does not know?
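To make the earlier distinction concrete - deterministic equivalence versus
a shortcut that is only probably right - here is a throwaway sketch. The
environment, utility function, and pruning threshold are all my own toy
inventions, nothing from Schmidhuber's paper: exact expected utility chains
the conditional probabilities over every joint outcome, while a pruning
shortcut that drops low-probability branches may or may not agree with it,
depending on the threshold.

import itertools

# Hypothetical three-stage toy environment: each stage yields outcome 0 or 1.
def p_next(history):
    # Conditional distribution over the next outcome given the history so far.
    return {0: 0.5, 1: 0.5}

def utility(history):
    # Toy utility of a complete history.
    return sum(history)

def exact_expected_utility(depth=3):
    # Chains p(B|A), p(C|A,B), ... over every joint outcome - the full-minimax
    # analogue: deterministic, exhaustive, exact.
    total = 0.0
    for outcomes in itertools.product((0, 1), repeat=depth):
        prob, history = 1.0, ()
        for o in outcomes:
            prob *= p_next(history)[o]
            history += (o,)
        total += prob * utility(history)
    return total

def pruned_expected_utility(depth=3, eps=0.1):
    # Shortcut: abandon any branch whose accumulated probability falls below eps.
    # Agrees with the exact sum for some (environment, eps) pairs but not others.
    total, stack = 0.0, [((), 1.0)]
    while stack:
        history, prob = stack.pop()
        if prob < eps:
            continue  # the lossy step
        if len(history) == depth:
            total += prob * utility(history)
            continue
        for o, q in p_next(history).items():
            stack.append((history + (o,), prob * q))
    return total

print(exact_expected_utility())              # 1.5
print(pruned_expected_utility(eps=0.1))      # 1.5 - shortcut happens to be exact
print(pruned_expected_utility(eps=0.2))      # 0.0 - shortcut silently fails

Proving something of the form "for this class of environments and this eps,
the pruned estimate is within delta of the exact value with probability at
least 1 - p" is the kind of theorem the verifier would need, and it is a
different animal from proving that two functions always return the same
number.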
Still, I've never run across a formal system for expected utility. I know
that expected utility started with the von Neumann-Morgenstern axioms, but
I'm not sure whether expected utility as a mathematical field has ever been
systematically formalized the way set theory and geometry have been. (Vide
the Metamath project, Mizar, etc.) Proving abstract facts about expected
utility, or evaluating expected utility over abstract world-models, is
another problem. Proving reflective facts about utility, i.e., that a
theorem prover which rewrites game-playing functions has high expected utility,
is another new class of problem, and one into which it seems likely that
Godelian considerations would enter. I haven't read anything on abstract
expected utility or reflective expected utility - as far as I know they're
unsolved, perhaps even unconsidered problems.
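(For reference, the von Neumann-Morgenstern result I have in mind, in LaTeX
shorthand - this is just the standard textbook statement, nothing specific
to the Godel Machine:

% vNM representation theorem: if a preference relation \succeq over lotteries
% satisfies completeness, transitivity, continuity, and independence, then
% there exists a utility function u such that
L \succeq M
\iff
\sum_i p_i \, u(x_i) \;\ge\; \sum_j q_j \, u(y_j)
% where lottery L assigns probability p_i to outcome x_i, M assigns q_j to y_j,
% and u is unique up to positive affine transformation.

I haven't seen that statement, or anything built on top of it, carried
through a machine-checkable system of the Metamath or Mizar sort - though it
may well exist somewhere in the literature I haven't read.)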
We're not talking about a trivial amendment of the axioms, and I would like
to know which axioms will do the trick.
> Such a system will likely miss a lot of opportunities for improvement
> (i.e., those that can't be proved using the built-in axioms). I think
> Schmidhuber acknowledged this in his paper. The axioms might also have
> bugs leading to unintended consequences, which the AI would not be able to
> fix. For example the utility function could be defined carelessly. (And
> for a general AI I don't see how it could be defined safely.)
That's the entire problem of Friendly AI in a nutshell, isn't it?
(Though subtract the assumption that the FAI must use a utility function.
CV - collective volition - needs to approximate a general decision system,
something-that-outputs-decisions, without assuming that the decision system
obeys expected utility axioms.)
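To gesture at that distinction in code - purely my own framing, nothing
official about CV: a general decision system is just anything that maps
histories to decisions, while an expected utility maximizer is the narrow
special case that factors through explicit probabilities and a utility
function.

from typing import Callable, Sequence, TypeVar

Obs = TypeVar("Obs")
Act = TypeVar("Act")

# Something-that-outputs-decisions: no structural assumptions whatsoever.
DecisionSystem = Callable[[Sequence[Obs]], Act]

def eu_maximizer(actions, p_outcome, utility):
    # The special case one must not presuppose:
    # argmax over sum_o p(outcome | history, action) * u(outcome).
    def decide(history):
        return max(actions,
                   key=lambda a: sum(p * utility(o)
                                     for o, p in p_outcome(history, a).items()))
    return decide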
> This is a
> problem inherited from AIXI which Schmidhuber doesn't seem to address.
Yes, but then the FAI problem is generally overlooked or treated as a
casual afterthought, and journal referees aren't likely to point this out
as a problem. Hardly fair to Schmidhuber to zap him on that one, unless
he's planning to actually build a Godel Machine.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence