From: Tim Freeman (tim@fungible.com)
Date: Thu Nov 22 2007 - 14:39:48 MST
From: "Wei Dai" <weidai@weidai.com>

>Actually I am not a big fan of the Speed prior. See
>http://groups.google.com/group/everything-list/browse_frm/thread/411eedecc7af80d8/d818a1f516f5a368
>for a discussion between Juergen Schmidhuber and myself about it.

I wanted my algorithm to be a decision procedure for do-what-we-want.
There is an error bound, in the sense that it would keep churning
until the total a-priori probability of the ignored possibilities is
less than the error bound. The prior used for a-priori probability
was the speed prior. The only prior I understand well that dominates
it is the universal prior, which doesn't penalize programs for running
too long. If I had used the universal prior instead of the speed
prior, then my algorithm wouldn't have been an algorithm because it
would sometimes run forever. (Another alternative is to impose an
arbitrary computation time bound. I did that in an earlier draft and
eventually realized that I didn't have an error bound any more, which
is no good.)
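
To make the contrast concrete, here is roughly how I think of the two
priors, writing l(p) for the length of program p and t(p) for the time
it takes to produce the observations (a sketch only; Schmidhuber's
actual definition of the speed prior, via his FAST algorithm, is more
careful than this):

   M(x)  =  \sum_{p : U(p) = x...}  2^{-l(p)}
   S(x)  ~  \sum_{p : U(p) = x...}  2^{-l(p)} / t(p)

The time penalty in S is what makes termination possible: a program
that hasn't produced the observations after t steps can contribute at
most roughly 2^{-l(p)}/t, so slow candidates can eventually be written
off while still bounding the probability mass that gets ignored. Under
M there is no such bound, so you can never safely give up on a
long-running program.
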
Do you know of a reasonable prior I could have used that dominates the
speed prior without breaking the algorithm? The essential
requirements are that, for any explanation, we can compute answers to
the questions:
* Does this explanation match the observed past and make predictions
about the future?
* What predictions does it make about the future?
* What is the a-priori probability of this explanation?
We also have to be able to enumerate explanations in an order where the
total possible a-priori probability of the not-yet-enumerated
explanations eventually drops below any given positive error bound.
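
Here is a minimal sketch of the kind of loop those requirements make
possible. The names are made up for illustration: explanation.matches
and explanation.predict stand in for the first two questions above,
and each explanation is assumed to arrive with its a-priori
probability and a bound on the total a-priori probability of
everything not yet enumerated.

   def predict_with_error_bound(explanations, observations, error_bound):
       # explanations: iterator of (explanation, prior_prob, remaining_mass)
       # tuples, where remaining_mass bounds the total a-priori
       # probability of everything not yet yielded and eventually drops
       # below any positive error_bound.
       weighted_predictions = []
       for explanation, prior_prob, remaining_mass in explanations:
           if explanation.matches(observations):
               weighted_predictions.append(
                   (prior_prob, explanation.predict(observations)))
           if remaining_mass < error_bound:
               # Everything we never looked at jointly has a-priori
               # probability below the error bound, so we can stop.
               break
       return weighted_predictions
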
>First suppose I don't know that the AI exists. If I reach for an apple in a
>store, that is a good indication that I consider the benefit of owning the
>apple higher than the cost of the apple. If the AI observes me reaching for
>an apple without being aware of its existence, it can reasonably deduce that
>fact about my utility function, and pay for the apple for me out of its own
>pocket. But what happens if I do know that the AI exists? In that case I
>might reach for the apple even if the benefit of the apple to me is quite
>small, because I think the AI will pay for me. So then how can the AI figure
>out how much I really value the apple?

Good example. If the AI knows you expect the AI to buy the apple if
you reach for the apple, then all the AI learns from your reaching is
that the value of the apple to you exceeds the effort required to
reach. So the AI can enumerate all the utility functions consistent with
that observation and compute many different possible utilities for you
getting the apple. Because there's little information about you and
apples going into the procedure, there will be a large range of
possible utilities coming out. The AI might want to periodically
temporarily stop helping you so it can get more information about your
utility function and make better decisions about how to prioritize.
The same issue arises when raising kids -- if you give them too much
then everybody involved loses all sense of what's important. It's an
issue inherent in helping people, not something specific to this
algorithm.
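
Going back to the apple, here is the point with made-up numbers
(nothing below is part of the actual algorithm; the prices, the effort
figure, and the cap on utility are just for illustration):

   def consistent_utility_range(reached, expects_ai_to_pay,
                                apple_price=0.50, reach_effort=0.01,
                                utility_cap=100.0):
       # Toy model: the range of utilities for getting the apple that
       # is consistent with what the AI observed. Reaching shows your
       # utility exceeds whatever you expected getting the apple to
       # cost you: the price if you expected to pay, only the effort
       # of reaching if you expected the AI to pay.
       threshold = reach_effort if expects_ai_to_pay else apple_price
       if reached:
           return (threshold, utility_cap)  # anywhere above the threshold
       else:
           return (0.0, threshold)          # anywhere below the threshold

With those numbers, seeing you reach when you expect the AI to pay only
narrows your utility for the apple down to somewhere between 0.01 and
100, which is why the AI learns so little from watching you.
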
>Do you think this will actually result in a *fair* comparison of
>interpersonal utilities? If so why?

How do you define "fair" here?
Assuming there is no real definition of "fair":
The main issue here seems to be that I put a plausible algorithm for
"do-what-we-want" on the table, and we don't have any other
specification of "do-what-we-want", so there's no way to judge whether
the algorithm is any good. I can see no approaches to solving that
problem other than implementing and running some practical
approximation to the algorithm. This seems unsafe, but less unsafe
than a bunch of other AGI projects presently in progress that don't
have a model of friendliness. I would welcome any ideas.

If you're worried about fairness, keep in mind that unless the AI is
unfairly biased toward its creator, the creator seems likely to be
murdered. The AI would have to have a lot of respect for its creator
so it doesn't murder the creator itself, and it would have to have a
lot of compassion for its creator if it is going to defend its creator
against murder attempts by others. Back-of-the-envelope estimates are
at http://www.fungible.com/respect/paper.html#murder-creator and
http://www.fungible.com/respect/paper.html#defending-creator.

>What about my desire for greater support of classical music, versus my
>neighbor's desire for more research into mind-altering drugs? It's not
>always so clear...

Yes, sometimes the AI will decide arbitrarily because the situation
really is ambiguous. If it gets plausible answers to the important
questions, I'll be satisfied with it. People really are dying from
many different things, the world is burning in places, etc., so there
are lots of obvious conclusions to draw about interpersonal comparison
of utilities.

>There is actually an infinite number of algorithms that can be used,
>and choosing among them is the real problem.

I agree that there are infinitely many algorithms to choose from, but I
don't see that as a problem. Life seems to require arbitrary choices.
All of the algorithmic priors I've run into depend on measuring the
complexity of something by counting the bits in an encoded
representation of an algorithm. There are infinitely many ways to do
the encoding, but people don't seem to mind it too much. If you're
looking for indefensible arbitrary choices, the choice of what
language to use for knowledge representation seems less defensible
than the algorithm for interpersonal utility comparison we're talking
about here.
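
(The usual reason nobody minds the choice of encoding: by the
invariance theorem, description lengths relative to any two universal
machines U and V differ by at most a constant,

   | K_U(x) - K_V(x) |  <=  c_{U,V}   for all x,

so switching encodings changes the measured complexity of any
explanation by at most a fixed number of bits. The choice is
arbitrary, but only boundedly so.)
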
-- Tim Freeman http://www.fungible.com tim@fungible.com