From: Vladimir Nesov (robotact@gmail.com)
Date: Wed Apr 09 2008 - 05:24:48 MDT
On Tue, Apr 8, 2008 at 4:33 PM, Nick Tarleton <nickptar@gmail.com> wrote:
> >
> > The problem is that the 3^^^3 payoff is part of a specific miracle
> > hypothesis. The prior probability of magic needs to be chosen so that
> > you'll be able to move it up to near 1 if it actually happens. Since a
> > 3^^^3 payoff will take a really long time to verify, you are allowed
> > to be equally doubtful about the ability of this specific magic to
> > deliver that payoff, and so you can set an equally low prior
> > probability on it.
>
> I'm afraid I don't follow. Probabilities of outcomes aren't discounted
> by utility, and this seems to beg the question in assuming that it
> will actually take so long to verify that the prior is on the order of
> 1/3^^^^3.
>
Sorry for the confusing comment. I've now read the comment thread on the
Pascal's mugging post, and can state my argument concisely: I believe
that a stable goal system requires scope insensitivity, at least in
some cases.
Probabilities can, in a way, be used instead of utility, avoiding the
utility vs. probability dichotomy. That is, if decision-making (action)
is described as a probabilistic process, the probabilities that describe
decision-making can be interpreted as defining utility. Utility isn't
explicitly used in this case. I prefer this view in the context of AI,
because it enables unsupervised learning of utility with the same
flexibility as action and perception, without an additional explicit
mechanism for it.
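To make that concrete, here is a toy Python sketch; the softmax-style
link (utility as log-probability, up to shift and scale) is just one
assumption I'm picking for illustration, not a claim about how this must
be done:

    import math

    # Toy sketch: if the system's policy is a probability distribution
    # over actions, a softmax-style link lets us read an implied utility
    # off each action as its log-probability, up to an arbitrary additive
    # constant and a temperature. No explicit utility function is stored.

    def implied_utilities(action_probs, temperature=1.0):
        """Recover utilities (up to shift and scale) from action probabilities."""
        return {a: temperature * math.log(p) for a, p in action_probs.items()}

    # If the system takes action A 90% of the time and B 10% of the time,
    # A's implied utility exceeds B's by log(9) ~= 2.2 temperature units.
    print(implied_utilities({'A': 0.9, 'B': 0.1}))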
When action is considered as perception seen from another point of
view, it can be said that utilities also follow from the probabilities
used for perception. This only changes which facts the probabilities are
estimated for: roughly speaking, instead of a negative utility attached
to 'people will be hurt', the system estimates the probability of 'I
will try to oppose it', given that 'people will be hurt'.
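In the same toy terms (the predicate names and the log-odds link below
are mine, chosen only to mirror the example):

    import math

    # Instead of storing U('people will be hurt') as a number, the system
    # predicts how strongly it would act against that outcome, and a
    # disutility can be read off that prediction. Any monotone link would
    # do; log-odds is just a convenient choice for the sketch.

    def implied_disutility(p_oppose_given_outcome):
        """Disutility read off P('I will try to oppose it' | outcome)."""
        p = p_oppose_given_outcome
        return math.log(p / (1.0 - p))

    # Predicting opposition with probability 0.99 gives an implied
    # disutility of log(99) ~= 4.6, without any utility ever being
    # attached to the outcome itself.
    print(implied_disutility(0.99))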
If you have an infinite amount of evidence and infinite resources, your
priors can be almost arbitrary, and you'll still approximate the correct
distribution sooner or later (given that they are defined over strings,
so they can't all just be equal to each other). When priors on strings
are selected, some hypotheses get a larger chunk of probability mass,
and some have to make do with tiny initial probabilities. As learning
progresses, presented evidence updates the probabilities, so that some
of the tiny ones can grow. With a limited amount of evidence, it's a
good heuristic to worry more about the probabilities of hypotheses that
are important for utility, so that they are spotted earlier, since some
of these probabilities determine action. If a prior probability is too
low, it can take too long to raise it to a level where it can be acted
upon.
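Some rough arithmetic on that last point (the likelihood ratio and the
priors below are made up for illustration):

    # With independent observations each giving 10:1 likelihood in favour
    # of a hypothesis, posterior odds grow by a factor of 10 per
    # observation, so a prior of 10^-k needs on the order of k such
    # observations just to reach even odds.

    def observations_needed(prior, likelihood_ratio=10.0, threshold_odds=1.0):
        """Count observations until posterior odds exceed the threshold."""
        odds = prior / (1.0 - prior)
        n = 0
        while odds < threshold_odds:
            odds *= likelihood_ratio
            n += 1
        return n

    print(observations_needed(1e-30))   # about 30 observations
    print(observations_needed(1e-100))  # about 100 observations

    # A prior anywhere near 1/3^^^3 could never be raised to an
    # actionable level by any amount of evidence a mugger could present.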
This reformulation makes me skeptical about additivity of utility: if
utility initially comes from the probability of action choice, it's
reasonable that increasing the frequency of positive (action-reinforcing)
experiences will increase the resulting utility to some extent, but
going linearly all the way is too much: there are other causes for
action and other drives. Self-improvement (learning) needs to
extrapolate the current hodgepodge of drives 'simultaneously'. Of
course, if we are talking about an alien utilitarian AI with an unstable
goal system, the conclusion that it can be mugged might well be correct.
But for the kind of system I describe, the mugger will still need to
present lots of evidence in order to change the action probability, even
though the stakes are seemingly insane. This is not a generally valid
argument, since in some cases self-improvement can adjust the system's
probabilities independently of external evidence, trying to better grasp
the intended goal system from the current implementation.
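To illustrate the non-additivity point with a toy example (the logistic
link and the numbers are made up; this shows the shape of the effect,
not a design):

    import math

    # Suppose each positive experience nudges the log-odds of choosing
    # the reinforced action up by a fixed amount. The action probability
    # saturates at 1, so the utility read off that probability flattens
    # out instead of growing linearly with the count of experiences.

    def action_probability(n_experiences, boost_per_experience=0.5):
        """Logistic link: probability of the reinforced action after n nudges."""
        log_odds = -3.0 + boost_per_experience * n_experiences  # start reluctant
        return 1.0 / (1.0 + math.exp(-log_odds))

    for n in (1, 10, 100, 1000):
        print(n, round(action_probability(n), 6))

    # Going from 1 to 10 experiences makes a visible difference; going
    # from 100 to 1000 makes almost none, whereas a linearly additive
    # utility would keep growing by the same amount per experience.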
--
Vladimir Nesov
robotact@gmail.com