From: Gwern Branwen (gwern0@gmail.com)
Date: Mon Oct 26 2009 - 11:03:26 MDT
On Mon, Oct 26, 2009 at 10:24 AM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> Tim Freeman brought this paper by Peter de Blanc to our attention: http://arxiv.org/abs/0907.5598
>
> If my understanding is correct, it dooms any hope that a computable mind could experience unbounded happiness, where "happiness" is defined as an increase in utility. I argued previously that in any finite state machine whose behavior can be modeled in terms of a utility function, there is a state of maximum utility in which any thought or perception would be unpleasant because it would result in a different state. It should be obvious that this degenerate state of Utopian bliss that all goal-seeking agents aspire to (whether they know it or not) is indistinguishable from death.
>
> A way out would be to continually add memory to your mind so that the number of states is unbounded. De Blanc's paper quashes that approach. Utility must be bounded or else you cannot make choices based on expected utility. De Blanc discusses some ways out in section 6, but these all involve alternative utility functions, all of which are bounded.
>
> Perhaps, then, friendliness is not computable, and we should just give up and let evolution do what it's going to do.
> -- Matt Mahoney, matmahoney@yahoo.com
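The divergence behind that boundedness claim is just the St.
Petersburg game; a toy version, with numbers of my own choosing
rather than de Blanc's construction:

    # Toy St. Petersburg-style divergence (my own numbers): give
    # environment n prior probability 2^-n and utility 2^n. Every term
    # of the expected-utility sum then contributes 1, so the partial
    # sums grow without bound and there is no finite expected utility
    # left to compare actions against.
    def partial_expected_utility(n_terms):
        return sum((2.0 ** -n) * (2.0 ** n) for n in range(1, n_terms + 1))

    print(partial_expected_utility(10))    # 10.0
    print(partial_expected_utility(1000))  # 1000.0; keeps growing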
"Thus we assume that the agent assigns a nonzero probability to any
computable environment function."
I noted this line with interest, given my recent* argument that
assigning nonzero probabilities to each member of an infinite set
leads to nonsensical conclusions. Aren't there an infinite (or at
least unbounded) number of computable environment functions, given a
UTM?
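For concreteness, a toy length-based prior (my own construction, not
the paper's) shows the assumption is at least coherent: all of the
infinitely many programs can get nonzero weight while the total stays
at or below 1, so any particular environment keeps a fixed floor
probability of its own:

    # Toy length-based prior (my own construction): weight each binary
    # program p by 4^-len(p). There are 2^n programs of length n, so
    # the total mass is sum_n 2^n * 4^-n = sum_n 2^-n <= 1, yet each
    # individual program (and each computable environment it encodes)
    # keeps a fixed nonzero weight.
    total = sum((2 ** n) * (4.0 ** -n) for n in range(1, 60))
    print(total)        # ~1.0: a legitimate prior
    print(4.0 ** -5)    # the fixed nonzero weight of any length-5 program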
Given a minimum probability for any environment, that seems to lead
straight to Pascal's Mugging - the program offering a payoff of
1/minimum-probability + 1 is very short...
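The arithmetic is short enough to spell out (floor probability and
payoff are toy numbers of my own):

    # Pascal's Mugging arithmetic (toy numbers of my own): whatever
    # floor probability eps the mugger's scenario is granted, a
    # promised payoff of 1/eps + 1 pushes its expected-utility
    # contribution above 1, outbidding a sure, mundane payoff of 1.
    eps = 1e-12                   # floor probability granted to the scenario
    mugger_payoff = 1 / eps + 1   # the payoff the short program promises
    print(eps * mugger_payoff)    # 1.000000000001 > 1
    print(1.0 * 1.0)              # the sure, ordinary option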
One of the possible ways out seems promising: "We could use a smaller
hypothesis space; perhaps not all computable environments should be
considered." It'd be interesting to know whether AIXI could be mugged;
if it can't, perhaps the issue is expecting a computable AI to do
something that requires uncomputability.
*
http://lesswrong.com/lw/1cv/extreme_risks_when_not_to_use_expected_utility/17ds
-- gwern