[sl4] Is friendliness computable? (was Re: Why extrapolate?)

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Oct 26 2009 - 08:24:28 MDT

Tim Freeman brought this paper by Peter de Blanc to our attention: http://arxiv.org/abs/0907.5598

If my understanding is correct, it dooms any hope that a computable mind could experience unbounded happiness, where "happiness" is defined as an increase in utility. I argued previously that any finite state machine whose behavior can be modeled by a utility function has a state of maximum utility, in which any thought or perception would be unpleasant because it would result in a different state. It should be obvious that this degenerate state of Utopian bliss, to which all goal-seeking agents aspire (whether they know it or not), is indistinguishable from death.
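The finite-state argument can be sketched in a few lines of Python. This is my own toy construction, not anything from de Blanc's paper: a tiny state machine with a hypothetical utility function, driven by greedy utility maximization. Once the agent reaches the argmax state, every alternative transition would lower utility, so it stays there forever.

```python
# Toy illustration (my own construction): a finite-state agent that always
# moves to the reachable state of highest utility. The maximum-utility state
# is absorbing -- the "degenerate state" described above.

utility = {"s0": 1.0, "s1": 3.0, "s2": 7.0}   # finite state space
transitions = {                                # states reachable in one step
    "s0": ["s0", "s1"],
    "s1": ["s0", "s1", "s2"],
    "s2": ["s1", "s2"],
}

def step(state):
    # Greedy utility maximization: pick the reachable state of highest utility.
    return max(transitions[state], key=utility.get)

state = "s0"
for _ in range(5):
    state = step(state)
print(state)  # -> s2; the agent reaches "s2" and never leaves it
```

Any perception or thought that forced a transition out of "s2" would, by construction, reduce utility.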

One way out would be to continually add memory to the mind so that the number of states is unbounded. De Blanc's paper quashes that approach: utility must be bounded, or else you cannot make choices based on expected utility. He discusses some ways out in section 6, but they all involve alternative utility functions, all of which are bounded.
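The convergence problem can be illustrated with a St. Petersburg-style example (again my own construction, not de Blanc's): give outcome n probability 2^-n but unbounded utility 2^n, and the expected-utility sum diverges, so comparing actions by expected utility becomes meaningless. Capping utility at any bound restores convergence.

```python
# Sketch of why unbounded utility breaks expected-utility reasoning
# (St. Petersburg-style example, my own construction).

def expected_utility(n_terms, bound=None):
    """Partial sum of E[U] where outcome n has probability 2**-n
    and utility 2**n, optionally capped at `bound`."""
    total = 0.0
    for n in range(1, n_terms + 1):
        u = 2.0 ** n
        if bound is not None:
            u = min(u, bound)          # bounded utility
        total += (2.0 ** -n) * u
    return total

print(expected_utility(10))               # -> 10.0: each term contributes 1, no limit
print(expected_utility(1000, bound=8.0))  # stays bounded, approaching 4.0
```

With unbounded utility the partial sums grow without limit, so no finite "expected utility" exists to base a choice on; with any cap, the series converges.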

Perhaps, then, friendliness is not computable, and we should just give up and let evolution do what it's going to do.
 -- Matt Mahoney, matmahoney@yahoo.com

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT