Re: [sl4] Our arbitrary preferences (was: A model of RSI)

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Sat Sep 27 2008 - 03:23:50 MDT


> Are you opposed to replacing the human race with superhuman AI whose ethical system approves of this replacement?

Yes.

> And so we seek the optimal mental state of maximum utility and ultimate happiness, where any thought or perception would be unpleasant because it would result in a different mental state. How is that different from death?

What reason do you have for believing that a stable, fixed optimum
exists (even if the universe were taken to be unchanging)? Much
simpler systems than human utility and happiness lack stable optima.
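
A toy illustration, to make that concrete: in rock-paper-scissors
replicator dynamics, the population mix has a unique interior
equilibrium at (1/3, 1/3, 1/3), but it is not asymptotically stable;
trajectories orbit it indefinitely rather than settling. The payoff
matrix, step size, and starting mix in the sketch below are arbitrary
illustrative choices:

    # Toy sketch: replicator dynamics for rock-paper-scissors.
    # The interior equilibrium (1/3, 1/3, 1/3) is not asymptotically
    # stable: the population shares keep cycling instead of settling.
    import numpy as np

    A = np.array([[ 0.0, -1.0,  1.0],   # rock
                  [ 1.0,  0.0, -1.0],   # paper
                  [-1.0,  1.0,  0.0]])  # scissors

    x = np.array([0.5, 0.3, 0.2])  # arbitrary starting mix
    dt = 0.01
    for step in range(200_000):
        fitness = A @ x
        avg = x @ fitness
        x = x + dt * x * (fitness - avg)  # replicator equation (Euler step)
        x = np.clip(x, 1e-12, None)
        x /= x.sum()                      # stay on the probability simplex

    print(x)  # still cycling, nowhere near a fixed (1/3, 1/3, 1/3) optimum

If a three-strategy zero-sum game already refuses to converge to a
stable optimum, I see no grounds for assuming human utility and
happiness would.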

Stably unhappy Stuart


