From: Stuart Armstrong (firstname.lastname@example.org)
Date: Sat Sep 27 2008 - 03:23:50 MDT
> Are you opposed to replacing the human race with superhuman AI whose ethical system approves of this replacement?
> And so we seek the optimal mental state of maximum utility and ultimate happiness, where any thought or perception would be unpleasant because it would result in a different mental state. How is that different from death?
What reason do you have for believing that a stable, fixed optimum
exists (even if the universe were taken to be unchanging)? Much
simpler systems than human utility and happiness lack stable optima.
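As a minimal illustration of that last point (my sketch, not from the original post): the logistic map at parameter r = 4 is about as simple as a dynamical system gets, yet its iterates never settle into a stable fixed point.

```python
# Sketch: the logistic map x -> r*x*(1-x) at r = 4 has no stable
# fixed point -- iterates keep jumping around indefinitely, showing
# that even a trivially simple system can lack a stable optimum.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x = 0.2  # arbitrary starting point (an assumption, not special)
trajectory = []
for _ in range(1000):
    x = logistic(x)
    trajectory.append(x)

# Even after a long transient, successive iterates still move a lot:
max_late_jump = max(abs(trajectory[i + 1] - trajectory[i])
                    for i in range(900, 999))
print(max_late_jump)
```

If human happiness had stable optima, a hill-climbing mind could park there; a system like this one never parks.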
Stably unhappy Stuart
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:01:21 MDT