From: Matt Mahoney (firstname.lastname@example.org)
Date: Sat Sep 27 2008 - 16:04:24 MDT
--- On Sat, 9/27/08, Stuart Armstrong <email@example.com> wrote:
> > Are you opposed to replacing the human race with
> > superhuman AI whose ethical system approves of this
> > replacement?
Then what is your alternative to banning AI, or delaying it until we solve the friendliness problem?
> > And so we seek the optimal mental state of maximum
> > utility and ultimate happiness, where any thought or
> > perception would be unpleasant because it would result in a
> > different mental state. How is that different from death?
> What reason do you have for believing that a stable, fixed optimum
> exists (even if the universe were taken to be unchanging)?
> Much simpler systems than human utility and happiness lack
> stable optima.
Because rational, goal-seeking agents have scalar utility functions, and a scalar function over a fixed set of mental states has at least one maximizing state.
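The claim can be illustrated with a minimal sketch (the states and utility values below are illustrative assumptions, not anything from this thread): any scalar utility function over a finite set of states has an argmax, so an agent that maximizes it has a fixed optimum to settle into.

```python
# Minimal sketch: a scalar utility over a finite set of mental states
# always has a maximizing state (an argmax). The states and numbers
# here are made up for illustration only.

states = {
    "curiosity": 0.7,
    "contentment": 0.9,
    "bliss": 1.0,
    "boredom": 0.2,
}

def optimal_state(utility):
    """Return the state with maximum scalar utility."""
    return max(utility, key=utility.get)

print(optimal_state(states))  # prints "bliss"
```

Whether such a maximum is *stable* once the agent's own perceptions change its state is, of course, exactly the point under dispute above.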
-- Matt Mahoney, firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Tue Jun 18 2013 - 04:01:01 MDT