From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sat Sep 27 2008 - 16:04:24 MDT
--- On Sat, 9/27/08, Stuart Armstrong <dragondreaming@googlemail.com> wrote:
> > Are you opposed to replacing the human race with
> > superhuman AI whose ethical system approves of this
> > replacement?
>
> Yes.
Then what is your alternative to banning AI, or delaying it until we solve the friendliness problem?
> > And so we seek the optimal mental state of maximum
> > utility and ultimate happiness, where any thought or
> > perception would be unpleasant because it would result in a
> > different mental state. How is that different from death?
>
> What reason do you have for believing that a stable, fixed optimum
> exists (even if the universe were taken to be unchanging)?
> Much simpler systems than human utility and happiness lack
> stable optima.
Because rational, goal-seeking agents have scalar utility functions, and a scalar utility function over the set of possible mental states has a maximum, so an optimal state exists.
-- Matt Mahoney, matmahoney@yahoo.com