Re: [sl4] Comparative Advantage Doesn't Ensure Survival

From: Nick Tarleton
Date: Mon Dec 01 2008 - 14:58:28 MST

On Mon, Dec 1, 2008 at 4:24 PM, Charles Hixson wrote:
> I still think I understand what you are saying, and yes, it would be
> economically advantageous to kill off (or at least fail to support) people.
> But this only dominates if economics is the most significant motive.

Um, again:

"...We then show that self-improving systems will be driven to clarify their
goals and represent them as economic utility functions. They will also
strive for their actions to approximate rational economic behavior...."

> I may not believe that "Friendly AI" is actually possible, but I do believe
> that an AI with a goal-defined morality is. And economics would not
> dominate, if I were the designer. It would be important, because it must
> be, but other things would be more important. And one of those would be not
> acting in ways that were more detrimental to human survival than the
> majority of humans would act.

> (Note that there are still a multitude of dogs and cats around, which
> economic determinism would also have consigned to be discarded.)

People value dogs and cats. Seen any dodos lately?

> But it's not clear that an AI would be designed to so depress it's
> inherent morality.

Its *what*?!


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT