Re: [sl4] Comparative Advantage Doesn't Ensure Survival

From: Nick Tarleton (nickptar@gmail.com)
Date: Mon Dec 01 2008 - 14:58:28 MST


On Mon, Dec 1, 2008 at 4:24 PM, Charles Hixson
<charleshixsn@earthlink.net> wrote:
>
> I still think I understand what you are saying, and yes, it would be
> economically advantageous to kill off (or at least fail to support) people.
> But this only dominates if economics is the most significant motive.

Um, again:
http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

"...We then show that self-improving systems will be driven to clarify their
goals and represent them as economic utility functions. They will also
strive for their actions to approximate rational economic behavior...."
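
To be concrete (a toy sketch of my own, not anything from the paper):
"rational economic behavior" is a claim about the *structure* of the
decision procedure, not about what the agent values. The same
expected-utility-maximizing step serves any utility function, including
one that weights human survival heavily. The action names, probabilities,
and payoffs below are made up purely for illustration:

    from typing import Callable, Dict

    Outcome = str

    def expected_utility(outcomes: Dict[Outcome, float],
                         utility: Callable[[Outcome], float]) -> float:
        # Probability-weighted sum of utilities over an action's outcomes.
        return sum(p * utility(o) for o, p in outcomes.items())

    def choose(actions: Dict[str, Dict[Outcome, float]],
               utility: Callable[[Outcome], float]) -> str:
        # The "economic" step: pick the action with the highest
        # expected utility.
        return max(actions, key=lambda a: expected_utility(actions[a], utility))

    # Hypothetical numbers: one decision procedure, two value systems.
    actions = {
        "expand_economy": {"humans_thrive": 0.2, "humans_displaced": 0.8},
        "support_humans": {"humans_thrive": 0.9, "humans_displaced": 0.1},
    }

    profit_only   = lambda o: {"humans_thrive": 1.0, "humans_displaced": 5.0}[o]
    values_humans = lambda o: {"humans_thrive": 5.0, "humans_displaced": -100.0}[o]

    print(choose(actions, profit_only))    # -> expand_economy
    print(choose(actions, values_humans))  # -> support_humans

Same maximization either way; only the utility function's content differs.
So "economics dominating" isn't a separate motive you can dial down, it's
just what pursuing *any* goals coherently looks like.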

> I may not believe that "Friendly AI" is actually possible, but I do believe
> that an AI with a goal-defined morality is. And economics would not
> dominate, if I were the designer. It would be important, because it must
> be, but other things would be more important. And one of those would be not
> acting in ways that were more detrimental to human survival than the
> majority of humans would act.

http://www.overcomingbias.com/2007/11/complex-wishes.html
http://www.overcomingbias.com/2007/12/fake-utility-fu.html

> (Note that there are still a multitude of dogs and cats around, which
> economic determinism would also have consigned to be discarded.)

People value dogs and cats. Seen any dodos lately?

> But it's not clear that an AI would be designed to so depress its
> inherent morality.

Its *what*?!
http://www.overcomingbias.com/2008/08/pebblesorting-p.html

npt


