Re: Maximize the renormalized human utility function!

From: Keith Henson (hkhenson@rogers.com)
Date: Fri Aug 11 2006 - 13:28:54 MDT


At 06:13 PM 8/10/2006 -0700, Jef Allbright wrote:

(In reply to Eliezer)

>Mightn't it be reasonable in such a scenario
>to exert influence by beginning as early as possible to promote the
>amplification of human morality?
>
>On the other hand, if you expect proliferation of a diverse range of
>AI and IA technology leading up to the big FOOM, then again, mightn't
>it be reasonable to exploit this growth of intelligence toward
>development of a framework for increasingly moral social
>decision-making?

I may be wrong about this, but I think what we consider "good" moral
social decision-making depends heavily on how a society's members
perceive their future prospects.

A *long* period of relatively positive prospects allows anti-war and social
justice memes to become dominant. That's more or less the situation in the
western world today.

On the other hand, bleak prospects (or being attacked) cause xenophobic
memes to become dominant. I suspect the slide into negative memes can be
much faster than the rise of memes for social tolerance and social justice.

A set of xenophobic memes dominates in a substantial sector of the world today.

When you think about it, it makes sense for the circle of those you treat
well to depend on future prospects. Prospects bad enough, and it shrinks
down to immediate family. Good enough, and it expands to include all the
peoples your group might possibly obtain wives from.

All in the interest of the genes.

Keith Henson
