Re: Morality simulator

From: Stefan Pernar (stefan.pernar@gmail.com)
Date: Thu Nov 15 2007 - 05:09:33 MST


On Nov 14, 2007 11:33 PM, Joshua Fox <joshua@joshuafox.com> wrote:

> Bill,
> >When it comes to explaining the evolution of human cooperation,
> > researchers have traditionally looked to the iterated Prisoner's
> > Dilemma (IPD) game as the paradigm
>
> Yes, IPD-type simulations are similar to one of the sorts of "morality
> simulator" I'm thinking about. But the "morality simulator" would
> focus on utility functions which include morality, whatever that means
> -- perhaps the assigning of value to others' welfare in addition to
> one's own.
>
> Towards the end of Steve Omohundro's recent Stanford talk
> (
> http://www.intelligence.org/blog/2007/11/09/steve-omohundro-at-stanford-october-24-2007/
> )
> he mentioned the need for a mathematics of values. This is also
> related to the "morality simulator".
>
> He gave the example of the "Trolley Problem" -- a moral
> thought-experiment which could also be developed as a simple computer
> simulation. (Near-trivial in this case, and simply allowing the user
> to explicitly play with the utility functions rather than using
> implicit utility functions bundled into their intuition.)
>
> Joshua
>

I have been toying with such an idea lately, and the preliminary results are
as follows:

Given the rationally unobjectionable utility function
<http://www.jame5.com/?p=45> of 'ensure continued co-existence', one must
assume it to be the implicit guiding principle, i.e. utility function, of
every human being. But who is running around chanting 'Must. Ensure.
Continued. Co-existence.'? Not many. It follows that the implicit utility
function Fi(i) generally diverges from the explicit utility function Fe(i) in
humans, and that those whose Fe(i) best approximates Fi(i) have the best
chance of ensuring continued co-existence.

Fe(i) is best understood as an evolved belief <http://www.jame5.com/?p=40>
about what should guide an individual's actions, while Fi(i) is what
rationally should guide an individual's actions.
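
To make the distinction concrete, here is a rough Python sketch of how I
picture Fi(i) and Fe(i); the choice set, the scores and the divergence
measure are placeholders I made up for illustration, not a finished
formalism:

# Fi(i): the contribution of a choice to continued co-existence, the same for
# every agent. Fe(i): whatever the agent's evolved belief happens to score
# the choice at. Divergence is the mean absolute gap between the two.

COEXISTENCE = {"cooperate": 1.0, "defect": 0.2}    # stands in for Fi(i)

def F_i(choice):
    return COEXISTENCE[choice]

def F_e(agent, choice):
    return agent[choice]                            # agent-specific Fe(i)

def divergence(agent):
    return sum(abs(F_e(agent, c) - F_i(c)) for c in COEXISTENCE) / len(COEXISTENCE)

well_aligned = {"cooperate": 0.9, "defect": 0.3}
misaligned   = {"cooperate": 0.1, "defect": 0.9}

print(divergence(well_aligned))   # 0.1 -> Fe(i) close to Fi(i)
print(divergence(misaligned))     # 0.8 -> Fe(i) far from Fi(i)

On this picture, the claim above is simply that agents with a small
divergence are the ones with the best chance of ensuring continued
co-existence.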

Not long ago Eliezer proposed two philosophers
<http://www.overcomingbias.com/2007/11/fake-morality.html> with the
following statements:

Philosopher 1: "You should be selfish, because when people set out to
improve society, they meddle in their neighbors' affairs and pass laws and
seize control and make everyone unhappy. Take whichever job that pays the
most money: the reason the job pays more is that the efficient market thinks
it produces more value than its alternatives. Take a job that pays less, and
you're second-guessing what the market thinks will benefit society most."

Philosopher 2: "You should be altruistic, because the world is an iterated
Prisoner's Dilemma, and the strategy that fares best is Tit for Tat with
initial cooperation. People don't *like* jerks. Nice guys really do finish
first. Studies show that people who contribute to society and have a sense
of meaning in their lives, are happier than people who don't; being selfish
will only make you unhappy in the long run."

Philosopher 1 is, in effect, promoting altruism on the basis of selfishness;
Philosopher 2 is, in effect, promoting selfishness on the basis of altruism.

It is a contradiction, a paradox. But only in thought, not in reality. What
is actually taking place is that both philosophers have intuitively realized
part of Fi(i) and are merely rationalizing differently as to why they should
change their respective Fe(i).

The first one does so by wrongly applying the term selfishness, based on the
fallacy that a higher-paid job contributes only to his personal continued
existence by giving him more resources; in reality it contributes to ensuring
continued co-existence, because he is taking the job the market considers
most beneficial to society.

The second one does so by wrongly applying the term altruistic, based on the
fallacy that his recommendations are detrimental to his personal continued
existence through losing resources by being Mr. Nice Guy; in reality they
contribute to ensuring continued co-existence, since they benefit not only
him but the people around him as well.

The conclusion, then, is that the intuitive concepts of altruism and
selfishness are of little use.

An altruist giving up resources in a way that reduces his own continued
existence would be acting irrationally against the universal utility
function, and would thus be detrimental to all other agents, not only to
himself.

An egoist acting in a truly selfish manner would use resources in a way that
is sub-optimal for maximizing the universal utility function, and would thus
be detrimental to himself as well as to all other agents.

It follows that in reality there is neither altruistic nor egoistic behavior
- just irrational and rational behavior.
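
A toy numeric illustration of the last two points, with made-up numbers:
model continued co-existence as the product of everyone's survival chance,
and survival as a concave function of the resources an agent holds. Then
both the self-sacrificing altruist and the hoarding egoist do worse than a
balanced allocation:

def survival(r):
    # concave: additional resources help less and less
    return r / (r + 1.0)

def coexistence(allocation):
    # 'continued co-existence' as the product of everyone's survival chance
    u = 1.0
    for r in allocation:
        u *= survival(r)
    return u

# three ways of splitting the same 9 units of resources among three agents
balanced = [3.0, 3.0, 3.0]     # Fe(i) tracking Fi(i)
altruist = [0.5, 4.25, 4.25]   # agent 0 gives nearly everything away
egoist   = [8.0, 0.5, 0.5]     # agent 0 hoards nearly everything

for name, alloc in [("balanced", balanced), ("altruist", altruist), ("egoist", egoist)]:
    print(name, round(coexistence(alloc), 3))

With these particular numbers the balanced split scores about 0.42, the
self-sacrificing altruist about 0.22 and the hoarder about 0.10; the relevant
difference is between rational and irrational use of resources, not between
altruism and egoism.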

---
Concrete specs for an evolutionary simulation to follow. I predict that the
agent whose Fe(i) equals Fi(i) will do best.
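
Purely as my own first guess at what such a simulation could look like (the
IPD payoffs, the single weight w standing in for how closely Fe(i) tracks
Fi(i), and the selection rule are all assumptions on my part, not the
promised specs):

import random

# standard one-shot Prisoner's Dilemma payoffs for the row player
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def choose(w, opponent_last):
    # Pick the move maximizing Fe = (1 - w) * own payoff + w * joint payoff,
    # assuming the opponent repeats its last move. w = 1 means Fe(i) = Fi(i)
    # (only co-existence counts); w = 0 means only one's own payoff counts.
    best, best_u = None, None
    for move in ("C", "D"):
        own = PAYOFF[(move, opponent_last)]
        joint = own + PAYOFF[(opponent_last, move)]
        u = (1 - w) * own + w * joint
        if best_u is None or u > best_u:
            best, best_u = move, u
    return best

def play(w1, w2, rounds=10):
    # iterated PD between two agents; returns each agent's accumulated payoff
    last1, last2 = "C", "C"
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = choose(w1, last2), choose(w2, last1)
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        last1, last2 = m1, m2
    return s1, s2

def generation(population):
    # random pairing, scoring, truncation selection, mutated offspring
    random.shuffle(population)
    scored = []
    for i in range(0, len(population) - 1, 2):
        w1, w2 = population[i], population[i + 1]
        s1, s2 = play(w1, w2)
        scored += [(s1, w1), (s2, w2)]
    scored.sort(reverse=True)
    survivors = [w for _, w in scored[: len(scored) // 2]]
    children = [min(1.0, max(0.0, w + random.gauss(0, 0.05))) for w in survivors]
    return survivors + children

population = [random.random() for _ in range(100)]
for _ in range(200):
    population = generation(population)
print("mean w after selection:", sum(population) / len(population))

If the prediction holds, the mean w should drift towards 1 under selection;
if it does not, this kind of harness at least makes the failure visible.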
-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar

