Re: Morality simulator

From: BillK
Date: Mon Nov 12 2007 - 03:12:00 MST

On 11/12/07, Joshua Fox wrote:
> Eliezer Yudkowsky wrote:
> > Because unless you narrowly restrict the available options to Tit for
> > Tat like behavior, it's too hard. You can't get simulation of general
> > consequentialist reasoning without general intelligence. Never mind
> > simulations of tribal alliance formation and linguistic political
> > persuasion. This is very advanced stuff, cognitively speaking.
> But surely, any complex system can be simplified in a model. The model
> would not be as good as the real thing, but toy examples often do
> teach something. I am also suggesting that, as an easier first
> stage, we just evaluate the morality of agents rather than driving
> their decisions.

The 'Snowdrift' game is claimed to improve on the 'Tit for Tat' game.


  When it comes to explaining the evolution of human cooperation,
researchers have traditionally looked to the iterated Prisoner's
Dilemma (IPD) game as the paradigm. However, the observed degree of
cooperation among humans is generally higher than predicted by
mathematical models using the IPD, leaving unanswered the question of
why humans cooperate to the extent they do.
A group of researchers from the University of Lausanne in Switzerland
and the University of Edinburgh in the UK suggests that a different
game, called the "iterated Snowdrift game" (ISD), may more
realistically reflect social situations that humans face, compared
with the IPD. In experimental tests, the proportion of cooperative
acts in the ISD game (48%) was significantly higher than in the
IPD (29%).

This difference stems from the higher risk of being exploited in the
IPD: in the ISD, a cooperator who is exploited by a defector loses
less than under mutual defection, so cooperating is less risky.
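The point about exploitation risk comes down to the payoff orderings of the two games. A minimal sketch (the numeric values are illustrative, not taken from the study; the standard orderings are T > R > P > S for the Prisoner's Dilemma and T > R > S > P for Snowdrift):

```python
# payoff[(my_move, their_move)] = my payoff; 'C' = cooperate, 'D' = defect

# Prisoner's Dilemma: the exploited cooperator gets the WORST payoff (S < P)
pd_payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Snowdrift with benefit b=6, cost c=4 (cost shared when both cooperate):
# R = b - c/2 = 4, T = b = 6, S = b - c = 2, P = 0, so S > P
sd_payoff = {('C', 'C'): 4, ('C', 'D'): 2, ('D', 'C'): 6, ('D', 'D'): 0}

# An exploited cooperator ('C' against 'D') fares worse than mutual
# defection in the PD, but better than mutual defection in Snowdrift:
print(pd_payoff[('C', 'D')] < pd_payoff[('D', 'D')])  # True
print(sd_payoff[('C', 'D')] > sd_payoff[('D', 'D')])  # True
```

Because the sucker's payoff still beats mutual defection in Snowdrift, cooperating against a possible defector is a smaller gamble, which is consistent with the higher cooperation rate observed in the ISD experiments.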


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT