From: Mikko Särelä (firstname.lastname@example.org)
Date: Wed May 12 2004 - 00:45:31 MDT
On Wed, 12 May 2004, Keith Henson wrote:
> At 09:43 AM 10/05/04 +0300, Mikko Särelä wrote:
> >Still, one should remember that Stanford was just one experiment with one
> >kind of population. More tests of this kind would be required to find out
> You should read the URL http://www.prisonexp.org/ It was called off after
> only six days of the two-week run time. The results were such that for
> ethical reasons it could never be done again, any more than Stanley
> Milgram's experiments could be duplicated.
Oh, I've read it, and I know that it would probably be unethical to
repeat the experiment. That does _not_ invalidate the points I made.
> >if the results hold for all kinds of population schemes. In addition,
> >this does not take into account the possibility that the game-theoretic
> >structure of a prison might create incentives toward such behavior -
> >which, if true, would undermine Keith's thesis.
> Possibly, but there is one heck of a lot of evidence that humans can be
> and often are vicious to captives. My efforts are directed to
> understanding why.
That is a noble goal and something very good and useful to understand. I
did not say that your theory cannot be true; it can. But if you truly want
to understand what happens in such circumstances, you also need to look at
the incentive structures in those situations and at all the cases in which
humans are _not_ vicious to captives (which also happens).
> >It also does not take into account the possibility that these things
> >arise from the social ideas in our culture (combined with the incentive
> Possible of course. Not very likely if the origin is in the stone age.
Are the origins truly in the stone age? If they are, how can you be sure
they are not part of our memetic baggage rather than our genetic baggage?
After all, stone age people were already capable of learning things and of
transmitting what they learned to their children. Ideas are created and
spread far faster than genetic changes, so any such behavioral model that
arose in the stone age would far more likely be memetic than genetic.
Any genetic baggage should then come from some pre-human period rather
than from the human stone age.
> >This of course does not mean that we should not be careful when
> >creating an AI, be it uploaded person or a creation. It does not mean
> >that we should not look into the incentive structures that we create
> >for the super AI to come.
> The problem is that other psychological modes, the ones involved in war,
> suppress rational thinking in people. An irrational AI is not something
> to think about before bedtime. :-)
I'm not yet convinced that people are necessarily taken over by
irrational thinking modes in such situations. I was speaking of incentive
structures, and those apply to an AI as well. How they affect the
behavior of an AI depends, of course, on what kind of decision-making
system the AI uses, but they still do affect it. And if they don't, the
AI will not be good at making things happen or at living its life
according to its own
--
Mikko Särelä
Emperor Bonaparte: "Where does God fit into your system?"
Pierre-Simon Laplace: "Sire, I have no need for that hypothesis."
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT