From: Stephen Tattum (S.Tattum@dundee.ac.uk)
Date: Tue May 11 2004 - 07:14:29 MDT
>>> msarela@cc.hut.fi 05/10/04 7:43 AM >>>
On Sun, 9 May 2004, Michael Roy Ames wrote:
> "Keith Henson" wrote:
> [snip]
> And how many of these conditionally switched-on dangerous psychological
> mechanisms do humans have left over from the Pleistocene? And what
> environmental conditions turn them on? Darned if I know.
> [snip]
> In any case, uploading looks far more dangerous than it did a week ago.
>
> ---
>
> Far more dangerous? Perhaps not, because of insights like yours. The more
> we (humans) understand what 'makes us tick', the better prepared we will be
> to handle the dangers. Only yesterday, upon reading about the abuse of
> prisoners, my comment was: "Gee, I would expect some amount of captive abuse
> to be par-for-the-course in Iraq". Well, apparently my intuition (despite
> sounding somewhat cold when stated plainly) has some support - both from
> related experimental evidence at Stanford and now from your evolutionary
> explanation.
>
> Thanks for your explanatory efforts. I sincerely hope your warnings don't
> go unheeded.
Still, one should remember that Stanford was just one experiment with one
kind of population. More tests of this kind would be required to find out
whether the results hold across different kinds of populations. In addition,
this does not take into account the possibility that the game-theoretic
structure of a prison itself creates incentives toward such behavior - which,
if true, would undermine Keith's thesis. It also does not take into account
the possibility that these things arise from the social ideas in our culture
(combined with the incentive structure).
This, of course, does not mean that we should not be careful when creating
an AI, be it an uploaded person or a new creation. Nor does it mean that we
should not look into the incentive structures that we create for the super-AI
to come.
--
Mikko Särelä
Emperor Bonaparte: "Where does God fit into your system?"
Pierre Simon Laplace: "Sire, I have no need for that hypothesis."

I think you're very right, Mikko; there's obviously more to this than we can
sum up in a few e-mails. For instance, my first thoughts were: what kind of
person joins the army? Is it not likely to be someone with traits of
dominance and a propensity, if not a desire, for violence? As for the
Stanford experiment, what do we know about those who took part? Were they
volunteers? Did the guards volunteer to be guards?

As for how this applies to any future AI, I think we have to keep in mind
that we're not necessarily trying to re-create the organisation of the human
brain in a piece of software; it is just that the human brain and the human
condition are all we have as a model of intelligence. Luckily, there's enough
intelligence in some of these brains that I think we can avoid our
shortcomings when creating an AI - one of the reasons I find AI so
fascinating.

Steve