From: Keith Henson (hkhenson@rogers.com)
Date: Thu Nov 11 2004 - 21:01:11 MST
At 02:37 PM 11/11/04 -0700, you wrote:
>Keith Henson wrote:
>...
> > To tie this back into being on topic, it is going to be hard enough
> > for FAI to emerge in humans who are not in "war mode." "Friendly"
> > wouldn't be high on the list for researchers in war mode. Consider
> > the Manhattan Project. http://www.me.utexas.edu/~uer/manhattan/debates.html
>
>This is why Eli has assiduously avoided seeking any of the ample monies the
>US government is spending on military or "homeland security" computer
>applications.
Very high-minded of Eli. Of course, the ability of the human mind to
rationalize is deeper than has ever been plumbed. That is, if an agency of
the government were to offer enough money on free enough terms, then I
suspect Eli would take them up on it. He could always justify his actions
on the grounds that if he didn't work on it, someone with fewer scruples would.
>It also explains, IMO, why his definition of Collective
>Volition is not the eigenvector of aggregate human desires, but rather the
>apotheosis of what the "rational person" would want, just as Kant's
>Categorical Imperative enjoins us to uphold in our own lives whatever moral
>principle we would want others to uphold in theirs (which is like a
>meta-Golden Rule).
This might work out, but I am not entirely comfortable with it. "Rational"
sounds good, but as we all know, "rational" does not cut it in the
prisoner's dilemma: each player's individually rational choice is to
defect, and mutual defection leaves both worse off than mutual cooperation.
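To make that concrete, here is a minimal sketch in Python (the payoff
values are the standard textbook ones, my assumption rather than anything
from this thread). It shows that defection is each player's best response
no matter what the other player does, yet mutual defection pays both
players less than mutual cooperation would:

    # One-shot prisoner's dilemma with standard textbook payoffs
    # (temptation 5 > reward 3 > punishment 1 > sucker 0).
    # payoff[(my_move, their_move)] = my payoff
    payoff = {
        ("C", "C"): 3,  # reward for mutual cooperation
        ("C", "D"): 0,  # sucker's payoff
        ("D", "C"): 5,  # temptation to defect
        ("D", "D"): 1,  # punishment for mutual defection
    }

    def best_response(their_move):
        # The individually "rational" choice: maximize my own payoff
        # given the other player's move.
        return max("CD", key=lambda my_move: payoff[(my_move, their_move)])

    # Defection dominates: it is the best response to either move...
    assert best_response("C") == "D" and best_response("D") == "D"
    # ...yet mutual defection pays (1, 1) while mutual cooperation
    # pays (3, 3), so "rational" play leaves both players worse off.
    print(payoff[("D", "D")], "<", payoff[("C", "C")])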
Keith Henson