From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri May 17 2002 - 14:00:06 MDT
James Rogers wrote:
>
> Implication: Any Friendliness theory for AGI that requires perfect
> rationality cannot be guaranteed to stay Friendly. Ironically, the best
> prophylactic for this (other than not doing it at all) would be to make
> the AI as big as possible, so that the probability of a "psychotic
> episode" becomes vanishingly small.
>
> An opinion on this from a "Friendliness" expert (Eliezer?) would be
> interesting.
Human altruism doesn't require perfect rationality. Why would Friendliness?
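
For what it's worth, James's "make the AI as big as possible" suggestion can
be read as a redundancy argument. A minimal sketch of that reading, assuming
(purely as an illustration, not as James's stated model) that a "psychotic
episode" requires a majority of N independent modules to fail at once, each
with probability p:

from math import comb

def majority_failure_probability(n, p):
    # Probability that more than half of n independent modules fail,
    # each failing independently with probability p (binomial upper tail).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Hypothetical numbers: per-module failure probability of 1%.
for n in (1, 3, 11, 101):
    print(n, majority_failure_probability(n, 0.01))

Under the independence assumption the tail probability falls roughly
exponentially in N, which is the sense in which "as big as possible" makes an
episode vanishingly unlikely; with correlated failures the argument weakens.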
--
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence