RE: Complexity, universal predictors, wrong answers, and psychotic episodes

From: ben goertzel
Date: Fri May 17 2002 - 14:01:51 MDT

Human "altruism" is badly imperfect, and has led to many bad consequences.
It might have wiped us out already, had more powerful technology been
around longer...

ben g

-----Original Message-----
From: Eliezer S. Yudkowsky
Sent: Friday, May 17, 2002 2:00 PM
Subject: Re: Complexity, universal predictors, wrong answers, and psychotic episodes

James Rogers wrote:
> Implication: Any Friendliness theory for AGI that requires perfect
> rationality cannot be guaranteed to stay Friendly. Ironically, the best
> prophylactic for this (other than not doing it at all) would be to make
> the AI as large as possible, so that the probability of a "psychotic
> episode" becomes vanishingly small.
> An opinion on this from a "Friendliness" expert (Eliezer?) would be
> interesting.

Human altruism doesn't require perfect rationality. Why would Friendliness?

-- -- -- -- --
Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT