Re: Friendly AI in "Positive Transcension"

From: Eliezer S. Yudkowsky
Date: Sun Feb 15 2004 - 20:41:23 MST

Metaqualia wrote:
> Consider this summary.
> "Eliezer::FAI consists in avoiding the imposition of moral supergoals (do
> this, do that) and recreating the cognitive architecture that humans use to
> set these moral goals for themselves (machine thinks: looks as if I should
> do this, do that)"
> Is it such a bad compact summary?

Yeah. For one thing, no positive proposal consists of avoiding something.
For another, I would no longer use the term "moral supergoal". The part
about recreating cognitive architecture is an alarming prospect if you
leave out the renormalization - you'd get the icky parts too. Etc.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:45 MDT