Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Tue Jun 06 2006 - 10:39:31 MDT


On Tue, Jun 06, 2006 at 08:09:58AM -0700, Ben Goertzel wrote:
> However, in comparison to "Creating a Friendly AI" (CFAI), I must
> note that the ambition of what's attempted in "Artificial
> Intelligence and Global Risk" (AIGR) is greatly reduced.
>
> CFAI tried, and ultimately didn't succeed, to articulate an
> approach to solving the problem of Friendly AI. Or at least, that
> is the impression it made on me....
>
> On the other hand, AIGR basically just outlines the problem of
> Friendly AI and explains why it's important and why it's hard.
>
> In this sense, it seems to be a retreat....

He does point out repeatedly that he's trying to operate in limited
space.

> I suppose the subtext is that your attempts to take the intuitions
> underlying CFAI and turn them into a more rigorous and defensible
> theory did not succeed.

That's a very interesting jump. Perhaps he's merely not finished
yet?

-Robin

-- 
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/
