Re: AGI Prototyping Project

From: Russell Wallace (russell.wallace@gmail.com)
Date: Sun Feb 20 2005 - 12:58:31 MST


On Sun, 20 Feb 2005 12:02:25 -0500, Ben Goertzel <ben@goertzel.org> wrote:
>
> To state with such confidence that any AGI not based on this particular
> not-fully-baked philosophical theory "will destroy the world in a randomly
> chosen fashion" is just ridiculous.

I may be misunderstanding him, but I take him to mean not that this
_particular_ theory (collective volition) is the one true one, but
that to be more useful than dangerous an AGI project must:

1) Be based on _some_ solid theory of Friendliness,

2) Have a grave respect for the potential dangers and a strong
safety-first attitude.

The second point is the more important one - I'm skeptical about
whether CV is workable, but if it turns out that it isn't, SIAI
strikes me as having a careful enough attitude that they'll likely
recognize this before it's too late. (And hopefully Novamente will
adopt a similar attitude if and when it gets closer to having a seed
AI.)

- Russell
