Re: AGI Reproduction? (Safety)

From: nuzz604 (nuzz604@gmail.com)
Date: Fri Feb 03 2006 - 21:15:20 MST


I understand that you have good intentions with AGI. My worry
involves -accidentally- creating an unfriendly AGI (and this worry applies
to anybody who builds an AGI). You can have good intentions and still
create an unfriendly AI because of some design flaw (even if the design
appears to be a great one). I am worried because no one really knows how a
Seed AI will function when it is turned on, or whether it will be friendly.
There are so many things that can go wrong.

This is why I think that the system and its safety should be analyzed, and
that it should go through at least several phases of testing before
activation is even considered.

I would also feel better if these tests were conducted by a team of
independent AGI researchers rather than by just one firm, or by RGE Corp.
by itself. You can have many shots at creating a Seed AI, but you only get
one shot at creating a friendly one. If this is the Seed AI that succeeds,
then I say make that friendly shot count. Since you want to be open with
the technology, I think it is an idea worth considering.

Mark Nuzzolilo

----- Original Message -----
From: "Rick Geniale" <rickgeniale@pibot.com>
To: <sl4@sl4.org>
Sent: Friday, February 03, 2006 9:55 AM
Subject: Re: AGI Reproduction?

> If this message is related to the work of RGE Corp., you can rest easy.
> You don't have to worry about our work.
> We have no ambition of power and control (this is not in our
> interest). We want only to create positive
> technologies, and we want to open and share them. We will practically
> demonstrate this. You can be sure of that.
> More to follow ... Wait and see.



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT