Re: [SL4] brainstorm: a new vision for uploading

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Aug 14 2003 - 01:47:02 MDT


On Wednesday 13 August 2003 07:52, Gordon Worley wrote:

> Many AGI projects is, in my opinion, a bad idea. Each one is more than
> another chance to create the Singularity. Each one is a chance for
> existential disaster. Even a Friendly AI project has a significant
> risk of negative outcome because Earth has no AI experts. Rather we
> have a lot of smart people flopping around, some flopping in the right
> direction more than others, hoping they'll hit the right thing. But no
> one knows how to do it with great confidence. It could be that one day
> 10 or 20 years from now the universe just doesn't wake up because it
> was eaten during the night.

The entire universe? Naw. Many AGI projects are a great idea precisely
because we don't know, at this time, which path is the most fruitful with the
least danger. If humanity is facing almost certain disaster without an AGI, and
only with the right kind of AGI is the likelihood of survival/thriving high,
then even risky paths are reasonable in light of the near-certainty of doom
without AGI.

>
> Each AGI project is a chance for failure. Even if you manage to create
> a project that has a chance of creating the Singularity, it has a good
> chance of going wrong. And just one killer AI is enough. You can't
> suppress it with 10 other `good' AIs; the one killer just wipes out
> everything in a couple minutes before anybody can respond. It's over
> before it even began.
>

We are doomed without AGI. Ergo, any AGI with a good chance of not destroying
us is better than our default condition. An AI that literally wiped out
"everything" would be a stupid one, which would belie its being smart enough to
grab and retain control of "everything" in the first place.

> We should try to limit our points of failure, not increase them. The
> fewer AGI projects, in my opinion, the better, because we will be able
> to focus more resources on it (intelligent people, money, etc.) rather
> than spreading the resources thin to create a lot of half-ass projects
> that have a greater chance of existential disaster than one well backed
> project.
>

Limiting points of failure as you would have it also limits points of
success, or even of survival (of the 21st century, in my estimation). If we
don't know how to do AGI, then having one mega-effort is likely to result in
being bogged down in endless conflicting architectures, theories, and of course
the normal big-project monkey-shines. Again, given that AGI of a friendly
variety is our only real hope, we must not put all our eggs in one basket
wrought of controversy and politics.

- samantha
