Re: [SL4] brainstorm: a new vision for uploading

From: Samantha Atkins (samantha@objectent.com)
Date: Wed Aug 20 2003 - 17:33:17 MDT


On Thursday 14 August 2003 03:16, Nick Hay wrote:

> > The entire universe? Naw. Many AGI projects are a great idea precisely
> > because we don't know which path is the most fruitful with the least
> > danger at this time. If humanity faces almost certain disaster without
> > an AGI, and only with the right kind of AGI is the likelihood of
> > survival/thriving high, then even risky paths are reasonable in
> > light of the near-certainty of doom without AGI.
>
> Risky paths are reasonable only if there are no knowable faults with the
> path. Creating an AI without a concrete theory of Friendliness, perhaps
> because you don't think it's necessary or possible to work out anything
> beforehand, is a knowable fault. It is both necessary and possible to work
> out essential things beforehand (e.g. identifying "silent death" scenarios
> where the AI you're experimenting with appears to perfectly pick up
> Friendliness, but becomes unFriendly as soon as it's not dependent on
> humans). You can't work out every detail, so you'll update and test your
> theories as evidence from AI development comes in.

Well, that is fine, except you did not address the primary point I attempted
to make. The measure of risk is relative to the alternatives. If we face
almost certain death of humanity should we fail to create AGI, then the
amount of risk that is tolerable enough to move forward needs to be adjusted
accordingly.

>
> An AI effort is only a necessary risk given that it has no knowable faults.

I disagree. What is and is not knowable comes into play, and what is and
isn't a survivable fault is also germane.

> The project must have a complete theory of Friendliness, for instance. If
> you don't know exactly how your AI is going to be Friendly, it probably
> won't be, so you shouldn't start coding until you do. Even then you have to
> be careful to have a design that'll actually work out, which requires you
> to be sufficiently rational and to make efforts to "debug" as many human
> irrationalities and flaws as you can.

I am sorry, but I see no way to fully work out the theory of Friendliness to
such a state of completion given the limitations of human minds at this time.
In particular, there are many parts of an AI that have nothing to do with
Friendliness whose coding certainly doesn't require waiting for such a theory
to be complete. As we need some of these subparts to be available in order
to be bright enough to carry out the rest of the work, including getting a
more airtight theory of Friendliness and testing its implementation, it would
obviously be suicidal to put off all coding until we had the complete theory.

>
> "AGI project" -> "Friendly AGI project" is not a trivial transformation.
> Most AI projects I know of have not taken sufficent upfront effort towards
> Friendliness, and are therefore "unFriendly AGI projects" (in the sense of
> non-Friendly, not explictly evil) until they do. You have to have pretty
> strong evidence that there is nothing that can be discovered upfront to not
> take the conservative decision to work out as much Friendliness as possible
> before starting.

Your "therefore" does not follow. It has some assumptions packed into it that
are questionable. What is "as much as possible"? To whose satisfaction?

>
> Since an unFriendly AI is one of the top (if not the top) existential
> risks, we're doomed both with and without AGI. For an AGI to have a good
> chance of not destroying us, Friendliness is necessary. Ergo, Friendly AIs
> are better than our default condition. By default an AGI is unFriendly:
> humane morality is not something that simply emerges from code. If the AGI
> project hasn't taken significant up-front measures to understand
> Friendliness, along with continuing measures whilst developing the AI, it's
> not likely to be Friendly.
>

I disagree that unFriendly AI is the top existential risk. The top risk is
our own stupidity in using the technology at our command to destroy ourselves,
accidentally or on purpose. We are not bright enough, individually or
collectively, to successfully navigate the issues and problems that we
presently face indefinitely, much less as they speed up and become more
complex. So the highest priority toward saving humanity is to develop and
deploy greater intelligence. One of the things this includes is various
levels of AI leading up to AGI. I very much agree with the importance of
making AGI Friendly. But I very much doubt that we are capable of fully
developing AGI, much less Friendly AGI, in theory or practice without
considerable AI and IA along the way.
 
- samantha
