RE: SIAI's flawed friendliness analysis

From: Philip Sutton (Philip.Sutton@green-innovations.asn.au)
Date: Sat May 24 2003 - 08:47:28 MDT


Ben said:

> the most key point is the division between
>
> -- those who believe a hard takeoff is reasonably likely, based on a
> radical insight in AI design coupled with a favorable trajectory of
> self-improvement of a particular AI system
>
> -- those who believe in a soft takeoff, in which true AI is approached
> gradually [in which case government regulation, careful peer review and so
> forth are potentially relevant]
>
> My own projection is a semi-hard takeoff, which doesn't really bring much
> reassurance...

OK. Let's look at the semi-hard take-off scenario. I assume that what
you mean is that for a period after the creation of a baby AGI the
humans around it will have to do a lot of work to build it up to a
reasonable level of competence in the real world (lots of training,
lots of new code development, lots of hand-holding). But at some stage
all this hard work will come together and the AGI (or a group of AGIs)
will be able to drive its/their own self-improvement without much
input from humans. At that point we will get a hard take-off. Ben,
have I interpreted your views accurately?

If this is a reasonable summary, then it seems to me that we have to
have a very, very reliable guess as to when to expect the transition
to take-off to begin. (By the way, it's like the birth process, which
has three phases: pre-transition, transition and then post-transition.
In the pre-transition phase there's lots of pushing, pushing, pushing
but not much moving. The pre-transition period has a fairly regular
rhythm to it. In post-transition the baby shoots down the birth canal
(all being well) very fast - like a wet bar of soap firing out of a
squeezing fist! But transition is a very strange, uncomfortable phase
where the old rhythm of contraction becomes very irregular but the new
pattern of post-transition fast movement hasn't taken hold yet. If
this analogy has any real value for understanding AGI development,
then AGIs (as a class) might be considered to be born twice: once as
non-self-improving intelligences and then again as self-improving
intelligences [post take-off].)

And it seems to me that all AGI-development projects need to ensure
that they have introduced a powerful life-compassion capability into
their AGIs *before* the hard take-off can possibly begin.

My understanding of things is that SIAI feels we cannot know when we
are on the safe side of take-off, so friendliness work should be done
now. Ben, on the other hand, believes (I think) that it would be 100%
safe to have an early-model baby AGI in existence before much work was
done on introducing life-compassion.

Whether Ben is right depends, I think, on whether the lead time to go
from an early-model baby AGI to the point of hard take-off is longer
than the lead time for AGI development teams and/or society to go from
a vague idea of what we want in the way of AGI morality to the point
where we can introduce it tangibly and securely into real AGIs.
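To put that comparison in rough symbolic form (my own paraphrase of
the condition, not anything Ben or SIAI has stated this way):

   deferring morality work is safe only if
   T(early-model baby AGI -> hard take-off)
       > T(vague morality ideas -> tangible, secure implementation)

If the inequality runs the other way, the morality work has to be well
underway before the baby AGI exists.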

My own feeling is that there are lots of issues about what sort of
morality we think AGIs should have that are not hardware/software
dependent and that, most likely, have a longer lead time than the path
from an early-model baby AGI to the point of hard take-off.

If that's so, then we need to redouble the effort on AGI morality. The
Singularity Institute has done very valuable work in this area, which
needs to be developed further. But I think there are aspects of the
AGI morality issue that the Institute itself hasn't even flagged.

It would be quite interesting to conduct some sort of collaborative
scoping exercise to identify what issues different people think we
need to look at. If we could produce a single document containing all
the big issues that each of us thinks should be considered in the
course of tackling AGI morality, then we might be able to avoid
talking past each other. From this document we might then be able to
generate an R&D agenda, moving in several directions at once, as I
don't anticipate that we will all be of one mind.

What do you think of this suggestion?

I don't have a lot of time to put into this but I do have *some* time to
contribute.

Cheers, Philip


