Re: AGI motivations

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Oct 23 2005 - 09:37:25 MDT


Michael Vassar wrote:
> Do you mean formally provable (there's no such thing as *more*
> provable), or only predictable with high confidence given certain
> conditions similar to those under which it has been tested?

There is an intermediate ground between a full formal proof and
simple extrapolation from experiment: the use of a partial predictive
hypothesis to extend experimental results to untested conditions
and to predict the limits beyond which such inference becomes
uselessly imprecise. The SIAI is attempting to employ the purely
formal approach, but at this time it would be foolish to deny that
this may turn out to be impossible, in which case it may prove
necessary to employ a combination of functional models and directed
experiment. I would note that any attempt to predict AI behaviour
without a verifiable, reliable model of how the AI works (which
relies on a 'transparent', 'clean' architecture) falls into the
'simple extrapolation' category; it's simply not possible to do
better than guessing with designs that employ strictly unconstrained
'emergence'.

> However, because no empirical data is even potentially available
> regarding the retention of Friendliness in a post-singularity
> environment,

I agree, but this would not hold if it were actually possible to
build an 'AI box' capable of safely containing transhuman AGIs, so
I expect some people will disagree.

> This requires an analytically tractable motivational system, for
> safety reasons, and human-like motivational systems are not
> analytically tractable.

Definitely. The goal/motivation system is the most critical
component in questions of long-term predictability; it is the very
definition of the 'optimisation target'. Even if the rest of your
AGI is an emergence-driven mess, it may be possible to make some
useful predictions if the goal system is amenable to formal analysis.
But if the goal system is opaque and chaotic, it doesn't matter how
transparent the rest of the design is; you're still reduced to wild
guesses as to the long-term effects of having the AGI around.

> It is possible that a non-Transhuman AI with a human-like motivational
> system could be helpful in designing and implementing an analytically
> tractable motivational system.

Well sure, for the same reason that it would be great to have some
intelligence-augmented humans or human uploads around to help design
the FAI. But actually trying to build one would be incredibly risky
and unlikely to work, even more so than independent IA and uploading
projects, so this observation isn't of much practical utility. What I
think /may/ be both useful and practical are special-purpose tools
based on constrained, infrahuman AGI.

> A priori there is no more reason to trust such an AI than to trust a
> human, though there could easily be conditions which would make it
> more or less worthy of such trust.

This immediately sets off warning bells, simply because humans have a
lot of evolved cognitive machinery for evaluating 'trust', and strong
intuitive notions of how 'trust' works, which would utterly fail (and
hence be worse than useless) if the AGI had any significant deviations
from human cognitive architecture (which would be effectively
unavoidable). To work out from first principles (i.e. reliably) whether
you should trust a somewhat-human-equivalent AGI, you'd need nearly as
much theory as, if not more than, you'd need to just build an FAI in
the first place.
 
> I agree that this is worth discussion as part of singularity strategy if
> it turns out that it is easier to build a human-like AI with a human goal
> system than to build a seed AI with a Friendly goal system.

To be sure of building a human-like AI, you'd need to either follow
neurophysiology very closely (i.e. build an accurate brain simulation,
which we have neither the data nor the hardware for yet) or use
effectively the same basic theory you'd need to build an FAI in order
to ensure that the new AGI will reliably show human-like behaviour. If
you have the technology to do the latter, you might as well just upload
people; that's less risky than trying to build a human-like AGI (though
probably more risky than building an FAI). If you don't, trying to
build a human-like AGI is pointless; developing the theory to do better
than wild guessing is probably harder than developing FAI theory and
/still/ a poor risk/reward trade-off, while attempting to do it without
the theory (as plenty of AGI projects are doing, unfortunately) is
essentially racial suicide.

> Eliezer's position, as far as I understand it, is that he is confident
> that it is easier for a small team with limited resources such as SIAI
> to build a seed AI with a Friendly goal system within a decade or two
> than for it to build a human-like AI,

It's certainly my position. Trying to build a 'human-like' AGI would
necessitate massive amounts of pointless cruft (which would still have
to be meticulously researched, designed, functionally validated and
tested) and would rule out the use of many techniques that greatly
improve tractability and transparency, severely increasing
implementation difficulty and hardware requirements. These drawbacks
are in addition to the reasons given above for why it's a bad idea.

> In addition, completing a human-like AI would not solve the requirement
> for a Friendly seed AI. It would still be necessary to produce a Friendly
> seed AI before anyone created an unFriendly one.

Yes, noting of course that it's extremely difficult to build a
'human-like AI' that isn't already an Unfriendly seed AI.

> We should still pursue the avenue in question because if neuromorphic
> engineering advances rapidly we may not have any better options,

In a choice between uploading and UFAI, or uploading and military
nanotech, uploading almost certainly wins. Given that uploading is
primarily an engineering challenge rather than a theoretical one, my
position would be that if you want to research 'human-like AGI', you'd
be far better off researching uploading (and/or brain-computer
interfacing) instead.

 * Michael Wilson

                


