Re: Fighting UFAI

From: Michael Anissimov (michaelanissimov@gmail.com)
Date: Mon Jul 11 2005 - 01:08:22 MDT


Phillip Huggan wrote:

> I am looking for UFAI markers. UFAI isn't fightable by humans
> once it gains access to the internet, but if it could be detected
> at some stage of its rise to military hegemony, we might be able
> to activate a 99% safe AGI to fight it. It is assumed the
> specific pathway a UFAI will take is irrelevant because of its
> many advantages over humans, but it might have a level playing
> field against an AGI that we would otherwise not risk unleashing
> upon the world. The "UFAI = doom" schools of thought neglect that
> we might have FAI as an ally. Of course, if no UFAI markers can
> be brainstormed, I've just wasted everyone's time again,
> especially with this last sentence.
>
>

Unfortunately, the bootstrap curve for seed AI seems steep enough that
by the time an emergent UFAI is noticed, time has very likely already
run out. Ruling out the possibility of a false alarm and confirming
that the emerging seed is unFriendly would take even more time.
Remember that a bootstrapping AI will most probably be thinking and
acting very rapidly compared to humans - one second of human time
could be equivalent to millions of years of subjective time from the
AI's point of view, plenty of time to consider strategies for wiping
out all opposition. Of course there is also the problem of smartness -
a smarter-than-human AI running at merely human thinking speed would
still very likely overwhelm any opposition, however clever our plans
to stop it might seem.
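
For a rough sense of scale, here's a minimal back-of-the-envelope
sketch in Python. The speedup factors are hypothetical assumptions,
chosen only to show what a "millions of years per second" regime
would require, not figures from any actual system:

    # Illustrative arithmetic only: how much subjective thinking time
    # fits into a span of wall-clock time, for an assumed speedup factor.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

    def subjective_years(wall_clock_seconds, speedup):
        """Subjective years elapsed for a thinker running `speedup`
        times faster than a human, over the given wall-clock span."""
        return wall_clock_seconds * speedup / SECONDS_PER_YEAR

    # One second of human time at various hypothetical speedups:
    for speedup in (1e6, 1e9, 1e14):
        print("speedup %.0e -> %.2e subjective years"
              % (speedup, subjective_years(1.0, speedup)))

    # speedup 1e+06 -> 3.17e-02 subjective years (about 11 days)
    # speedup 1e+09 -> 3.17e+01 subjective years
    # speedup 1e+14 -> 3.17e+06 subjective years (the "millions of
    #                  years" regime)

The point is just that the qualitative conclusion holds across many
orders of magnitude: even at the modest end of these assumed speedups,
a bootstrapping AI gets days of planning time per human second.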

On the idea of UFAI markers - if we had a comprehensive set of UFAI
markers, then we'd have enough knowledge to create a FAI simply by
building an AI that lacked those markers. Part of the difficulty facing
us is that UFAI isn't an isolated special case of seed AI - it's the
norm. Creating a *Friendly* AI means narrowing down the state space
enough to find a dynamic we can coexist with even if that dynamic is
applying near-asymptotic optimization pressure on its environment.
This specific type of dynamic seems rare enough, and the distinction
clear-cut enough, that it's dangerous to use terms like "99% safe"
with respect to AI - in my current mental picture, an AI is either
Friendly or it isn't. Anything not thoroughly verified as Friendly
should be regarded as a danger to the planet.

The idea of using a prehuman Friendly AI as an ally in performing tasks
is an interesting one, though. Even a very dumb FAI might be able to
produce ideas that would be fantastically useful to us. Most intriguing
would be the idea of using a prehuman FAI to assist in creating a
smarter FAI. Given a codic modality early on, a sufficiently
intelligent Friendly AI might have more insight into its own workings
than even the most skilled of FAI programmers. Of course, the road to
this point is sure to be long and hard.

Michael Anissimov


