RE: Military Friendly AI

From: Eugen Leitl
Date: Sat Jun 29 2002 - 09:21:02 MDT

On Sat, 29 Jun 2002, Ben Goertzel wrote:

> You're suggesting that in 10-20 years from now,
> a) we will not have created a human-level AGI yet

Yes. However, I do not trust my judgement on this sufficiently, or else I
wouldn't be here reading this forum. Most normal people think we all here
are engaged in massive mental masturbation. They're probably right, but
conventional risk assessment breaks down here. (Plus, our time is exactly
ours to waste, right?)

> b) the gov'ts of the developed world will have instituted a surveillance
> system that has its feelers in every computer chip of any reasonable power,
> and every network hub, etc. Maybe even in every brain! AGI research will
> be allowed only in gov't regulated facilities, and if the surveillance
> system sees anyone doing unapproved AGI research, the gov't will step in and
> squash it.

This is a possible future. We've got several independent forces working
towards it. First, spooks need no extra incentive to snoop. It's their
business, and bureaucracies, emulating an ideal gas, tend to expand to
fill all available room if unopposed. So far, we don't see much
backpressure rising up from the grass roots. The matter is a bit
technical, and people, if not overworked, are too busy partying to
notice. Though it's a possibility, I personally tend to disbelieve the
aura of overworked incompetence the diverse TLA agencies are projecting.
Psyops ain't that hard, and I kinda doubt they suddenly lost all
capabilities in that respect, which were rather well developed.

So the feds world-wide are pushing for more surveillance on the networks.

Secondly, the content and copyright owners (lately, Redmond is expanding
into the content-provider role and shifting strategies) are trying to
make you pay for the hardware that keeps you from owning and using the
content you paid for. Coincidentally, diverse Blut und Bod.., excuse me,
Homeland Securitate czars love this, because this recent rehash of the
tired Trusted Platform is key escrow in disguise.

Certain people who recently caused the collapse of twin buildings of
certain symbolic value, occupied by high-octane businesses, plus some
random federal-employee busybody mailing B. anthracis spores in badly
sealed envelopes, or random wackos planning nerve gas attacks on the EU
parliament, greatly assisted the feds worldwide in their unprecedented
grab for power. (Whoever did it, and on whose payroll, is irrelevant;
useful idiots are useful idiots.)

So even before Singularity seed AI research is classified as terrorism,
we've got a definite tendency on our hands. Those who've been asleep at
the wheel long enough to miss the trend, and who ask for evidence, can
browse the following news clippings, gathered over a relatively short
observation period

(no, it's no longer hosted at yahoogroups).

> I think this is a *possible* future, I just don't think it's a likely one.
> For one thing, unless we have world gov't, what's going to stop "rogue
> nations" from developing their own AGI as part of their own defense/offense
> efforts? To assume a safe future ensured by regulation, you effectively

Armageddon devices are not exactly weapons, not in the classical sense
(Dr. Strangelove, we really don't want to hear your objections). I don't
subscribe to the "rogue nations" rhetoric, but clearly the high
development threshold, in terms of the grain size and competence
required, makes for a rather short list of countries in which it could
plausibly happen (corrected for several decades downstream).

> have to assume a world gov't, otherwise, your future is going to include
> some kind of AGI arms race, which in my view is pretty darn dangerous. (I
> think it's *more* dangerous than a world where AGI is developed by industry
> or maverick scientists... others may disagree.)

I personally don't think the user experience is going to be all that

> I think the future you describe is unlikely even if your pessimistic stance
> about the pace of AGI development is wrong. But it's not an unthinkable or
> ridiculous prognostication; of course, predicting the future is not easy,
> especially in times of such rapid change...
> >I see a number of people talking deep
> > engineering hubris in face of the mothers of high-complexity efforts with
> > the highest impact imaginable.
> Yes, I can accept this characterization. I have "deep engineering hubris."
> ;)
> I kinda like the phrase, actually. Someone should use it to title a band,
> or at least a CD... not a techno CD, that would be too obvious; maybe some
> retro early 80's hardcore ;>

I'm not sure the candy ravers will take much notice.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT