Re: [SL4] Rogue AIs

From: DaleJohnstone@email.com
Date: Mon Feb 07 2000 - 13:51:02 MST


On Sun, 06 Feb 2000 22:56:38 -0600, you wrote:

>> I also have some serious ethical concerns regarding a possible military seizure of the project. I think a 'grey goo'-like situation is possible with AI; call it a 'chrome army' scenario if you like. Truly smart weapons would be truly terrifying, and probably unstoppable (with the exception of having a nanotech defence).

>My take on the situation has been that AI that's dumb enough to be a
>willing tool will, indeed, have military applications. Virtually
>everything does. I don't think those military applications are so
>extreme as to grant an unbeatable advantage in war, especially if both

'Dumb enough to be a willing tool' assumes a well-balanced mind. An
AI with modified (or hard-coded) motivations can be a smart killing
machine. Even a genius can be a serial killer.

I don't know about you, but I can think of some pretty extreme
military applications. Intelligent machine guns with legs,
mass-produced by the thousands and air-dropped into a country. You
wouldn't stand a chance. I'm sure Saddam Hussein could think of
something even cheaper to build.

>sides have the AI. And there's no intrinsic imbalance between offense

If both sides have AI, that gives the other side a chance, but it
isn't like a nuclear Mutual Assured Destruction arms race, where the
stick just gets bigger. AI can be subtle and surprising. How would
you combat a swarm of robot insects without being prepared? I haven't
even thought about the possibilities of self-replicating macro/micro
robots.

Also, how can we be sure that AI is shared anyway? What if the US
military comes up with it first, or just takes a promising open-source
effort and completes it ahead of us?

>and defense. In information warfare, for example, an ounce of defensive
>AI will probably turn out to be worth a pound of offensive AI.

I'm not sure I understand your argument here. You say there's no
imbalance, then give an imbalanced example.

>Genuine AI might seriously exacerbate the nanotechnology problem,
>however. As it is, we're likely to see giant, unwieldy vats
>synthesizing diamondoid fighter jets years before we have to deal with
>actual goo. AI could seriously compress that time period.

I don't claim to fully understand the low-level chemistry. I hope
nanotech is naturally limited by a lack of fuel/material in the same
way fire is. However, bacteria seem to grow everywhere on Earth.

Smart nanotech would be unimaginably powerful. Singularity-type stuff
indeed.

>I don't really see a feasible alternative, however.
>
>> So as you can imagine I'm a little concerned about the idea of open-source AI. This possible scenario is not mentioned anywhere on the website. As the #1 rule is 'Don't fry the planet' I think it deserves some attention.
>
>I'll join in a sigh to the general sentiment. Ultraproductivity kicking
>hell out of the economy, military applications of everything...
>accelerating the future is like tap-dancing through a minefield. About
>the most you can do is hope to accelerate the defensive, social, and
>cushioning aspects of a technology ahead of the offensive,
>destabilizing, and sudden-shock aspects. A tradition going back to
>Drexler and hypertext.

You're probably right on that last point. Maybe a push to make AI more
open will help level the playing field.

I also think we should encourage researchers not to patent their
designs, unless it's to reserve the right for everyone to build them.
There's no such thing as a just monopoly.



