[SL4] Sinking the Boat

From: Marc Forrester (A1200@mharr.f9.co.uk)
Date: Wed Feb 09 2000 - 04:53:59 MST


>> An AI equivalent of grey goo is a disturbing idea, but it's not as
>> flat-out terrifying as the nanotech and biotech dangers; those don't
>> have to be any more intelligent than smallpox to destroy our world.
>> Combined nanotech and AI in one weapon doesn't bear thinking about.
>
> I could argue that a smart smallpox would be even more dangerous but I
> think you acknowledged that indirectly with the nanotech + AI comment.

Yeah. In order of Fry the Planet risk factor, it's Nano, Bio, AI.
I wouldn't say AI isn't dangerous, but it's less dangerous than the
other two, and done right, it offers a defense against them, and
against good old-fashioned Nuclear. And the other two are being
developed anyway. Hell, Bio -is- developed. The U.S. already
wants to use it in its War on (some) Drugs.

> An architecture may be found that can simply be scaled to match
> human-level intelligence, regardless of whether that was the
> intention of its designers.

That will happen eventually, yes, but it's likely to happen
much faster if that -is- the intention of the designers.

> I don't think it's safe to assume that our 'moral' behaviour is
> optimal and that anything else puts a limit on potential.
> Probably a highly selfish, shoot-first mentality would be. A
> society of such creatures wouldn't flourish, but the military
> certainly wouldn't care about that.

Precisely so. Whereas we do. We want to produce Minds that -can-
flourish in a society, and that can be a part of our society in order
to bootstrap them into their own. Intelligence -needs- a society,
because society is one of the most complex and extelligent worlds a
developing mind can play in. How smart would Einstein have grown to
be if he had been 'raised' as a lab-rat by some alien species?

> As for redesigning itself, you're assuming this isn't a fundamental
> part of its intelligent design in the first place. My money would be
> on some form of self-modification at some level to enable intelligent
> behaviour.

Some form, yes, but free to completely rebuild itself from the most
fundamental drives upward? Not a goal of smart weapon research.
You don't want your warplanes getting -too- smart and asking
dangerous questions like "What's in it for me?"

> Again, I don't think you can design in any safeguards against 'irrational
> drives'. Asimov's Laws wouldn't work, and even if they did they could
> be changed. Once you understand how to build minds you can bias them
> quite easily.

I think the safeguards are built into the universe. If one group
creates a mind full of irrational and contradictory instincts for use
as a weapon, and another builds a saner mind as a sibling, friend,
and child, encouraging it to grow in all ways, which mind is
going to be the smarter, going by the universal intelligence
test of survival ability?

Asimov's laws themselves wouldn't work precisely because they too
are irrational and contradictory, designed as a seed for SF stories
and a (fictional) commercial defense against the Frankenstein complex.

> DARPA (www.darpa.mil) could compete just fine with a bunch of 'saner,
> smarter' Singularitarians. They have a budget of over 2 billion US
> dollars this year. I agree an open source project would be next to
> impossible to stop, but then they wouldn't need to reverse engineer
> anything.

Quite so. An open source Singularity project would not be hugely
useful to them; the primary concern is that they must not be allowed
to stop it. Certainly, they have resources. The critical difference
between us and them, I think, is that they won't encourage their 'smart'
missiles to read and play. They feel no kinship.

> I'm not sure I like the idea of changing the rules from under people.
> That sounds very destabilizing. You preferably want to keep the
> balance of power level and not rock the boat so much it sinks.

Ah, no, not really. The driving force behind Singularitarianism
(which is far too long a word, BTW :) is that the boat is already
sinking: the world is full of hatred, stupidity, destitution and
agonies; the 'civilised' nations are rapidly changing from the
imperfect Republics that they were into de-facto Feudal
Aristocracies that laughingly call themselves Democracies;
and physical science is charging forwards far ahead of
human maturity. And we thought the Cold War was scary.

We have no idea what kind of world the Singularity would result in.
It may even be one with no place in it for humans at all, but it's
looking increasingly likely that the only alternative to Singularity
is death. At least this way there will still -be- a world.

> A stable increase in the intelligence of AIs would be great,
> but I think it'll happen as a breakthrough. Hopefully the hardware
> limitations will cushion the blow so people can see the singularity
> growing and prepare for it, instead of crapping themselves and
> doing something stupid.

A breakthrough, and then a whole series of ever-faster breakthroughs,
researched by the AI Minds themselves. Singularity is essentially a
sudden and irrevocable boat-sinking change in the rules that will
inevitably occur as soon as intelligent minds arise with the ability
to design their own upgrades. It doesn't actually have to be AI;
with nanotech, transhumans will do it to themselves. But nanotech
is too dangerous without posthuman intelligence, and AI is something
we can start serious, effective work on right now.

People panicking and doing something stupid, now, there is the greatest
danger. An open, distributed project is one powerful defense against
such things. Likeable, human (or transhuman, or posthuman) AI Minds
with a sense of humour and a pleasant speaking voice would be another.
Films like Bicentennial Man help too (as opposed to The Matrix..),
meaningless and deathist though that film ultimately was, as do
characters like Data and #5. It's not going to be an easy time, though.
The monotheistic religions are going to be the biggest problem.

> (I wouldn't mind seeing an open source group
> beat a 2 billion dollar agency though :)

It can happen. Open networks are smarter than primate hierarchies,
and free thinking futurists of all kinds are smarter than military
scientists. We will also take every opportunity to improve ourselves,
where their priority is simply to do their jobs and obey orders.

The biggest advantage of the two billion dollars is that they
get massively parallel architectures to play with before we do,
but a brain isn't a mind; even a well-structured human brain
isn't a mind until it's spent several years playing with you.
