[SL4] Abandon Ship

From: Marc Forrester (A1200@mharr.f9.co.uk)
Date: Thu Feb 10 2000 - 15:50:08 MST


> What we want and what the military want are two different things.
> That doesn't mean they're mutually exclusive. We can't design them
> to be 'good', or rather we can, but they could easily be changed.

Military researchers could rape any mind and turn it into a
killing machine, yes. The Gods know they've done it to enough
intelligent young men over the centuries, but I don't see that
it would benefit them greatly to base weapons on the work of a
Singularity project.

Rather: it may give them a boost if they are that far behind in
the field (doubtful), but they couldn't take a mature, intelligent
Mind and turn it into a weapon without risking its escape and/or
damaging its effectiveness.

I don't think any militaries want AI as intelligent as themselves,
anyway. They want loyal, obedient animal-like minds with blinding
reaction speed and instincts hard-wired into their missile racks.
Scary stuff, certainly, but not apocalyptically so.

> I agree intelligence would do better in a richer environment.
> I wouldn't go so far as to say it can't happen without it.

Ah.. Well, 'intelligence' is a woolly term, isn't it? I meant to
imply human-equivalent intelligence. Deprive human minds of books,
symbols, language, and other people, lock them in a laboratory world
where all they can do is eat, sleep, ablute, and destroy instinctive
targets, and they will become stupid. They'd be dangerous on the
battlefield, certainly, but they'd never be able to take over the
world by force, because a complex, thinking opposition would find
their Achilles' heels. The same surely applies to machines.

> I'm sure as lab-rat of an alien species
> Einstein would have a most stimulating time :)

Depends on the species, I guess, but I don't think he'd
come up with general relativity while he was there..

>> You don't want your warplanes getting -too- smart and
>> asking dangerous questions like "What's in it for me?"
>
> Human level intelligence doesn't automatically imply some sense of
> self-preservation. Without evolution's constraints you could make
> even stranger monsters. They would most certainly try to avoid
> uncontrollable AIs but the issue of how intelligent is separate.

Is it, though? Whatever the emotional drives, more intelligent
means more complicated, and more complicated means less controllable.
The questions might not be "What's in it for me?" per se,
but there would be questions. Unpredictable questions.

> If any I'd say there's only one safeguard built into the universe
> and that's Existence. What's good at existing exists longer. What
> tries to exist will be more likely to exist. This can work on many
> levels. If you promote the existence of others (who also promote
> existence) then you're all likely to be better off, because of
> richer interaction possibilities that improve the group.

> I'm basically an optimist about non-human intelligence.

I agree entirely, and one of the best ways to exist is to have
an objective mind that accurately reflects the world around you.
I don't believe that it is possible to use such an objective
mind as an effective military tool. Thus, military AI must
have distorted perception and irrational drives to function.

> Things get bitchy when there's limited resources. Most wars are
> about resource squabbles, or triggered by the evolved behaviour
> attached to that (tribalism, racism, nationalism, xenophobia etc).

I think the evolved behaviours are always a necessary ingredient
of war, except in the most extreme cases. Two sane minds without
enough resources to last the year will work together to improve
their common situation, fighting only if there is no other way
for anyone to survive.

> It's my sincere hope that nanotechnology will make these
> behaviours redundant with enough resources for all.

It could very well make them redundant;
the trick is getting -rid- of them..

> Humans have the capability to be unbelievably stupid though
> (religion for example). Let's hope nanotech and/or AI can
> fix the human condition before we really screw up.

Both have the potential, as well as the potential to help
us really screw up. AI, though, aside from everything else,
helps us to understand ourselves as we develop it.

Two for the price of one?

> My argument was that the project *would* suddenly become useful to
> them if it succeeded, or was about to. They also have the fastest
> machines. They could catch up and overtake us while simultaneously
> closing us down (or making it bloody difficult), all in the name of
> national security.

If they know who's doing what, and where.
One strength of Open Source organisation is the
ease with which one can contribute anonymously.

> What I'm most unclear about is the period between a human-level
> AI being built (or trained), and the runaway singularity process.
>
> Assuming a non-nanotech world, this isn't an instant process.
> You could mass produce clones of your first working AI and replace
> most human jobs, including ours. Then the feedback loop is complete.
> Whoever has the fastest machines will determine who first sees the
> fruits of the first iteration. That's most likely to be some well
> equipped agency like DARPA, and the fruits will most likely be
> nanotech. They wouldn't want a foreign power getting there
> first (everyone has AIs remember).
>
> So then we have a situation where because of their fast computers
> they're first with nanotech and they know Iraq, North Korea, and
> China etc will have it soon (maybe in an hour, maybe in a month,
> who knows how much CPU power they each have).
>
> This strikes me as a f**king dangerous scenario.

Aye, me too. I offer the hope that the human-level AIs would agree
with you, and decide their own priorities, first seeking ways
to improve themselves further, secretively if necessary.

It just takes one AI to design a universal assembler and send the
plans to another with access to the tools to build it. Then she can
think a hundred or a thousand times faster than the Gaussian humans
intent on their Fry-the-Planet scenario, and we have Singularity.
Game over, man. You can't imprison strong AI.

Exactly how fast can this happen? Impossible to guess.
It depends on exactly what technologies the Minds can develop,
and predicting that requires a modern-day Jules Verne.

> The nanotech before AI scenario is starting to look more appealing
> to me. Nanobots will be designed in a simulated environment which
> is safe to screw up. Lessons can be learnt from virtual mistakes.

Good point. This is the area where I feel I can make some kind
of contribution, the development of VR/AR mind-augmenting computer
interfaces. I am hopeful that if computers become personal enough
for the boundary between our memories and their data stores to blur,
for creating a virtual model of a device or abstraction to be as
fast and easy as imagining it in your mind's eye, and for the Net
to evolve into a global telepathic meeting of minds, then we may
yet be able to plot that safe path through our immediate future.

Wearables are the first step on this road. So I'll build one.

> Yet I suppose the side that gets nanotech first also has the
> most time to research and build defenses. However that might
> be considerably more difficult than a basic weapon.
>
> Urgh... this is all very depressing stuff. There has to
> be a safe path through this mess or we're all screwed.

Empathy.

Welcome to The Fear. I'm feeling the same things here,
for what that's worth. This is why I'm a Transhumanist,
I need to upgrade my mind any way I can or I'm not going
to be able to cope with this century.

> People can be stupid, the boat may already be sinking, things have
> to change, but changing the rules from under people sounds like
> extremism, and that's what we're trying to avoid.

I have no argument there, but extremism is not just a desire to
change the world for the better. Extremism or Fundamentalism is
when you allow a Cause or 'Movement' to become a moral code that
stands above all others, rationalising any evil act with phrases
like 'For the Common Good' and 'The Ends Justify the Means',
as if there were some way to tell them apart.

I am not in danger of losing my marbles in this way. Trust me. :)
The rules I wish to change are those rules that allow Bureaucrats
and Authorities to control people's lives for their own ends, in
the name of, as you say, 'National Security' and 'Family Values',
the rules that make people criminals for thinking unauthorised
thoughts, feeling unauthorised feelings, and acting unauthorised
acts that do no harm to the freedom or well-being of any other,
and the rules that say a frightened child will be raped, starved,
and die in a crossfire or from an agonizing infection simply for
being born in the wrong place at the wrong time.

I will neither break nor bend any of my personal ethics in the
pursuit of this goal, but neither will I wait for every last
comfortable conservative career politician to get used to the
idea of the future being different to the past before I act.

> We don't want boat-sinking change. That doesn't help anyone.
> Boat-sinking means we've lost and lots of people die.

It does help those that can swim. (NB: Metaphor at breaking point)
Must lots of people die just because the world changes radically?
I accept that it has tended to happen that way in the past,
but a lot of people also lived who would otherwise have died.
An indefinite number of people, if you project any change forward
into the future it created. Is there any way to quantify these things?

If I may clarify my feelings, I would never kill in the pursuit of
human immortality, but if as an indirect result of human immortality
itself some people may suffer or die, does that mean it should never
be developed? What of powered flight? Internal combustion?
The wheel? Fire? Sharp edges? Language? They all changed
the rules from under a lot of people. Especially language.
Multiple genocide, that one.

Aside from such imponderables, the boat -is- going to sink anyway.
Technology progresses faster every day. The choice is not between
things continuing much as they are today, or an AI Singularity.
It is between an unknowable ultratechnology future with AI Minds,
and one without them. I think there will be less suffering With.

> We need better plans than to offload it onto some future entity.

That's not a fair summation. We don't want to create Silicon
Gods to save Humanity, we want Humanity to transcend to Godhood.
AI Minds will be, one way or another, ourselves and our children.

> It's ironic that the evolved behaviours designed to keep us
> alive could pose the biggest threat. Fear of the unknown.

It's an old, old curse. I have mostly conquered it in myself.
My dearest wish is that we do not pass this on to our children,
be they biological, mechanical, or anything between the two.

> hehehe, I think you underestimate military scientists.
> I'll wave the flag for the freedom & progress tribe though :)
>
> /me does a tarzan yell

Heh. :] But no, I don't think I do underestimate them. History
has shown time and again that it is the free, self-motivated garage
inventor mad scientist types who make all the fundamental
breakthroughs, while the technologists employed by militaries and
governments concern themselves with improving the efficiencies
and performance limits of the last big thing, using thousands
of times more resources in their efforts.

This is because people with the combination of ferocious intelligence
and encyclopaedic general knowledge needed to see what eludes everyone
else on the planet tend not to be attracted to careers where they have
to defer in their judgement to superior ranking officers manifestly
dumber than themselves. It's not an inviolable rule, certainly,
but it's an odds-on bet.



