[sl4] [Hplusroadmap] [SL4] Re: Paper: Artificial Intelligence will Kill our Grandchildren

From: Bryan Bishop (kanzure@gmail.com)
Date: Tue Jun 24 2008 - 23:05:41 MDT


On Friday 13 June 2008, Anthony Berglas wrote:
> http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.h
>tml

Let's do this.

> There have been many exaggerated claims as to the power of Artificial
> Intelligence (AI), but there has also been real progress. Computers

This entirely depends on what form of progress you expect AI to come in.
Many people are taking peculiar routes to programmable intelligence,
relying on loopholes and assumed definitions of intelligence, and so
on. Those are difficult issues, but if you look more closely there may
be real progress in other areas. Recursive self-improvement (RSI),
maybe.

> can drive cars across rough desert tracks, understand speech, and
> prove complex mathematical theorems. It is difficult to predict
> future progress, but if a computer ever became about as good at
> programming computers as people are, then it could program a copy of

No, it could just copy bits and bytes; nothing about programming is
needed to copy software from one machine to another.

> itself. This would lead to an exponential rise in intelligence (now

No, this is not true. Physical grounding inherently limits the
hardware on which the software can 'exponentially' expand. Really the
numbers are going to look like something that keeps hitting the ceiling
of available hardware capacity, unless the system is physically
grounded in manufacturing processes that let it exponentially make the
machinery to make the machinery to ... <do what it does>.
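
A toy sketch of what I mean, in Python (the growth rate and the
hardware ceiling are invented numbers, purely illustrative):

    # Toy model: self-improvement compounds at rate r, but effective capability
    # is capped by whatever hardware actually exists. All numbers are made up.
    hardware_capacity = 1e6      # arbitrary units of available compute
    capability = 1.0
    r = 2.0                      # doubling per generation of self-improvement

    for generation in range(40):
        capability = min(capability * r, hardware_capacity)
        # Once the cap is hit, further "exponential" self-improvement does
        # nothing until the machine can also expand the hardware it runs on.

    print(capability)            # flatlines at hardware_capacity, not infinity

The curve looks exponential only until it slams into the ceiling; after
that it's flat, which is my point about grounding in manufacturing.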

> often referred to as the Singularity). And evolution suggests that a
> sufficiently powerful AI would probably destroy humanity. This paper

Wha? How does 'evolution' suggest this? Evolution doesn't come to our
door and actually say this to us, so I fail to see what you're trying
to say here. Evolution isn't a good friend of ours (and I'm not saying
it's a bad friend). I'm merely saying this needs some elaboration and
clarification.

> reviews progress in Artificial Intelligence and some philosophical
> issues and objections. It then describes the danger and proposes a
> radical solution, namely to limit the production of ever more
> powerful computers and so try to starve any AI of processing power.

That's peculiar. That's like shooting yourself in the foot. Especially
considering the ancient post to SL4 stating "I am an ai anyway".

> This is urgent, as computers are already almost powerful enough to
> host an artificial intelligence.

Maybe the solution is that, instead of making silly regulations, you
should try to make redundant the things you value most and want to save
from the potential destruction you foresee. Whether or not there's
enough computational power isn't going to stop much from happening.
Hell, we used to make computers in our basements out of freaking vacuum
tubes. These things can be made by hand if they have to be, and it's
relatively easy to automate. As for the semiconductor manufacturing
industry < http://heybryan.org/semiconductor.html >, those guys used to
make circuits with photomasks and lenses they bought from stores down
the street from their parents' garages. So regulations aren't really
going to stop that sort of thing.

> Hardware has certainly become much, much faster, but software has
> just become much, much slower to compensate. We think we understand
> computers and the sort of things they can do.

Are you a programmer, and do you have any idea?

> But quietly in the background there has been slow but steady progress
> in a variety of techniques generally known as Artificial
> Intelligence. Glimpses of the progress appear in applications such
> as speech recognition, some expert systems and cars that can drive
> themselves unaided on freeways or rough desert tracks. The problem is
> far from being solved, but there are many brilliant minds working on
> it.

Although many grants are given to researchers claiming those
applications are relevant to AI, there's not necessarily anything that
says a car that drives itself is intelligent at all. There are the
visual systems integrated into it, I suppose, but a lot of that has
been borrowed from biology when it comes to programs like pNEURON or
GENESIS, so really that's just stealing from biology [in which case our
search should go to biology, not to the foundations of computers, which
I must admit my longing for ...].

> It might seem implausible that a computer could ever become truly
> intelligent. After all, they aren't intelligent now. But we have a
> solid existence proof that intelligence is possible — namely
> ourselves. Unless one believes in magic then our intelligence must
> result from well defined electro chemical processes in our brains.
> If those could be understood and simulated then you would have an
> intelligent machine. But current results suggests that such a
> simulation is not necessary, there are many ways to build an
> intelligent machine. It is difficult to predict just how hard it is
> to build an intelligent machine, but barring magic it is certainly
> possible.

Bingo. Let me point out something that you have stated very, very well.
It is /people/ that are intelligent. There's something about their
brains, something about these systems, that does something we want to
imitate and elaborate on. But we don't know what the hell it is. I
don't want any games about saying it's pattern recognition or something
silly like that; don't cite pop psychology. What we know, as of now, is
that the brain is doing something awesome, and that we want to figure
out how to do it in other areas too. As for the "well defined electro
chemical processes", that's very, very vague for such an important
concept (intelligence). There's more to it than that.

http://heybryan.org/mediawiki/index.php/Henry_Markram

The brain is so much more than electrochemical processes. I'm not
saying that it's all relevant, but at the same time I'm saying that
handwaving about electrochemistry isn't the way to go about doing
this ...

> Man's intelligence is intimately tied to his physical body. The

I'd argue Google.

> brain is very finite, cannot be physically extended or copied, takes

I'd argue arms and legs re: extensions.

> What is certain is that an intelligence that was good at world
> domination would be good at world domination. So if there were a
> large number artificial intelligences, and just one of them wanted to
> and was capable of dominating the world, then it would. That is just

I don't understand. Are you saying that because an AI really wants
something, it is somehow privileged to get it because of its nature as
an AI, whereas intelligence implemented in biological systems isn't
going to get it? That makes little sense, and sounds like ancient
vitalism to me. World domination is world domination, no matter whether
you are AI or human.

> Darwin's evolution taken to the next level. The pen is mightier than

This is not Darwin.

> the sword, and the best intelligence has the best pen. It is also
> difficult to see why an AI would want humans around competing for
> resources and threatening the planet.

Let's think a little broader than the planet. This is SL4, not SL1.

> The first question to be addressed is whether computer hardware has
> sufficient power to run an intelligent program if such a program
> could be written.

The concept of power is meaningless when it comes to computer science.
Instead, you have to consider computational complexity, execution time,
or maybe even complexity classes (NP-hard, etc.). "Power" doesn't mean
much. It doesn't matter whether the AI runs for a few billion years or
for a few seconds. Mainly the issue is that we want something we can
use within a reasonable amount of time, so that we can analyze the data,
see whether it matches what it probably should be doing, and debug the
software. Debugging something that takes more than a few hours to run
is a pain in the ass; debugging something that runs over many years is
even worse. So the alternative is to get it correct the first time. But
that assumes a cathedral-style, monolithic development effort from some
sort of 'universal model of intelligence', when we don't even have that
sort of model in the first place. All we have is the human brain, and
we know it's doing something special. That's it. None of this
statistical inference crap to get around it. Debugging our methodology
of implementing this in other ways could be done if we accept that the
implementation isn't necessarily going to be somebody sitting down at a
terminal and typing out code [as much as I'd like to do just that].

> Our meat based brains have roughly 100 billion neurons. Each neuron
> can have complex behavior which is still not well understood, and may
> have an average of 7,000 connections to other neurons. Each neuron
> can operate concurrently with other neurons, which in theory could
> perform a staggering amount of computation. However, neurons are
> relatively slow, with only roughly 200 firings per second, so they
> have to work concurrently to produce results in a timely manner.

Actually, you might be interested to know it has been shown that in the
human brain there are only maybe up to 100 neurons in any path from
input to output, so there's a lot of specialization of neurons and of
the pathways for signal processing and such. It makes for an
interesting visualization of the brain. Just an idea.
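
Taking the figures in the quoted paragraph at face value, the
raw-operations arithmetic is easy to run. This is only a back-of-envelope
sketch; "operation" means something very different on each side of the
comparison:

    # Back-of-envelope using the quoted figures: 100 billion neurons,
    # ~7,000 synapses per neuron, ~200 firings per second.
    neurons = 100e9
    synapses_per_neuron = 7000
    firings_per_second = 200

    brain_events = neurons * synapses_per_neuron * firings_per_second  # ~1.4e17 per second
    desktop_ops = 3e9        # "several billion operations per second", as quoted

    print(brain_events / desktop_ops)   # roughly 5e7 -- the gap being waved at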

> That said, the computer can perform several billion operations per
> second, which is over a million times faster than neurons. And
> specialized hardware and advanced architectures can perform many
> operations simultaneously. Computers are also extremely accurate
> which is fortunate as they are also extremely sensitive to any
> errors.

When computing an intelligence algorithm, how does one see an error? ;-)

The models don't add up, in other words.

> Thus a computer that was ten thousand times faster than a desktop
> computer would probably be at least as computationally powerful as
> the human brain. With specialized hardware it would not be
> difficult to build such a machine in the very near future.

What the hell is power? Argh. See my commentary above.

> But current progress in artificial intelligence is rarely limited by
> the speed and power of modern computer hardware. The current
> limitation is that we simply do not know how to write the software.

No, we know how to program. You just don't know what to program. I do,
but the results aren't going to look like what you're expecting them to
look like (intelligence in the form of code on your blinky ssh tty).

> The "software" for the human brain is ultimately encoded into our
> DNA. What is amazing is that the entire human genome only contains

Whoa, hold on there. What the hell is software when it comes to the
human brain? Are you trying to make an analogy to the mind? Are you
trying to make an analogy to gene expression? There are already many
databases on the gene expression of different regions of the rat
brain < http://brain-maps.org/ >, and the Allen Institute has recently
begun funding a similar mapping expedition of the human brain in a
slice-by-slice manner. But I doubt that this is what you mean. Perhaps
you mean the internal perceptions of the running brain?

I see that you mention gene expression later. You write:

> Still, while babies are not born intelligent, it is clear that the
> core algorithms and structures that allow a baby to become
> intelligent are encoded in a very small amount of raw data. There is
> just not enough room for some large and arbitrary program to be
> encoded in our DNA, even if the junk DNA turns out not to be junk.
> This suggests that a very few clever insights might solve the problem
> of Artificial Intelligence very suddenly.

Some good thoughts. However, let's consider evolution for a moment.
The problem that evolution was overcoming, in terms of natural
selection, is not necessarily going to be represented directly in the
proteins and the various structures we see within the brain; rather,
these are the /byproducts/ of those selection pressures. I'm saying that
the design of these amorphous, parallel, nonlocalized systems of
diffusion signaling and all sorts of interesting molecular processes
isn't going to have a direct one-to-one correspondence with the problem
space that was being selected for, against, or whatever. There are some
aspects of the human brain, encoded in the genome, that I suspect are
important in terms of intelligence.

        http://heybryan.org/intense_world_syndrome.html

However, translating that into the "straight up" code that most AI
theorists want isn't necessarily going to happen. I'd like to point
out, though, that the computational neuroscientists have been doing
this already, especially with their microcolumnar simulations. I offer:

        http://heybryan.org/mediawiki/index.php/Computational_neuroscience
        http://heybryan.org/mediawiki/Henry_Markram

Markram has a team of 20 postdoc programmers working away at integrating
all sorts of interesting code that models biology into his system. He
has it running on a supercomputer, generating something like one
terabyte per second of raw output that he visualizes with a secondary
supercomputer. ;-) It works.
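
The genome-size point in the quoted passage is just arithmetic, for
what it's worth (this ignores compression, regulation, and everything
epigenetic, so treat it as an upper bound on raw data, nothing more):

    # Rough upper bound on the raw information content of the human genome.
    base_pairs = 3.2e9           # approximate length of the human genome
    bits_per_base = 2            # four bases -> 2 bits each
    genome_megabytes = base_pairs * bits_per_base / 8 / 1e6

    print(genome_megabytes)      # on the order of 800 MB, and only a fraction
                                 # of that has anything to do with the brain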

> It also suggests that if a small improvements can account for the
> huge increase in intelligence from ape to human, then producing super
> human intelligence might not be much harder than producing human
> intelligence (i.e. the problem is not exponentially difficult).

That's only true if you allow those same variables some relevance. For
example, in Markram's models there's the issue of spillover from the
synaptic junctions due to the decreased spacing between neurons in the
inhibitory channels of microcolumns. How would you apply that in a case
where you are *not* simulating neurons on a computer? I'd be interested
to know, because I haven't seen any models that can convert that very,
very specific detail into any of the more loose/simplistic models that
float around here and on the AGI discussion lists.

> Great progress was made in artificial intelligence during the early
> years of software. By the early 1970s computers could prove
> substantial theorems in mathematical logic, solve differential
> equations better than most undergraduate mathematicians, and beat
> most people at chess.

How is that related to intelligence? Except in that the people lied to
us?

> A major problem in AI is to relate the symbolic internal world of the
> computer to the real world at large, which is full of noisy and
> inconsistent data. "Neural networks" have an uncanny ability to
> learn complex relationships between a vector of observed inputs and a
> vector of known properties of the input. They can also be given
> memory between inferences, and thus solve complex problems. However,
> the models they learn are unintelligible arrays of numbers, and their
> utility for higher level reasoning and introspection is probably
> limited. (Their relationship to real neurons is tenuous.)

The symbol grounding problem is largely that there needs to be
grounding within the world itself in order to observe the results.
Trying to peek in at the "unintelligible arrays of numbers" is evidence
that people might be trying to do something stupid ... i.e., assuming
that perceptions are going to pop out at them saying "Hey! I'm the
basis of the understanding of a cat's respiratory system!". Heh.
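
Here's a minimal sketch of the "unintelligible arrays of numbers"
point: a toy two-layer network, numpy only, nothing to do with real
neurons. It usually learns XOR, and the "model" it ends up with is
exactly the kind of opaque weight array being described:

    import numpy as np

    # Train a tiny two-layer network on XOR and then look at what it "learned".
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)        # gradient of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

    print(out.round(2))    # it (usually) gets XOR right ...
    print(W1); print(W2)   # ... but the "model" is just these opaque numbers

Staring at W1 and W2 tells you nothing about XOR, which is exactly why
peeking at the arrays and expecting perceptions to pop out is silly.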

> Having been unfashionable for many years, AI research is now big
> business. The easy software problems have all been solved, adding
> intelligence may give the commercial edge. Google has invested
> heavily — they want to understand documents at a deeper level than
> just their keywords. Microsoft has also made substantial investments
> (particularly in Bayesian networks) — they want to understand Google.

Just wait until Google installs a brain farm.

> One major driver will be the need for practical intelligence as
> robots leave the factory and start to interact with the real world.

No, that's more a 'driver' for people to come to terms with the problems
and realize they might be interested in working on them; it says nothing
about actually solving them, or about whether robotic machinery of that
sort is the correct platform for implementing intelligence, etc.
Although I wouldn't mind messing around with such robotics, or even
highly parallel multi-core systems on a bot. Cool.

> In particular cars can already drive themselves over rough desert
> tracks and down freeways. We will see autonomous vehicles driving on
> our suburbs much sooner than later. (Initially they might be hybrid,
> and monitored remotely from somewhere in India.) Once robots start
> to mow grass, paint houses, explore Mars and lay bricks people may
> start to ask healthy questions as to the role of man. Being
> unnecessary is dangerous.

You're talking about physical manufacturing and mechanics, tasks that
machines can already do. Intelligence isn't really needed for those
things. In fact, it could be argued that people who do those things
aren't practicing much intelligence, especially in the case of houses
that are automatically manufactured (Bucky died too soon) or Mars
exploration (cough, we have a bot up there, and it's only somewhat
remotely operated; it needs to execute localized programs from time to
time). As for 'the role of man', I don't know what this means. I
suspect you are assuming the typical scenario where man is displaced by
automated, nonbiological machines. I consider this fixed by my request
for you to consider more than our simple planet. Who cares if you are
out of work? The machines are taking care of the necessities of life
anyway, yes? Then what's the big deal? And if you want to go somewhere
else, where you can do some work that robots are not doing at the
moment, then go do it. It's not like economics will stop you -- there's
no reason to keep current monetary systems, really, especially when we
consider singularities or F/OSS. And so on.

> Creationists are right to reject evolution. For evolution consumes
> all that is good and noble in mankind and reduces it to base impulses
> that have simply been found to be effective for breeding children.
> The one thing the we know about each of our millions of ancestors is
> that they all successfully had grandchildren. Will you?

No, there's more to evolution than just base impulses; it doesn't
reduce everything to them.

> Our sex drive produces children.

Wrong.

http://en.wikipedia.org/wiki/Intercourse

> Our love of our children ensures that we provide them with precious
> resources to thrive and breed.

Wrong. Hundreds of millions, if not billions, of mothers are too poor to
provide their children with the precious resources to thrive and breed,
no matter how much they might love them. I don't want to dissect the
rest of your paragraph. :(

> Nothing new above. But it is interesting to speculate what
> motivations an artificial intelligence might have. Would it inherit
> our noble goals?

Who cares? Let's plan for two scenarios:

1) Yes.

2) No.

Given #1: hurray?

Given #2: guess you should try to accomplish those goals yourself.

> It is difficult to see a role for humans in this scenario. Humans
> consume valuable resources, and could threaten the intelligence by
> destroying the planet. Maybe a few might be left in isolated parts
> of the world. But the intelligence would optimizes itself, why waste
> even 1% of the world's resources on man. Certainly evolution has
> left no place on earth for any other man-like hominids.

Geeze, if only there was something more than this planet ...

> If our computers threatened us, surely we could just turn them off?
> That is easier said than done.

You have any idea how easy it is to launch a nuclear warhead?

> The developers of the atomic bomb could not turn it off, even though
> some of them tried.

You can stop it from working by using it.

> Further, the Internet has enabled criminals to create huge botnets of
> other people's computers that they can control. The computer on your
> desk might be part of a botnet — it is very hard to know what a
> computer is thinking about. Ordinary dumb botnets are very difficult
> to eliminate due to their distributed nature. Imagine trying to
> control a truly intelligent botnet..

"Cap'n, just blow up the damn ship!"

> But a botnet cannot be shot with a zap gun. We live in the
> information age.

You, sir, need a nuclear zap gun. You want to really, really kill it,
right?

> Presidents and dictators do not gain power through their own physical
> strength, but rather through their intelligence, drive and instincts.
> Modern politicians already rely on sophisticated software to manage
> their campaigns and daily interactions. Imagine if some of their
> software was truly intelligent. Who would really be in control?

Arguably, who the hell is in control as it is now? I'm not really sure
'control' is an accurate framework for describing our peculiar
situation.

> Just because an AI could dominate the world does not mean that it
> would want to. But controlling one's environment (the world) is a
> subgoal of almost any other goal. For example, to study the
> universe, or even to prolong human life, one needs to continue to
> exist, and to acquire and utilize resources to solve the given goal.
> Allowing any competitor to kill the AI would defeat its ability to
> solve its base goal.

I've already argued against AI-domination scenarios above. Not that they
are impossible, but that they are not necessarily the end of the world
and of humanity alike.

> Philosophers have asked whether an artificial intelligence has real
> intelligence or is just simulating intelligence. This is actually a
> non-question, because those that ask it cannot define what measurable
> property "real" intelligence has that simulated intelligence does not
> have. It will be "real" enough if it dominates the world and
> destroys humanity.

No, that "real enough" is about any existential threat, which is
completely different from the concept of intelligence. Whether or not
it is intelligent is the issue ... not whether or not the result is
death ... sigh. There are so many complex strands of bullshit running
through that paragraph of yours. It's not your fault, but I'm not
prepared to go through it entirely. Let me try, though I can't
guarantee anything here. Look: you propose that AI could end in
domination and death, and then you say that if the result is domination
or death then the intelligence was "real", even though we're talking
about *intelligence*, not about your inability to plan for existential
threats.

> There are many doom's day scenarios. Bio technologies, nano
> technologies, global warming, nuclear annihilation. While these
> might be annoying, they are all within our normal understanding and
> some of humanity is likely to survive. We also would have at least
> some time to understand and react to most them. But intelligence is
> fundamental to our existence and its onset could be very fast. How
> do you argue with a much more intelligent opponent?

Stop arguing and just implement your damn solution *now*. Seriously. :-)

> (Biotechnology has been much over hyped as a threat. We have been
> doing battle with microbes for billions of years, and our bodies are
> very good at fighting them. It might also be possible to produce
> some increase human intelligence by tweaking the brain's
> biochemistry. But again, evolution has also been trying to do this
> for a long time. For a real intelligence explosion we need a
> technology that we really understand. And that means digital
> computers.)

You're assuming that evolution is goal-directed when you say it has
been trying to tweak intelligence to become even better. That doesn't
make sense at all, and it isn't Darwin. Not by a long shot.

> Trying to prevent people from building intelligent computers is like

Trying to prevent people *period* is like trying to stop microbes.

> trying to stop the spread of knowledge. Once Eve picks the apple it
> is very hard to put it back on the tree. As we get get close to
> artificial intelligence capabilities, it would only take a small team
> of clever programmers anywhere in the world to push it over the line.

No, again, programming isn't the hard part. We can do programming very,
very well.

> But it is not so easy to build powerful new computer chips. It takes

What the hell is power in this context?

> large investments and large teams with many specialties from
> producing ultra pure silicon to developing extremely complex logical
> designs. Extremely complex and precise machinery is required to

Not really. There are guys with basements operating their own
semiconductor fabrication setups. And when the industry was just
starting out, like I mentioned above, people were writing their masks
with lenses from down at the shop and using ethyl etches.

> build them. Unlike programming, this is certainly not something that
> can be done in someone's garage.

Hahaha ... I think I'll go cry now. Please stop ignoring me. Really.

^ Also, it's a good idea not to ignore the pioneers of the industry.

> We have a precedent in the control of nuclear fuel. While far from
> perfect, we do have strong controls on the availability of bomb
> making materials, and they could be made stronger if the political
> will existed. It is relatively easy to make an atomic bomb once one
> has enough plutonium or highly enriched uranium. But making the fuel
> is much, much harder. That is why we are alive today.

Go check one of the recent threads on transhumantech. There's a story
about how this isn't quite true. You can assemble the right resources
from a variety of different companies and be completely legit.

> If someone produced a safe and affordable car powered by plutonium,
> would we welcome that as a solution to soaring fuel prices? Of
> course not. We would consider it far too dangerous to have plutonium
> scattered throughout society.

Who is this 'we'?

> It is the goal of this paper to help raise awareness of the danger
> that computers pose. If that can be raised to the level of nuclear
> bombs, then action might well be possible.

Except that in your fear you forget the original Project Orion.

> So ideally we would try to reduce the power of new processors and
> destroy existing ones.

What the hell is power? Have you ever bought a processor?

> A 10 mega hertz processor running with 1 megabyte of memory is a
> thousand times weaker than current computers.

Weaker? Since when does the quartz crystal frequency determine physical
strength?
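
Clock frequency by itself doesn't give you a meaningful "strength"
number; throughput depends on core count, issue width, memory, and what
the workload actually is. A crude sketch of why the "thousand times
weaker" figure is underspecified (every number here is an assumption
for illustration, not a measurement of any real chip):

    # Throughput ~ cores * clock * instructions-per-cycle, very roughly.
    old_box = dict(cores=1, clock_hz=10e6, ipc=0.5)   # the hypothetical 10 MHz machine
    new_box = dict(cores=4, clock_hz=3e9,  ipc=2.0)   # a generic modern desktop

    def ops_per_second(machine):
        return machine["cores"] * machine["clock_hz"] * machine["ipc"]

    print(ops_per_second(new_box) / ops_per_second(old_box))
    # ~4800x on this crude metric -- and it still says nothing about memory
    # bandwidth, parallel hardware, or whether any of it matters for intelligence.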

> Is the ability to have video games with very sexy graphics really
> worth the annihilation of humanity?

You see, you're still assuming that we can't build multiply redundant
backup systems, and that Earth is the only planet on which anything
related to humans can be carried on, yet our presence on Mars shows
otherwise. Have you ever read any science fiction? That might be a good
introduction ...

> Yudkowsky proposed an alternate solution, namely that it might be
> possible to program a "Friendly" AI that will not hurt us. If the
> very first AI was friendly, then it might be capable of preventing
> other unfriendly AIs from developing. The first AI would have a head
> start on reprogramming itself, so no other AI would be able to catch
> it, at least initially.

And if it's not? Let's work on those solutions too ... *cough*. What
good does making policies against ebola do when you're bleeding out
your ass?

> While a Friendly AI would be very nice, it is probably just wishful
> thinking. There is simply nothing in it for the AI to be friendly to
> man.

Why are you assuming incentives?

> The force of evolution is just too strong.

What the hell?

> The AI that is good at world domination is good at world domination.

Tautologies are like tautologies.

> That said, there is no reason why limiting hardware should prevent
> research into friendly AI. It just gives us more time.

No, it just shoots us in the foot. You don't stop anybody from making
their own computers in their garages. That's what the whole freaking
Homebrew Computer Club was about in the first place [indeed, they
worked with some premanufactured components, but those components
started out in garages as well].

> As worms have evolved into apes, and apes to man, the evolution of
> man to an AI is just a natural process and something that could be
> celebrated rather than avoided. Certainly it would probably only be
> a matter of a few centuries before modern man destroys the earth,
> whereas an artificial intelligence may be able to survive for
> millenia.

Holy shit, man, you don't understand evolution. Particularly the part
about programming and "man evolving into AI". What most people consider
AI to be is something produced by programming, and I truly doubt that a
directed, intelligent process like programming counts as a naturally
occurring evolutionary process.

> We know that all of our impulses are just simple consequences of
> evolution. Love is an illusion, and all our endeavors are ultimately
> futile. The Zen Buddhists are right — desires are illusions, their
> abandonment is required for enlightenment.

Are you preaching?

> All very clever. But I have two little daughters, whom I love very
> much and would do anything for. That love may be a product of
> evolution, but it is real to me. AI means their death, so it matters
> to me. And so, I suspect, to the reader.

There's nothing about AI that means their death. See above.

> It is of course possible that "the Singularity" will never happen.
> That the problem of building an intelligent machine might just be too
> hard for man to solve.

Have you considered sex?

> This paper aims to raise awareness, and to encourage real discussion
> as to the fate of humanity and whether that matters.

You might be a newbie, and I might have been harsh on you. :)

- Bryan
________________________________________
http://heybryan.org/


