Re: META: Killthread; (Re: Edge.org: Jaron Lanier)

From: Perry E. Metzger (perry@piermont.com)
Date: Sat Nov 29 2003 - 22:44:57 MST


Tommy McCabe <rocketjet314@yahoo.com> writes:
> Since when is there any evidence that Moore's Law is
> petering out? People have been claiming that for
> fifteen years and chips continue to grow faster.

Right now, we're hitting the fundamental limits on
photolithographically produced circuits. And yes, we really are. If
you look at the electron micrographs of the transistors we're
producing at the 90nm scale, it is pretty obvious that we're hitting
the limits -- anything below 10-15nm is not going to be within our
abilities, and the folks at the semi houses pretty much say that up
front.

We're also hitting serious problems with power dissipation. A chip of
maximal density using 10nm processes will be beyond our power to cool,
even if we can run it slow. That's because leakage currents have
gotten far too high. (By "beyond our ability to cool" I mean
"dissipate more than 10 kW/cm^2".)

Now, it is entirely possible we could turn around and start using some
other technology -- say Drexlerian rod logic -- and get increases in
density past that, and lower power usage. However, we don't have true
molecular nanotechnology yet, and it seems very likely that we're
going to hit the end of the road for silicon before we do. Even if we
do manage to hop off of the silicon train and onto MNT, however, we do
eventually get to certain fundamental limits.

The 0.5kT limit isn't one we can avoid (though we can try to minimize
the effect by using reversible logic as much as possible), and
associated with 0.5kT is the cooling problem -- you have to dissipate
what you generate. Presumably we'll have to cool our machines to lower
and lower temperatures to make 0.5kT smaller, but there are limits there,
too. We also have no idea how to make components smaller than
individual atoms (and the uncertainty principle seems to give one
pause about any possibility of doing that even on a neutron star.)
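
To put a number on the 0.5kT point, here is a rough sketch of the
arithmetic, assuming room temperature and taking the 0.5kT figure at
face value (the usual Landauer bound is kT*ln 2, the same order of
magnitude):

    # Rough arithmetic for the thermal cost of irreversible bit
    # operations. All figures are illustrative assumptions.
    k = 1.380649e-23        # Boltzmann constant, J/K
    T = 300.0               # assume room temperature, K
    e_bit = 0.5 * k * T     # ~2.1e-21 J per irreversible bit operation

    rate = 1e21             # hypothetical bit-erasures per second
    power = e_bit * rate
    print(f"{e_bit:.2e} J per bit, {power:.1f} W at {rate:.0e} erasures/s")
    # Cooling to 3 K buys you a factor of 100, but no temperature above
    # absolute zero makes it free -- hence the limits mentioned above.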

In any case, however, Moore's Law ends within 40 years, period. If you
do the back-of-the-envelope calculation, there are perhaps 2^77
carbon atoms in a cc of diamond, and you aren't going to store many
more bits than that per cc, at least with foreseeable
technologies. (I'd argue you can't even store that many if you expect
speed, but...)
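
The arithmetic, for anyone who wants to check it (a quick sketch,
assuming diamond at about 3.5 g/cm^3 and one potential bit per carbon
atom):

    import math

    # Carbon atoms in one cubic centimeter of diamond.
    density = 3.5           # g/cm^3, approximate density of diamond
    molar_mass = 12.0       # g/mol, carbon
    avogadro = 6.022e23     # atoms/mol

    atoms_per_cc = density / molar_mass * avogadro
    print(f"{atoms_per_cc:.2e} atoms/cc = 2^{math.log2(atoms_per_cc):.1f}")
    # About 1.8e23 atoms/cc, i.e. roughly 2^77, as claimed above.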

> Since when does AI require specialized hardware? Fast
> hardware, quite possibly, but specialized hardware?

We don't know yet, as we don't yet have AI -- or perhaps you think
that because it is easy to sit around speculating about AI, you
actually know how to build it?

Perhaps it will turn out that, for better or ill, the best we know how
to do until we have improved ourselves a lot and understand brains
better is to run neural net simulations of natural brains. That might
very well require specialized hardware to achieve sufficient speeds.
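
To see where the speed worry comes from, here is one commonly cited
back-of-the-envelope (the neuron and synapse counts are rough
order-of-magnitude figures, nothing more):

    # Rough event rate for a naive whole-brain neural net simulation.
    # All numbers are order-of-magnitude assumptions.
    neurons = 1e11              # ~10^11 neurons in a human brain
    synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
    firing_rate = 100.0         # assume ~100 Hz as a worst case

    events_per_sec = neurons * synapses_per_neuron * firing_rate
    print(f"~{events_per_sec:.0e} synaptic events/s")
    # ~1e17 events/s. Even at one machine instruction per event, that
    # is many orders of magnitude beyond today's general purpose CPUs,
    # which is where the case for specialized hardware comes from.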

> Fast hardware can be obtained by linking slow hardware together.

That's called "parallel processing". Many of us were doing that sort
of work some years ago. It turns out it requires alternative
programming techniques, and it also turns out not to be a panacea.
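
One way to see why it isn't a panacea is Amdahl's law: the serial
fraction of the work caps your speedup no matter how many slow boxes
you string together. A small illustrative sketch:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
    # parallelizable fraction of the work and n is the processor count.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (10, 100, 1000, 1000000):
        print(n, round(speedup(0.95, n), 1))
    # With even 5% of the work stuck being serial, a thousand processors
    # buy you less than a 20x speedup, and a million buy you barely more.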

> Specialized hardware requires a redesign of the chip. We might need
> the former, but why would we need the latter?

Because, sadly, specialized designs continue to be faster than more
general ones.

>> > thorough knowledge of the hardware it is being
>>
>> Absolutely.
>
> I wasn't talking about it in the sense of "which chip
> you are running it on". You need to know if it's on a
> PC-type computer or a Mac-type computer, but there is
> no need to program in binary or even assembly code,
> which requires a lot of knowledge of the chip
> architecture. High-level languages will work just as
> well.

Spoken like someone who doesn't hack on operating systems much. :)
Also spoken like someone who doesn't hack on high performance apps.

>> > programmed on, there is no need to waste time
>>
>> If both of your premises weren't wrong, I'd
>> be agreeing with your conclusion.
>
> Please explain how my premises are wrong. AI is
> obviously harder and has more things on the line than
> a conventional programming project, but why can't it
> be done on regular hardware?

Don't know yet. I think we'll know when we've finished the work.

I will say this -- when I was doing vision work (many years ago I must
admit), the state of the art work was being done on artificial retinas
and processing networks implemented in hardware. That wasn't because
it wasn't possible to do the work on conventional architectures -- but
because the researchers wanted to get their results within their
lifetimes.

> And why can't it be done
> in a high-level language that doesn't require a lot of
> knowledge of the chip?

Don't know yet. We'll know when we're done, won't we?

>> > discussing it. If it isn't broken, don't fix it, and
>> > don't spend valuable time discussing it. Quote from
>> > Staring into the Singularity- "Ever since the late
>> > 90's, the Singularity has been only a problem of
>>
>> I'm completely immune to quotes. As long as you can
>> show me that hardware is not a problem, and more and
>> better hardware isn't a very powerful tool to circumvent
>> this software (the separation between software and
>> hardware is yet another sterile meme of the complex
>> we started this discussion with), you might as well
>> be quoting from Mao's Little Red Book.
>
> You can have the most powerful chip on the planet, but
> you need software to run on it, and that's the tricky
> part.

The tricky part is understanding what you're doing. We don't know what
we're doing yet, so we don't really know what tools we'll need.

BTW, I agree with Eugen -- quit quoting manifestos. This is science
and engineering, not communism.

> You can get faster computers by stringing slower
> computers together, but you can't get better programs
> by stringing bad programs together.

Daniel Dennett notes that most people are only comfortable with the
idea of complex systems building less complex systems, and Darwin's
most revolutionary idea was that it was possible for an essentially
dumb and non-complicated process to construct exquisitely complicated
things.

The way evolution works is precisely by stringing bad programs
together, and twisting them until they work.
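
A toy illustration of the point -- Dawkins' old "weasel" demo, which
is a sketch of the selection idea and nothing like real AI work:

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"    # arbitrary toy target
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        # A dumb, non-complicated measure: count matching characters.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        def flip(c):
            return random.choice(ALPHABET) if random.random() < rate else c
        return "".join(flip(c) for c in s)

    # Start with a bad "program" and twist it until it works.
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    while fitness(best) < len(TARGET):
        best = max([mutate(best) for _ in range(100)] + [best], key=fitness)
    print(best)

Dumb process, exquisitely non-random-looking result.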

> I agree that better hardware makes the software problems easier, but
> even incredibly simple scenarios like software processes that were
> mutated by another low-intelligence process and selected for their
> ability to play chess require a good deal of software design. And
> that scenario would probably end with an AI that wants to demolish
> the solar system to make more room for hardware to play chess
> better.

Why would it necessarily end that way? We were built by such a process
and are not particularly awful in terms of desire to destroy
vs. desire to build. (Of course, I could go and quote Bakunin on the
urge to destroy here, but that would just be tweaking you more than I
am already.)

>> > software." The hardware companies can handle the
>> > problem of making fast chips- but we need the code
>> to
>>
>> No, they can't. That's the whole point of this discussion. Johnny
>> can't make fast chips, and if you want AI, you better understand
>> why.
>
> Even if modern PCs are too slow for AI, you can use a
> supercomputer or distributed computing (or lots of
> PC's working in parallel in some warehouse.) And even
> if that doesn't work, chips are getting faster.

And what if no such technique produces something fast enough?

I remember a few years ago when Jim Gillogly tried to write a simple
computer program to simulate the operation of the Bombe -- that was
the electromechanical system that was used to break the daily keys
used on the Enigma. He discovered, much to his shock, that 50 or 60
years of computer development haven't made a general purpose computer
fast enough to do what a simple arrangement of wires and rotors could
do in the '40s.

Or, to put it another way: every computer out there today has graphics
accelerators that do, in a few million transistors, what no amount of
general purpose computing could do to speed up real time
animation.

It may easily turn out that a few simple circuits speed up neural net
simulations enough that we'll use specialized hardware instead of
general purpose for our AI work. Or, maybe we won't. Who knows?

The real point here, though, is this: quit being arrogant. You and I
are ignorant. We don't know how to build an AI, except in the most
general terms useful for philosophical arguments. Neither of us knows
what technologies will be required in the end. Therefore, being humble
in the face of that ignorance is in order.

>> > make the chips become a Friendly Seed AI. And that's
>> > where SIAI comes in.
>>
>> I'm not feeling like joining the F issue before the
>> hardware and the software parts are addressed.
>
> Even Eurisko shouldn't have been done without a
> coherent theory of Friendliness.

I suspect (I'm sorry to say) that assuring Friendliness is impossible,
both on a formal level (see Rice's Theorem) and on a practical level
(see informal points made by folks like Vinge on the impossibility of
understanding and thus controlling that which is vastly smarter than
you are.) I may be wrong, of course, but it doesn't look very good to
me.
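
For reference, the formal result being leaned on is Rice's theorem;
one standard statement (quoted from memory, and note that it covers
the general case, not any particular specially constructed program):

    Let $S$ be any set of partial computable functions such that
    $S \neq \emptyset$ and $S$ does not contain every partial
    computable function. Then $\{\, e \mid \varphi_e \in S \,\}$
    is undecidable.

Read "this program is Friendly" as a non-trivial property of a
program's behavior and you get the formal-level worry; the practical
one is Vinge's.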

(I realize that I've just violated the religion many people here on
this list subscribe to, but I have no respect for religion.)

However, given how lame the hardware Eurisko was run on was, fears of
it getting "out of hand" seem utterly unwarranted. Eurisko wasn't
even as smart as a grasshopper.

(As an aside, I'll say this out loud even though some of Doug Lenat's
victims live here -- the joke about Lenat in the Hacker's Dictionary
was hardly severe enough. Cyc is one of the most massive wastes of
money I've ever seen.)

> When you're planning on making a being that has the potential to
> blow up the planet, you don't want to take any unnecessary risks by
> something as easily remedied as putting the AI before the
> Friendliness theory.

Keep in mind that, out there, there are likely intelligent creatures
created without regard to "Friendliness theory" that whatever you
create is going to have to survive against. Someday, they'll encounter
each other. I'd prefer that my successors not be wiped out at first
glance in such an encounter, which likely requires that such designs
be stupendous badasses (to use the Neal Stephenson term).

Again, though, I'm probably violating the local religion in saying
that.

>> Unless, of course, that's off-topic for this list.
>> If it is so, this list is about plucking virtual
>> lint from our nonexistent navels.
>>
>> -- Eugen* Leitl leitl

Eugen and I seem to be in violent agreement about far too many
things. :)

Perry

-- 
Perry E. Metzger		perry@piermont.com

