Language considerations for AI

From: Fabio Mascarenhas (mascarenhas@acm.org)
Date: Mon Apr 16 2001 - 22:57:12 MDT


Hi everyone,

I've been lurking for quite a while, and will try to send a JOIN message
shortly. For now, I'm a computer science student on my last semester to
graduation, with interests in distributed systems, programming languages
and, of course, AI.

I'll try to comment on whether other languages, especially dynamic ones, would
raise the same issues for AI development as Java, and how they could be addressed.

>Ben Goertzel writes:
>
>Firstly, development is clearly faster in Java than in C/C++. Not having to
>debug malloc() errors is nice.

Yes, this is widely accepted. But the software engineering community (those
who do the actual programming, at least) is also recognizing that dynamic
languages (such as Lisp, Scheme, Smalltalk, Python and Ruby) offer even
faster development. Of course the lack of static typing demands more testing
of the system, but overall development goes faster, since in practice tests
for correctness of behavior also check correctness of types.

And of course all these languages can interface with C/C++ code, so speed
bottlenecks can be recoded in C. This still leaves the memory bottleneck,
though. See below.

>Java, if you use HotSpot, is no slower than C++. It is surely still slower
>than a speed-optimized C program, but not by an order of magnitude

This is to be expected, as Java method calls are conceptually identical to
calls to virtual methods in C++, so their performance should be similar
after JIT-compilation.

Dynamic languages, with optimization, can come close to this, too. Modern
Smalltalk VMs optimize method dispatch so that calls are almost as fast as
C++ virtual function calls.
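
To make the dispatch comparison concrete, here is a toy Java illustration
(the class names are mine, not Webmind's): every non-static, non-final method
call in Java is semantically virtual, and when only one implementation is
loaded, a JIT like HotSpot can devirtualize and inline the call site:

    // Toy example: a monomorphic virtual call site that a JIT can inline.
    interface Node {
        double activation();
    }

    class SimpleNode implements Node {
        public double activation() { return 0.5; }
    }

    public class Dispatch {
        public static void main(String[] args) {
            Node n = new SimpleNode(); // only one implementation loaded
            double sum = 0.0;
            for (int i = 0; i < 1000000; i++) {
                // virtual in the bytecode, but inlinable after JIT compilation
                sum += n.activation();
            }
            System.out.println(sum);
        }
    }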

>A Java program uses about 4 times as much memory as a comparable C++
>program. There's not much you can do about this. Also, all current JVM's
>on all current OS's are limited to about 1.2 GB of RAM per Java process.
>(Sun promises a 64-bit JVM, someday, but, well...). So, if you want an AI
>program with a lot of data in it, and you want to use Java, you're pushed to
>deal with distributed processing pretty early in your work. Distributed
>processing, however, slows down processing a lot. With a lot of work, we
>got the performance of a distributed system in the range of 5-10 times
>slower than that of a comparable 1-JVM system with the same amount of data.

The 4x figure for memory consumption is pretty large. Is this due to
inefficient garbage collection? (i.e., is half or more of the memory occupied
by "dead" objects?) And the 1.2 GB limit is serious; after all, 32-bit
operating systems nowadays give at least 2 GB of addressable memory per
process. Sloppy work they did! I don't know how Lisp and Smalltalk VMs
compare, but Python and Ruby, being C-based (Ruby even uses a modified malloc
for memory allocation), should be able to handle just as much as a C process.
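
If you want to see the ceiling in practice, here is a hedged sketch (the
class name is mine): run the probe below with the maximum heap raised via the
-Xmx flag, e.g. "java -Xmx1200m HeapProbe", and watch where it dies.

    // HeapProbe: allocate 1 MB blocks until the JVM refuses, printing the
    // heap in use as we go. Runtime reports the current heap, not the -Xmx
    // ceiling, so the last line printed approximates the real limit.
    public class HeapProbe {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            java.util.Vector hog = new java.util.Vector();
            try {
                while (true) {
                    hog.addElement(new byte[1024 * 1024]); // 1 MB at a time
                    long used = rt.totalMemory() - rt.freeMemory();
                    System.out.println("heap in use: "
                            + used / (1024 * 1024) + " MB");
                }
            } catch (OutOfMemoryError e) {
                System.out.println("hit the ceiling");
            }
        }
    }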

>The bottleneck turns out to be, not the actual network bandwidth, but the
>time required for Java to serialize/deserialize objects. One can minimize
>this by overriding the readObject() and writeObject() methods [custom
>serialization], but this has limited value, unfortunately.

Yes, serialization is slow in Java. Cloning via clone() is roughly two orders
of magnitude faster than cloning by serializing and deserializing (to a
memory stream). I really don't know why the gap is so large. For the other
languages cited: I don't know how Lisp and Scheme handle networking and
serialization, but it should be easy for them. I'll try some benchmarks with
Ruby to see how it fares. Maybe this is a design flaw in ObjectOutputStream?
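
For anyone who wants to reproduce the gap, here is a minimal benchmark sketch
(the class and sizes are made up; note that clone() here is a shallow copy
while the stream round trip is a deep copy, which accounts for some, but not
all, of the difference):

    import java.io.*;

    // Compare clone() against a serialize/deserialize round trip in memory.
    public class CloneVsSerialize implements Serializable, Cloneable {
        private int[] payload = new int[1000];

        public Object clone() throws CloneNotSupportedException {
            return super.clone(); // shallow copy
        }

        // Deep copy via ObjectOutputStream/ObjectInputStream in memory.
        static Object copyViaStreams(Object o) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            out.writeObject(o);
            out.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            return in.readObject();
        }

        public static void main(String[] args) throws Exception {
            CloneVsSerialize obj = new CloneVsSerialize();
            long t0 = System.currentTimeMillis();
            for (int i = 0; i < 1000; i++) obj.clone();
            long t1 = System.currentTimeMillis();
            for (int i = 0; i < 1000; i++) copyViaStreams(obj);
            long t2 = System.currentTimeMillis();
            System.out.println("clone():           " + (t1 - t0) + " ms");
            System.out.println("serialize streams: " + (t2 - t1) + " ms");
        }
    }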

>Furthermore, garbage collection in existing JVM's is fantastically
>inefficient for programs creating billions of objects. It seems that the
>JVM is not tuned for truly large-scale applications.

What is the object creation profile of Webmind? Does its operation produce a
lot of "dead" objects? Java's standard GC is primitive (I don't know about
the HotSpot JVM's). It's not multithreaded, it's not generational... there
was a paper in January's ACM SIGPLAN Notices ("Thread-Specific Heaps for
Multi-Threaded Programs", by Bjarne Steensgaard, Microsoft Research) about a
generational, multi-threaded garbage collector for an experimental
Java-to-native-code compiler that gives each thread its own heap, allowing
garbage collection of one thread's heap to proceed in parallel with normal
operation in the other threads. There were references to similar work on
Java and functional languages.

Lisp and Smalltalk have much more advanced garbage collectors by now (the
mostly-functional style of Lisp code plays to the strengths of generational
GC). Python does reference counting: circular structures leak memory (and I
really think you've got a lot of them :-) ). Ruby's GC is still primitive
(Java-level), but a generational collector is being implemented.

>And, there are no existing Java profilers that will profile a large-scale,
>multi-threaded Java system. Quite literally, they all crash when applied to
>such a program. For this reason we have created our own Java profiler. [The
>reason the commercial profilers crash is that they try to report all events
>in the program to their GUI's, and the Java GUI tools are not able to deal
>with such a high volume of data. Our profiler just saves events in a log
>file, which avoids the problem.]

Don't know about profilers, but mandatory reporting of every event to the GUI
is very braindead!

>There are also no adequate debuggers in Java, when one is creating a
>large-scale software system.

You can alleviate the need for some debugging with a careful testing
discipline. Take a look at the testing discipline of Extreme Programming
(c2.com/cgi/wiki?ExtremeProgramming). It mandates that each class in the
system have a test suite that exercises any interesting behavior the class
has. The suite must run without human intervention, and the best results come
when you write a test *before* coding the functionality that makes it pass
(a minimal sketch follows below). Of course it doesn't help with concurrency
and system tests, but localized errors can be found and corrected (even
prevented from occurring) without debugging the whole system.
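
Here is a hedged sketch of that test-first style using the JUnit framework
(www.junit.org); the class under test is just java.util.Stack, not anything
from Webmind:

    import junit.framework.TestCase;

    // JUnit-style unit test: runs without human intervention and can be
    // written before the class under test exists.
    public class StackTest extends TestCase {
        public StackTest(String name) {
            super(name);
        }

        public void testPushThenPopReturnsSameElement() {
            java.util.Stack stack = new java.util.Stack();
            stack.push("atom");
            assertEquals("atom", stack.pop());
            assertTrue(stack.empty());
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(
                    new junit.framework.TestSuite(StackTest.class));
        }
    }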

>work with distributed processing very early on. Distributed computing is a
>whole science unto itself, which we've mastered, but it took a lot of time.

At least you are on a local network, so you get ordering of messages for free
(all machines receive messages in the same order)! :-)

>What a few of us are doing now is taking the most essential aspects of the
>system and re-coding them in C in a very simple and specialized way. We're
>not dealing with distributed processing in this experimental version yet.
>We just want a simple system in which we can play around with the
>interaction of the various AI components of the system, freely and easily.
>In a couple months we'll deal with the reintegration of this into the main
>codebase, using JNI, which works very nicely BTW.

I can see how you have been burned by Java. In retrospect I think Smalltalk
or Lisp would have been a better choice at the time (if you were starting now
I would still recommend them; in a couple of years maybe Ruby will be a good
option too, but it's still in flux and the GC is poor). I think you already
came to this conclusion, but I would also recode the communication core in C
so messages wouldn't have to be composed by Java serialization (then you can
really roll your own serialization, converting the Java objects to the comm
core's message format; a sketch of the Java side follows). I wish you the
best of luck!
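
A hedged sketch of what the Java side of such a JNI bridge might look like
(all names here are hypothetical; the C side would implement the message
encoding and the socket I/O):

    // CommCore: Java declares the native entry points; the implementations
    // live in a C library (e.g. libcommcore.so) loaded at class-load time.
    public class CommCore {
        static {
            System.loadLibrary("commcore"); // finds libcommcore.so / .dll
        }

        // Implemented in C via JNI; sends an already-encoded message.
        public native void sendMessage(byte[] encoded);

        // Implemented in C via JNI; blocks until a message arrives.
        public native byte[] receiveMessage();
    }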

>stick with C/C++. Java is just not tuned for such projects, and you run
>into a hell of a lot of annoying problems that waste a huge amount of time
>and effort, far outweighing the savings in programming time.
>LISP is a nice language too, and there are some fast LISPs. But in
>large-scale use cases, they have the same garbage collection inefficiency
>problems as current JVM's.

As I said, Lisp and Smalltalk GCs are now a whole different animal from crude
mark-and-sweep collection. They already have enhancements that, for Java, are
still in the research labs. Another paper in the same journal issue I cited
above reports potential memory savings of 23% to 74% compared to the current
Java GC. Of course that's an upper bound, but it's still significant! And
multi-threaded garbage collectors can improve speed, too.

Well, this is pretty long for a first post! I hope I've contributed something
at least a tiny bit useful!

Fabio Mascarenhas
mascarenhas@acm.org


