From: Patrick McCuller (patrick@kia.net)
Date: Sun Apr 15 2001 - 20:03:10 MDT
>
> > I suspect, very much suspect, that you would have had
> > nearly identical
> > performance problems regardless of what language you used, and if you had
> > spent man-hours debugging malloc(), you wouldn't have gotten so far into
> > actual AI.
>
> This is ~exactly~ the mindset I had a year and a half ago. Now, after much
> practical experience with large-scale Java, I know better. Webmind is
> probably the most complex and intensive Java program ever written in terms
> of having vast numbers of different types of objects on different machines
> in different threads. We studied everything ever written about JVMs and OO
> programming, and used every design pattern known to man, and many new ones.
> We arrived at amazingly efficient design patterns using arrays in tricky
> ways to store data (int arrays storing various values accessed through bit
> masking), and accessing these through elegant interfaces. Even so,
> ultimately, we keep running up against the terrible inefficiency of the JVM
> when used for a program of this scale. I really believe that the JVM simply
> was never tested/tuned/tweaked on programs of this scale. C/C++ has been
> used for really massive and intensive programs before, and is just plain
> more robust in this regard.
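To make the packed int-array idea concrete, here is a rough sketch of what that kind of storage might look like - several small fields packed into each int with shifts and masks, hidden behind a small interface so callers never see the packing. The field names and bit widths are invented for illustration, not Webmind's actual layout:

    // Several small per-node fields packed into one int each:
    // bits 0-7 = type, bits 8-23 = weight, bit 24 = active flag.
    interface NodeStore {
        int getType(int index);
        int getWeight(int index);
        boolean isActive(int index);
        void set(int index, int type, int weight, boolean active);
    }

    final class PackedNodeStore implements NodeStore {
        private final int[] data;   // one int per node instead of one object per node

        PackedNodeStore(int capacity) {
            this.data = new int[capacity];
        }

        public int getType(int index)      { return data[index] & 0xFF; }
        public int getWeight(int index)    { return (data[index] >>> 8) & 0xFFFF; }
        public boolean isActive(int index) { return ((data[index] >>> 24) & 0x1) != 0; }

        public void set(int index, int type, int weight, boolean active) {
            data[index] = (type & 0xFF)
                        | ((weight & 0xFFFF) << 8)
                        | ((active ? 1 : 0) << 24);
        }
    }

The interface keeps callers independent of the layout, and one int per node avoids the per-object header and garbage-collection overhead of allocating millions of tiny objects.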
Thanks for clarifying this. You obviously speak from a great store of
experience on the matter. Do you have much experience with C++? If not, I
caution you not to be oversold on it: it's not THAT much better, and it has
its own issues. If I had a nickel for every hour I spent debugging
multithreading problems in C++, I would have to get a bigger piggy bank. :)
What did you find was more of a problem: memory consumption, execution speed,
or distributed interfaces? Even more mature languages only have a few years on
Java in terms of distributed architectures, so I suspect that while you had a
challenge on your hands with that, it wasn't the limiting factor. Am I wrong
about that?
Do you mind if I ask a few more questions?
How many machines are we talking about - five, fifty, five hundred?
How many JVMs did you run on each machine, and how many threads in each JVM?
You mentioned billions of objects - are you serious about that? I've never
worked with a system with so many active objects. Rows in databases, yes. But
for running objects, even in a distributed C++ architecture, I've never seen
anything like that many. Were they active (Runnable), serving as data storage,
or what?
>
> Man-hours debugging malloc() suck. But we've spent numerous man-hours
> working around Java's undocumented inefficiencies and limitations, and the
> lack of adequate debuggers or working profilers, and in the end, we still
> haven't solved the problems. The solutions are
Many of them are documented somewhere - you just have to go look where
people bash on Java. Those complaints can be astoundingly specific, which is
often really helpful. :)
Still, I agree, there are always problems with the tools.
>
> -- give up on java for the performance-intensive, RAM-hogging parts of the
> code, or
> -- improve the JVM itself
>
> I'd love to do the latter, and we have plenty of ideas for how, but we lack
> the resources ;p
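For the first of those options - moving the performance-intensive, RAM-hogging parts out of Java while keeping the rest - the usual mechanism is JNI. Here is a hypothetical sketch of the Java side only; the class, method names, and the "nodecore" library are invented, and nothing works until a matching C/C++ implementation is compiled and put on the library path:

    // Hypothetical Java-side JNI declarations; the "nodecore" library and
    // every name here are invented for illustration.
    public final class NativeNodeTable {
        static {
            // Expects libnodecore.so (or nodecore.dll) on java.library.path.
            System.loadLibrary("nodecore");
        }

        // The table lives in native memory, outside the Java heap and the GC.
        public static native long allocate(int capacity);   // returns an opaque handle
        public static native void free(long handle);
        public static native int  get(long handle, int index);
        public static native void set(long handle, int index, int value);
    }

The Java heap then only holds the handles, while the bulk data sits in memory the garbage collector never has to scan.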
Ultimately, the latter would be better, but I understand the resource
limitation. I take it you tried every available JVM?
Patrick McCuller
>
> ben
>