From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Thu Feb 02 2006 - 16:38:30 MST
On Thursday 02 February 2006 03:44 pm, entropy@farviolet.com wrote:
> On Fri, 3 Feb 2006, Kevin Osborne wrote:
> >> Have you looked at Alice, from Stanford University? Still in the early
> >> stages, but it looks like it supports parallelism in a quite interesting
> >> fashion. Check out "Promises". Nice!
> >
> > will do; though new/experimental languages have certain advantages
> > (evolution says they'll be better) they also have certain
> > disadvantages, like the necessity to write your own API modules for
> > solved problems (sorts, collections, tcp/ip, unicode, math, remoting,
> > etc) - this is not to say that any newer-language-of-your-choice
> > doesn't have some or all of these features; but does it have
> > http://cpan.org/authors/id/D/DC/DCANTRELL/Acme-Scurvy-Whoreson-BilgeRat-1.0.readme ? :-)
>
> One problem with most experimental languages is that they aren't compiled,
> or if they are, they don't have the low-level access to the machine that
> C++ does. I say C++ because the inline methods/functions give you
> high-level abstract code that can compile directly down to an instruction
> or two of assembly. GCC these days is quite good at peeling away the
> layers of abstraction. For example, an inline getter/setter method usually
> produces exactly the same code that a direct assignment does.
To me this sounds like "premature optimization". Almost all of these languages
either have or eventually develop an FFI that lets one get down to C (not
usually C++) code, and developing at a higher level is much faster. You often
pay a price in speed of execution, but usually a profiler will let you trace
that to the small pieces of the code that can then be optimized. (One of the
great disappointments I had with Pyrex is that even though it links Python
and C in a very nice way, it was specifically announced that the developer
had no intention of using it to create optimized code. So you still need to
optimize in C and then link it into Python.) What I'm currently experimenting
with is Ruby (in my opinion the nicest of the languages...but SLOW [so you
need a faster machine]). Ruby has a way of embedding C code within it (small
chunks only, probably, but that's enough to handle the calls), so when an
application reaches the point of needing optimization, you can profile &
optimize. (AFTER checking your algorithms!)
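As a minimal sketch of the drop-down-to-C idea, here is Ruby's standard
Fiddle binding calling straight into libc. The choice of strlen is purely
illustrative; in practice you'd bind whatever C routine your profiler
fingered as the hot spot:

```ruby
require 'fiddle'

# Look up strlen in the symbols already loaded into this process (libc).
strlen_addr = Fiddle::Handle::DEFAULT['strlen']

# Describe the C signature: size_t strlen(const char *s)
strlen = Fiddle::Function.new(
  strlen_addr,
  [Fiddle::TYPE_VOIDP],   # one argument: a char pointer
  Fiddle::TYPE_SIZE_T     # returns size_t
)

# Ruby strings are passed to TYPE_VOIDP arguments as char* automatically.
puts strlen.call('profile first')  # prints 13
```

The point is that none of this requires leaving the high-level language
until the profiler has told you exactly where the time goes.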
I've also started wondering about using Apache to handle parallelism. I
haven't looked into the process at all, so this may be totally unreasonable
(or have unreasonable overhead), but Apache is noted for handling lots of
requests at once with unpredictable synchronization between them. A lot of
things can be served over the web, not just HTML, and the mechanism for
dispatching dynamically created web pages could probably be used in lots of
other ways. (XML is used to represent entire databases...I generally think
of this as a bad idea, but it would make a dandy method for communicating
between processes that aren't necessarily physically connected...and someone
else has already *done* most of the work.)
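To sketch the XML-between-processes idea in Ruby (REXML from the standard
library; the job/task element names here are invented just for illustration):

```ruby
require 'rexml/document'

# Sender side: wrap a request in XML before putting it on the wire.
request = REXML::Document.new
job = request.add_element('job', 'id' => '42')
job.add_element('task').text = 'resize-image'
wire_form = ''
request.write(wire_form)   # serialize into the string

# Receiver side: parse the string that came off the socket.
parsed = REXML::Document.new(wire_form)
puts parsed.root.attributes['id']        # prints 42
puts parsed.root.elements['task'].text   # prints resize-image
```

Since both ends only agree on the markup, the processes don't need to share
a language, a machine, or even a physical connection beyond whatever carries
the text.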
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT