From: J. Andrew Rogers (firstname.lastname@example.org)
Date: Sun Nov 30 2008 - 01:07:45 MST
On Nov 29, 2008, at 9:39 PM, Philip Hunt wrote:
> It's true that any Turing-complete language can emulate any other, but
> that's not what I was talking about. The advantage of Lisp is that
> it's *very easy* to implement another language in it, a lot easier
> than going to the trouble of writing a compiler/interpreter.
It depends on what you mean by "going to the trouble", though LISP
does excel at this in many cases. For well-defined simple/elegant
language/model implementations, there are a number of programming
languages that can do most implementations in a trivial amount of
code. I don't consider a complete language implementation in less
than a hundred lines of code to be particularly onerous. However, the
really small, "pure" languages usually require ugly hacks to implement
some constructs in other small, "pure" computational models. Or at
least I cannot think of one that does not, and that was my main
point. You could find yourself trying to implement something in your
chosen language that is all "ugly hack".
I would guess that one of the pragmatic reasons for designing impure
languages in the first place is to create idiomatic shortcuts that
bypass necessarily ugly hacks in purer languages.
> When you talk of implementing a Forth-like language in Python, I
> assume you mean something like this:
> forthlikeProgram = [3, 4, '+'] # adds 3 and 4
> def runForth(program):
> #... runs 'program' in a stack-based language
> Now obviously the same thing could be done in Lisp. However, Lisp
> s-expressions are a less verbose encoding than Python lists, so the
> forth program would look like:
> (3 4 +)
> This isn't a big deal when the program only has 3 elements, but if it
> was bigger, it would be an issue.
> So I would say that Lisp is better than Python at working in the
> Forth idiom.
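For concreteness, the quoted sketch could be fleshed out along these lines (a toy illustration only; `run_forth` and its operator table are my own naming, nothing canonical):

```python
def run_forth(program):
    """Interpret a list as a tiny stack-based (Forth-like) program.

    Numbers push themselves onto the stack; the strings '+', '-', and
    '*' pop two operands and push the result.
    """
    stack = []
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
    }
    for word in program:
        if word in ops:
            b = stack.pop()
            a = stack.pop()
            stack.append(ops[word](a, b))
        else:
            stack.append(word)
    return stack

print(run_forth([3, 4, '+']))   # leaves 7 on the stack
```

The embedding is a few lines either way; the difference Philip points at is only in the surface syntax of the program literals.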
The semantics and dynamics of compositional and applicative languages
can be pretty different; that they both have a superficially similar
list-like structure was not really the point. You can do things in
the former that are kind of ugly in applicative languages, even with
LISP macros and the like, since the two families bury their abstractions in
different places and ways. Python's notion of lists is sufficiently
general and featured (read: "impure") that it can do both applicative
and compositional semantics pretty easily, though Python's native
dictionaries are somewhat inadequate for the latter purpose (so it is
only a half-win). Python was just an easy example.
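As a sketch of what I mean (a toy interpreter of my own devising, not any standard API): Python lists carry compositional semantics almost for free, because sequencing two programs is just concatenating their word lists.

```python
def run(program, stack=None):
    """Tiny concatenative interpreter: literals push themselves,
    callables rewrite the stack in place."""
    stack = [] if stack is None else stack
    for word in program:
        if callable(word):
            word(stack)
        else:
            stack.append(word)
    return stack

def mul(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

def dup(stack):
    stack.append(stack[-1])

double = [2, mul]       # n -> 2n
square = [dup, mul]     # n -> n*n

# The compositional property: composing programs is list concatenation.
print(run([5] + double + square))   # 5 -> 10 -> 100
```

The dictionary half of the story (mapping word names to definitions, with redefinition semantics) is where Python's native dicts start to feel clumsy, hence the "half-win".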
For many vanilla language implementations you will not exercise the
differences between compositional and applicative computational models
-- a lot of common computer science works very well with applicative
models. On the other hand, there are interesting computational models
that make the theoretical differences obvious and favor expression
in one implementation language over the other, i.e.
implementations in a FORTH-like language and in a LISP-like language would
have substantially different complexity, not always in favor of LISP.
LISP macros are slick indeed if you know you probably won't be better
off with something different.
My original point on this thread is that we have no idea what we
really need, at least not well enough to declare a particular
implementation language to be optimal for AGI. I have a subjective
preference for applicative models when doing implementations, but back
when I was prototyping computational models there were a few
interesting ones that were not a natural fit for LISP-like languages
and I got a lot of mileage out of the fact that Python could support
other abstract models with a minimum of fuss. A reasonable argument
could be made that a more generally expressive language rather than a
small, pure language is a conservative choice.
And if you are implementing in C/C++ for speed and scalability, it
will be moot anyway.
> Which for hard problems will often be the case. AI programming can be
> thought of as a special case of exploratory programming, where you
> don't know what the solution will be when you start coding. Therefore
> it helps if the language is flexible, e.g. if a variable's type is
> determined at run time not compile time.
Yet another pragmatic tradeoff. The extent to which a different typing
model will help you depends on the nature of the code being
written. Sometimes code is ugly and evil without dynamic typing;
other times static typing imposes no significant costs (and has
obvious benefits). I think the argument for strong typing is a bit
more biased, though.
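A toy illustration of the exploratory-programming point (the names here are mine, nothing canonical): because the type is checked at run time, the same traversal survives a change of node representation without edits at the call site.

```python
class ListNode:
    """First experiment: children stored as a list."""
    def __init__(self, children):
        self.children = children

class DictNode:
    """Later experiment: children stored as a mapping."""
    def __init__(self, mapping):
        self.children = list(mapping.values())

def count_leaves(node):
    # Duck typing: any object exposing a `children` attribute works,
    # so the tree representation can change mid-experiment.
    kids = getattr(node, 'children', None)
    if not kids:
        return 1
    return sum(count_leaves(k) for k in kids)

print(count_leaves(ListNode([ListNode([]), ListNode([])])))   # 2
```

With static typing you would rework the signatures each time the representation shifted; whether that cost matters depends, as above, on the code being written.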
Choosing the best tool for the job is dependent on rigorously defining
"the job". Absent that, you are left with a bunch of pragmatic
concerns like comfort using the language and flexibility.
J. Andrew Rogers
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT