Re: [sl4] How much do languages matter for AGI programming?

From: Edward Miller (progressive_1987@yahoo.com)
Date: Fri Nov 28 2008 - 07:53:40 MST


Now that I think about it, the minds-in-general concept is a little different from the image I linked to; he often represents it as concentric circles or a quadrant in a grid. You get the idea, though. There are virtually endless possibilities for how a mind could function. Whether there is an objectively optimal mind that a recursively self-improving AI would naturally gravitate towards, regardless of how it begins, is another matter. For my money, we need to be very careful with details like the programming language, which may seem irrelevant at first glance, since I do not think it is likely to all just work out regardless of programming methodology.

Another thing I just thought of, with regard to Matt's post, is how modular and decentralized this thing should be. Would it be programmed to be so distributed that it could lose chunks of its hardware and still function? Obviously this would make failure much less likely, but I'm not sure how amenable all the current AGI models are to this sort of structure. Would any of the current models not be amenable to a massively parallelized architecture?
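The kind of redundancy I have in mind can be sketched in a few lines. This is purely illustrative (the `run_replicated` function and the simulated workers are hypothetical names, not from any real framework): each task is tried on several replicas, so losing a chunk of hardware doesn't lose the result.

```python
# Hedged sketch: replicate a task across workers so that the failure of
# any one "chunk" of hardware doesn't stop the computation. Worker
# failure is simulated with exceptions; all names are illustrative.
import random

def run_replicated(task, workers, replicas=3):
    # Try the task on up to `replicas` randomly chosen workers;
    # the first one that succeeds wins.
    for worker in random.sample(workers, k=min(replicas, len(workers))):
        try:
            return worker(task)
        except RuntimeError:
            continue  # this worker "failed"; fall through to the next
    raise RuntimeError("all replicas failed")

def healthy(task):
    return task * 2

def dead(task):
    raise RuntimeError("hardware lost")

# Two of three workers are down, yet the answer still comes back.
print(run_replicated(21, [dead, healthy, dead]))  # 42
```

Whether a given AGI model can tolerate this style of partitioning is exactly the open question above; the sketch only shows the shape of the mechanism.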

________________________________
From: Edward Miller <progressive_1987@yahoo.com>
To: sl4@sl4.org
Sent: Friday, November 28, 2008 8:36:09 AM
Subject: Re: [sl4] How much do languages matter for AGI programming?

Philip,

I do not know enough about subject-oriented programming to offer any coherent description. As for the minds-in-general thing, I was just referring to this:

http://intelligence.org/sites/all/themes/siai/images/photos/home/header_home-photo3.jpg

I think I fundamentally agree with you that this problem will require more than one tool. Creating some sort of neural network may benefit from highly parallelized object oriented programming, whereas building more specialized decision functions may benefit from logic programming, and a robotic or virtual embodiment may benefit from procedural programming. Now everybody wins!

Also, I now think it is quite important for the designs of the hardware and firmware to be available to the AGI for improvement as well. Obviously, unless we create programmable bio-circuits or some such thing, it won't be able to change hardware on the fly while it is still young, but it would be best to provide the AGI with something approaching that as closely as possible, to allow for maximum recursive self-improvement. Open hardware designs, or at least licensed designs, are the closest we can currently get to that ideal.

Matt,

MapReduce does sound like a good idea. Open-source implementations of it exist, which is how it has been ported to Java. Thus, I wouldn't feel tied down to C++, or even Java. We need to find the best language. Would you want an AGI that is prone to buffer overflows? These sorts of problems can make the AGI very vulnerable to hacking, malfunction, or complete failure. I am assuming it would be best to implement the most rock-solid language(s) possible. If no such language exists to fit our precise requirements, perhaps creating a new language or modifying an existing one will be necessary.
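For anyone on the list who hasn't seen it, the MapReduce pattern itself is simple enough to sketch in a few lines. Real frameworks (e.g. Hadoop) distribute the two phases across many machines; here both run locally just to show the shape of the computation, using the classic word-count example.

```python
# Toy sketch of the MapReduce pattern: word counting.
# map_phase emits (key, value) pairs; reduce_phase groups by key
# and combines the values. A real framework shards both phases.
from collections import defaultdict

def map_phase(documents):
    # Emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cat sat", "the cat ran"]
print(reduce_phase(map_phase(docs)))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

The appeal for fault tolerance is that each map and reduce task is independent, so a failed task can simply be re-run on another machine.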

________________________________
From: Philip Hunt <cabalamat@googlemail.com>
To: sl4@sl4.org
Sent: Friday, November 28, 2008 3:25:38 AM
Subject: Re: [sl4] How much do languages matter for AGI programming?

2008/11/28 Edward Miller <progressive_1987@yahoo.com>:
> I was just reading about programming methods that I had never heard of
> before such as Aspect-oriented programming and Subject-oriented programming.
> I was thinking what consequences the programming language has for AGI.

Aspect-oriented programming, from what I've seen of it, is only useful in
infrequently encountered situations; I doubt it'll ever catch on in
a big way.

I've never heard of Subject-oriented programming -- maybe you can elaborate?

> I know there are lots of flame wars regarding the superiority of programming
> languages, and the Artificial Intelligence community has been arguing over
> it for years. Some prefer Logic Programming, while others like Marvin Minsky
> prefer good ol procedural programming.

A programming language is a tool, and you should use the right tool for
the job. Anyone who claims that there is one language which is the
best at all jobs is wrong. So is the domain of AI programming small
enough that there is one language that is best for all of it? I very
much doubt that this is the case.

> Finally, I believe Eliezer has
> recommended Java, correct me if I am wrong.

I believe he has. It's a decision I find puzzling -- I wouldn't
recommend Java for anything that involves exploratory programming and
coding complex algorithms.

> I was looking over a lot of the criticisms of all these languages, and it
> seems to become much more serious when you think about what it could mean
> for AGI. Yet, even within a particular programming paradigm there is much
> variation. I am reminded of the people who prize C# because it is impossible
> to have buffer overflows and so forth.

This is true of most languages that do automatic storage management,
e.g. Lisp, Python, Java, etc.
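For instance, in any of these memory-safe languages an out-of-range access raises a catchable error rather than silently writing into adjacent memory. A quick Python illustration (the `safe_read` helper is just a hypothetical name for the sake of the example; Java and C# behave analogously):

```python
# In a bounds-checked language, an out-of-range access raises an
# exception instead of overflowing into neighbouring memory -- the
# failure mode behind classic buffer-overflow exploits in C/C++.
def safe_read(buffer, index):
    try:
        return buffer[index]
    except IndexError:
        return None  # the bad access is caught, not exploited

data = [1, 2, 3]
print(safe_read(data, 1))   # 2
print(safe_read(data, 10))  # None -- no buffer overflow possible
```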

> Is there any way to know which would be the best for specifically a
> recursively-improving AGI?

I would approach this problem by using two separate languages. One
would be the "implementation language", in which the majority of the
program was written. Running inside this program, in a sandbox, would
be a simpler language, possibly a subset of Lisp, which the program
could reason about and alter.
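A minimal sketch of what I mean, assuming Python as the implementation language and a tiny Lisp-like subset as the inner language (all function names here are illustrative). The key point is that the inner language only sees whitelisted primitives, so the host can inspect or rewrite inner programs as plain data:

```python
# Host-language sketch: a tiny, sandboxed Lisp-like evaluator. Inner
# programs are just nested lists, so the host program can reason about
# and alter them before (or instead of) running them.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

# The "sandbox": only these whitelisted primitives exist.
# No I/O, no access to the host environment.
ENV = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(expr, env=ENV):
    if isinstance(expr, str):          # symbol lookup
        return env[expr]
    if isinstance(expr, int):          # literal
        return expr
    fn = evaluate(expr[0], env)        # application
    args = [evaluate(a, env) for a in expr[1:]]
    return fn(*args)

program = "(+ 1 (* 2 3))"
print(evaluate(parse(tokenize(program))))  # 7
```

Because the inner program is ordinary data to the host, self-modification can be confined to this sandbox while the implementation language stays fixed.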

Of course, I'm sure I wouldn't get the architecture right on the first
attempt. As Frederick Brooks said: plan to throw one away; you will,
anyhow. If that's true of mundane programming, how much more true is
it of AGI?

> Or does it not matter what the base language is
> because the AGI will just evolve new languages?

All Turing-complete languages are in some sense equivalent. But they
differ in how fast they run and how easy they are to code in.
Humans find it easier to reason about a program in a high-level language
than in a low-level one; an AI would surely be the same.

> That would sort of assume
> that there is really only one perfect type of intelligence that all roads
> lead to, which I am not so sure about...

It's easy to imagine two intelligences such that one is always better
at Domain A and the other is always better at Domain B.

> (I am imagining Eliezer's minds-in-general diagram).

I'm not familiar with this. URL?

> Granted, if solving this problem requires
> learning or inventing some new obscure programming method, this would be a
> burden, but maybe a necessary one.

Learning a new programming language is a vastly easier task than coding an AGI.

-- 
Philip Hunt, <cabalamat@googlemail.com>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
      


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT