RE: Lojban and AI

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Mar 14 2005 - 04:59:06 MST


> I have been interested in Lojban for a long time but I never found the
> time to learn more. I remember reading that Lojban was explicitly
> designed with applications to AI in mind.
> Of course if Lojban is used to interface with AIs on the basis that
> it's easier for the AI to parse, we are moving a difficult issue from
> the computational system to the external world (documents have to be
> translated into Lojban and operators have to be trained in Lojban), and
> it is not very clear to me why this makes sense. Of course, if parsing
> natural language proves really beyond our skills, and parsing Lojban
> does not, then it makes perfect sense.

It is clear to me that teaching an AI natural language is NOT fundamentally
"beyond our skills."

I am in the following position.

I believe I have a correct design for an AGI system, but I also believe it
will take a substantial amount of work -- including programming, testing,
tuning, algorithm-tweaking and *teaching* -- to turn this design into a
working AGI system with human-level intelligence.

I currently have close to zero funding oriented toward this AGI project (and
am grateful to those who have invested or donated a little, so that the
amount isn't exactly zero...).

I've been trying to get the job done "along the way" by reusing tools built
for commercial narrow-AI projects, but this hasn't worked out as well as
hoped. We have built plenty of tools useful for both narrow AI and AGI, but
what we're left with now is a big chunk of pure-AGI work that can't be done
under the guise of any commercial narrow-AI project.

So I'm thinking hard about ways to reduce the amount of man-years required
to get from here (AGI design + software framework + useful collection of
tools within that framework) to there (working AGI that can think like a
human toddler, for a start). I know I can't reduce it anywhere close to
zero, but the smaller the better ... every little bit helps.

My assumption is that once we have created an AI with human-toddler-level
intelligence, the next stage will be a lot easier from an AI-science
perspective, and the nasty issues will be the ones Eliezer has often focused
on -- control, ethics, and so forth.

So, my current estimation is that the quickest path to an AI system with
human-toddler-level intelligence may well be to make a system that interacts
with humans in a 3D simulation-world (Ari Heljakka is currently building us
a nice open-source one, using the CrystalSpace game-world toolkit), and
communicates with humans about what it's doing using Lojban.

This is not because using English instead of Lojban would be fundamentally
more difficult. It's because -- if one wants to take an approach where one
"seeds" experiential language learning with some hard-wired linguistic
knowledge -- there's a lot more English knowledge to wire in, since
English is a much more complex language. That means a lot more for the AI
to unlearn and adapt as its language understanding gradually becomes more
experientially grounded, through its interactions with humans.
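To illustrate the kind of simplicity I mean: every Lojban sentence (bridi) is a predicate (selbri) with numbered argument places (sumti) in a regular order, so the core of the grammar can be hard-wired in a handful of lines. The sketch below is a toy, not a real Lojban parser -- the place structures shown for tavla and klama are standard Lojban, but the tiny lexicon and the parsing strategy are drastically simplified for illustration.

```python
# Toy sketch of why Lojban's core grammar is cheap to hard-wire.
# NOT a real Lojban parser; lexicon and strategy are simplified.

# Tiny hand-written lexicon. Real Lojban has ~1350 root words (gismu),
# but each one follows the same fixed place-structure pattern.
SUMTI = {"mi": "speaker", "do": "listener", "ti": "this-thing"}
SELBRI = {
    "tavla": ("talker", "audience", "topic", "language"),
    "klama": ("goer", "destination", "origin", "route", "means"),
}

def parse_bridi(text):
    """Parse a simple 'sumti selbri sumti...' bridi into predicate form."""
    words = text.split()
    # Locate the selbri; sumti before it fill x1, sumti after fill x2, x3...
    idx = next(i for i, w in enumerate(words) if w in SELBRI)
    args = [SUMTI[w] for w in words[:idx] + words[idx + 1:]]
    places = SELBRI[words[idx]]
    return {f"x{i+1} ({places[i]})": a for i, a in enumerate(args)}

print(parse_bridi("mi tavla do"))
# {'x1 (talker)': 'speaker', 'x2 (audience)': 'listener'}
```

The point of the sketch is that there is nothing like English's irregular word order, idioms, or attachment ambiguities to encode: the seed knowledge is essentially a lexicon plus one structural rule.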

As I keep saying, this is a pragmatic point rather than a fundamental point.

From my point of view, if I can shave, say, 5-10 man-years of
development/tuning/teaching from the path to toddler-level AI by doing the
first teaching in Lojban rather than English, this is very worthwhile.

Compared to making an AI toddler fluent in ANY human language (including
Lojban), teaching an AI toddler a second language is gonna be a piece of
cake...

The only really good counterargument I can see to the perspective I'm
presenting here would be the following:

a) the whole idea of a hybrid approach (wiring in some linguistic knowledge
to seed experiential learning) is wrong: everything has to be done via
experiential learning

b) if an AI is too dumb to learn English rapidly and easily, then it's too
dumb to learn to think anyway, even if it *is* smart enough to learn some
Lojban basics due to Lojban's simplicity

I don't really agree with this counter-perspective, but I don't have a
convincing disproof.

-- Ben G



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:55 MST