Re: RE: What are useful for a phd/JOIN Dan Burfoot

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Dec 07 2006 - 05:49:38 MST


> My question:
> Following Ben's comment about his colleague who did his PhD while working
> for Novamente, can anyone formulate a specific question related to FAI that
> could plausibly be answered by a PhD thesis? In other words, what do people
> think are the well-defined problems (I am specifically interested in
> questions relating to the above topics, but feel that precise questions are
> generally Good Things)
>
> -Dan Burfoot

Well, if you could make Shane Legg's recent attempted proof of the
impossibility of Friendly AI actually work, that would be an
interesting part of a PhD thesis ;-)

In the rest of this email I will suggest some potential PhD thesis
topics that are relevant to AGI, though not addressing Friendliness in
particular.

If you are interested in neural nets (as I infer from the list of
interests in your email), one fascinating area to study is the manner
in which (approximate) probabilistic inference may emerge from neural
networks, based on "Hebbian learning" of various sorts (e.g.
spike-timing-dependent long-term potentiation). There is an emerging
literature on neural probabilistic inference, but the work done so far
is quite primitive. I have some concrete ideas on how to extend it,
but have not followed up on them due to other priorities.

Next, there are many ways to follow up on Moshe Looks' recent PhD
thesis work (see www.metacog.org). In collaboration with me, he has
created a probabilistic evolutionary program learning system called
MOSES, useful as a narrow-AI tool (e.g. we've used it for
bioinformatic data analysis) but mainly intended for use within an
integrated AGI system. MOSES is very nice as-is, but so far we have
only used it in a fairly crude way, as a "machine learning" style
system that approaches each problem as a fresh challenge. To use
MOSES in an AGI context, it needs to start its analysis of a problem
by transferring knowledge gained from analysing other, related
problems. MOSES plus transfer learning could be a very nice PhD
topic, with both narrow-AI and AGI applicability.

In the cognitive robotics domain, a key research area would be "symbol
grounding." Deb Roy has done some nice work with grounding of simple
terms like nouns, verbs and adjectives. But this stuff is
conceptually very simple, and is a challenge mainly in terms of
systems integration. What is subtler and more fascinating from an AGI
perspective is grounding of **prepositions** based on embodied
experience. This requires much greater contextual understanding. It
may be done using actual robots, as Deb Roy has done, or simulated
robots (as we are experimenting with in the AGISim simulation-world
environment), or both: using a simulation world to perfect one's
ideas and code, then transferring them to a physical robot, perhaps
via a standard API like Pyro that can control both the real robot and
the simulated one.
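
To make the grounding problem concrete, here is a deliberately tiny
sketch of what learning a spatial preposition from embodied
experience might look like: a nearest-centroid classifier that learns
"above" versus "beside" from labelled object-pair offsets. This is an
illustration of the task framing only, with no connection to Deb
Roy's actual system or to AGISim, and real prepositions of course
demand far richer context than a single offset vector.

```python
import random

def features(a, b):
    """Relative offset of object a with respect to object b."""
    return (a[0] - b[0], a[1] - b[1])

def nearest_centroid_fit(examples):
    """Average the feature vectors seen for each preposition label."""
    cents = {}
    for label, (dx, dy) in examples:
        sx, sy, n = cents.get(label, (0.0, 0.0, 0))
        cents[label] = (sx + dx, sy + dy, n + 1)
    return {l: (sx / n, sy / n) for l, (sx, sy, n) in cents.items()}

def classify(cents, rel):
    """Pick the label whose centroid is closest to the offset."""
    return min(cents, key=lambda l: (cents[l][0] - rel[0]) ** 2
                                    + (cents[l][1] - rel[1]) ** 2)

random.seed(1)
train = []
for _ in range(200):
    b = (random.uniform(-1, 1), random.uniform(-1, 1))
    train.append(("above", features((b[0] + random.gauss(0, 0.2),
                                     b[1] + 1 + random.gauss(0, 0.2)), b)))
    train.append(("beside", features((b[0] + 1 + random.gauss(0, 0.2),
                                      b[1] + random.gauss(0, 0.2)), b)))

cents = nearest_centroid_fit(train)
print(classify(cents, (0.05, 0.9)))   # mostly-vertical offset
print(classify(cents, (1.1, -0.1)))   # mostly-horizontal offset
```

What makes the real problem hard is that prepositions like "in" or
"through" depend on object shape, function, and the history of an
interaction, not just a static offset, which is why embodied,
contextual experience (robotic or simulated) seems necessary.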

If any of these problems interests you, we could discuss more details off-list.

-- Ben



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT