RE: Shapers, protocols and timescales.

From: Ben Goertzel (ben@goertzel.org)
Date: Fri Apr 09 2004 - 06:52:37 MDT


Hi,

> > Essentially, a shaper network requires a workable, learnable,
> > reason-able representation of abstract content, which allows
> > abstract bits of uncertain knowledge to interact with each other,
> > to modify each other, to spawn actions, etc.
>
> Unsurprisingly that sounds like an agent-based active memory
> implementation, which is too general and powerful a system to
> be able to say much from that one-sentence description.

Yes of course, that description covers a lot of possible things!

> I started with a classic probabilistic inference network in
> which inconsistencies were removed by heuristic repair (I
> tried various approaches); the repair heuristics are the
> principles for reasoning about the morals (this layering can
> be repeated indefinitely). I then started modelling cognitive
> pressures to allow context-sensitive biasing of the general
> happiness function (FARGish I know), and when that wasn't
> flexible enough tried adding inference heuristics to do
> limited amounts of production-system-esque network extension
> as an alternative method of improving happiness. If I was
> feeling kind I might describe this sort of messing about as
> 'open-ended experimentalism'; if not 'poorly thought out hack
> and patch session' might be more appropriate.

Well, you're certainly grappling with the right issues here.

The Probabilistic Term Logic framework, used in Novamente, attempts to
address the issues you describe (and others) in a more principled and
systematic way.
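
To give a concrete flavor: here is a toy sketch (in Python; my own
simplification with single-number truth values and a made-up function
name, not actual Novamente code) of the independence-based deduction
rule discussed in the PTL work, which infers the strength of A -> C
from A -> B and B -> C plus the prior probabilities of the terms:

    # Toy sketch of PTL-style deduction under independence assumptions.
    # s_ab = strength of (A -> B), s_bc = strength of (B -> C),
    # s_b, s_c = prior probabilities of terms B and C.
    def ptl_deduction(s_ab, s_bc, s_b, s_c):
        if s_b >= 1.0:
            return s_bc  # degenerate case: everything is a B
        # Case split on whether an instance of A falls inside B,
        # treating the remaining memberships as independent.
        return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

    # e.g. ptl_deduction(0.8, 0.9, 0.5, 0.6) -> 0.78

Consistent inputs are assumed; a real implementation also tracks a
confidence (weight of evidence) alongside each strength value.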

> There are several interesting ways to combine Bayesian
> inference and directed evolution, but most of them have
> utility tree (goal system) risks in a utilitarian
> LOGI/CFAI-derived AGI. I hate to think what they'd do in a
> system that doesn't have a conceptually unified utility
> function; I've heard rumours on the grapevine that you've
> been revising Novamente to have a more conventional goal
> system and I sincerely hope they're correct.

Hmmm... Is there a "conventional goal system"??? A convention according
to whom? There are no conventions in AGI yet!!

The Novamente architecture in itself is pretty flexible; it could work
with a lot of different goal system architectures.

However, I revised the NM design last year so that the "attention
allocation" subsystem (not yet implemented except in a simplistic form)
will operate based on the probabilistic inference module rather than
(as before) using neural-net-like heuristics. This makes the system's
behavior a little more predictable in some ways, hence increasing the
odds of goal-system stability, and increasing the tractability of
STUDYING the issue of goal-system stability in the system.
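
As a purely hypothetical illustration of the difference (none of these
names come from the actual NM codebase), inference-driven attention
allocation might divide a processing budget in proportion to inferred
importance, rather than letting importance emerge from local
neural-net-style activation-spreading dynamics:

    # Hypothetical sketch: budget attention by inferred importance.
    # infer_importance(atom) stands in for a query to the probabilistic
    # inference module; the real subsystem need not look like this.
    def allocate_attention(atoms, infer_importance, budget):
        if not atoms:
            return {}
        weights = {atom: infer_importance(atom) for atom in atoms}
        total = sum(weights.values())
        if total <= 0.0:
            # No evidence either way: fall back to a uniform split.
            return {atom: budget / len(atoms) for atom in atoms}
        return {atom: budget * w / total for atom, w in weights.items()}

Because the weights come from explicit inferences rather than from the
history of a dynamical process, it is easier to state and check
properties of the resulting allocation, which is the predictability
point above.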
 
> > But my point is that Eli's architecture gives a grand overall
> > picture, but doesn't actually give a workable and *learnable and
> > modifiable* way to represent complex knowledge.
>
> The constructive details are what the Flare project was
> working on.

On the contrary, Flare is a programming language, which might make
implementing Eli's AI designs easier, but seems to me to have little to
do with the details of those designs. You could program an expert
system or a neural net in Flare, for example...

> > In the end she agreed that 8-10 years was plausible and 15 years
> > was fairly likely
>
> Do you really think it's possible to make that sort of
> prediction without a deep knowledge of performance of
> candidate architectures on all the relevant cognitive
> competence hurdles?

Well, what's possible is to estimate the time required for success
CONDITIONAL on the assumption that some reasonably close relative of the
current NM design will bring us success. If you remove this
conditionality, then estimation becomes total Kurzweilian extrapolative
guesswork...
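
To spell out the decomposition (my paraphrase, with made-up variable
names and illustrative numbers), the unconditional estimate would have
to combine both branches, and only the first conditional factor is
something engineering experience can inform:

    # Law of total probability applied to the timeline question.
    p_nm_family_suffices = 0.5        # pure guess, for illustration
    p_agi_by_T_if_it_does = 0.8       # estimable from engineering work
    p_agi_by_T_if_it_does_not = 0.1   # extrapolatory-guesswork territory
    p_agi_by_T = (p_agi_by_T_if_it_does * p_nm_family_suffices
                  + p_agi_by_T_if_it_does_not * (1 - p_nm_family_suffices))
    # -> 0.45 here; the point is that the second term is unanchored.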

Given how much of the NM design is unimplemented, and given the
trickiness we've found in implementing and tweaking earlier parts, we
can set a LOWER BOUND of 2-3 years.

On the other hand, the NM design is not THAT big, and every possible way
of tweaking it will be explorable in a 10-15 year period. So if it
really can't be made to yield true AGI in that time period, then almost
surely the design is badly inadequate.

In practice, if the NM design doesn't start yielding true-AGI-ish
results after 2-3 years of serious pure-AGI research effort [as opposed
to the current part-time focus on AGI], I will strongly consider the
possibility that the framework is badly inadequate, and will start
thinking about alternatives (like Hebbian Logic Networks, a speculative
and still poorly fleshed out neural-net-based design I conceived). But
of course I'm betting NM will work ;-)

-- Ben Goertzel
  


