From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Fri Apr 09 2004 - 06:33:26 MDT
Ben Goertzel wrote:
> However, I do *not* believe Eliezer is hiding a correct design for
> an AGI due to security concerns.
From http://www.sl4.org/bin/wiki.pl?SoYouWantToBeASeedAIProgrammer:
'You should have read through [Levels of Organization in General
Intelligence] and understood it fully. The AI theory we will actually
be using is deeper and less humanlike than the theory found in LOGI,
but LOGI will still help you prepare for encountering it.'
That seems to be the only information on the current SIAI project
architecture plans available to the public (plus whatever you can
extract from reading between the lines of the relevant SL4 posts).
> If he has no time for concrete AI design it's because he has
> prioritized other types of Singularity-oriented work.
True. LOGI-level stuff is actually a key part of Friendliness anyway,
as you can't be sure about goal system dynamics without a functional
account of the rest of the AI, but if you're equating 'concrete' with
'constructive' I agree.
>> and has relatively little actual coding or architecture experience.
>
> This one is a good point.
Listen to the systems architect.
The systems architect is your friend. :)
In reverse order:
> So far as I can tell there is nothing in Eli's AI framework that
> suggests a knowledge representation capable of being coupled with
> sufficiently powerful learning and reasoning algorithms to be used in
> this way.
LOGI KR elements all represent regularities of some sort. Ignoring for
now the intricate meta-regularities and meta-patterns that assemble
them into an actual knowledge source (and allow generalised heuristics
instead of just completely specific Bayesian inference rules), the
learning and reasoning mechanisms are various sorts of directed
correlation search and perception of implied structures (from
deliberative inference down to the modality level; e.g. noticing that
two shapes are reflections of each other).
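To give a concrete (if trivial) picture of the modality-level end of
that, here's a toy Python sketch of 'noticing' the reflection
regularity between two shapes; purely my own illustration, nothing to
do with the actual LOGI substrate.

# Toy illustration only: detect that one 2D shape is a mirror image of
# another, i.e. perceive a simple implied structure between two
# percepts. Shapes here are just sets of (x, y) points; real modalities
# would of course be far richer than this.

def centred(points):
    """Translate a point set so its centroid sits at the origin."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return {(round(x - cx, 6), round(y - cy, 6)) for x, y in points}

def is_reflection(shape_a, shape_b):
    """True if shape_b is shape_a mirrored about a vertical axis."""
    mirrored = {(-x, y) for x, y in centred(shape_a)}
    return mirrored == centred(shape_b)

# An 'L' shape and its mirror image:
left_l  = {(0, 0), (0, 1), (0, 2), (1, 0)}
right_l = {(5, 0), (5, 1), (5, 2), (4, 0)}
print(is_reflection(left_l, right_l))   # True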
> Essentially, a shaper network requires a workable, learnable,
> reason-able representation of abstract content, which allows abstract
> bits of uncertain knowledge to interact with each other, to modify
> each other, to spawn actions, etc.
Unsurprisingly, that sounds like an agent-based active memory
implementation, which is too general and powerful a class of system to
say much about from a one-sentence description. I started with a
classic probabilistic inference network in which inconsistencies were
removed by heuristic repair (I tried various approaches); the repair
heuristics are the principles for reasoning about the morals (this
layering can be repeated indefinitely). I then started modelling
cognitive pressures to allow context-sensitive biasing of the general
happiness function (FARGish I know), and when that wasn't flexible
enough tried adding inference heuristics to do limited amounts of
production-system-esque network extension as an alternative method of
improving happiness. If I were feeling kind I might describe this sort
of messing about as 'open-ended experimentalism'; if not, 'poorly
thought-out hack-and-patch session' might be more appropriate.
Doing that sort of thing on a live AGI with full-blown DE running would
be a seriously bad idea. I hope your experimental protocols will be
rather better considered. :)
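For the curious, a stripped-down caricature of that 'inference network
plus heuristic repair' stage would look something like the following
(Python, purely illustrative; every name in it is made up, and the
real thing had agents, context and a great deal more mess).

# Purely illustrative: a tiny 'probabilistic network + heuristic
# repair' loop of the kind described above. Nodes hold probabilities,
# links are crude implications, and a repair heuristic nudges the
# network towards consistency.

beliefs = {'rain': 0.9, 'wet_grass': 0.3, 'slippery': 0.2}

# (antecedent, consequent, strength): P(consequent) should be at least
# strength * P(antecedent) for the network to count as consistent.
implications = [('rain', 'wet_grass', 0.95),
                ('wet_grass', 'slippery', 0.8)]

def violation(link):
    a, b, s = link
    return max(0.0, s * beliefs[a] - beliefs[b])

def repair_step():
    """Heuristic repair: find the worst violated implication and raise
    its consequent part-way towards consistency."""
    worst = max(implications, key=violation)
    if violation(worst) < 1e-6:
        return False
    a, b, s = worst
    beliefs[b] += 0.5 * (s * beliefs[a] - beliefs[b])
    return True

while repair_step():
    pass

print(beliefs)   # wet_grass and slippery pulled up towards consistency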
> Novamente takes a different approach, using a "probabilistic
> combinatory term logic" approach to knowledge representation, and
> then using a special kind of probabilistic inference (with some
> relation to Hebbian learning) synthesized with evolutionary learning
> for learning/reasoning.
There are several interesting ways to combine Bayesian inference and
directed evolution, but most of them have utility tree (goal system)
risks in a utilitarian LOGI/CFAI-derived AGI. I hate to think what
they'd do in a system that doesn't have a conceptually unified utility
function; I've heard rumours on the grapevine that you've been revising
Novamente to have a more conventional goal system and I sincerely
hope they're correct.
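To show the flavour of one of the tamer combinations (again a toy of
my own, not anything Novamente actually does): use a Bayesian
posterior score as the fitness function driving an evolutionary search
over candidate hypotheses.

# Toy illustration of one way to couple Bayesian inference with
# directed evolution: candidate hypotheses are evolved, but their
# fitness is a Bayesian posterior score rather than an ad hoc
# objective. (My own hypothetical sketch.)
import math, random

data = [1, 1, 0, 1, 1, 1, 0, 1]        # observed coin flips
heads, n = sum(data), len(data)

def log_posterior(p):
    """Log of Beta(2,2) prior times binomial likelihood for bias p."""
    if not 0.0 < p < 1.0:
        return float('-inf')
    log_prior = math.log(p) + math.log(1.0 - p)   # Beta(2,2), up to a constant
    log_like = heads * math.log(p) + (n - heads) * math.log(1.0 - p)
    return log_prior + log_like

random.seed(0)
population = [random.random() for _ in range(20)]

for generation in range(50):
    # Selection: keep the better-scoring half of the population.
    population.sort(key=log_posterior, reverse=True)
    survivors = population[:10]
    # Variation: mutate survivors to refill the population.
    children = [min(0.999, max(0.001, p + random.gauss(0, 0.05)))
                for p in survivors]
    population = survivors + children

best = max(population, key=log_posterior)
print(round(best, 3))   # should land near (heads+1)/(n+2) = 0.7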
> But my point is that Eli's architecture gives a grand overall
> picture, but doesn't actually give a workable and *learnable and
> modifiable* way to represent complex knowledge.
The constructive details are what the Flare project was working on.
Considerable additional progress has been made on LOGIesque KR
substrates since then (by various people).
> Of course it's easy to represent complex knowledge -- predicate logic
> does that, but it does so in a very brittle, non-learnable way.
There are a lot of things I'd like to say on this, but I really
shouldn't right now. Thus some of my derived statements will be
irritatingly unsupported; sorry.
> That could be; but a lot of genius researchers are working on related
> but apparently EASIER questions in complex systems dynamics, without
> making very rapid progress...
Self-modifying goal systems are a considerably specialised and
constrained class of complex dynamic system; those based on 'pure'
Bayesian utility theory are particularly so.
> Anyway, frankly, you do not know nearly enough about the architecture
> to know if it needs revision or not.
It's true: despite recruiting the best industrial spies money can
buy, my information remains incomplete and years out of date ;>
I'll just have to wait for the book :)
> I'll email you off-list some stuff about network security that I
> wrote about a year ago, when I was thinking about getting into that
> area.
Thanks.
> Hm.... I had a long argument about this with my girlfriend (who is
> also a Novamente programmer/scientist), a couple weeks ago.
>
> She at first argued for a 30-40 year timeframe, whereas I argued for
> 5-10
Timescales are the most unreliable part of an unreliable branch of the
generally unreliable business of predicting technological progress.
That said, SIAI project completion within 5 years of the starting gun
seems entirely reasonable to me. When that gun will be fired, I am not
in a position to predict.
> In the end she agreed that 8-10 years was plausible and 15 years was
> fairly likely
Do you really think it's possible to make that sort of prediction
without a deep knowledge of the performance of candidate architectures
on all the relevant cognitive competence hurdles?
> I still believe that 5 years from now or even 3 years from now is not
> absurd to think about.
Tomorrow is not unreasonable to think about if you're willing to
consider the various unpleasant (and probably fatal) shortcuts
available.
* Michael Wilson
'For every complex problem, there is a solution that is simple,
neat and wrong.' - H.L. Mencken