RE: Designing AGI

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Wed Oct 26 2005 - 07:02:46 MDT


Ben Goertzel wrote:
> Well, "emergence" is a very general term which always conceals
> all the particularities of the situation.

Amazingly, there are a nontrivial number of people who use 'emergence'
as a blanket excuse not to bother thinking about what's actually
going on. Work by people in this category can generally be written
off out of hand (at least until they get silly amounts of brute force
to play with), but the ease of that dismissal shouldn't reduce the
overall issue to a binary distinction in understanding. The real
danger of emergence is a kind of impatience: researchers who do make
an effort to understand the critical dynamics, but who prematurely
declare their understanding 'good enough', invoke emergence to cover
the remaining gap and proceed to implementation without an adequate
predictive model. In virtually any other area of science this would
be perfectly sensible behaviour, but it isn't in AGI, due both to the
nonlinearity of the solution fitness landscape (i.e. a design has to
be very nearly right to be capable of producing interesting
experimental results at all) and to the risks of an unintended,
Unfriendly takeoff.

Our disagreement over 'emergence' is primarily about where that
critical threshold of understanding should lie (though some of the
disagreement is probably definitional, or about the actual utility of
various kinds of informal understanding). It appears that I'd place
it a bit higher than you do, and Eliezer would almost certainly place
his considerably higher than mine. Really low thresholds (i.e.
running off and implementing as soon as you have a vaguely coherent
set of bright ideas) aren't that dangerous, for the reasons mentioned
above; the real danger is the level of understanding good enough to
give a fair chance of cracking AGI, but still insufficient to solve
the structural issues of FAI (i.e. to build a controllable AGI).

This is what Eliezer was alluding to in this piece of black humour
(particularly in the 'subtleties' section):
http://www.sl4.org/wiki/GurpsFriendlyAI

> What gives map formation the flavor of "emergence" is that there
> is no MapFormation MindAgent (to lapse into Novamente lingo), i.e.
> no block of code that explicitly produces maps. Rather, map
> formation occurs as an indirect consequence of the "localized"
> activity of a bunch of processes that act on individual nodes and
> links, that then coordinate together to cause maps to "self-
> organize."

Generally I agree with the rest of your post. However, this section
does set off alarm bells, simply because structures that
'self-organise from local dynamics' (using a reasonably narrow
definition; you could stretch the term to cover nearly anything if
you were so inclined) are extremely difficult to implement well, and
in a manner compatible with reflective transparency and causal
cleanliness. I do believe it's possible, and you may have got it
right, but if so yours would be the first design I've seen that does.
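
To be concrete about what I mean by 'self-organising from local
dynamics', here's a throwaway toy sketch of the general idea (my own
illustration, nothing to do with Novamente's actual code; the
node/link representation, the Hebbian-style update rule and all the
names are assumptions invented for the example). The update rule only
ever touches individual links, yet clusters of tightly coupled nodes,
'maps' if you like, appear without any code that builds them:

from collections import defaultdict

N_NODES = 20
nodes = list(range(N_NODES))
weight = defaultdict(float)   # link strength per unordered node pair

def link(a, b):
    return frozenset((a, b))

def activate(seed, threshold=0.5):
    # Local spreading activation: each seed node consults only its
    # own links; nothing looks at the network as a whole.
    active = set(seed)
    for a in seed:
        for b in nodes:
            if b != a and weight[link(a, b)] > threshold:
                active.add(b)
    return active

def local_update(active, lr=0.1, decay=0.005):
    # Purely local Hebbian-style rule: strengthen links between
    # co-active nodes, let everything else decay. Nothing in here
    # ever mentions 'maps'.
    for a in nodes:
        for b in nodes:
            if a < b:
                if a in active and b in active:
                    weight[link(a, b)] += lr * (1.0 - weight[link(a, b)])
                else:
                    weight[link(a, b)] -= decay * weight[link(a, b)]

# Drive the network with two recurring stimulus patterns.
pattern_a, pattern_b = [0, 1, 2, 3, 4], [10, 11, 12, 13, 14]
for step in range(200):
    local_update(activate(pattern_a if step % 2 == 0 else pattern_b))

# Read the emergent structure back off: two tightly coupled clusters
# ('maps') exist even though nothing explicitly constructed them.
maps = sorted(tuple(sorted(k)) for k, w in weight.items() if w > 0.5)
print(maps)

The problem I'm pointing at is visible even in this toy: the 'maps'
live implicitly in the weights, so anything that wants to inspect or
verify one has to reconstruct it after the fact, rather than reading
an explicit, causally clean representation.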

 * Michael Wilson

        
        
                


