Re: Loosemore's Proposal

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Oct 24 2005 - 11:39:00 MDT


Richard Loosemore wrote:
> Proofs are for mathematicians. I consider the use of the word
> "proof," about the behavior of an AGI, as on the same level of
> validity as the use of the word "proof" in statements about
> evolutionary proclivities, for example "Prove that no tree could
> ever evolve, naturally, in such a way that it had a red smiley
> face depicted on every leaf."

This is a gross simplification, but it basically means that AGIs
amenable to formal verification will resemble software systems more
than organic systems. It is intuitively apparent (and this is a case
where intuition is actually right) that since computers are designed
to support formal software systems, not organic simulations, this
approach will also make more efficient use of currently available
hardware.

> First, many people have talked as if building a "human-like" AGI would
> be very difficult. I think that this is a mistake, for the following
> reasons.

The quoted discussion focused on the difficulty of building perfectly
human-like AGIs, on the basis that any perceived safety advantage will
be lost if the system is not perfectly human-like.
 
> Specifically, I think that we (the early AI researchers) started from
> the observation of certain *high-level* reasoning mechanisms that are
> observable in the human mind, and generalized to the idea that these
> mechanisms could be the foundational mechanisms of a thinking system.

This observation is made in at least a third of the AI books on my
bookshelf. It was insightful circa 1985; it's common knowledge now.
It's true that some researchers still don't accept it, but they're
probably a minority by now.

> What we say is this. The logic approach is bad because it starts with
> presumptions about the local mechanisms of the system and then tries to
> extend that basic design out until the system can build its own new
> knowledge,

You're attacking a strawman position. As Ben pointed out earlier, no one
on this list, other than possibly the Cyc team, is following this approach
in the form you criticise it. There /are/ well-grounded logic-based
approaches that avoid the massive layer collapse fallacy, but these
bear little relation to classic symbolic AI and do not (necessarily)
suffer from any of the failings you identify.

> instead, you should be noticing that the hardest part of your
> implementation is always the learning and grounding aspect of
> the system.

Again, this is a fairly common thing for frustrated AI researchers to
say; indeed, a good part of LOGI can be interpreted as a solution to
the 'grounding problem'.

> This is exactly what has been happening in AI research. And it has been
> going on for, what, 20 years now? Plenty of theoretical analysis. Lots
> of systems that do little jobs a little tiny bit better than before.

Actually, thousands of connectionist and hundreds of 'hybrid' and
stochastic approaches have also been tried in that time, some of them
with supporting rhetoric very similar to yours. Obviously no one has
got it right yet and there's plenty of room for new /designs/, but
you certainly don't have a novel /approach/. Personally I believe that
a new AI research methodology is in fact necessary, but obviously what
I have in mind is not what you're on about.

> Build a development environment that allowed rapid construction of large
> numbers of different systems, so we can start to empirically study the
> effects of changing the local mechanisms.

Depending on your level of specificity, you are either proposing a 'new
language for AI', i.e. a project in the same general niche as Flare and
with the same basic problems, or just a fairly flexible 'AI substrate'
of the kind you could arguably say Ben has already developed. Either
would be a secondary issue; the key part is the proposed 'local
mechanisms'.
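
To make the 'substrate vs. local mechanisms' distinction concrete,
here is a minimal sketch, in Python and with entirely hypothetical
names (Substrate, LocalMechanism, HebbianToyRule; this is not Ben's
system or anyone else's), of what such a generic substrate reduces
to. The framework itself is trivially small; essentially all of the
cognitive content would have to live in the plugged-in local
mechanisms, which is why those are the part worth arguing about.

    # Illustrative sketch only; all names are hypothetical and stand in
    # for whatever local dynamics an experimenter wants to swap in/out.
    from abc import ABC, abstractmethod

    class LocalMechanism(ABC):
        """One interchangeable local update rule over a shared node store."""
        @abstractmethod
        def step(self, nodes: dict) -> None: ...

    class HebbianToyRule(LocalMechanism):
        """Toy rule: strengthen links between co-active nodes."""
        def step(self, nodes: dict) -> None:
            for node in nodes.values():
                if node["activation"] <= 0.5:
                    continue
                for other_id, weight in node["links"].items():
                    if nodes[other_id]["activation"] > 0.5:
                        node["links"][other_id] = min(1.0, weight + 0.01)

    class Substrate:
        """Runs whatever mix of local mechanisms is plugged in."""
        def __init__(self, mechanisms: list[LocalMechanism]):
            self.mechanisms = mechanisms
            self.nodes: dict = {}

        def run(self, steps: int) -> None:
            for _ in range(steps):
                for mechanism in self.mechanisms:
                    mechanism.step(self.nodes)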

> But I can tell you this: we have never tried such an approach before,
> and the one thing that we do know from the complex systems research (you
> can argue with everything else, but you cannot argue with this) is that
> we won't know the outcome until we try.

People have been hacking about with 'stew of local dynamics' type
systems for at least two decades; look at Holland's classic work on
classifier systems, Kokinov's DUAL/AMBR work in the 90s, Edelman or
Calvin's neuromorphic projects (low and medium level respectively)
or Aleksander's recent human-cognition-inspired designs. Again, this
is not a new approach or a novel insight, though you probably have
novel specifics.

> (Notice that the availability of such a development environment would
> not in any way preclude the kind of logic-based AI that is now the
> favorite. You could just as easily build such models.

OK, if it's that general then it doesn't actually contribute any
useful cognitive complexity, and you're just designing a language/IDE
optimised for (your notions of) AI development work. See past
arguments about why this isn't a good use of time, unless you can't
think of anything better to do.

> The problem is that people who did so would be embarrassed into
> showing how their mechanisms interacted with real sensory and
> motor systems,

You really do seem to have picked on a small clique of researchers
who maintain an outdated, discredited approach, identified some
obvious flaws in it, and then generalised from this easily-derided
group to the entire AI research community.

> Finally, on the subject that we started with: motivations of an AGI.
> The class of system I am proposing would have a motivational/emotional
> system that is distinct from the immediate goal stack. Related, but not
> be confused.
>
> I think we could build small scale examples of cognitive systems, insert
> different kinds of M/E systems in them, and allow them to interact
> with one another in simple virtual worlds. We could study the stability
> of the systems, their cooperative behavior towards one another, their
> response to situations in which they faced threats, etc. I think we
> could look for telltale signs of breakdown, and perhaps even track their
> "thoughts" to see what their view of the world was, and how that
> interacted with their motivations.

This part does not appear unreasonable; it seems similar to the
'experimental investigation of AGI goal system dynamics' that Ben
has historically been in favour of. It's just ridiculously unsafe
and overoptimistic in light of the dangers and difficulties
involved, in both the work itself and the generalisation.
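
For what it's worth, the class of experiment being proposed is easy
enough to sketch. The following toy Python harness (every name is
hypothetical, and the 'virtual world' is reduced to a random stream of
observations) shows agents with interchangeable M/E modules whose
behaviour can be counted and compared; it also illustrates how far
results at this scale are from telling you anything about how such
systems generalise.

    # Toy sketch only; all names are hypothetical. An 'M/E system' is
    # reduced to a function from observations to an action, and the
    # 'virtual world' to a random stream of observations.
    import random
    from typing import Callable

    MESystem = Callable[[dict], str]

    def cooperative_me(obs: dict) -> str:
        return "share" if obs["neighbour_in_need"] else "gather"

    def aggressive_me(obs: dict) -> str:
        return "seize" if obs["neighbour_has_resources"] else "gather"

    def run_trial(me_systems: list[MESystem], steps: int = 100,
                  seed: int = 0) -> dict:
        """Count the actions each M/E system produces over a trial."""
        rng = random.Random(seed)
        counts = {"share": 0, "seize": 0, "gather": 0}
        for _ in range(steps):
            for me_system in me_systems:
                obs = {
                    "neighbour_in_need": rng.random() < 0.3,
                    "neighbour_has_resources": rng.random() < 0.5,
                }
                counts[me_system(obs)] += 1
        return counts

    print(run_trial([cooperative_me, aggressive_me]))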
 
> And what we might well discover is that the disconnect between M/E
> system and intellect is just as it appears to be in humans: humans
> are intellectual systems with aggressive M/E systems tacked on
> underneath.

How well modularised the human brain is in that respect is an open
question, but the very hard problem of designing a stable Friendly
'M/E' system remains. This is not something you can do by trial and
error; attempting to do it by trial and error will probably get
everyone killed, and first-principles research into FAI has already
generated strong evidence (actually forget that, even 'Heuristics
and Biases' research has generated strong evidence) that human-like
cognitive systems are a bad starting point.

 * Michael Wilson

        
        
                


