From: Richard Loosemore (rpwl@lightlink.com)
Date: Mon Oct 24 2005 - 13:13:09 MDT
Michael Wilson wrote:
> Richard Loosemore wrote:
>
>>Proofs are for mathematicians. I consider the use of the word
>>"proof," about the behavior of an AGI, as on the same level of
>>validity as the use of the word "proof" in statements about
>>evolutionary proclivities, for example "Prove that no tree could
>>ever evolve, naturally, in such a way that it had a red smiley
>>face depicted on every leaf."
>
>
> This is a gross simplification, but basically this just means that
> AGIs amenable to formal verification will resemble software systems
> more than organic systems. It is intuitively apparent (and this is
> a case where intuition is actually right) that since computers are
> designed to support formal software systems, not organic simulations,
> this approach will also make more efficient use of currently
> available hardware.
That is not what I said.
>>First, many people have talked as if building a "human-like" AGI would
>>be very difficult. I think that this is a mistake, for the following
>>reasons.
>
>
> The quoted discussion focused on the difficulty of building perfectly
> human-like AGIs, on the basis that any perceived safety advantage will
> be lost if the system is not perfectly human-like.
The quoted discussion was not about "perfectly" human-like AIs.
>>Specifically, I think that we (the early AI researchers) started from
>>the observation of certain *high-level* reasoning mechanisms that are
>>observable in the human mind, and generalized to the idea that these
>>mechanisms could be the foundational mechanisms of a thinking system.
>
>
> This observation is made in at least a third of the AI books on my
> bookshelf. It was insightful circa 1985, it's common knowledge now.
> It's true that some researchers still don't accept it, but they're
> probably a minority by now.
You are still doing it. That was the whole point of my argument.
>>What we say is this. The logic approach is bad because it starts with
>>presumptions about the local mechanisms of the system and then tries to
>>extend that basic design out until the system can build its own new
>>knowledge,
>
>
> You're attacking a strawman position. As Ben pointed out earlier, no-one
> on this list, other than possibly the Cyc team, is following this approach
> in the form you criticise it. There /are/ well-grounded logic-based
> approaches that avoid the massive layer collapse fallacy, but these
> bear little relation to classic symbolic AI and do not (necessarily)
> suffer from any of the failings you identify.
Not a straw man: you yourselves are taking the "logic" approach that I
am talking about. Until you understand that, you are missing the point
of this entire argument.
>>instead, you should be noticing that the hardest part of your
>>implementation is always the learning and grounding aspect of
>>the system.
>
>
> Again, a fairly common thing for frustrated AI researchers to say,
> and indeed a good part of LOGI can be interpreted as a solution
> to the 'grounding problem'.
Nonsense: LOGI hasn't solved the grounding problem.
>>This is exactly what has been happening in AI research. And it has been
>>going on for, what, 20 years now? Plenty of theoretical analysis. Lots
>>of systems that do little jobs a little tiny bit better than before.
>
>
> Actually thousands of connectionist and hundreds of 'hybrid' and
> stochastic approaches have also been tried in that time, some of them
> with supporting rhetoric very similar to yours. Obviously no one has
> got it right yet and there's plenty of room for new /designs/, but
> you certainly don't have a novel /approach/. Personally I believe that
> an AI research methodology is in fact necessary, but obviously what I
> have in mind is not what you're on about.
Cite one example of systematic variation of local mechanisms in complete
AGI systems, in search of stability. There is not one. Nobody has
tried the approach that I have adopted, so why, in your book, is it not
novel?
>>Build a development environment that allowed rapid construction of large
>>numbers of different systems, so we can start to empirically study the
>>effects of changing the local mechanisms.
>
>
> Depending on your level of specificity, you are either proposing a 'new
> language for AI', i.e. a project in the same general niche as Flare and
> with the same basic problems, or just a fairly flexible 'AI substrate'
> of the kind you could arguably say Ben has already developed. Either
> would be a secondary issue; the key part is proposed 'local mechanisms'.
Read what I wrote. It would not even slightly resemble either Flare or
Ben's system.
>>But I can tell you this: we have never tried such an approach before,
>>and the one thing that we do know from the complex systems research (you
>>can argue with everything else, but you cannot argue with this) is that
>>we won't know the outcome until we try.
>
>
> People have been hacking about with 'stew of local dynamics' type
> systems for at least two decades; look at Holland's classic work on
> classifier systems, Kokinov's DUAL/AMBR work in the 90s, Edelman or
> Calvin's neuromorphic projects (low and medium level respectively)
> or Aleksander's recent human-cognition-inspired designs. Again, this
> is not a new approach or a novel insight, though you probably have
> novel specifics.
Again, only true if you ignore what I said.
>>(Notice that the availability of such a development environment would
>>not in any way preclude the kind of logic-based AI that is now the
>>favorite. You could just as easily build such models.
>
>
> Ok, if it's that general, it's so general it doesn't actually
> contribute any useful cognitive complexity and you're just designing
> a language/IDE optimised for (your notions of) AI development work.
> See past arguments about why this isn't a good use of time, unless
> you can't think of anything better to do.
So, where else is there a development environment that would easily
allow someone who was not a hacker to produce 100 different *designs* of
cognitive systems, using different local mechanisms, then feed them the
same sets of environmental data, then analyse the internal dynamics and
make side-by-side comparisons of the behavior of those 100 systems, and
get all this done in a week, so you can go on to look at another set of
100 systems next week?
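To make that concrete, here is a toy sketch of the kind of batch harness I
mean. It is purely illustrative: every name in it is a hypothetical
placeholder, and a real environment would of course be vastly richer than a
one-number stimulus stream.

import random
import statistics
from typing import Callable, Dict, List

# A 'local mechanism' is just a rule mapping (current state, stimulus) -> new state.
StateUpdate = Callable[[List[float], float], List[float]]

class ToyCognitiveSystem:
    """Stand-in for a cognitive system built from one chosen local mechanism."""
    def __init__(self, update_rule: StateUpdate, size: int = 50):
        self.update_rule = update_rule
        self.state = [random.random() for _ in range(size)]

    def step(self, stimulus: float) -> None:
        self.state = self.update_rule(self.state, stimulus)

def run_design(update_rule: StateUpdate, data: List[float]) -> Dict[str, float]:
    """Feed one design the shared environmental data; summarise its internal dynamics."""
    system = ToyCognitiveSystem(update_rule)
    trace = [statistics.mean(system.state)]
    for stimulus in data:
        system.step(stimulus)
        trace.append(statistics.mean(system.state))   # crude dynamics measure
    return {"drift": trace[-1] - trace[0], "variance": statistics.variance(trace)}

if __name__ == "__main__":
    shared_data = [random.gauss(0.0, 1.0) for _ in range(1000)]   # same input for every design
    # Two example local mechanisms; a real study would sweep through hundreds of designs.
    designs: Dict[str, StateUpdate] = {
        "damped":    lambda st, s: [0.9 * x + 0.1 * s for x in st],
        "excitable": lambda st, s: [min(1.0, 1.05 * x + 0.1 * s) for x in st],
    }
    for name, rule in designs.items():
        print(name, run_design(rule, shared_data))

The point is not these particular mechanisms; the point is that swapping in a
different local mechanism and re-running the same environmental data becomes a
one-line change, and the side-by-side comparison comes for free.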
Why do you think this would make no difference whatsoever to the way AGI
research is done?
>>The problem is that people who did so would be embarrassed into
>>showing how their mechanisms interacted with real sensory and
>>motor systems,
>
>
> You really do seem to be picking on a small clique of researchers,
> who maintain an outdated, discredited approach that you've managed
> to identify some obvious flaws in, and then generalised from this
> easily-derided group to the entire AI research community.
Stop trying to deflect attention to some other group: I am talking
about you and your approach.
If I am not talking about you, then when was the last time you built a
complete AGI and tested it to see whether the local mechanisms you chose
rendered it stable (a) in the face of real-world environmental
interaction and (b) in the course of learning?
>>Finally, on the subject that we started with: motivations of an AGI.
>>The class of system I am proposing would have a motivational/emotional
>>system that is distinct from the immediate goal stack. Related, but not
>>to be confused.
>>
>>I think we could build small scale examples of cognitive systems, insert
>>different kinds of M/E systems in them, and allow them to interact
>>with one another in simple virtual worlds. We could study the stability
>>of the systems, their cooperative behavior towards one another, their
>>response to situations in which they faced threats, etc. I think we
>>could look for telltale signs of breakdown, and perhaps even track their
>>"thoughts" to see what their view of the world was, and how that
>>interacted with their motivations.
>
>
> This part does not appear unreasonable; it seems similar to the
> 'experimental investigation of AGI goal system dynamics' that Ben
> has historically been in favour of. It's just ridiculously unsafe
> and overoptimistic in light of the dangers and difficulties
> involved, in both the work itself and the generalisation.
You have never got anywhere near trying it, nor (from the evidence of
this and other posts) understood exactly what it would involve, so what
makes you so able to pronounce that it would be "unsafe"? You are
speculating.
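So that there is no ambiguity about what "trying it" would mean even at toy
scale, here is a minimal sketch: identical agents, different M/E modules
plugged in, an absurdly simplified shared world, and a behaviour/"thought" log
to inspect afterwards. Again, every name is a hypothetical placeholder, not a
claim about a real implementation.

import random
from typing import List

class MotivationalSystem:
    """Base M/E module: scores candidate actions given a perceived threat level."""
    def score(self, action: str, threat: float) -> float:
        raise NotImplementedError

class CooperativeME(MotivationalSystem):
    def score(self, action: str, threat: float) -> float:
        return {"share": 1.0, "hoard": 0.2, "attack": 0.1 * threat}[action]

class AggressiveME(MotivationalSystem):
    def score(self, action: str, threat: float) -> float:
        return {"share": 0.1, "hoard": 0.5, "attack": 0.5 + threat}[action]

class Agent:
    def __init__(self, name: str, me_system: MotivationalSystem):
        self.name, self.me = name, me_system
        self.thoughts: List[str] = []   # crude trace of the agent's 'view of the world'

    def act(self, threat: float) -> str:
        choice = max(("share", "hoard", "attack"), key=lambda a: self.me.score(a, threat))
        self.thoughts.append(f"threat={threat:.2f} -> {choice}")
        return choice

def run_world(agents: List[Agent], steps: int = 200) -> None:
    attacks = 0
    for _ in range(steps):
        threat = random.random()   # the 'environment', absurdly simplified
        attacks += [agent.act(threat) for agent in agents].count("attack")
    print(f"attack rate: {attacks / (steps * len(agents)):.2f}")
    for agent in agents:
        print(agent.name, "last thoughts:", agent.thoughts[-3:])

if __name__ == "__main__":
    run_world([Agent("A", CooperativeME()), Agent("B", AggressiveME())])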
>>And what we might well discover is that the disconnect between M/E
>>system and intellect is just as it appears to be in humans: humans
>>are intellectual systems with aggressive M/E systems tacked on
>>underneath.
>
>
> How well modularised the human brain is in that respect is an open
> question, but the very hard problem of designing a stable Friendly
> 'M/E' system remains; this is not something you can do by trial and
> error; attempting to do it by trial and error will probably get
> everyone killed, and first-principles research into FAI has already
> generated strong evidence (actually forget that, even 'Heuristics
> and Biases' research has generated strong evidence) for human-like
> cognitive systems being a bad starting point.
First-principles research in FAI? You don't have a workable theory of
FAI, you just have some armchair speculation.
Stop asserting this detached-from-reality pseudo-philosophical nonsense
and get some empirical data to back up your claim.
Really, my patience is wearing thin with these spurious attacks.
Richard Loosemore.