Re: Loosemore's Proposal

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Mon Oct 24 2005 - 15:38:53 MDT


Richard Loosemore wrote:
>> This is a gross simplification, but basically this just means that
>> AGIs amenable to formal verification will resemble software systems
>> more than organic systems. It is intuitively apparent (and this is
>> a case where intuition is actually right) that since computers are
>> designed to support formal software systems, not organic simulations,
>> this approach will also make more efficient use of currently
>> available hardware.
>
> That is not what I said.

No, but it's what your assertion implies.

> Not a straw man: you yourselves are taking the "logic" approach that I
> am talking about. Until you understand that, you are missing the point
> of this entire argument.

The scope of 'yourselves' is unclear, but I certainly don't take
the position you're attempting to dismiss.

> Stop trying to deflect attention to some other group: I am talking
> about you and your approach.

You may be talking about me, but you are not describing my approach.

> Nonsense: LOGI hasn't solved the grounding problem.

True, but it does propose a solution in the abstract, albeit without
much constructive detail. My point wasn't that the problem has been
conclusively solved (it hasn't); my point was that most researchers
are well aware of the problem and there have been several credible
attempts to solve it (most of which await experimental verification).
 
> Cite one example of systematic variation of local mechanisms in complete
> AGI systems, in search of stability. There is not one. Nobody has
> tried the approach that I have adopted, so why, in your book, is it not
> novel?

Plenty of researchers have tried a large number of local dynamics in
their systems. Lenat's Eurisko project is a good example; he tried
hundreds of different hand-created heuristics and numerous structural
features in a search for a system that would work. This is usually
what happens when a system is built and fails to live up to initial
expectations; researchers fall back on trial and error. Eurisko is
actually a good example of a combination of manual and automated
trial-and-error; the system itself was a self-modifying design full
of semi-specialised 'local dynamics', with Lenat providing high-level
guidance and the system itself doing local exploration of the design
landscape in each revision. I admit that I am unclear as to how you
intend to direct your design space exploration.
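
To make the contrast concrete, here is a toy Python sketch of what
automated variation of local mechanisms can look like. Everything in
it (the mechanism names, the drift-based 'stability' score) is
illustrative; it is neither Eurisko's code nor a claim about your
architecture.

  import random

  # Hand-written candidate 'local mechanisms' -- the manual part.
  MECHANISMS = {
      'decay':     lambda x: 0.9 * x,
      'saturate':  lambda x: min(x, 1.0),
      'threshold': lambda x: x if x > 0.1 else 0.0,
      'noise':     lambda x: x + random.gauss(0, 0.01),
  }

  def run_system(config, data, steps=100):
      # Apply the chosen mechanisms to one state variable and report
      # how far it drifts -- a crude stand-in for 'stability'.
      state = 0.5
      for step in range(steps):
          state += data[step % len(data)]
          for name in config:
              state = MECHANISMS[name](state)
      return abs(state - 0.5)

  def search(data, trials=100):
      # The automated part: sample mechanism combinations and keep
      # whichever keeps the dynamics closest to the starting point.
      best, best_score = None, float('inf')
      for _ in range(trials):
          config = random.sample(list(MECHANISMS), random.randint(1, 3))
          score = run_system(config, data)
          if score < best_score:
              best, best_score = config, score
      return best, best_score

  shared_data = [random.uniform(-0.1, 0.1) for _ in range(50)]
  print(search(shared_data))

Scaled up to anything interesting, the loop itself is trivial; the
hard work is all in run_system and the scoring.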

> So, where else is there a development environment that would easily
> allow someone who was not a hacker to produce 100 different *designs* of
> cognitive systems, using different local mechanisms, then feed them the
> same sets of environmental data, then analyse the internal dynamics and
> make side by side comparisons of the behavior of those 100 systems, and
> get all this done in a week, so you can go on to look at another set of
> 100 systems next week?

I don't see how you can bypass fundamental limits on how quickly humans
can invent and input design complexity. Incremental improvements are
possible with a lot of work, but if you seriously expect to try 100
designs in a week they will either need very simple specifications or
be very similar to one another. This translates into either a very low
resolution search (if you used some kind of expression system to
translate simple specs into functional systems) or a very slow search
of an extremely large design space. This kind of thing could work if
intelligence were truly modularisable, you had all the essential
modules predeveloped and you were just looking for the correct way to
connect them, but even if that were possible it would just push all
the hard work into specifying and building the 'adequate set' of
modules.
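
To illustrate what I mean by a low resolution search, a spec language
that merely wires predeveloped modules together looks something like
the toy Python below. The module names and transforms are invented
for the example; this is not a description of your environment.

  # Toy 'expression system': a short spec is expanded into a wiring
  # of predeveloped modules.  The searchable space is only as rich as
  # the spec language -- everything hard lives inside the modules.
  MODULES = {
      'perceive': lambda xs: [v * 2 for v in xs],     # placeholder
      'abstract': lambda xs: [sum(xs) / len(xs)],     # transforms
      'recall':   lambda xs: xs + xs[:1],
  }

  def build_system(spec):
      # Translate a list of module names (the entire 'design') into
      # a runnable pipeline.
      def system(inputs):
          data = inputs
          for name in spec:
              data = MODULES[name](data)
          return data
      return system

  # A 'design' is just a handful of names, so the set of distinct
  # designs is small: easy to enumerate, but a very coarse sampling
  # of the real design space.
  design_a = build_system(['perceive', 'abstract'])
  design_b = build_system(['perceive', 'recall', 'abstract'])
  shared_input = [0.2, 0.4, 0.6]
  print(design_a(shared_input), design_b(shared_input))

Comparing a hundred such designs on the same data in a week is easy
precisely because almost nothing about them is free to vary.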

> If I am not talking about you, when was the last time you built a
> complete AGI and tested it to see if the local mechanisms you chose
> rendered it stable (a) in the face of real world environmental
> interaction and (b) in the course of learning?

I haven't done this of course (yet; I'm working on it), but I can't
see how you can possibly apply this as a research standard. /No one/
has built a complete AGI yet, and past attempts to do so have taken
years or decades. It's hard even to find someone who will admit that
they tried and failed; usually people say that they had to abandon
their projects due to lack of time/money/staff, or maintain that just
a few more years work should crack it.

>> This part does not appear unreasonable; it seems similar to the
>> 'experimental investigation of AGI goal system dynamics' that Ben
>> has historically been in favour of.
>
> You have never got anywhere near trying it

How do you know this? You don't, of course. Actually, I have got
about as close to trying this sort of experiment as one can get
without embarking on a multi-year implementation project. I've
done a fair bit of exploratory implementation covering a
reasonably diverse range of basic architectures, of which the
problem solver based on enhanced genetic algorithms probably came
the closest to your sensibilities, though obviously the design
complexity of anything that can be implemented in a couple of
months is sharply bounded and well below what you'd need for an
AGI.
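
For what it's worth, the basic flavour of that problem solver was a
fairly standard genetic algorithm, roughly as in the Python sketch
below; the toy bit-matching problem, representation and fitness
function here are placeholders, not the ones I actually used.

  import random

  TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # stand-in problem: match bits

  def fitness(genome):
      return sum(1 for g, t in zip(genome, TARGET) if g == t)

  def mutate(genome, rate=0.1):
      return [1 - g if random.random() < rate else g for g in genome]

  def crossover(a, b):
      cut = random.randrange(1, len(a))
      return a[:cut] + b[cut:]

  def evolve(pop_size=30, generations=50):
      population = [[random.randint(0, 1) for _ in TARGET]
                    for _ in range(pop_size)]
      for _ in range(generations):
          # Keep the fitter half, breed the rest from it.
          population.sort(key=fitness, reverse=True)
          parents = population[:pop_size // 2]
          children = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size - len(parents))]
          population = parents + children
      return max(population, key=fitness)

  print(evolve())

The interesting work in any such system is in the representation and
the evaluation function, neither of which fits in a sketch like this.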

> First-principles research in FAI? You don't have a workable
> theory of FAI, you just have some armchair speculation.

I don't have such a theory, nor am I working on one. But formal
theories of this calibre do not pop out of the ether fully
formed. They are the result of years of hard work on a progressive
series of precursors, which I /can/ observe the SIAI steadily
making progress on. I expect that you would write off the entire
field of theoretical physics as useless 'armchair speculation', or
at least you would if you could find a way to ignore the successes
of that discipline.

 * Michael Wilson

                


