From: Samantha Atkins (email@example.com)
Date: Thu Jun 13 2002 - 20:06:54 MDT
Ben Goertzel wrote:
>>Yes, if a deductive inference engine and symbolic knowledge representation
>>repository were essential for Seed AI, then I agree that less than 20K
>>lines of java code would be required, especially if a relational database
>>were the object store.
> A very nice, efficient, 1-machine AGI-friendly data representation framework
> can be done in 5K-10K lines of C++.
> You don't need to use an RDB to make this sort of thing compact. Of course,
> Java bloats code compared to C++ ...
It is impossible to evaluate such a statement without knowing
what you have in mind as the requirements of this subsystem. Of
course, it is fairly meaningless to talk about LOC anyway, since
the implementation smallest in LOC is not necessarily either the
best or the simplest to build.
>>My question to you then would be: Given a budget of 50K LOC, just what
>>behaviors would you require the system to have? Or if your base system
>>merely accepts knowledge for the next layer up, how many person-years of
>>effort at that next layer would be required to achieve Seed AI, and what
>>behaviors would be taught to the system with this budget.
> I think a complete Novamente could be compressed into 50K lines of C++, at
> significant cost in code comprehensibility and maintainability. Not a path
> we're likely to take though, I'd prefer 200K lines of *good* code ;->
What on earth is all this LOC talk about? I haven't seen the
like since we used to brag about getting Tiny BASIC into less
than 2K bytes of machine code.
>>Is reading important?
> Reading should be learned, not wired-in. Ditto for nearly all linguistic
> knowledge. However, cognitive mechanisms may be parameter-tuned for
> performance on linguistic tasks (e.g., logical unification may be tuned for
> parsing with unification-based feature-structure grammars).
If it can't read, then how will it be trained? Talking to it,
perhaps? Feeding it factoids à la Cyc?
>>What kinds of machine learning are required?
> Probabilistic inference, but not Bayes-net-style, i.e. not assuming a global
> pdf across all knowledge
> Evolutionary programming
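(To make the second item concrete: below is a minimal sketch of an
evolutionary-programming loop in Python, fitting a hypothetical
real-valued parameter vector by mutate-and-select search. Nothing here
is taken from Novamente; the target, fitness function, and all
parameters are made up purely to illustrate the style of search.)

```python
import random

# Hypothetical goal vector; stands in for whatever fitness oracle a
# real system would use.
TARGET = [1.0, -2.0, 0.5]

def fitness(candidate):
    # Lower is better: squared distance from the (assumed) target.
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, sigma=0.1):
    # Gaussian perturbation of every coordinate.
    return [c + random.gauss(0, sigma) for c in candidate]

def evolve(pop_size=20, generations=300, seed=42):
    random.seed(seed)
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Each parent produces one mutated child; truncation selection
        # keeps the best half of parents + children, so the best
        # fitness never gets worse.
        children = [mutate(p) for p in population]
        population = sorted(population + children, key=fitness)[:pop_size]
    return population[0]

best = evolve()
```

This is pure mutation-plus-selection (no crossover), which is the
classical "evolutionary programming" flavor as opposed to genetic
algorithms.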
>>Does automated reasoning require justifications/explanations?
> Sometimes yes, sometimes no. Producing these can slow inference down and is
> not always contextually appropriate.
Do what humans do: rationalize, after the fact, a plausible
explanation for how the conclusion was arrived at. :-)
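(A toy Python sketch of that trade-off, with made-up facts and rules:
the engine derives conclusions *without* storing justifications, which
keeps inference cheap, and only reconstructs a plausible derivation
when asked. The reconstructed chain need not be the path the forward
pass actually took, i.e., it is a rationalization after the fact.)

```python
# Hypothetical Horn rules (head <- body) and base facts.
RULES = {
    "mortal(socrates)": ["man(socrates)"],
    "man(socrates)": ["human(socrates)"],
}
FACTS = {"human(socrates)"}

def derive():
    # Forward chaining with no bookkeeping: just grow the fact set
    # until nothing new fires.
    known = set(FACTS)
    changed = True
    while changed:
        changed = False
        for head, body in RULES.items():
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def explain(goal, known):
    # Post-hoc reconstruction: search backward through the same rules
    # for *a* chain that supports the goal.
    if goal in FACTS:
        return [f"{goal} is a given fact"]
    body = RULES.get(goal)
    if body and all(b in known for b in body):
        steps = []
        for b in body:
            steps.extend(explain(b, known))
        steps.append(f"{goal} follows from {', '.join(body)}")
        return steps
    return [f"no support found for {goal}"]

known = derive()
print(explain("mortal(socrates)", known))
```

The cost asymmetry is the point: `derive` does no extra work per
conclusion, while `explain` pays a search cost only for the
conclusions someone actually asks about.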
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT