From: Ben Goertzel (firstname.lastname@example.org)
Date: Thu Jun 13 2002 - 20:46:18 MDT
> What on earth is all this LOC talk about? I haven't seen the
> like since we used to brag about getting tiny Basic in less than
> 2K bytes of machine code.
Yes, it's not a terribly relevant metric, though on a crude level it does
tell you something.
I mean, there's a big qualitative difference between a system that is
roughly 100K lines of code and one that is, say, 50M lines of code.
It is a key point that a digital mind need not be 50M lines of code, because
the bulk of its complexity is self-generated rather than explicitly encoded.
> > Reading should be learned, not wired-in. Ditto for nearly all
> > knowledge. However, cognitive mechanisms may be parameter-tuned for
> > performance on linguistic tasks. (e.g. logical unification may be tuned
> > for unification feature structure grammar parsing)
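For readers unfamiliar with the idea: feature structure unification is just
recursive merging of attribute-value structures, failing on conflicts. A toy
sketch (hypothetical, illustrative only -- not Novamente's actual code; feature
structures are modeled as nested Python dicts):

```python
def unify(a, b):
    """Unify two feature structures (nested dicts or atomic values).

    Returns the merged structure, or None if they are incompatible.
    """
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            if key in merged:
                sub = unify(merged[key], value)
                if sub is None:
                    return None  # conflicting values for this feature
                merged[key] = sub
            else:
                merged[key] = value  # feature present only in b: just copy it
        return merged
    return None  # differing atomic values cannot unify


# Toy grammar example: checking subject-verb agreement.
subject = {"agr": {"num": "sg"}}
verb_expects = {"agr": {"num": "sg", "per": 3}}
print(unify(subject, verb_expects))            # merged agreement features
print(unify(subject, {"agr": {"num": "pl"}}))  # None: number clash
```

Tuning this operation for parsing (as opposed to general inference) is the
kind of performance-parameter adjustment being suggested above.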
> If it can't read then how will it be trained? Talking to it,
Talking to it about shared experiences in a commonly perceivable/manipulable
environment, yes.
> Feeding it factoids a la Cyc?
Nope, though this may be useful later in the learning process. Feeding an
encyclopedia into an AI's mind will be cool, but will not teach the AI
common sense or how to be an autonomous being...
> > Sometimes yes, sometimes no. Producing these can slow inference down
> > and is not always contextually appropriate.
> Do what humans do, rationalize after the fact a plausible
> explanation for how the conclusion was arrived at. :-)
As Nietzsche pointed out and Gazzaniga proved scientifically, this kind of
rationalization is one of the key roles of human consciousness -- and it
will indeed play a role in a real AI's mind as well, though without the
exaggerations that human emotions often bring to it.
-- ben g
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT