From: Ben Goertzel (firstname.lastname@example.org)
Date: Tue Jan 08 2002 - 04:37:04 MST
> Unless the storage
> substrate itself
> becomes smarter / can understand the semantics of what is stored,
> you're always going
> to have a potential for metadata and data to be out of sync.
This is true, but to be fully accurate the statement should be phrased
fuzzily:
The degree of metadata/data asynchrony is going to be (roughly, perhaps
nonlinearly) inversely proportional to the degree of understanding
embedded-in/attached-to the storage substrate.
Until you can identify the details of this proportionality, you can't show
that an incremental improvement in "AI" won't lead to a significant
reduction in data/metadata asynchrony.
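To make the fuzzified claim concrete, here is a toy sketch. The functional form is purely hypothetical (nothing in the argument pins it down); the point is only that whether a small gain in substrate understanding buys a big drop in asynchrony depends entirely on the unknown shape of the proportionality.

```python
# Toy illustration (functional form hypothetical): suppose metadata/data
# asynchrony falls off as (1 - u)**p, where u in [0, 1] is the degree of
# "understanding" embedded in the storage substrate and p is unknown.

def asynchrony(u: float, p: float) -> float:
    """Hypothetical asynchrony as a function of substrate understanding u."""
    return (1.0 - u) ** p

# The same incremental improvement in u (0.2 -> 0.3) matters very
# differently depending on the unknown exponent p.
for p in (0.5, 1.0, 4.0):
    before = asynchrony(0.2, p)
    after = asynchrony(0.3, p)
    print(f"p={p}: asynchrony {before:.3f} -> {after:.3f}")
```

Until you know something like p, you can't rule out that a modest "AI" improvement yields a large reduction in asynchrony.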
I think this is where your intuition, Jeff, differs from that of those who
are really bullish on the Semantic Web. My good friend Sasha Chislenko, for
instance, was really hyped on the idea. He knew better
tech would be needed to make it work, but he thought that what in my view
constituted minor improvements (mostly to do with collaborative filtering)
would make a big difference. I actually didn't agree with him; in
conversations with him I always took the anti-Semantic-Web view, or at any
rate the view that pretty big AI improvements (though not necessarily
human-level AI) would be needed to make it work.
I should say, however, that Webmind Inc. made products that did automatic
markup of documents for some customers, and in many cases the products
worked quite well. In some cases they didn't work so well. So Sasha was
half right. As simple examples of success, automatically marking up
documents by *topic area* can be gotten to work with 95%+ precision, and
automatically marking up *sentences* by topic area can be gotten to work
with 80% precision or so, assuming the sentences are grammatical (news
articles, not message board text...). We could also automatically mark up
addresses, phone numbers, prices and so forth with 95%+ precision. So a
part of the Semantic Web vision is certainly achievable right now, should
someone with appropriate power and $$ choose to commission the appropriate
people to write the appropriate software.
This partial success could be seen as evidence that the nonlinear
proportionality mentioned above is not as severe as you're suggesting, Jeff.
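The kind of document-level topic markup described above can be sketched very crudely. This is nothing like the Webmind products (whose methods aren't described here); it's just a naive keyword-scoring illustration, with hypothetical topic lexicons, of what "automatically marking up documents by topic area" means.

```python
# Naive sketch of document-level topic markup: score each topic by
# keyword hits and tag the document with the best-scoring topic.
# Topic lexicons are hypothetical; ties are broken arbitrarily.

TOPIC_KEYWORDS = {
    "finance": {"stock", "price", "market", "earnings"},
    "sports":  {"game", "team", "score", "season"},
}

def tag_topic(text: str) -> str:
    """Return the topic whose keyword set overlaps the text the most."""
    words = set(text.lower().split())
    scores = {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(tag_topic("The market rallied as earnings beat the stock price forecast"))
# prints "finance"
```

A real system would need grammatical analysis and far richer models to hit the precision figures quoted above, but the input/output contract is the same.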
> is no evidence to
> suggest that information is stored / accessed / used in a
> structured way in biological
Actually, there IS a lot of evidence that information is *used* and *stored*
in a structured way in the human mind. However, the structures involved are
not the same as a simplistic semantic network.
There is a lot of knowledge out there about the structure of different
memory subsystems, including episodic memory, declarative memory, memory of
physical actions taken, etc. This knowledge doesn't come NEAR to a complete
theory of memory, and it's of dubious value for AI, but it certainly
demonstrates the existence of *memory structures*, i.e. of habitual patterns
of interrelatedness among "memory traces" in the brain (whatever precise
physical form they may take).
At best the semantic network is an OK model of how the human brain stores
declarative memory. In fact, the earliest semantic net models (Quillian et
al.'s work, from the mid-'70s, was an early paper I remember -- no, I didn't
read it when it first came out, I wasn't quite *that* precocious of a
10-year-old...) were inspired by results on priming in human memory.
Unfortunately, the Web and many document databases contain a lot of episodic
knowledge which is not well described by standard semantic net models at
all.
Also, the semantic net models out there are oversimplified in many important
ways. They rarely contain adequate ways of specifying uncertain knowledge.
They don't do well with n-ary relationships, being tailored for binary ones.
They don't do well with describing processes. Etc.
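The n-ary problem in particular is easy to show concretely. Here's a hypothetical mini-representation: a binary semantic net stores (subject, relation, object) triples, so an inherently 3-ary event like "Sasha gave Ben a paper" has to be reified into an event node plus several binary links, with uncertainty bolted on the side rather than native to the net.

```python
# Sketch of reifying an n-ary relation into binary triples.
# All node and relation names are made up for illustration.

triples = [
    # the reified event node "give_1" stands in for one 3-ary relation
    ("give_1", "is_a",      "giving_event"),
    ("give_1", "agent",     "Sasha"),
    ("give_1", "recipient", "Ben"),
    ("give_1", "object",    "paper"),
]

# Uncertainty has no natural home in the triples themselves,
# so it ends up in a side table.
confidence = {"give_1": 0.9}

def roles(event, triples):
    """Reassemble the binary links that jointly encode one n-ary relation."""
    return {rel: obj for subj, rel, obj in triples if subj == event}

print(roles("give_1", triples))
```

Every query about the original event now has to reassemble the fragments, which is exactly the kind of complexity simple semantic net models sweep under the rug.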
So, in my view, the problem is not that "natural intelligence"'s memory has
no structure to it. The problem is rather that this structure is vastly
more complex than the structure that the Semantic Web folks would like to
impose on it. Taking a structure that models word priming experiments and
generalizing it to all human knowledge is a rather big leap, I would say.
But a more advanced knowledge representation demands more advanced learning
mechanisms, so acknowledging the oversimplification in the semantic net
approach means accepting that building the Semantic Web adequately *really
is* an AI problem, and most Semantic Web boosters do not want to do that.