From: H C (lphege@hotmail.com)
Date: Mon Oct 24 2005 - 20:30:49 MDT
>From: rpwl@lightlink.com
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Loosemore's Proposal
>Date: Mon, 24 Oct 2005 22:57:32 GMT
>
>Ben Goertzel wrote:
> > Richard,
> >
> > It's true that making an AGI, given current software technology, is a
> > big pain, and it takes a long time to get from vision to implementation.
> >
> > I agree that better software tools would help make the process a lot
> > easier, even though I have a feeling your vision of better software
> > tools is a bit overidealistic.
> >
> > However, I have chosen to focus on AGI itself rather than on building
> > better tools, because I've judged that given my limited resources, I'll
> > probably get to AGI faster via focusing on AGI than via focusing on
> > tools first. While tools work is conceptually easier than AGI work by
> > far, it still requires a lot of thought and a lot of manpower.
> >
> > I would be more interested in your tools ideas if they were presented
> > in a more concrete way.
>
>But it would be a misunderstanding to treat my suggestion as "here is a
>possible good way to build an AGI." If it were that sort of suggestion, I
>would be just one of a hundred tool designers with great ideas.
>
>I am saying something much more serious. I am saying that we *need* to do
>things this way. We will eventually realise that anything else is not
>going to work.
>
>We have to build systems that grow their own representations; we cannot
>presuppose those representations and then, later, tack on some learning
>mechanisms that will feed those representations with new knowledge. This
>fundamental point is crucial to my argument, so make sure that you are
>absolutely clear about that before we discuss fine details about the
>environment.
>
*ding ding ding* -- TKO
OK, over your past few posts I tended to agree with Michael that you were
using somewhat useless generalizations... not to mention some very, very
weak motivational rhetoric. But uh, anyway.
I think you just nailed a critical discussion point (somehow I don't see
other people agreeing...), namely these *presupposed representations*. Your
idea of a system that "grows" its own representations is a very interesting
concept and should be fleshed out a bit.
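
Here's the kind of thing I have in mind, as a rough toy sketch (mine, not
Richard's actual design, and every name in it, like FixedSchema and
ConceptGrower and the novelty threshold, is made up): a presupposed
representation can only count things against categories its designer already
named, while a grown representation invents a new category whenever an
observation doesn't fit the ones it has.

import math
import random


class FixedSchema:
    """Presupposed representation: the designer names the categories up
    front, and learning can only fill in counts for those categories."""

    def __init__(self, categories):
        self.counts = {c: 0 for c in categories}

    def observe(self, label):
        if label in self.counts:      # anything outside the schema is lost
            self.counts[label] += 1


class ConceptGrower:
    """Grown representation: starts with no categories and creates a new
    prototype whenever an observation is too far from every existing one
    (a crude online clustering rule, nothing more)."""

    def __init__(self, novelty_threshold=1.5):
        self.prototypes = []          # each prototype is a list of floats
        self.threshold = novelty_threshold

    def observe(self, vector):
        if not self.prototypes:
            self.prototypes.append(list(vector))
            return
        dists = [math.dist(vector, p) for p in self.prototypes]
        nearest = min(range(len(dists)), key=dists.__getitem__)
        if dists[nearest] > self.threshold:
            self.prototypes.append(list(vector))   # grow a new concept
        else:
            # nudge the nearest prototype toward the new observation
            self.prototypes[nearest] = [
                0.9 * a + 0.1 * b
                for a, b in zip(self.prototypes[nearest], vector)]


if __name__ == "__main__":
    random.seed(0)
    grower = ConceptGrower()
    # two "natural kinds" in the environment that nobody named in advance
    for _ in range(200):
        cx, cy = random.choice([(0.0, 0.0), (5.0, 5.0)])
        grower.observe((cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3)))
    print(len(grower.prototypes), "concepts grown")   # expect 2 here

The point of the toy is only that the second system's categories are
artifacts of its history with the environment rather than of the designer's
ontology, which is what I take "growing representations" to mean.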
On a broader note, I think discussion of knowledge representation in general
is an extremely important topic. (Well, not entirely in general, because we
have a good case study to observe: thousands of years of *empirical
evidence*, not the fake, meaningless term that gets thrown around here every
once in a while, compiled into writings all over the place.) Knowledge
representation (whether you call its units ideas, concepts, motivations,
symbols, or whatever) is the convergence point of a huge domain of cognitive
content, structure, and dynamic processes. In fact, I would argue that a
full-blown, perfect AGI design already exists, distributed across all the
psychology/philosophy literature that we haven't gotten around to reading
yet.
The foundation of knowledge representation, as I said, is the convergence
point. All the important topics falling under intelligence are
*functionally* and *structurally* linked to knowledge representation: for
example, intelligent perception/awareness, motivational structure and
function, explicit sensory data, and abstract categorical knowledge.
Analogies and metaphors are inherently formed on the basis of some
underlying conceptual structure or relation system (a toy sketch of that
point follows below). The formation of thoughts (and the strings of thought
that arise) is necessarily and fundamentally dependent on some conceptual
structure. The distinctions between objective and subjective, rational and
irrational, and countless other categories of psychological study all hinge
on some underlying, all-encompassing conceptual system.
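
Since the claim that analogy rides on an underlying relation system is doing
real work there, here is a second toy sketch (again mine, with made-up
example domains) of what that can mean operationally: two domains count as
analogous when some mapping between their objects preserves the relation
triples, regardless of what the objects themselves are.

from itertools import permutations

# Each domain is a set of relation triples: (relation, arg1, arg2).
SOLAR_SYSTEM = {
    ("attracts", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
    ("more_massive", "sun", "planet"),
}

ATOM = {
    ("attracts", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
    ("more_massive", "nucleus", "electron"),
}


def objects(domain):
    """All the objects mentioned in a domain's relation triples."""
    return sorted({arg for _, a, b in domain for arg in (a, b)})


def best_mapping(source, target):
    """Brute-force every object-to-object mapping and keep the one that
    preserves the most relation triples (fine for toy-sized domains)."""
    src, tgt = objects(source), objects(target)
    best, best_score = None, -1
    for perm in permutations(tgt, len(src)):
        mapping = dict(zip(src, perm))
        score = sum((rel, mapping[a], mapping[b]) in target
                    for rel, a, b in source)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score


if __name__ == "__main__":
    mapping, score = best_mapping(SOLAR_SYSTEM, ATOM)
    print(mapping, "preserves", score, "of", len(SOLAR_SYSTEM), "relations")
    # expect sun -> nucleus, planet -> electron, preserving all 3

The objects carry none of the weight; the analogy lives entirely in the
shared relational structure, which is the sense in which everything ends up
hinged on the underlying conceptual system.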
This is one of the most important things I ever learned (LOGI did a decent
job of actually trying to reconcile these issues, which is work at least in
the right direction!). The conceptual structure of any workable AGI design
must literally be the most general representation possible, such that every
single feature and process of intelligence (things such as every concept
containing within itself the means to infinite complexity, the ability to
create, formalize, and envision any formal system of any complexity, and the
ability to learn, create, pretend, lie, self-deceive, and, cognitively,
pretty much do whatever the hell it *wants*) is wrapped up, justified, and
tractable within the conceptual architecture.
Trying to reverse-fit a conceptual structure to anything less than
everything will lead you absolutely nowhere (that has a nice ring to it...).
Er, but what I mean by that is: you can't point out a few necessary
processes (even if they are both necessary and sufficient for intelligence)
and start gung-ho developing little lame-ass, useless, incompatible tools,
just as you can't bottom-up create your own (necessarily, obscenely
presumptuous) conceptual structure by attempting to fit it to some
ill-specified set of necessary procedures that intelligence must
incorporate. It's all one big shebang: the conceptual form fits the
conceptual function, just as much as the intelligence functions fit directly
with the conceptual form.
-- Th3Hegem0n
http://smarterhippie.blogspot.com
>Richard Loosemore