From: Justin Corwin (firstname.lastname@example.org)
Date: Sun May 19 2002 - 16:24:02 MDT
>From: "Ben Goertzel" <email@example.com>
>My own intuition is that
>1) Of course, a great diversity of powerful sense-inputs and actuators is
>a good thing
Of course. The question I have is whether environmental richness is
inextricably tied to internal mental complexity. Certain behaviors and
interdependencies in human minds lead me to think that environmental
complexity may lead to internal organization. However, this is an attempt to
push this issue out from within the complex lump of interdependencies the
brain is, and deal with it as one of design.
>2) Unlike Eliezer, I think that interacting with humans and software agents
>on the Net [considered broadly, including financial datafeeds,
>weather satellite data etc. etc., not just Web pages], will probably be
>an adequate environment for AGI, though it certainly won't lead to a
Adequate for what? Generally, environment seems to lead to mental structure.
While the Net may have lots of data, it's a rather wonky world to live in,
with bizarre rules. I would feel bad about an AI that reflexively tried to
apply a directory structure to concepts, the same way humans try to organize
>3) I think that in the early stages of an AGI project (and yes, Novamente is
>*still* early-stage, because we don't have our mind-engine fully built
>yet, not by a long shot. Webmind AI Engine was almost out of the early
>stage of implementation & software testing and into the mid-stage of basic
>testing and teaching, but I think it would not have passed thru the
>mid-stage due to various implementation and design issues), it is best NOT
>to focus on the building of elaborate perception and action systems.
I think you may be right. But my question is an eventual design issue, not
an initial one. I'm not arguing that environment needs to be included, RIGHT
NOW. Just what level of complexity in environment is needed for AI to make
it all the way to AGI?
>Partly, one's view on this issue depends on how humanlike one wants one's
>AGI to be. I am not aiming at a humanlike AGI, just a very smart one,
>because I think that the latter is an easier problem. Compared to more
>closely brain-inspired approaches like DGI and A2I2, my approach has less
>data to use for motivation (as the human brain is only a loose inspiration
>rather than a close guide), but has a lot fewer problems to solve in terms
>of efficient harmonization with current hardware platforms (though these
>problems are *still* very severe even for Novamente and we've put a lot of
>work in on them).
Well, yes and no. I like the Novamente approach because it's additive. The
supersystem you have allows for massive tweaking that may not even be
thought of yet, because the framework is flexible in mindstuff (for lack of a
better way of explaining it). And you can always add something you missed.
But I have an intuition that mental organization is dependent on environment
and what tools the AI has to interpret that. So an AI that lives on the Net
might never work at all, because of insufficient environmental feedback.
Oh, and about socialization. I agree socialization is a huge part of
environment. But it's not necessarily a part of the explicit environment. I can
talk to people who aren't next to me. And there's no reason to assume that
the AI can't use chat windows either. So I left such interaction out of my
exploration of the subject because it can be dealt with without physical
representation. (Or whatever the AI would call something it relates to as a
"two men walking, two men walking, more different than another than they"
- a scat singer I can't recall, in a Cincinnati night club
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT