From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 27 2002 - 10:43:43 MST
hi,
> No, merely expressing a well-known truth about software engineering. And
> not only about software engineering, but engineering in general. Minds are
> one thing: complex. Look at the hardware layer; it is very obviously that.
> And we're getting increasing evidence that these structures are indeed
> doing a lot, and are not very reducible. This is not an intuition; I can
> cite you a few papers indicating that there's not much averaging going on.
> Each incoming bit of info from the trenches makes me lose any residual
> optimism I had.
I think this is an intuitive assessment of a diverse, messy, inconclusive
and in-process body of scientific knowledge.
> There's a barrier to the complexity of a system you can build as a single
> person. Different people have different ceilings; mine is quite low.
> Teams do not really scale in that regard. The ceiling of a group is not
> dramatically higher than that of a single individual, and the ceiling of a
> large group can actually be lower. This is basic software engineering
> knowledge.
The Novamente project right now involves 7 people, and our complexity
ceiling is *dramatically* higher than that of any one of us, because we
communicate very well together and have different strengths, weaknesses,
and inclinations.
I think we could add about 5 more people to the group (assuming they were
drawn from our existing pool of known-to-be-compatible people) before we
reached a point where effectiveness stopped increasing significantly with
each new person.
The Webmind Inc. AI team definitely reached a point where each new person
added contributed only incrementally to total productivity.
> General intelligence is not a property of a simple system. Far from it.
> As a result I predict that human software engineers coding an AI
> explicitly (i.e. not using stochastic/noisy/evolutionary methods) are
> going to fall short of the goal.
And as I said, you're welcome to your pessimistic intuition ;>
> > I agree that goal-directed self-modification is a specialized mental
> > function, similar (very roughly speaking) to, say, vision processing, or
> > mathematical reasoning, or social interaction. However, also like these
> > other things, it will be achieved by a combination of general
> > intelligence processes with more specialized heuristics.
>
> Am I correct to assume that we're talking about explicit codification of
> knowledge distilled from human experts? Is there any reason to suspect
> that we're going to do any better than Lenat & Co? The track record so
> far is not overwhelming.
You are not correct in your assumption.
I think that explicit codification will play at most a supporting role, not
a central role.
As I've said before, I think the Mizar (www.mizar.org) corpus of
mathematical knowledge may play a valuable role at some point.
> Um, C++ can easily lose an order of magnitude of performance over C if
> you don't know what you're doing. Not to mention tweaking the compiler
> flags and jiggling the code, trying not to run into performance killers
> (longword alignment, for instance).
>
> We seem to mean very different systems when speaking of high performance.
> The absence of high-performance computing types in AI is notable.
Actually, this barb may be appropriate when aimed at me, but not at my
Novamente-team colleagues who are doing most of the C++ programming. They
are pretty damn good C programmers. We are in fact not incurring much OO
overhead in our codebase right now; most of the code is straight ANSI C,
with object-orientation only used carefully and in ways that do not cause
significant performance penalties.
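To make concrete the kind of overhead we watch for, here is a minimal,
hypothetical C sketch (illustrative only, not Novamente code) of two of
the performance killers mentioned above: struct padding from careless
member ordering, and indirect calls through function pointers, which are
the C analogue of C++ virtual dispatch.

    #include <stdio.h>

    /* The same three fields, ordered badly and well.  On a typical
       32-bit compiler the first struct is padded to 12 bytes while
       the second packs into 8, because the compiler inserts padding
       to keep the long word-aligned. */
    struct padded { char a; long b; char c; };
    struct packed { long b; char a; char c; };

    /* A direct call the compiler can inline. */
    static long add_direct(long x, long y) { return x + y; }

    /* An indirect call through a function pointer is the C analogue
       of a C++ virtual call: an extra load plus lost inlining, the
       kind of OO overhead worth keeping out of inner loops. */
    typedef long (*binop)(long, long);

    int main(void)
    {
        binop op = add_direct;
        long acc = 0;
        long i;

        printf("sizeof(struct padded) = %lu\n",
               (unsigned long) sizeof(struct padded));
        printf("sizeof(struct packed) = %lu\n",
               (unsigned long) sizeof(struct packed));

        for (i = 0; i < 10000; i++)
            acc = op(acc, i);   /* indirect; cannot be inlined */

        printf("acc = %ld\n", acc);
        return 0;
    }

Reordering the struct fields, or replacing the function pointer with a
direct call in a hot loop, is exactly the sort of code-jiggling referred
to above, and the kind of thing we are careful about in the places where
we do use object-orientation.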
In general, I think there are plenty of high-performance computing types in
traditional academic AI. Look at the people who made Deep Thought, later
transformed into Deep Blue. The problem in my view is a lack of "general
intelligence" types.
> Um, implementing a runaway AI is certainly not my problem. I'm interested
> in modelling biological organisms, which does not involve such dangerous
> components. This is the wrong forum to discuss it, however.
Biotechnology has many potential dangers associated with it as well.
Perhaps your work in particular does not, however.
-- Ben