Fwd: Introduction TRANSPLANTED from seedaiwannabees.

From: justin corwin (outlawpoet@gmail.com)
Date: Wed Sep 07 2005 - 00:28:58 MDT

Michael Wilson and some few others on the seedaiwannabees list were
discussing AI design, and I wanted to continue the discussion,
including the wider readership of SL4.

The thread was started by Chris Paget, who also appeared here not too
long ago, but the point I was interested in is below:

Michael Wilson wrote:
>> Single theorist, yes, in that I don't think the problem is solvable
>> without standing on the shoulders of giants /and/ paying close
>> attention to the past failures of geniuses and fools alike. However
>> I am skeptical about the real value of a large implementation team.
>> A software engineer I respected once said to me; 'really innovative
>> software is almost always the product of a single mad genius', and
>> he had a point. AGI is very difficult to understand, extremely
>> difficult to explain and just doesn't modularise well such that
>> people only need to understand the external interface of bits
>> they're not working on. I have some experience of trying to design
>> an AGI collaboratively and it just doesn't work for nontrivial
>> designs, at least not now when the field is in its infancy. And
>> yes, I have a reasonable amount of experience in commercial
>> software engineering, leading and following.
>> I would not be surprised at all if the first AGI is effectively the
>> product of 'one mad genius' architect, though that architect may
>> have a team of implementers following their instructions and
>> designing non-core support systems.

I would very much disagree with this point, and increasingly so as I
continue my studies of both AI and other complex endeavors.

Alan Kay is credited with the phrase "The best way to predict the
future is to invent it," and it is a popular concept, particularly
among the ambitious. Here in the transhumanist and singularitarian
community, it's a central one. If we are to steer the world away from
nasty places and situations, we must create the future, right? We
even seem to have the biggest lever around, ultratechnologies, AI in
this case.

Particularly in America, the concept of single revolutionaries and
their power to define the world through invention, influence, and
ideas is very common. (MW is in the UK, but I'm being memetic here.)

But it is, insofar as I'm aware, a theory without any evidence. I
can't think of any one person who can seriously be credited not only
with innovative theories but also with the complete final design of a
nontrivial part of our world. Things are much too complicated for that.

Especially software design, wherein the interactions (even inside a
system that could be said to be designed by one person) are often
entirely unexpected to the designer.

I suppose it just seems very unrealistic to imagine that all the
relevant complexity could be solved, designed, and either communicated
to drones or handled personally, all by one person in any kind of
reasonable time-scale, if it were possible at all. AI in particular is
not just an architectural and theoretical problem, but involves many,
many implementation, engineering, and "mere" software design issues.

To load examples a bit, I could hold up Google, ostensibly a simple
innovative idea by two graduate students. But upon closer inspection,
the actual reality of Google is the product of much more than that.
Sergey and Larry did not design the cluster architecture it runs on,
nor the Google File System which contains their database, nor the
MapReduce implementation that underlies their fast response time.

It's true that they may have seen the *need* for these things, and
even spec'd them with the creators. But these systems shaped Google's
destiny in ways the founders did not plan or decide. It was their
database and cluster architecture which determined their ability to
maintain what was for a long time the largest page database on the
net, what allows them to serve so many different kinds of requests
quickly, to grow a commodity cluster to unreasonable sizes.

AI is more integrated, more centrally planned, and necessarily more
interconnected than most software and engineering, but not so far that
I believe it necessary or possible for a single person to determine
enough attributes to really be called 'a single mad architect'.

There are degrees, of course, and I certainly don't believe that an
entire Business Unit (a la Microsoft) or Research Center (a la PARC)
is required or advisable. But an undertaking the size of AI must
include more people doing significant work on what I would call core
functions. Quite aside from the time aspect, a single person is simply
too prone to conceptual mistakes and self-induced theory blocks. As
optimistic as I am about mental plasticity, I don't think any one
person can really solve all the problems that are theoretically within
their intelligence and ability. People get stuck, or lack inclination,
or prefer to focus on more interesting problems, or simply run out of
innovative ideas.

AI, like any complex endeavor, will require a culture to support its
exploration. How large of one (whether it could be contained in a
single company, for example) is an interesting question, but quite
separate from the disproof of the alluring vision of just doing it all.

Justin Corwin

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT