From: Yan King Yin (email@example.com)
Date: Thu Dec 30 2004 - 07:12:46 MST
Hi Justin =)
Your exposition is very readable and well thought out, so I wish to chime
in and share some opinions.
> "How do you intend to productize your research, what is it that you
> expect to come out of this company, and how can you make money?"
This is a very important question for for-profit AI companies. The
assumption that AI, simply because it is intelligent, can *replace*
all human jobs is actually questionable. I think there will be a lot
of resistance from society opposing this trend. For example, right
now, with 2004 technology, we could already replace all McDonald's workers
with robots, but this has not happened, even though it is economically
feasible. On the other hand, the erosion of jobs by automation is
indeed happening right now. So I think we should understand that the
transformation of society by automation is governed by several factors:
economic, technological, and (often neglected) societal and ecological.
> Artificial Intelligence, even very weakly achieved, is not just
> another technology. It represents, at the very least, a complete
> industry, and most likely, is one of those events that redefines the
> landscape of human activity.
I agree that it will be a new industry, but can you pinpoint what
exactly it is about AI that makes it qualitatively different from the
tools/technologies we had before? I think that, on a quite deep level,
there is no real difference.
> -Third, that our status, as AI researchers and developers, will give
> us a privileged and controllable stake in the construction and
> deployment of AI products and resources, allowing us to capitalize on
> our investment, as per the standard industrial research model. This
> seems fairly safe, until one realizes that there are many forces that
> oppose such status, merely because of the nature of AI. Governments
> may not allow technology of this kind to remain concentrated in the
> hands of private corporations. AI may follow the same path as other
> technologies, with many parallel breakthroughs at the same time,
> leaving us as merely members of a population of AI projects suddenly
> getting results. The information nature of this development increases
> this problem a great deal. I have no reason to imagine that AI
> development requires specialized hardware, or is impossible to employ
> without the experience gained in the research of said AI software. So
> piracy, industrial espionage, and simple reverse-engineering may
> render our position very tenuous indeed. I have no easy answers for
> this assumption, save that while worrying, little evidence exists
> either way. I personally believe that our position is privileged and
> will remain so until the formation of other AI projects with
> commensurate theory, developed technology, and talent; at that point
> it becomes more problematic.
There is some truth in what you said above. Information, in general,
is very difficult to control, but it is also true that there has always
been control of information, even in advanced societies (e.g. laws
against software piracy). Again, your assumption that AI will spread
extremely fast and without resistance ignores the human factors.
> I have a story I can tell here, but the supporting evidence is
> abstract, and indirect. Artificial Intelligence is likely, in my
> opinion, to follow an accelerating series of plateaus of development,
> starting with the low animal intelligence which is the focus of our
> research now. Progress will be slow, and spin off products limited in
> their scope. As intelligence increases, the more significant
> bottleneck will be trainability and transfer of learned content
> between AIs. This period represents the most fruitful opportunity for
> standard economic gain. The AI technology at this point will create
> three divisions across most industry, in terms of decision technology.
> You will have tasks that require human decision-making, tasks that can
> be fully mechanized, performed by standard programmatic
> approaches (normal coding, specialized hardware, special-purpose
> products), and a new category, AI decision-making. This will be any
> task too general or too expensive to be solved algorithmically, and
> not complex enough to require human intervention. Both borders will
> expand, as it gets cheaper to throw AI at the problem than to go
> through and solve it mechanically, and as the upper bound of decision
> making gets more and more capable.
When all these factors are considered together, it becomes very
likely that AI development and adoption will be gradual. In fact,
right now some partners and I are having a hard time trying to think
of a business model for our visual recognition prototype. You could
say that this is an economic/social barrier: the technology is here,
but people don't want the automation; they are OK with doing things
manually!
> I'm not saying I can't make up clever uses for AI technologies that
> could make a gazillion dollars, if I had designs for them in my hand.
> There are obvious and clear storytelling ideas. But that would be
> intellectually dishonest. I'm looking for a way to express, in terms
> of investment return, what AI is likely to actually do for us, in a
> conservative, defensible sense.
What can AI do for us? In the short term, I think we really have
to think hard to create some new niches. The prospect of having to
lay off a huge number of low-skill workers is a nightmare, so it may
be wiser to steer clear of that.
This archive was generated by hypermail 2.1.5 : Mon May 20 2013 - 04:00:47 MDT