From: Ben Goertzel (ben@goertzel.org)
Date: Wed Dec 29 2004 - 09:00:29 MST
Hi Justin,
My opinion is that nearly all intelligent businesspeople understand that a
powerful AGI would be amazingly economically lucrative.
However, by the same token, nearly all such folks feel that a powerful AGI
is a *long way off*.
It will not be at all difficult to get business partnerships or investment
money, in massive amounts, once one actually has a powerful AGI -- or a
technology demonstration convincing enough that an average computer
science professor would believe one has the secret to building a
powerful AGI.
What is difficult is to get anyone to invest in R&D aimed at creating an
AGI -- precisely because they believe it's a long way off, and because the
mainstream of scientific researchers firmly believes and argues that it's a
long way off.
-- Ben G
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of justin
> corwin
> Sent: Tuesday, December 28, 2004 6:29 AM
> To: sl4@sl4.org
> Subject: Conservative Estimation of the Economic Impact of Artificial
> Intelligence
>
>
> In my work, I concentrate largely on the practical, necessary research
> to allow us to build an artificial intelligence. It is the central
> fascination of most of the people in my company. I would imagine that
> most people in this community have great interest (if not actual
> involvement) in the research, development, and application of novel
> intelligent systems, IA, AI, or otherwise.
>
> However, in the process of explaining my involvement to other people,
> as well as attempting to instill enthusiasm (or at least explain
> potential impact) for my research, I've been encountering the
> following baffling response with increasing frequency. My fearless
> leader here at A2I2, Peter Voss, has been encountering the same thing.
>
> "How do you intend to productize your research, what is it that you
> expect to come out of this company, and how can you make money?"
>
> I would like to take a moment to say that I am only indirectly
> interested in money. My involvement in A2I2 and AI research in general
> is motivated both by interest and idealism, with a backdrop of
> pragmatic analysis. AI is a very large lever. It will count for an
> inconceivable amount, in game-theoretic terms, even before it changes
> the landscape entirely. Money, for me, is just a poorly normalized
> utility token, and inasmuch as I am interested in it, I expect to be
> more than able to exchange the influence I have on the creation of AI
> back for it. Digression ended.
>
> What these people are really asking, via the agency of hypothetical
> investors, is "what is AI good for, and how can that be delivered from
> your research?"
>
> Initially, much as Peter had in previous discussions, I discounted
> this reaction as simply the miscategorization of AI as 'yet another
> techno-widget', to be judged in the marketplace and priced for proper
> placement on shelves. It has become clear to me, however, that it runs
> somewhat deeper than this.
>
> Artificial Intelligence, even very weakly achieved, is not just
> another technology. It represents, at the very least, a complete
> industry, and most likely, is one of those events that redefines the
> landscape of human activity.
>
> Any transhuman intelligence, of course, represents an absolute
> departure from human prediction, but for the time being, let us speak
> of what we can.
>
> The unfortunate thing, from my point of view, is that generating a
> conservative estimation of the economic impact of AI is nearly
> impossible. It presupposes several things.
>
> -First, that AI has an economic impact before it changes the entire
> landscape. This seems quite possible: AI will take some time to
> develop, and even once complete it will require some time to run. Even
> if it's just inflation going through the roof as everyone with any
> money does whatever they think will avert the apocalypse during the
> last week of the Final Program, that counts as economic impact.
>
> -Second, that there is some period of stability in the development of
> AI that allows AI 'products' to be evaluated in relatively coherent
> economic terms. This is very tricky. It has been popularly supposed by
> some that human-commensurate intelligence represents the top level, or
> a hard barrier -- that AI research will continue to that point and
> then stop, or at least slow. It is likely that a certain level of
> intelligence represents the maximum effective potential of a
> particular design, due to scaling laws, architectural support
> requirements, or flaws in the design to start with. Unfortunately, an
> AI will not be using the same design as a human. It is, in my
> estimation, just as likely to top out at an intelligence commensurate
> with a mouse's, or a dolphin's, or so far above us that its
> intelligence is not measurable. It seems clear to me that minds need
> not follow a uniform plan with uniform strengths, although they may be
> highly correlated. This makes design-independent analysis complicated.
>
> There is some hope in the form of computing-power requirements:
> assuming analogies to biological brains and our previous experience
> with unintelligent mechanical computation hold, the physical task of
> running an intelligence may limit it to certain levels of potential
> until larger and faster computers can be built. Unfortunately, even
> Kurzweil's rather charming little graphs give us little time before
> available computation far outstrips the human level, leaving us in the
> same boat. The stability given us there is fleeting, but it does allow
> enough years to be evaluated on an economic scale (a rough sketch of
> the arithmetic follows the third assumption below).
>
> -Third, that our status, as AI researchers and developers, will give
> us a privileged and controllable stake in the construction and
> deployment of AI products and resources, allowing us to capitalize on
> our investment, as per the standard industrial research model. This
> seems fairly safe, until one realizes that there are many forces that
> oppose such status, merely because of the nature of AI. Governments
> may not allow technology of this kind to remain concentrated in the
> hands of private corporations. AI may follow the same path as other
> technologies, with many parallel breakthroughs at the same time,
> leaving us as merely members of a population of AI projects suddenly
> getting results. The informational nature of this development
> increases this problem a great deal. I have no reason to imagine that
> AI development requires specialized hardware, or that the resulting
> software is impossible to employ without the experience gained in
> researching it. So
> piracy, industrial espionage, and simple reverse-engineering may
> render our position very tenuous indeed. I have no easy answers for
> this assumption, save that, while it is worrying, little evidence
> exists either way. I personally believe that our position is
> privileged and will remain so until the formation of other AI projects
> with commensurate theory, developed technology, and talent; at that
> point it becomes more problematic.
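>
> As a minimal back-of-envelope sketch of the timing claim in the second
> assumption (the brain-equivalence figure, the 2004 baseline, and the
> doubling period are all assumed, Kurzweil/Moravec-style numbers, not
> settled facts):
>
> # Illustrative sketch only: years until commodity hardware reaches
> # an assumed brain-equivalent rate of computation.
> import math
>
> brain_ops = 1e16      # assumed ops/sec for human-brain equivalence
> current_ops = 1e12    # assumed ops/sec for a high-end 2004 machine
> doubling_years = 1.5  # assumed Moore's-law doubling period
>
> years = doubling_years * math.log2(brain_ops / current_ops)
> print(round(years, 1))  # ~19.9 years, on these assumptions
>
> On those assumptions the window of stability is roughly two decades,
> which fits the 'fleeting, but enough years to evaluate' picture.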
>
> Assuming we have answers to all these questions, we may find that AI
> is indeed a good way to make money, at least in the near term.
>
> I have a story I can tell here, but the supporting evidence is
> abstract and indirect. Artificial Intelligence is likely, in my
> opinion, to follow an accelerating series of plateaus of development,
> starting with the low animal intelligence which is the focus of our
> research now. Progress will be slow, and spin-off products limited in
> their scope. As intelligence increases, the more significant
> bottleneck will be trainability and transfer of learned content
> between AIs. This period represents the most fruitful opportunity for
> standard economic gain. The AI technology at this point will create
> three divisions across most industries, in terms of decision technology.
> You will have tasks that require human decision-making, tasks that can
> be fully mechanized, performed by standard programmatic
> approaches (normal coding, specialized hardware, special-purpose
> products), and a new category, AI decision-making. This will be any
> task too general or too expensive to be solved algorithmically, and
> not complex enough to require human intervention. Both borders will
> expand, as it gets cheaper to throw AI at the problem than to go
> through and solve it mechanically, and as the upper bound of
> decision-making gets more and more capable.
>
> I'm afraid I have no real evidence as to how long this period will
> last. It depends entirely on the difficulty of increasing the
> intelligence of the AI, which may reside in design, hardware, and to a
> certain extent, motivation (goal systems are a thesis in themselves;
> ask EY). I suspect, based on my experiences thus far, that early AI
> designs will be very lossy, faulty, and poorly optimized for increases
> in intelligence. This may mean that a complete redesign of
> AI theory will be necessary to get to the next series of plateaus.
> Unless this is simply beyond human capability, there is no reason to
> think this will take any longer than the development of AI theory
> sufficient to get us to this point.
>
> Sometime after this, economic aspirations become fleeting in the
> general upheaval and reconstitution caused by the arrival of another
> kind of intelligence. Some might say this is rather the point of AI
> research.
>
> Projecting into the future is always dangerous. I think that any
> attempt, especially the one above, to characterize the trajectory of
> any technology is doomed to be largely irrelevant. But some choices
> must be made on best available guesses, so here are mine. AI research
> will change a lot of things. In the near term, it will remain a fringe
> activity, and people will still ask the strange question 'what will
> those AIs be good for, anyway?'. But some investors will come, and the
> clearest way I can communicate to them the goals and value of AI
> research is to say that it is vastly enabling. I don't know what the first
> task an AI will perform is. I know that it will be something that
> can't be done with anything else. It represents, in the near term, an
> investment in future capability. If money is what you're after
> primarily, I don't know how to defend an investment in AI research
> from the perspective of, say, venture capital. I can point to examples
> of enabling technology, like CAD, or tooling, or electrical power,
> which did not fit into the world they arrived in, but created their
> own industries.
>
> I'm not saying I can't make up clever uses for AI technologies that
> could make a gazillion dollars, if I had designs for them in my hand.
> There are obvious and clear storytelling ideas. But that would be
> intellectually dishonest. I'm looking for a way to express, in terms
> of investment return, what AI is likely to actually do for us, in a
> conservative, defensible sense.
>
> This must be separated from, for example, safety concerns, for which
> it is perhaps useful to imagine, as some do on this forum, what the
> failure modes, the fastest takeoff, and the actual capability of such
> developments may be. That isn't helpful in this kind of
> planning.
>
> I must anticipate a response suggesting that non-profit, private
> efforts to research AI, such as the Singularity Institute, AGIRI,
> etc., are better suited to this subject matter, and in fact render my
> queries irrelevant. I remain very doubtful that this is the case. AI
> is not something to be solved quickly, nor something to be solved by a
> few people with no money. It is in its first stages of real
> development, and a massive amount of research and data needs to be
> collected if AI theories are to be informed by more than introspection
> and biological analogy. Like so many things in our
> modern world, AI will be done long before we can properly evaluate and
> prepare ourselves for the results, however long it takes. But people
> need to have reasons to join AI efforts, to fund them, and to support
> them, at levels thus far not seen. I submit this is at least partially
> because this kind of analysis is either not publicised, or has simply
> not been done.
>
> ...
>
> This kind of analysis also raises the rather uncomfortable spectre of
> doubt: that I have jumped into a field of study without sufficient
> research and investigation, or have unrealistic (or at least
> ungrounded) expectations for the fruits of my work. I submit that my
> primary interest in AI is at least partially unrelated to gains of
> these kinds, and is secondarily informed by the safety concerns,
> asymmetric potential, and increasing importance investigated much more
> clearly by other authors (Vinge, Yudkowsky, Good).
>
> Any responses or questions can be asked on the sl4 mailing list, to
> which this is posted, to me privately, or on my blog.
>
> Justin Corwin
> outlawpoet@hell.com
> http://outlawpoet.blogspot.com
> http://www.adaptiveai.com
>
>