Re: What best evidence for fast AI?

From: Thomas McCabe (pphysics141@gmail.com)
Date: Sat Nov 10 2007 - 13:23:11 MST


1). We already have more than enough computing power to implement
intelligence. Both our best supercomputers (BlueGene) and our best
distributed computing networks (Folding@Home) have surpassed the
PetaFLOPS level.
2). An AGI that is "human equivalent", in the sense that it can match
a human's capabilities in any field, should be able to stomp us into
the dust at computer programming. Remember that, for an AGI to learn
any new task of significant complexity, it has to be able to write
hundreds of lines of new code on the fly.
3). An AGI need not understand the world (in any significant sense) to
do huge amounts of damage. Even ordinary viruses cause serious harm; a
self-modifying virus could easily destroy most of the world's computer
systems, even without general intelligence.
4). If you can figure out how to influence the world from within a
computer, then a human-equivalent AGI can too, by definition.
5). You are quite correct on the difficulty of timing everything, but
the AGI project will probably be known to the public, due to the lack
of government-imposed secrecy.

 - Tom

On 11/10/07, Harry Chesley <chesley@acm.org> wrote:
> On 11/10/2007 3:26 AM, Robin Hanson wrote:
> > So I am here to ask: where are the best analyses
> > arguing the case for rapid (non-emulation) AI progress?
>
> I would argue not that it will happen soon or rapidly, but rather
> unpredictably.
>
> Before there is an AI singularity, we need to 1) understand intelligence
> more than we do now; 2) have enough computing power to implement it; 3)
> put together enough "AI" in the right configuration to make a
> singularity (human equivalent AI is not enough); 4) create enough
> traction between the AI and the rest of the world to both allow it to
> understand the world (just reading about it won't do it when you come to
> fundamentally new theories), and to influence it substantially; and 5)
> move from the lab to the world in general (i.e., ship it). I can't
> predict how long any of those items will take. 1 and 2 could already
> have happened in some lab or garage somewhere, or could take another
> fifty years. 3 could be trivial or have scaling problems that are
> insurmountable (unlikely, but you never know). 4 and 5 are classic
> issues of science/engineering, and there are lots of examples of
> successes and total failures.
>
> Add them all together and it could take anything from a few years to
> never. Nor is it predictable how much time there will be from the
> initial appearance of substantive AI to the singularity. It could even
> be zero if the people involved don't publicize their initial successes.
>
> The analogy to the development of the atomic bomb has been brought up
> before, but it's instructive from a timing perspective. Imagine being in
> 1907 and trying to predict the appearance of the first a-bomb (analogous
> to human level AI), the arms race (development of singularity
> capability), and the destruction of all life on the planet (the
> singularity). The first one happened after decades of incremental
> advances plus a concerted and well-funded effort to achieve it. And it
> happened with zero warning to the world as a whole because that
> concerted effort happened in secret. The arms race took decades more in
> fits and starts, but ultimately succeeded. And the destruction of life
> never happened at all.
>
>
