Re: What best evidence for fast AI?

From: Robin Hanson (rhanson@gmu.edu)
Date: Sat Nov 10 2007 - 13:40:34 MST


At 12:35 PM 11/10/2007, Rolf wrote:
> From a decision-theory perspective, the odds of AGI would have to be incredibly small to justify the current low level of Friendly AI funding.

Maybe, but that isn't the issue I'm addressing. 

> You've probably heard the common arguments for AGI; at this point it's mostly about debunking counter-arguments to AGI.
> 1. It's already been pointed out that the track record of human invention matching or outdoing evolution, when "compactness" is not a criterion, is very good. You've heard the flight analogy: allegedly many experts were surprised by the Wright Brothers.

I'm not asking about the eventual achievement, but about the rate of progress to expect. 

> 2. When invention matches or exceeds evolution, it's usually sudden.

I'm not questioning rapid change once a certain threshold is reached. 

> 3. Adjust for overconfidence bias: if an expert claims 95% confidence that AGI won't happen, the real probability is probably less than that, unless the claim is part of a larger well-calibrated model (which it isn't).
> 4. Some people's algorithm seems to be, "if it hasn't happened in the last X years, then surely it won't happen in the next X years." This is a *terrible* algorithm. ...
> 5. Another poor algorithm: "If someone predicts X will happen in 50 years, and it doesn't happen, then that means it will surely never happen." ...
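Point 3's adjustment can be sketched numerically. This is a minimal illustration, not anything from the thread: it shrinks a stated probability toward a 50/50 prior, and the shrink factor is a purely hypothetical parameter chosen for the example.

```python
# Illustrative sketch only (not from the original post): correcting a
# stated probability for overconfidence by shrinking it toward 0.5.
# The shrink factor is an assumed, made-up parameter.

def deflate_confidence(stated_p, shrink=0.5):
    """Shrink a stated probability toward 0.5 to model overconfidence.

    shrink=0 returns the stated value unchanged; shrink=1 returns 0.5.
    """
    return 0.5 + (1 - shrink) * (stated_p - 0.5)

# An expert's "95% confident AGI won't happen", with an assumed 0.5 shrink:
adjusted = deflate_confidence(0.95, shrink=0.5)
print(adjusted)  # 0.725
```

Under this toy model, the expert's 95% becomes 72.5%; the point is only that poorly calibrated extreme claims should be discounted toward the base rate, not that any particular shrink factor is correct.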

Surely you can see these are very weak arguments in favor of any particular estimate.

Robin Hanson  rhanson@gmu.edu  http://hanson.gmu.edu
Research Associate, Future of Humanity Institute at Oxford University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323
 



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT