From: Rolf Nelson (email@example.com)
Date: Sat Nov 10 2007 - 10:35:44 MST
> explicitly-coded AI
I'd avoid this terminology if you only mean non-emulated AI; it's unclear
to me whether a seed AI that learns and self-improves fits in your category.
From a decision-theory perspective, the odds of AGI would have to be
incredibly small to justify the current low level of Friendly AI funding.
You've probably heard the common arguments for AGI; at this point it's
mostly a matter of debunking the counter-arguments to AGI.
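The decision-theory point above can be made concrete with a toy expected-value calculation. All the numbers below (the 1% probability, the value at stake, the risk-reduction fraction) are illustrative assumptions of mine, not figures from this thread:

```python
# Toy expected-value sketch of the decision-theory argument.
# Every number here is an illustrative assumption, not an estimate
# taken from this discussion.

p_agi = 0.01                 # even a "skeptical" 1% chance of AGI
value_at_stake = 1e12        # stylized value of avoiding a bad outcome ($)
funding_reduces_risk = 0.10  # assumed fraction of the risk research removes

expected_benefit = p_agi * value_at_stake * funding_reduces_risk
print(f"Expected benefit of funding: ${expected_benefit:,.0f}")
# Expected benefit of funding: $1,000,000,000
```

Even at 1% odds, the expected benefit under these toy assumptions is a billion dollars, far above actual Friendly AI funding, so the implied probability of AGI would have to be incredibly small for the current funding level to come out as rational.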
1. It's already been pointed out that the track record of human invention in
matching or outdoing evolution, when "compactness" is not a criterion, is very
good. You've heard the flight analogy: allegedly, many experts were surprised
by the Wright Brothers. Is that "cherry-picking"? I say no: ask a child to
list a few things that animals can do unaided but people can't, and "fly"
will probably be high on the list.
2. When invention matches or exceeds evolution, it's usually sudden. We
didn't spend a decade with powered flight at 1 mph, then 2 mph, and
gradually work our way up.
3. Adjust for overconfidence bias: if an expert claims 95% confidence that AGI
won't happen, the real reliability is probably lower than that, unless the
estimate is part of a larger well-calibrated model (which it isn't).
4. Some people's algorithm seems to be, "if it hasn't happened in the last X
years, then surely it won't happen in the next X years." This is a
*terrible* algorithm. Suppose AGI is destined to happen in 2150. Now the
year 2140 comes around; the algorithm will tell you that AGI is still some
150 years away, and you will be caught flat-footed. If the algorithm is invalid
in the year 2140, why expect
it to be valid in the year 1997? To put it another way: are there *specific
milestones* that we're all waiting for that will at some point scream, "AGI
is now 30 years away?" If so, what are those milestones? If not, how can we
be sure that AGI is more than 30 years away?
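The failure mode of that algorithm is easy to make explicit. Here is a minimal sketch, assuming (hypothetically) that AGI arrives in 2150 and taking 1956, the Dartmouth workshop, as the start of the "waiting" clock; both choices are my assumptions for illustration:

```python
# Toy model of the "hasn't happened in X years => won't happen in the
# next X years" heuristic. AGI_YEAR and FIRST_AI_WORK are assumptions
# for illustration, not predictions.

AGI_YEAR = 2150          # hypothetical ground truth
FIRST_AI_WORK = 1956     # Dartmouth workshop, start of the waiting clock

def naive_forecast(current_year):
    """Predict AGI is at least as many years away as it has failed to arrive."""
    years_without_agi = current_year - FIRST_AI_WORK
    return current_year + years_without_agi

for year in (1997, 2140):
    predicted = naive_forecast(year)
    print(f"In {year}: heuristic says no AGI before {predicted}; "
          f"actually {AGI_YEAR - year} years away")
```

In 2140 the heuristic confidently pushes AGI out past 2300 when it is in fact ten years away; the same reasoning step that fails there is the one being trusted in 1997.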
5. Another poor algorithm: "If someone predicts X will happen in 50 years,
and it doesn't happen, then that means it will surely never happen." If this
Magical Algorithm is correct, let me predict: "within 50 years, a war will
destroy mankind." By this Magical Algorithm, I have now magically guaranteed
that, if humanity survives past 2057, a time of peace will magically
descend. Hand over the Nobel Peace Prize, please! To put it another way:
Alan Turing saying something stupid in 1950 doesn't causally constrain
events that may or may not happen in the 21st century. It only provides
evidence that "Computer Science researchers aren't infallible", which in
turn paradoxically strengthens point (3).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT