From: Michael Wilson (firstname.lastname@example.org)
Date: Wed May 19 2004 - 14:11:07 MDT
> I hope Eliezer has more productive uses for his time. An intelligence
> looking at a DNA-based replicator 3 billion years ago could have made
> an educated guess as to whether that would do a better job of maximising
> replication than the available alternatives, even if it was impossible
> to predict most of the effects of replication.
To make that judgement the intelligence would have to have a reasonably
good conception of the entire design-space reachable by evolving DNA.
This is bad enough even without including methods and results of inventing
general intelligence, which bloats the space enormously (possibly
infinitely). It would then have to construct at least a good approximation of
the entire probabilistic tree of all the replicators, tracking
population sizes, genetic diversity and speciation, defined probabilistically
against 3 billion years of a volatile environment heavily influenced by
the evolutionary tracks of the species within it. Had you constrained yourself
to something more manageable, say 100,000 years, you might be able to
make statements with useful confidence using simplistic heuristics,
but estimating germ-line survival over three billion years of planetary
evolution is a tough job even for a Power.
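To make the horizon problem concrete, here is a toy sketch (my own illustration, not anything resembling the actual computation above): even for the simplest possible replicator model, a critical Galton-Watson branching process, the probability that a lineage survives decays toward zero as the horizon grows, so a Monte Carlo estimate that is informative at short horizons becomes a vanishing signal at long ones. The function names and parameters are invented for the example.

```python
import random

def survives(generations, p0=0.25, p2=0.25, cap=10_000, rng=None):
    """One run of a critical Galton-Watson branching process.

    Each individual leaves 0 offspring with probability p0, 2 with
    probability p2, otherwise 1 (so the mean is exactly 1.0 with the
    defaults). Returns True if the lineage is still alive after
    `generations` steps. `cap` bounds the population so runaway runs
    stay cheap without changing the survive/die-out answer.
    """
    rng = rng or random.Random()
    pop = 1
    for _ in range(generations):
        if pop == 0:
            return False
        nxt = 0
        for _ in range(min(pop, cap)):
            u = rng.random()
            nxt += 0 if u < p0 else (2 if u < p0 + p2 else 1)
        pop = min(nxt, cap)
    return pop > 0

def survival_estimate(generations, trials=2000, seed=1):
    """Monte Carlo estimate of lineage survival probability."""
    rng = random.Random(seed)
    hits = sum(survives(generations, rng=rng) for _ in range(trials))
    return hits / trials

# For a critical process the survival probability falls off roughly
# like 2 / (offspring_variance * generations): short horizons are
# estimable, long ones drive the estimate toward zero while the
# relative error climbs.
for g in (10, 100, 1000):
    print(g, survival_estimate(g))
```

And this is the trivially friendly case: one lineage, a fixed offspring distribution, no environment, no coevolution. The real judgement call would have to integrate over all of those.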
> Similarly, humans can do a better than random job of comparing the
> effects of different proposed designs for an AI.
True, but it's so hard and so easy to fake that almost no-one bothers.
Most AI researchers are floundering around based on uselessly
anthropomorphic insights, inappropriate metaphors and wishful thinking.
The history of Friendliness theory illustrates how bad even geniuses
are at a) predicting the trajectory of complex self-modifying systems
and b) realising how bad their predictions are. If the typical educated
human's guess differs noticeably from a random choice, I suspect
it's worse than random rather than better.
Every other sort of complex physical system we've encountered has
required the development of specialised math and analysis tools to make
sense of and usefully predict. The human brain is far more complex than
anything else we've encountered and as a result we still don't have the
analysis tools for it. The idea that you can productively analyse AI
behaviour, a problem of at least comparable difficulty, using just your
evolved instincts about goal-seeking agents dressed up with a few
concepts from CogSci and game theory is broken and dangerous.
* Michael Wilson
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT