From: Peter C. McCluskey (pcm@rahul.net)
Date: Fri Nov 04 2005 - 16:19:20 MST
sentience@pobox.com (Eliezer S. Yudkowsky) writes (back in May):
>Peter C. McCluskey wrote:
>> Eric Baum
>> makes some moderately strong arguments in his book What is Thought?
>> against your claim. If your plans depend on a hard takeoff and your
>> reasons for expecting a hard takeoff are no better than the ones I've
>> run across, then I'm pretty certain you will fail.
>
>Eric Baum calculates 10^36 operations to get intelligence, based on the number
>of organisms that have ever lived. To see why this number is wrong you may
>consult http://dspace.dial.pipex.com/jcollie/sle/ or for more information
>George Williams's "Adaptation and Natural Selection."
I procrastinated about responding to this because I was busy and puzzled
as to whether I was missing something obvious.
Baum describes 10^35 operations as an upper bound, and if you think that
number is a good summary of his conclusion, then you're being a bit
superficial.
I can't find anything in the "Speed Limit for Evolution" paper that bears
much resemblance to a criticism of Baum's conjecture. Were you claiming
that a limit on how much information could be encoded in genes implies
something about the computational power needed to produce that information?
That would seem as strange as saying that the ability to express a chess
move in a few bits implies that it couldn't take much cpu power to find
the best one.
Baum suggests that a nontrivial fraction of the cpu power needed to
simulate an organism's life is needed to determine how many offspring that
organism is fit to produce.
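If you grant that, the structure of his bound looks something like the
following back-of-envelope sketch (the specific numbers are placeholder
guesses of mine, chosen to land near 10^35; Baum's actual inputs differ
in detail):

    # Total ops ~= (organisms that have ever lived)
    #            x (ops to evaluate one organism's fitness).
    # Both factors below are illustrative placeholders, not Baum's figures.
    organisms_ever  = 1e30   # assumed count of organisms in Earth's history
    ops_per_fitness = 1e5    # assumed ops to determine one organism's
                             # reproductive success (some fraction of the
                             # cost of simulating its whole life)
    total_ops = organisms_ever * ops_per_fitness
    print(f"{total_ops:.0e}")   # 1e+35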
Another reason to expect a slow (as in Kurzweil-speed) takeoff is that
progress so far on AI-like projects has been consistently incremental,
at a rate no faster than what would be expected if learning is cpu-bound.
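As a sanity check on what "cpu-bound" predicts (the doubling time here is
my assumption, not a measured figure):

    import math

    # If learning is cpu-bound, capability should track hardware growth.
    # Assuming compute doubles every ~1.5 years (a Moore's-law-style
    # guess), a millionfold gain in effective learning power takes
    # decades, not days:
    doubling_years = 1.5
    factor = 1e6
    years = doubling_years * math.log2(factor)
    print(f"{years:.0f} years")   # ~30 years: Kurzweil-speed, not hard takeoff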
I'm still wondering whether there are any arguments for hard takeoff that
address the questions about speed as directly as Baum does (which is
admittedly not nearly as direct as an ideal argument would be), and whether
this issue ought to have any effect on how we should approach designing
a safe AI.
--
------------------------------------------------------------------------------
Peter McCluskey         | If a little knowledge is dangerous, where is the man
www.bayesianinvestor.com| who has so much as to be out of danger? - T. Huxley