From: David Hart (email@example.com)
Date: Thu Feb 23 2006 - 18:26:00 MST
Brian Atkins wrote:
> I've tried pondering this issue before (it would help if I actually
> knew a great deal about biology, etc., but I don't), and I come to the
> conclusion that it must be inaccurate to try and relate computer bits
> to base pair data. The reason is because I think you have to take into
> account the tremendous amount of "data compression" involved in the
> base pair information.
Comparing the relative sizes of two bitstrings is a valid indication of
their relative Kolmogorov complexity if and only if they are interpreted
by the same (or an equivalent) Turing machine.
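To make the machine-dependence precise, the invariance theorem says that for any two universal machines U and V there is a constant c_{U,V}, depending only on the machines and not on the data:

```latex
K_U(x) \;\le\; K_V(x) + c_{U,V} \qquad \text{for all bitstrings } x
```

The constant is essentially the length of a program that simulates V on U; when one of the "machines" is physical reality plus biochemistry, that constant could be enormous, so bounding it is unavoidable in any cross-machine comparison.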
In the case of DNA vs. contemporary machine code, the convoluted Turing
machine that is our physical reality, which brought about life, is likely
vastly more complex than the Turing machine that is a contemporary von
Neumann computer, making any direct size comparison (e.g. 700MB of DNA
vs. 700MB of machine code) invalid. To make a valid comparison, the
lengths of the bitstrings describing the respective Turing machines must
be included. Does anyone have even a wild guess at the relative
Kolmogorov complexity of a given meaningful amount of DNA code
(+environment) vs. machine code (+environment), given similarly sized
code chunks? How many orders of magnitude lie between them?
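A crude way to see why the machine must be counted: compressed length is an upper bound on Kolmogorov complexity *relative to the decompressor*, and the decompressor can carry arbitrary information. The sketch below (my illustration, not anything from the post; the payload string is arbitrary) primes zlib with a preset dictionary so the "machine" already contains the data, making the same payload look far simpler than it does to a bare decompressor:

```python
import zlib

# Arbitrary stand-in payload.
data = (b"Comparing the relative size of two bitstrings only makes sense "
        b"on the same machine, since the machine can carry information.")

# Machine A: a bare compressor/decompressor with no shared context.
plain = zlib.compress(data, 9)

# Machine B: a compressor primed with a preset dictionary that happens to
# contain the entire payload -- the "machine" itself carries the data, so
# the output is little more than a back-reference into the dictionary.
c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, 9,
                     zlib.Z_DEFAULT_STRATEGY, data)
primed = c.compress(data) + c.flush()

# primed is much shorter than plain, yet describes the same bitstring;
# a fair comparison would have to charge machine B for its dictionary.
print(len(data), len(plain), len(primed))
```

Comparing `len(primed)` against `len(plain)` without also counting the dictionary is exactly the invalid move: 700MB of DNA interpreted by physics vs. 700MB of machine code interpreted by a CPU.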
In relation to the broader thread topic, I see two possibilities:
1) An intelligent system cannot prove the Friendliness of a larger
intelligent system (more information at greater density / Kolmogorov
complexity, i.e. the larger system being the output of recursive
self-improvement), forcing us to make do with 99.999% "probably
Friendly" rather than 100% "provably Friendly"
2) An intelligent system can prove the Friendliness of a larger
intelligent system, but only within certain narrow bounds, covering only
certain types of small incremental increases in information size and
density.
But this is really just academic, since in either case the growth rate
of any intelligence operating under such proof constraints during strong
self-modification would be severely slowed compared to that of an
unconstrained intelligence, arguing yet again for an AI-building race
(i.e. an FAI must be built first, to detect and mitigate unconstrained
SAIs before they become larger than the FAI).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT