From: Nick Hay (nickjhay@gmail.com)
Date: Mon Oct 15 2007 - 20:52:48 MDT
On 10/15/07, Nick Hay <nickjhay@gmail.com> wrote:
> On 10/15/07, Matt Mahoney <matmahoney@yahoo.com> wrote:
> > A singularity is necessarily a recursive self improvement process because once
> > we achieve superhuman intelligence, those intelligences will be able to make
> > further improvements faster than we can. Legg proved [1] that an intelligence
> > (using the universal definition [2]) cannot completely predict the behavior of
> > a greater intelligence.
>
> This is not true. Shane proved, roughly, that there is a c such that any
> intelligence of complexity x bits cannot correctly predict ALL systems
> of complexity x+c bits. The "all" is crucially important: one can
> construct predictors that succeed on arbitrarily complex systems of
> a specified form, just as one can successfully build arbitrarily
> complex bridges.
>
> The proof is that for any predictor of complexity x bits you can
> construct a devil system of complexity x+c bits which simulates that
> predictor and deliberately does the opposite. This is interesting, but
> it certainly doesn't rule out predicting complex systems in general.
That is, it doesn't exclude predicting some complex systems, maybe
many, and especially those of particularly nice forms (e.g. highly
complex but understandable bridges, complex programs with attached
proofs of correctness, etc).
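A toy sketch of both points, in Python (my own illustration of the idea,
not Shane's formal construction; the devil/majority/pattern functions
here are made up for the example):

def devil(predictor, history):
    # The adversarial system: simulate the given predictor on the
    # shared history and output the opposite bit, so that predictor
    # is wrong at every step.
    return 1 - predictor(history)

def majority_predictor(history):
    # A concrete fixed predictor: guess the majority bit seen so far.
    if not history:
        return 0
    return 1 if 2 * sum(history) >= len(history) else 0

# The devil built against this particular predictor defeats it forever...
history = []
for _ in range(20):
    guess = majority_predictor(history)
    actual = devil(majority_predictor, history)
    assert guess != actual
    history.append(actual)

# ...but predictors can still succeed on arbitrarily complex systems of
# a specified form, e.g. sequences that repeat a fixed pattern, however
# long (complex) that pattern happens to be.
def pattern_system(pattern, t):
    return pattern[t % len(pattern)]

def pattern_predictor(history, period):
    # After one full period has been observed, predict perfectly.
    t = len(history)
    return 0 if t < period else history[t - period]

pattern = [1, 0, 0, 1, 1, 0, 1, 0]   # could be arbitrarily long
history = []
for t in range(200):
    guess = pattern_predictor(history, len(pattern))
    actual = pattern_system(pattern, t)
    if t >= len(pattern):
        assert guess == actual       # never wrong after one full period
    history.append(actual)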
-- Nick