From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Thu Jun 19 2008 - 03:31:14 MDT
> This is not recursive self-improvement. It does not become better at
> getting better.
True. But we can set up something that becomes better at getting
better - just take any program, cripple it, and add a subroutine that
removes the crippling; then cripple the subroutine itself, in a way
that still allows it to act on its own crippling. If we set it up
right, the pace of self-improvement picks up.
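The setup above can be sketched as a toy program - all names and
numbers here are hypothetical, just one way of modelling "crippling"
as a throttle on how much a routine may change per call:

```python
# Toy model of the crippled-program RSI example (hypothetical setup).
# "Crippling" is a numeric throttle. The uncrippler removes crippling
# from the main program AND from itself; less self-crippling means more
# removal power next round, so the pace of improvement picks up, until
# nothing is left to remove (the built-in limit).

worker_cripple = 1000   # crippling on the main program
self_cripple = 1000     # crippling on the uncrippling subroutine
power = 1               # crippling the subroutine may remove per call

def uncripple():
    """Remove crippling from the worker and from this routine itself."""
    global worker_cripple, self_cripple, power
    worker_cripple = max(0, worker_cripple - power)
    self_cripple = max(0, self_cripple - power)
    # The less crippled the subroutine is, the more it can remove next
    # time: it becomes better at getting better.
    power = 1 + (1000 - self_cripple) // 100

rounds = 0
removed_per_round = []
while worker_cripple > 0:
    before = worker_cripple
    uncripple()
    removed_per_round.append(before - worker_cripple)
    rounds += 1

# removed_per_round grows over time (acceleration), and the loop halts
# once the program is fully uncrippled: RSI with a hard ceiling.
```

At a constant power of 1 this would take 1000 rounds; because the
subroutine improves itself, it finishes in roughly 300, and then has
nowhere further to go - which is the limit mentioned below.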
It's a very narrow, artificial example, with a limit (once the program
is uncrippled, it has nowhere further to go), but it is an example of
RSI, and it is non-evolutionary (the limit isn't a theoretical problem
- any computing mechanism with limited resources has a limit). So
non-evolved RSI is possible in theory and in certain narrow practice;
the question is, is it practical for what we intend, i.e. building
beyond-human intelligences?
Stuart
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT