From: William Pearson (email@example.com)
Date: Thu Jun 19 2008 - 15:25:43 MDT
2008/6/19 Stuart Armstrong <firstname.lastname@example.org>:
>> This is not recursive self-improvement. It does not become better at
>> getting better.
> True. But we can set up something that becomes better at getting
> better - just take any program, cripple it, and add a subroutine that
> removes the crippling, then cripple the subroutine in a way that still
> allows it to act on itself. If we set it up right, the pace of
> self-improvement picks up.
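[Editorial note: the quoted scheme can be sketched as a toy model. All names and numbers below are illustrative assumptions, not anything from the thread: a "crippled" worker is slowed by an artificial delay, and a repair routine chips away at that delay. Because the repair routine is also allowed to act on itself, each pass makes future repairs stronger, so the pace of improvement accelerates.]

```python
def run(steps=5):
    """Toy model of a crippled program that uncripples itself.

    work_delay models how crippled the main program is; repair_power
    models how much crippling each repair pass removes. Letting the
    repair routine act on itself makes repair_power grow, so the
    delay shrinks faster on every pass.
    """
    work_delay = 32.0    # artificial handicap on the main program
    repair_power = 1.0   # initial (crippled) strength of the repair routine
    history = []
    for _ in range(steps):
        work_delay = max(work_delay - repair_power, 0.0)  # repair the worker
        repair_power *= 2.0  # the repair routine also uncripples itself
        history.append(work_delay)
    return history

print(run())  # delay falls faster each step: [31.0, 29.0, 25.0, 17.0, 1.0]
```

[Whether this counts as "recursive self-improvement" is exactly what is disputed below: the speed-up is baked in by the designer, not figured out by the program.]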
I think this violates the criterion Matt gave originally: "It would
also not include simulations where agents receive external
information on how to improve themselves. They have to figure it out
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT