From: William Pearson (wil.pearson@gmail.com)
Date: Wed Jun 18 2008 - 11:26:11 MDT
2008/6/18 Stuart Armstrong <dragondreaming@googlemail.com>:
>> My question is not whether such a thing is possible (I think it is), but whether a *non-evolutionary* RSI is possible.
>
> Let's try and build a simple model. A lone agent in some simplified
> grid with simple "move, eat, look" commands. Assume for the moment (I
> know this isn't what you are looking for, but bear with me) that there
> is a unique ideal behaviour for the agent. The agent's programming
> consists of:
> 1) The ideal behaviour algorithm (just a list of the right commands).
> 2) Lots of complicated subroutines that always produce the same
> answer, whatever the possible input.
> 3) A small program that looks for, and deletes, any subroutine that
> always produces the same answer, replacing it with just the answer. It
> does this for one subroutine per trial.
>
> Now the agent is sent repeatedly through exactly the same situation.
> Its reward is zero if it fails to find the ideal behaviour; if it
> finds the ideal behaviour, its reward is inversely proportional to the
> time taken.
>
> Then this very simple model will display RSI, in that it will get
> faster and faster, hence better and better, at its job (admittedly this
> "job" is the reverse of what we mean by intelligence, but it is
> improving in its narrow sense). So non-evolutionary RSI is definitely
> possible, in some situations.
This is not recursive self-improvement. The agent gets faster at its
task, but it never becomes better at getting better.
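
To make that concrete, here is a minimal Python sketch of the model Stuart
describes; all the names and details below are my own assumptions, added only
to illustrate the point. The constant-folding step speeds the agent up on each
trial, but the step itself is never rewritten, so nothing in the system
improves its ability to improve.

# Minimal sketch of Stuart's grid-world model (illustrative assumptions only).
import time

# 1) The ideal behaviour: just a fixed list of the right commands.
IDEAL_BEHAVIOUR = ["move", "move", "look", "eat"]

def make_constant_subroutine(answer, work=200_000):
    """2) A complicated subroutine that always returns the same answer,
    whatever the input."""
    def subroutine(_observation):
        total = 0
        for i in range(work):              # expensive busywork
            total += (i * i) % 7
        return answer                      # independent of the input
    return subroutine

class Agent:
    def __init__(self):
        self.subroutines = [make_constant_subroutine(c) for c in IDEAL_BEHAVIOUR]
        self.folded = set()                # indices already replaced

    def run_trial(self):
        """Run the same situation; reward is inversely proportional to time."""
        start = time.perf_counter()
        actions = [sub(None) for sub in self.subroutines]
        elapsed = time.perf_counter() - start
        return (1.0 / elapsed) if actions == IDEAL_BEHAVIOUR else 0.0

    def self_modify(self):
        """3) Replace one constant subroutine with just its answer,
        one subroutine per trial (the search is simplified here: every
        subroutine is constant by construction).  This optimiser never
        rewrites itself, which is the point above: the agent gets faster,
        but never gets better at getting faster."""
        for i, sub in enumerate(self.subroutines):
            if i in self.folded:
                continue
            answer = sub(None)
            self.subroutines[i] = lambda _obs, a=answer: a
            self.folded.add(i)
            return

agent = Agent()
for trial in range(len(IDEAL_BEHAVIOUR) + 1):
    print(f"trial {trial}: reward {agent.run_trial():.2f}")
    agent.self_modify()

Running it shows the reward climbing trial after trial, yet self_modify is the
same fixed program at the end as at the start.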
Will Pearson