Re: [sl4] A model of RSI

From: Matt Mahoney
Date: Wed Sep 24 2008 - 06:25:21 MDT

--- On Wed, 9/17/08, Stuart Armstrong <> wrote:

> So an RSI has to be a statement about the actual architecture of a
> program, not about the equivalent Turing machine. Your model seems
> acceptable as a definition, as far as I can tell (there will be
> others). A heuristic definition of RSI could be a program in an
> architecture that returns, after some time, to a state similar to the
> one it started with, except with an improvement. The formal definition
> would be given by specifying this architecture. For instance, you
> could demand that a program has to start in a certain isolated
> computer, with a certain amount of free space, always accepting
> certain inputs. Subject to these constraints, an RSI makes sense.

Define "improvement". In the context of a real machine (a finite state machine), we could relax the definition of a goal so that utility is a monotonically non-decreasing function of time that reaches some maximum after finite time. Then we could define improvement as reaching that maximum faster, or reaching a greater maximum in the same time. However, a finite state machine can only run a finite number of programs, so there can only be a finite sequence of improvements by any definition.
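That definition of improvement can be made concrete with a small sketch. Assume a program's utility over its run is recorded as a list of numbers indexed by time step; all the function names below are illustrative, not from any real framework.

```python
# Sketch of the "improvement" definition above: utility is a
# monotonically non-decreasing function of time that reaches a
# maximum after finite time, represented here as a list of numbers.

def is_valid_utility(trace):
    """Utility must never decrease from one time step to the next."""
    return all(a <= b for a, b in zip(trace, trace[1:]))

def time_to_max(trace):
    """First time step at which the trace reaches its maximum."""
    return trace.index(max(trace))

def is_improvement(old, new):
    """The new program improves on the old one if it reaches a
    greater maximum, or reaches the same maximum faster."""
    assert is_valid_utility(old) and is_valid_utility(new)
    if max(new) > max(old):
        return True
    return max(new) == max(old) and time_to_max(new) < time_to_max(old)

# Example: both programs reach utility 3, but the child gets there sooner.
parent = [0, 1, 2, 3, 3, 3]
child  = [0, 2, 3, 3, 3, 3]
print(is_improvement(parent, child))  # the child counts as an improvement
```

Since a finite state machine admits only finitely many distinct traces, any chain of such improvements must terminate.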

Perhaps it would help to give some real life examples of what we want to do, for example, robots working in a factory that builds better robots. But the first generation has to know what "better" means. We know that adding more memory or faster processors makes for "better" computers, but we only know that because we are still smarter than both generations. Suppose that the child robot had twice as much memory but the software was unable to use it effectively. How would the robots detect this problem?

We assume (but don't know) that adding more neurons to our brains would make us more intelligent and therefore better. But why do we want to be more intelligent? Because our brains are programmed that way. Intelligence requires both the ability to learn and the desire to learn. Suppose that we engineer our children for bigger brains, but in doing so we accidentally remove the desire to be intelligent. Then our children will engineer our grandchildren according to their interpretation of what it means to improve, not our interpretation.

Self-improvement requires a test or goal that cannot be altered through generations. Assuming that goal is "intelligence", we are not smart enough to test for it above our own level. If we are, then perhaps someone could describe such a test. Otherwise, "better" is by default going to be measured by counting descendants.
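The requirement of an unalterable test can be sketched as follows. This is purely illustrative: it assumes each generation must pass a benchmark fixed by the first generation, and it shows the limitation the post describes, since passing the test only certifies a child is no worse on what the test measures, not better beyond it.

```python
# Sketch of a goal test that cannot be altered through generations.
# Programs are modeled as callables; the test is fixed once and never
# rewritten by any descendant.

def fixed_test(program):
    """An immutable benchmark set by the first generation. If we could
    write such a test for intelligence above our own level, RSI would
    have a well-defined target; the argument above is that we cannot."""
    return program("benchmark input") == "expected output"

def next_generation(parent, candidates):
    """Accept only children that still pass the unaltered test; if none
    do, keep the parent. Note this filters out regressions but cannot
    certify improvement beyond what the test measures."""
    survivors = [c for c in candidates if fixed_test(c)]
    return survivors or [parent]

# Example: one candidate passes the fixed test, one does not.
parent = lambda x: "expected output"
good_child = lambda x: "expected output"
bad_child  = lambda x: "wrong"
print(len(next_generation(parent, [good_child, bad_child])))  # only one survives
```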

-- Matt Mahoney

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT