[sl4] Is there a model for RSI?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun Jun 15 2008 - 14:18:28 MDT


Is there a model of recursive self improvement (RSI)? A model would be a simulated environment in which agents improve themselves in terms of intelligence or some appropriate measure. This would exclude genetic algorithms, i.e., agents making random changes to themselves or to copies, followed by selection by an external fitness function not of the agents' choosing. It would also exclude simulations in which agents receive external information on how to improve themselves. They have to figure it out for themselves.
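
One way to pin down these requirements (a hypothetical sketch in Python; the interface and names are mine, not an existing system):

    class Agent:
        """What a model of RSI would have to supply."""

        def self_test(self):
            """A measure of this agent's intelligence, chosen and
            computed by the agent itself, not by an external
            fitness function."""
            raise NotImplementedError

        def rewrite(self):
            """Return an improved successor. The improvement must
            be worked out by the agent, not copied from outside and
            not left to blind mutation plus external selection."""
            raise NotImplementedError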

The premise of the singularity is that humans will soon reach the point where we can enhance our own intelligence or build machines more intelligent than we are. For example, we could genetically engineer humans for bigger brains, faster neurons, more synapses, etc. Alternatively, we could upload our minds to computers, then upgrade them with more memory, more and faster processors, more I/O, and more efficient software. Or we could simply build intelligent machines or robots that would do the same.

Arguments in favor of RSI:
- Humans can improve themselves by going to school, practicing skills, reading, etc. (arguably not RSI).
- Moore's Law predicts computers will have as much computing power as human brains in a few decades, or sooner if we figure out more efficient algorithms for AI.
- Increasing machine intelligence should be a straightforward hardware upgrade.
- Evolution produced human brains capable of learning 10^9 bits of knowledge (stored using 10^15 synapses) from only 10^7 bits of genetic information. Since the program that builds the brain is so much smaller than what the brain can learn, we are not cognitively limited from understanding our own code (see the arithmetic sketch after this list).
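
The arithmetic behind the last point (a back-of-envelope sketch in Python; the constants are just the rough figures quoted above):

    # Order-of-magnitude figures from the argument above.
    knowledge_bits = 1e9    # what a human brain can learn
    synapses       = 1e15   # physical storage for that knowledge
    genome_bits    = 1e7    # genetic description of the brain

    # The "source code" (genome) is ~100x smaller than what the
    # brain can learn, so learning our own design is within our
    # learning capacity.
    print(knowledge_bits / genome_bits)   # 100.0

    # Each learned bit is spread over ~10^6 synapses.
    print(synapses / knowledge_bits)      # 1000000.0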

Arguments against RSI:
- A Turing machine cannot output a machine of greater algorithmic complexity than its own (made precise in the note after this list).
- If an agent could reliably produce or test a more intelligent agent, it would already be that smart.
- We do not know how to test for IQs above 200.
- There are currently no known non-evolutionary models of RSI in humans, animals, machines, or software (AFAIK; hence this question).
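
To make the first argument precise (my formalization in standard Kolmogorov complexity notation, not from the original post): if a program q, run on a fixed universal Turing machine with no input, prints a description y of another machine, then

    K(y) <= |q| + c

where K is Kolmogorov complexity, |q| is the length of q, and c is a constant depending only on the universal machine. So whatever an agent writes is bounded in algorithmic complexity by the agent's own description length, up to a constant.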

If RSI is possible, then we should be able to model simple environments with agents (of less than human intelligence) that could self-improve, up to the computational limits of the model, without relying on an external intelligence test or fitness function. The agents must figure out for themselves how to improve their intelligence. How could this be done? We already have genetic algorithms running in simulated environments that are much simpler than biology. Perhaps agents could modify their own code in some simplified or abstract language of the designer's choosing. If no such model exists, then why should we believe that humans are on the threshold of RSI?
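
For concreteness, here is one shape such a model might take, in the spirit of the interface sketched earlier (a hypothetical toy in Python; the design, including the agent's self-chosen test, is my assumption about one possible setup):

    import random

    def make_predictor(step):
        # The agent's entire "code" is reduced to one number, step.
        def predict(x):
            return x + step
        return predict

    def self_test(predict, history):
        # The agent's own measure of intelligence: prediction error
        # on observations it has collected itself. No external judge.
        return sum(abs(predict(a) - b)
                   for a, b in zip(history, history[1:]))

    # A simple environment: a sequence with hidden structure (+3).
    history = [3 * i for i in range(20)]

    step = 1
    predictor = make_predictor(step)
    for _ in range(100):
        # The agent edits its own code. In this toy the edit is a
        # blind +/-1 tweak, which is the weak point: a real model of
        # RSI would need the agent to reason about which edit helps.
        new_step = step + random.choice((-1, 1))
        candidate = make_predictor(new_step)
        # It keeps the edit only if its own test says it improved.
        if self_test(candidate, history) < self_test(predictor, history):
            step, predictor = new_step, candidate

    print(step)  # settles at 3: the agent has modeled the environment

The toy satisfies the "no external fitness function" requirement, since the test is the agent's own, but it still proposes changes at random rather than deliberately, which is exactly the gap the question above is about.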

-- Matt Mahoney, matmahoney@yahoo.com


