Re: [sl4] Unlikely singularity?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun Aug 10 2008 - 15:42:07 MDT


Joseph Henry <josephjah@gmail.com> wrote:

>Matt, I agree. A robust RSI model should be developed before we allow one of these supposedly
>super-intelligent agents to run rampant. Or perhaps, we shall employ their help in designing it?

We must first define what "improvement" (the I in RSI) means before we can settle whether RSI is even possible. If we use Legg's definition of universal intelligence ( http://www.vetta.org/documents/ui_benelearn.pdf ), then RSI is clearly impossible, because the test for intelligence is not computable: no agent could ever verify that a proposed successor really is more intelligent. Nor can we use the Turing test, because it applies only to human-level intelligence, not to higher levels.
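For reference, here is the measure as I read it from the paper: an agent pi's universal intelligence is its expected reward V summed over every computable environment mu, weighted by each environment's Kolmogorov complexity K,

  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

K is not computable, so neither is \Upsilon, and no program could ever score its candidate successor on it.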

This leaves problem-solving tests in restricted environments. I can write a program that beats me at chess, but this is not RSI, for two reasons. First, the chess-playing program can't write a better chess-playing program; it is not recursive. Second, and more important, it is not an improvement, because I did not write the compiler for my program, or design the language, or develop a theory of digital computation, or build the infrastructure that made computer manufacturing and electricity possible. The proper measure of improvement is to compare against all of humanity, not just one human. Without language and culture, most of us could not figure out how to make spears out of sticks and rocks.
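To make the recursion requirement concrete, here is a toy sketch in Python (every name in it is made up, and STRENGTH is a trivial stand-in for something like measured playing strength): each generation is a program that must emit its successor's source, and a test fixed in advance must score the child above the parent.

  # Toy schema only: a "program" is Python source defining STRENGTH
  # (its score on the fixed test) and next_source(), which returns
  # the source of its successor.
  TEMPLATE = ("STRENGTH = {s}\n"
              "def next_source():\n"
              "    return TEMPLATE.format(s={s} + 1)\n")

  def load(source):
      env = {"TEMPLATE": TEMPLATE}
      exec(source, env)
      return env

  parent = TEMPLATE.format(s=1)
  for generation in range(5):
      child = load(parent)["next_source"]()
      # The fixed test: each child must strictly beat its parent.
      assert load(child)["STRENGTH"] > load(parent)["STRENGTH"]
      parent = child

Of course, here "improvement" is declared rather than demonstrated; a real test would have to measure something like playing strength, which leads to the next problem.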

Even if we knew how to write chess programs that could write better chess programs, there are still two problems. First, there are no known provably hard problems that are easy to check. In the case of chess, after a finite number of generations the optimal strategy will be found, after which no further improvement is possible. We could use a scalable problem instead, such as factoring increasingly large numbers, decrypting messages with increasingly large keys, or solving NP-complete problems for increasing n. But there is no proof that any of these problems is hard for all n. We don't know that there is no fast algorithm for factoring. There are no known cryptographic systems (where decryption can be verified) that are provably secure. We can't prove P != NP, and therefore we can't identify a provably hard subset of NP-complete problems. Such a proof matters because the intelligence test has to be fixed for all generations. You cannot allow children to choose tests unknown to their parents, or else they could always claim to be intelligent. You have to get it right in the first generation.
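A factoring test at least has the property we need for a fixed, scalable benchmark: checking an answer is one multiplication, while finding it is presumably (not provably) hard. A minimal sketch in Python, with toy key sizes and made-up names:

  import random

  def is_probable_prime(n, rounds=20):
      # Miller-Rabin primality test; probabilistic, which is fine
      # for generating test instances.
      if n < 2:
          return False
      for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
          if n % p == 0:
              return n == p
      d, r = n - 1, 0
      while d % 2 == 0:
          d, r = d // 2, r + 1
      for _ in range(rounds):
          x = pow(random.randrange(2, n - 1), d, n)
          if x in (1, n - 1):
              continue
          for _ in range(r - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False
      return True

  def make_instance(bits):
      # Publish n = p*q; recovering p and q is the test.
      def rand_prime(b):
          while True:
              c = random.getrandbits(b) | (1 << (b - 1)) | 1
              if is_probable_prime(c):
                  return c
      return rand_prime(bits // 2) * rand_prime(bits // 2)

  def check(n, p, q):
      # Verification is trivial; this is what lets every generation
      # be judged by the same test without a trusted referee.
      return 1 < p < n and 1 < q < n and p * q == n

Each generation gets credit for the largest n it can factor. But, as above, nothing rules out a descendant discovering a fast factoring algorithm and trivializing the whole ladder.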

The second problem is more serious. We think we know what we want: it seems obvious that adding more memory, more CPU power, more bandwidth, and more I/O to the human brain would make us smarter, and that this is therefore desirable. Sorry, I disagree. The human brain has been programmed through evolution to maximize a utility function that roughly correlates with reproductive fitness. An enhanced brain would not be so constrained. I don't need to describe again the dangers of simulating worlds with magic genies, or of directly simulating reward signals. Suffice it to say that any goal-seeking agent has an optimal mental state, such that any thought or perception would be unpleasant because it would result in a different state. We are not smarter than evolution, and we are not smart enough to know what smarter-than-human intelligence is.
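To put that last claim a bit more formally (a sketch, assuming the agent's utility u over mental states has a unique maximum):

  s^* = \arg\max_s u(s)  \implies  u(s) < u(s^*) for all s \ne s^*

so once the agent reaches s^*, any thought or perception, because it changes the state, can only lower utility.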

 -- Matt Mahoney, matmahoney@yahoo.com


