From: Matt Mahoney (matmahoney@yahoo.com)
Date: Tue Jun 17 2008 - 11:55:12 MDT
--- On Mon, 6/16/08, Stathis Papaioannou <stathisp@gmail.com> wrote:
> Might it not be that RSI is impossible below a certain threshold of
> intelligence, as seems to be the case for many human accomplishments?
It is the lack of even a mathematical model of recursive self-improvement (RSI) that concerns me. We don't know if RSI is possible at all.
Humans today are not significantly more intelligent than humans were 10,000 years ago. Our brains are the same size. What has changed is that language, writing, telecommunication, computers, and population growth have made us better organized. Instead of 10^5 brains, each with 10^9 bits of knowledge, operating independently, we have a group of 10^10 brains with 10^18 bits of collective knowledge (assuming some information sharing to allow communication).
Without language and culture, I would probably never figure out how to make spears out of sticks and rocks. If I created an AI with 10^10 bits of knowledge, that would not be RSI, because I would be using 10^18 bits of collective knowledge to do it.
RSI would be creating an agent with 10^19 bits of knowledge: for example, a more productive world economy, or a bigger, faster, smarter internet with more sensors, effectors, and storage, with machines doing the work.
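To make the arithmetic explicit (a back-of-envelope sketch only; the 10% unique-knowledge share is my own assumption, chosen so the totals come out to 10^18 and 10^19 bits):

    # Back-of-envelope knowledge estimates (orders of magnitude only).
    brains = 10**10          # people alive and networked today
    bits_per_brain = 10**9   # rough knowledge stored per human brain
    unique_share = 10        # assumption: ~1 bit in 10 is not duplicated elsewhere

    # An isolated prehistoric brain could draw on only its own 10^9 bits.
    collective = brains * bits_per_brain // unique_share   # ~1e18 bits available to each of us
    rsi_target = 10 * collective                            # an agent with ~1e19 bits

    print(f"collective: {collective:.0e} bits, RSI target: {rsi_target:.0e} bits")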
My question is not whether such a thing is possible (I think it is), but whether a *non-evolutionary* RSI is possible. The crucial difference is whether the parent chooses the fitness function (e.g. intelligence), or the environment chooses.
For example, suppose we bred mice for intelligence. We could give mice tests such as running mazes, understanding words, and solving math problems, and breed the best performers. This is non-evolutionary (in my intended sense) because we control the fitness function. But once the mice achieve human intelligence, we reach a dead end. How do you distinguish an IQ of 1000 from an IQ of 2000? Who is going to do the testing? We have the Turing test for human-level intelligence, but nothing for superhuman intelligence.
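A toy selection loop illustrates the problem (this is only an illustrative sketch; the names and the ceiling value are made up): the breeder supplies the fitness function, so selection pressure disappears as soon as ability exceeds what the breeder's tests can measure.

    import random

    TESTER_CEILING = 100.0   # hypothetical limit of the tests we know how to design

    def fitness(true_ability):
        # The test cannot distinguish ability beyond the tester's own ceiling.
        return min(true_ability, TESTER_CEILING)

    def breed(a, b):
        # Child ability: parental average plus some random variation.
        return (a + b) / 2 + random.gauss(0, 1)

    population = [random.uniform(0, 10) for _ in range(100)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        best = population[:20]
        population = [breed(random.choice(best), random.choice(best))
                      for _ in range(100)]

    # Once most of the population is above the ceiling, an ability of 1000
    # and an ability of 2000 both score 100, and selection stalls.
    print(max(population), fitness(max(population)))

Below the ceiling the average ability climbs each generation; above it, the population just drifts.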
We could use genetic engineering to create mice with bigger brains or faster neurons. We could do the same to humans. We could upload our own minds and add processors, memory, and bandwidth. We could build AI and do the same. But none of these approaches solves the testing problem.
It is clear that there is a path to greater intelligence, namely evolution, which produced human brains from simple chemicals. I have no doubt that the internet will continue to grow as we add people and computers to it. As it gets bigger, it seems to get more useful, at least as long as most of the computation is carbon-based. But what controls it?
-- Matt Mahoney, matmahoney@yahoo.com