Re: [sl4] Is there a model for RSI?

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Jun 20 2008 - 15:49:19 MDT


--- On Fri, 6/20/08, Stuart Armstrong <dragondreaming@googlemail.com> wrote:

> > Also, most of the tests for above-human intelligence
> > mentioned earlier, like winning an election or producing a
> > blockbuster movie, require judgment by a large group of
> > people (voters or moviegoers), which is collectively more
> > intelligent than any individual. How do you test for
> > intelligence greater than the collective intelligence of
> > all humanity?
>
> What about the ones that do not require human intelligence
> to test them - being the first to build a copy of Manhattan
> on the moon of Uranus,

How would you automate such a test without at least human-level intelligence? Someone has to look through a telescope and recognize Manhattan, and someone has to build a telescope powerful enough to see the buildings.

Also, it is a test of collective intelligence. Passing it depends on the ability of others to build a rocket and the necessary equipment. A test of *individual* intelligence would be to build a spear out of sticks and rocks and kill an antelope, having been raised in a tribe that has never hunted. Most people would fail this test.

> or assembling a living copy of a certain human being from
> inert materials?

How much intelligence does it take to walk into a clone-o-matic center, insert a DNA sample, and push the "copy" button? Whose intelligence are we testing?

> I'm sceptical of using any one problem as a test (especially the
> mathematical ones). By "intelligence", we refer to a wide variety of
> abilities, not just mathematical skill; we would need several tests to
> capture what we mean by intelligence.

Agree, but for a simple model we need a simple definition of intelligence. If we can't solve the simple case, I doubt that adding complexity will help, so much as it will hide the fact that our solution won't work.

>
> > There are no provably hard problems.
>
> There is another issue: there may be problems where superior
> intelligence cannot produce a better result. If we were to take
> playing tic-tac-toe as a test, this would not help, as tic-tac-toe
> is fully solved, even for us. Maybe some of these suggested tests
> are of the same nature: any entity with an "IQ" of 400 can fully
> solve them as fast as it can be done, so increases in "IQ" don't
> help (and it may be true, but unprovable, that no improvements
> exist).

This is also a problem for chess, as I mentioned.
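
To make Stuart's point concrete, here is a toy Python sketch (my own
illustration, nothing from the thread): exhaustive minimax already
plays tic-tac-toe perfectly, and perfect play from the empty board is
a draw, so an entity with an "IQ" of 400 cannot score any better on
this test than a few lines of brute force.

  def winner(board):
      # Return 'X' or 'O' if that player has three in a row, else None.
      lines = [(0,1,2), (3,4,5), (6,7,8),   # rows
               (0,3,6), (1,4,7), (2,5,8),   # columns
               (0,4,8), (2,4,6)]            # diagonals
      for a, b, c in lines:
          if board[a] != ' ' and board[a] == board[b] == board[c]:
              return board[a]
      return None

  def minimax(board, player):
      # Game value under perfect play: +1 X wins, -1 O wins, 0 draw.
      w = winner(board)
      if w == 'X': return 1
      if w == 'O': return -1
      if ' ' not in board: return 0   # board full: draw
      values = []
      for i in range(9):
          if board[i] == ' ':
              board[i] = player
              values.append(minimax(board, 'O' if player == 'X' else 'X'))
              board[i] = ' '
      return max(values) if player == 'X' else min(values)

  print(minimax([' '] * 9, 'X'))   # prints 0: perfect play is a draw

Any test with this property puts a hard ceiling on the intelligence it
can measure.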

One approach (dangerous, IMHO) is that whenever a problem is solved, the agent chooses a harder one. It is dangerous because you no longer have a stable goal to pass on to offspring. How do you code "harder"?
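
For what it's worth, here is a deliberately naive sketch (all names
hypothetical) of that escalation loop. Notice that "harder" collapses
into a bare numeric knob and an arbitrary increment rule, and it is
that rule, not the original goal, that gets handed to the next
generation:

  import random

  def make_problem(difficulty):
      # Stand-in task: a random number with `difficulty` digits to factor.
      return random.randrange(10 ** (difficulty - 1), 10 ** difficulty)

  def solve(n):
      # Stand-in solver: trial division; finds a factor or reports n prime.
      for d in range(2, int(n ** 0.5) + 1):
          if n % d == 0:
              return d
      return n

  difficulty = 2
  for generation in range(5):
      solve(make_problem(difficulty))   # problem "solved", so...
      difficulty += 1                   # ...pick a harder one. But why +1?
                                        # Any such rule is an unprincipled
                                        # rewrite of the goal itself.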

-- Matt Mahoney, matmahoney@yahoo.com


