Re: Cost of AI (was Re: [sl4] FAI development within academia.)

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Feb 27 2009 - 08:00:57 MST


--- On Fri, 2/27/09, Stuart Armstrong <dragondreaming@googlemail.com> wrote:

> Your section Text Compression is Equivalent to AI makes compression
> similar to a GLUT, which is a fantastically inefficient method of
> creating an AI.

And so is AIXI^tl. A GLUT and AIXI^tl just sit at opposite ends of the speed-memory trade-off; what I am doing is looking at the practical range in between. The results I have collected so far are shown in the two graphs below the main table in http://cs.fit.edu/~mmahoney/compression/text.html

These are suggestive of the hardware requirements for AI. The top compressors using 2-4 GB of memory model language roughly at the level of a 2 or 3 year old child, in that the models have lexical and some semantic knowledge, and very rudimentary grammar (at the 2-gram and 3-gram level). I think an adult-level model is possible with about 10x more memory. These programs already run much faster than real time: they process in hours an amount of language that a human would take 10-20 years to encounter.
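The link between n-gram modeling and compression can be sketched concretely: a model's cross-entropy over text is the size, in bits per character, that an arithmetic coder driven by that model would achieve, so better prediction means better compression. Below is a minimal toy illustration of an order-2 (3-gram) character model with add-one smoothing; the sample text and the scoring scheme are my own assumptions for illustration, not the benchmark's actual code.

```python
import math
from collections import defaultdict

def bits_per_char(text, order=2):
    """Estimate compressibility of `text` in bits/char using an
    order-`order` character model with add-one smoothing.
    The model's cross-entropy approximates the output size of an
    arithmetic coder driven by the same predictions."""
    counts = defaultdict(lambda: defaultdict(int))
    alphabet = set(text)
    # Collect context -> next-character counts.
    for i in range(len(text)):
        ctx = text[max(0, i - order):i]
        counts[ctx][text[i]] += 1
    total_bits = 0.0
    # Score each character under the collected statistics
    # (a real compressor would update its counts adaptively).
    for i in range(len(text)):
        ctx = text[max(0, i - order):i]
        c = counts[ctx]
        p = (c[text[i]] + 1) / (sum(c.values()) + len(alphabet))
        total_bits += -math.log2(p)
    return total_bits / len(text)

sample = "the cat sat on the mat and the cat sat on the hat " * 20
print(round(bits_per_char(sample, order=2), 2))
```

On repetitive text like the sample, the estimate comes out far below log2 of the alphabet size, which is the cost of coding characters with no model at all; richer models (semantics, grammar) push the number lower still, which is why compression ratio tracks modeling ability.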

Compression studies don't address the cost of knowledge, which I think will be the major obstacle to AI after hardware gets cheaper. But just because nobody has built AI doesn't mean we can't ask how much it will cost. We can estimate the cost of major projects like the space station and land within an order of magnitude of the actual figure about half the time. With AI, we have estimates that differ by 10 orders of magnitude. We can do better than that.

-- Matt Mahoney, matmahoney@yahoo.com



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT