Re: [sl4] Starglider's Mini-FAQ on Artificial Intelligence

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Wed Oct 07 2009 - 19:58:29 MDT


From: "wil.pearson@gmail.com" <wil.pearson@gmail.com>

> Thanks for sharing. I think the only thing I would disagree with is the idea of connectionist AIs easily escaping. We are an example of an AI that can't easily escape onto the internet. If the AI is built on an architecture where there are different security domains (so no part knows all the source code), *and* it is a messy design with no over-arching theme to copy, then I don't see why it shouldn't have as hard a time escaping as we do.

I disagree. AI will *be* the internet, because that is the cheapest way to build it. For an AI to be friendly, it has to at least know what you know so that it can predict what you will want. Most of what you know is not on the internet, so it has to be communicated to the AI. A human knows about 10^9 bits (Landauer's estimate of human long-term memory) but can only communicate a few bits per second. An AI could guess 90% to 99% of what you know, because that knowledge is shared by others; but that guessing is only possible if the AI is connected to many people. And the cheapest way to collect the remaining 1% to 10% is to monitor your communication and actions, which requires internet access, because that's where you do most of your communicating.
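To see how lopsided that asymmetry is, here is a back-of-envelope sketch in Python (the speaking rate and memory size are the figures above; the 5 characters per word average is my assumption):

  MEMORY_BITS = 1e9     # Landauer's estimate of long-term memory
  WPM = 150             # speaking rate, words per minute
  CHARS_PER_WORD = 5    # assumed English average
  BITS_PER_CHAR = 1     # entropy of text after compression

  bits_per_second = WPM / 60.0 * CHARS_PER_WORD * BITS_PER_CHAR
  years = MEMORY_BITS / bits_per_second / (86400 * 365)
  print("%.1f bits/s, %.1f years of nonstop speech" % (bits_per_second, years))
  # 12.5 bits/s, 2.5 years

Two and a half years of continuous talking per person is why the AI has to guess most of what you know from knowledge it already collected from everyone else.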

And when I say "cheap" I mean on the order of US $100 trillion to $1 quadrillion. That is roughly what it would cost to collect 10^17 to 10^18 bits of knowledge from 10^10 human brains at 150 words per minute, 1 bit per character after compression, and a global average wage of $5 per hour. At least until we develop nanoscale brain scanners.
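For anyone who wants to check the arithmetic, here is a sketch using the figures above (5 characters per word is again my assumption; the totals land within an order of magnitude of the quoted range):

  PEOPLE = 1e10
  BITS_PER_PERSON = 1e9              # Landauer
  BITS_PER_HOUR = 150 * 60 * 5 * 1   # 150 wpm * 5 chars/word * 1 bit/char = 45,000
  WAGE = 5.0                         # USD per hour, global average

  for unique in (0.01, 0.10, 1.0):   # fraction of knowledge not shared
      bits = PEOPLE * BITS_PER_PERSON * unique
      cost = bits / BITS_PER_HOUR * WAGE
      print("%4.0f%% unique: %.0e bits, $%.1e" % (unique * 100, bits, cost))
  #   1% unique: 1e+17 bits, $1.1e+13   (~$11 trillion)
  #  10% unique: 1e+18 bits, $1.1e+14   (~$111 trillion)
  # 100% unique: 1e+19 bits, $1.1e+15   (~$1.1 quadrillion)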

$1 quadrillion is possible; it's about half the world GDP for 30 years. But it does rule out more expensive approaches, such as tutoring the AI or hand-coding human knowledge. It will take about that long anyway, because right now only 25% of the world is on the internet, and that is increasing about 4% per year.
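Two quick sanity checks on those last figures (the ~$60 trillion world GDP is my assumption for 2009; I read the 4% as relative growth of the online share):

  import math

  COST = 1e15                       # USD
  WORLD_GDP = 60e12                 # USD per year, assumed ~2009 value
  print(COST / (WORLD_GDP * 30))    # 0.56: about half of GDP for 30 years

  years = math.log(1.0 / 0.25) / math.log(1.04)
  print(years)                      # ~35 years until everyone is online
  # read instead as +4 percentage points/year: (1 - 0.25) / 0.04 = ~19 years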

An AI isolated from the internet would be *more* dangerous, for the simple reason that it would know less about people and people would know less about it. And that's assuming such isolation is even possible. And don't get me started on RSI (recursive self-improvement) voodoo. It is humanity, not a single human, that creates AI, so humanity is the threshold you need to cross; anything less is gray goo. An AI can't understand its own source code (Wolpert's theorem), so any improvement has to come from learning and hardware.

 -- Matt Mahoney, matmahoney@yahoo.com


