Re: [sl4] FAI development within academia.

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Feb 24 2009 - 17:54:17 MST


Allow me to summarize the premises and conclusions of this argument:

1) Markets are absolutely, perfectly, exactly efficient over all
domains and between all times.

2) AI would make a quadrillion-dollar profit.

Conclusion:

3) AI can be obtained at any time by, and only by, spending a
quadrillion dollars.

It doesn't lack for audacity, but I'm afraid it somewhat lacks for sanity.

On Tue, Feb 24, 2009 at 4:40 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
> --- On Mon, 2/23/09, J. Andrew Rogers <andrew@ceruleansystems.com> wrote:
>
>> On Feb 23, 2009, at 6:37 AM, Matt Mahoney wrote:
>> > --- On Sun, 2/22/09, Roko Mijic
>> <rmijic@googlemail.com> wrote:
>> >
>> >> One way to hasten the development of FAI is for me to seek to do
>> >> research within academia. A disadvantage of this strategy is that
>> >> academia is an open community, and anyone can potentially look at the
>> >> results that the field is producing and use them to create uFAI.
>> >
>> > Unlikely. Nobody can build AI, much less FAI or uFAI.
>> > All the top people in the field like Yudkowsky, Minsky, and
>> > Kurzweil have realized the problem is too hard by
>> > themselves, so they are not actually writing any software.
>> > It has to be a global effort.
>>
>>
>> Dripping with non sequitur and dubious assertion, FTW.
>>
>> The assertion that nobody can build AI is very weak
>> conjecture; there is legitimate argument whether some or all
>> of the people you list are "top people in the
>> field"; there are plenty of other top people in the
>> field that you failed to list that contradict the argument
>> you are trying to make, raising the question of sample
>> selection bias; it is not obvious, at least to me, that the
>> people you list "realized the problem is too hard by
>> themselves" in any case;  nor does it follow that it
>> has to be a global effort; worse, given that we accept your
>> first assertion, it does not follow by any reasonable
>> calculus that I can think of that a "global
>> effort" addresses that assertion.
>>
>> The quality of argumentation leaves a lot to be desired.
>
> So let me make my argument clear.
>
> First, we define what we are trying to build. There are two main goals of AI: first, to automate the economy (because we don't want to work), and second, to upload ourselves (because we don't want to die). The first problem is to build slaves (or if you prefer, servers) that understand language, vision, and human behavior, and that know a lot about individual people such as their customers and owners. The second problem is to create programs that simulate specific people, which requires solving very similar problems.
>
> The usual approach of individual researchers and small groups is to attempt to build the equivalent of one human brain that is slightly smarter than the inventor. Then that brain (or lots of copies of it) could (in theory) produce an improved version of itself, and so on, launching a fast takeoff singularity.
>
> But that will not work, for two reasons. First, unless you made your computer out of dirt, you did not build AI by yourself. Being smarter does not give you any greater capability to build AI, any more than an aeronautical engineer sent 500 years back in time could build an airplane.
>
> Second, you need to define "smart". What does it mean to be twice as smart as the average human? It is not as obvious as you might think. Humans are notoriously bad at recognizing genius, as demonstrated by the persecution of Socrates, Galileo, and Turing. Even today we award Nobel prizes for work done decades earlier, after the rest of the world has caught up.
>
> Here are some possible definitions of "twice as smart":
>
> - Able to solve problems twice as fast.
> - Able to learn twice as fast.
> - Able to remember twice as much.
> - Able to make twice as much money.
>
> All of which suggest that "twice as smart" amounts to twice as many people. And groups do tend to make better decisions than individuals. When a contestant on "Who Wants to Be a Millionaire?" asks the audience, the majority almost always comes up with the right answer. Countries run by power-sharing groups tend to be nicer places to live than countries ruled by absolute dictators.
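>
> A toy simulation makes the "ask the audience" point concrete (a
> hypothetical sketch in Python; the 60% per-voter accuracy and the
> assumption of independent voters are mine, not measured figures):
>
>   import random
>
>   def majority_vote(n_voters, p_correct, trials=10000):
>       # Fraction of trials in which a majority of independent voters,
>       # each correct with probability p_correct, picks the right answer.
>       wins = 0
>       for _ in range(trials):
>           right = sum(random.random() < p_correct for _ in range(n_voters))
>           if right > n_voters / 2:
>               wins += 1
>       return wins / trials
>
>   print(majority_vote(1, 0.6))    # one voter: right ~60% of the time
>   print(majority_vote(101, 0.6))  # 101 voters: right ~98% of the time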
>
> But once again, we are unable to recognize the superior intelligence of groups. This goes beyond ego: if every member of a group always agreed with the majority, the group could not possibly be smarter than any one member. A group is only smarter than its members when members sometimes disagree with it, which means no member can reliably tell when the majority is right. We should expect this property to hold not just for humans, but for AI at any level of intelligence.
>
> Making one human-brain equivalent and then lots of identical copies does not lead to greater intelligence. If we make the copies different through custom training, then the training cost has to be counted in the recursive self improvement (RSI) equation. If you build a single AI twice as smart as a human, you get the same effect as hiring two humans. For RSI to work, you have to be able to build AI at less cost than hiring people, and the (exponential) rate of growth would be comparable to that of a company with access to cheap labor.
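>
> To put numbers on that last point, here is a toy growth model (all
> figures hypothetical; the point is only that the growth rate depends
> on the cost ratio, not on any definition of "smart"):
>
>   def fleet_size(initial, wage, build_cost, years):
>       # Each AI worker earns one human wage per year, and all
>       # earnings are reinvested in more AI workers at build_cost each.
>       n = float(initial)
>       for _ in range(years):
>           n += n * wage / build_cost
>       return n
>
>   # An AI costing 10 human-years of wages grows the fleet ~10%/year,
>   # no faster than an ordinary company hiring cheap labor.
>   print(fleet_size(1000, wage=5e4, build_cost=5e5, years=10))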
>
> Here is my cost estimate for human-equivalent AI:
>
> - Computing power: 10^17 operations per second
> - Memory: 10^15 bits
> - I/O: 10^9 bits per second
> - Knowledge: 10^9 bits
>
> Computing power and memory assume a brain-sized neural network with 10^15 synapses at 10 ms resolution. I/O could be reduced to 10^7 bps if you exclude low-level processing in the retina and cochlea. Knowledge is based on Landauer's estimate of human long-term memory, i.e. our ability to recall words and pictures; it excludes procedural memory, such as knowing how to see or walk.
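>
> The arithmetic behind those numbers, for anyone who wants to vary
> the assumptions (a sketch; the synapse count and timing are the
> usual order-of-magnitude estimates):
>
>   SYNAPSES = 1e15      # rough synapse count of one human brain
>   UPDATE_HZ = 100      # 10 ms resolution -> 100 updates per second
>
>   ops_per_sec = SYNAPSES * UPDATE_HZ   # 1e17 operations per second
>   memory_bits = SYNAPSES               # ~1 bit of state per synapse
>   print("%.0e ops/s, %.0e bits" % (ops_per_sec, memory_bits))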
>
> Using Moore's Law, we can project when CPU, memory, and I/O costs will be competitive with human labor. The exact numbers are not important: an order-of-magnitude error only shifts the answer by a few years. The threshold is not far off, and may already have been crossed.
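>
> To see why the exact numbers matter so little (a sketch, assuming
> price-performance doubles every 1.5 years or so):
>
>   import math
>
>   doubling_years = 1.5                     # assumed Moore's Law period
>   shift = math.log2(10) * doubling_years   # years per 10x cost error
>   print(shift)                             # ~5 years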
>
> The problem is knowledge. Software cost is not subject to Moore's Law. The knowledge needed to run the economy is stored in 4 x 10^9 human brains. We can estimate the overlap between brains from the cost of replacing an employee, which can be a year's salary and is rising as jobs become more specialized. We may reasonably assume 90% to 99% overlap, which leaves on the order of 10^17 bits of unique knowledge. The problem is that the public internet holds only about 10^14 bits. The rest has to be extracted from human brains at about 2 bits per second, at an average worldwide labor cost of US $5 per hour, or about $1 per 1000 bits.
>
> The global economy has a value of about $1 quadrillion (world GDP divided by market interest rates). Knowledge extraction will cost $100 trillion if we can identify in advance where the overlap is and extract only what we don't already know; if we can't, it will cost $4 quadrillion. That is economically feasible, just not for any small group or in a short period of time.
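>
> The whole calculation in one place (a sketch; every constant is the
> estimate given above):
>
>   BITS_PER_BRAIN = 1e9      # Landauer's estimate of long-term memory
>   BRAINS = 4e9              # world population
>   UNIQUE_FRACTION = 0.025   # assuming 90% to 99% overlap
>   RATE_BPS = 2              # extraction rate, bits per second
>   WAGE = 5.0                # average worldwide labor, dollars per hour
>
>   dollars_per_bit = WAGE / (RATE_BPS * 3600)      # ~$1 per 1000 bits
>   unique_bits = BRAINS * BITS_PER_BRAIN * UNIQUE_FRACTION   # ~1e17
>   print(dollars_per_bit * unique_bits)            # ~$7e13: the "$100 trillion" estimate
>   print(dollars_per_bit * BRAINS * BITS_PER_BRAIN)  # ~$3e15: the "$4 quadrillion" order of magnitude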
>
> The cost of knowledge extraction presents a huge economic incentive to build a system of pervasive public surveillance, where everything you say and do is public knowledge. Note also that this solves the problem of uploading without the need for brain scanning.
>
> RSI does not require smarter-than-human intelligence. It requires cheaper-than-human intelligence. That requires a very expensive infrastructure to bring the cost down. That is what I mean by a global effort.
>
> -- Matt Mahoney, matmahoney@yahoo.com

-- 
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

