Re: [sl4] to-do list for strong, nice AI

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun Oct 18 2009 - 18:07:29 MDT


From: Pavitra <celestialcognition@gmail.com>

>> I define "best" to mean the result that an ideal secrecy-free market
>> would produce. Do you have a better definition, for some definition
>> of "better definition"?

> Yes: the result that an ideal secrecy-free dictatorship run by me would
> produce. (Secrecy-free in this context means that there are no secrets
> from the dictator. I can choose what to divulge to which of the rest of
> the citizens.)

No problem. The AGI puts your brain in a simulator where you can be dictator.

>>>> One human knows 10^9 bits (Landauer's estimate of human long term
>>>> memory). 10^10 humans know 10^17 to 10^18 bits, allowing for
>>>> some overlapping knowledge.
 
>>> Again, where are you obtaining your estimates of
>>> degree-of-compressibility?
 
>> The U.S. Dept. of Labor estimates it costs on average $15K to replace
>> an employee. This is about 4 months of U.S. per capita income, or
>> 0.5% of life expectancy. This means that, on average, nobody else
>> knows more than 99.5% of what you need to know to do your job. It is
>> reasonable to assume that as the economy grows and machines take over
>> more of our mundane tasks, jobs will become more specialized and the
>> fraction of shared knowledge will decrease. It is already the case
>> that higher paying jobs cost more to refill, e.g. 1-2 years of income.
 
>> Turnover cost is relevant because the primary function of AI will be
>> to make humans more productive, at least initially. Our interest is
>> in the cost of work-related knowledge.

> This feels wrong in several ways.

> Why is redundancy in employment utility a good indicator of redundancy
> in the aspects of human experience that we will care about preserving
> through the Singularity?

People learn at a fairly constant rate. If you spend 1/3 of your life at work, then roughly 1/3 of what you know is work related. The cost of replacing you is the cost of a new employee re-learning the information that is found in your brain and nowhere else. An AGI can guess most of what you know from what other people know, so the critical quantity is how much you know that nobody else does. Multiply that by the world's population, add the knowledge we all share, and you have the number of bits needed to model all of the world's brains.
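
Here is a rough back-of-envelope sketch, in Python, of how those numbers fit together. The constants are just the round figures quoted above (Landauer's 10^9 bits, 10^10 people, 0.5% to 2% of a lifetime for turnover), so treat the output as an order-of-magnitude guess, not a measurement:

    # Order-of-magnitude arithmetic behind the estimates above.
    BITS_PER_PERSON = 1e9   # Landauer's estimate of human long term memory
    POPULATION = 1e10       # round figure for the number of human brains

    # Fraction of a lifetime's knowledge unique to one person, taken from
    # turnover cost: ~4 months (0.5%) up to ~1.5 years (2%) of an ~80 year
    # life for higher paying jobs.
    for unique_fraction in (0.005, 0.02):
        unique_bits = unique_fraction * BITS_PER_PERSON * POPULATION
        print(f"{unique_fraction:.1%} unique -> ~{unique_bits:.0e} bits")
    # Prints ~5e+16 and ~2e+17 bits, i.e. roughly the 10^17 to 10^18
    # range quoted earlier.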

> Doesn't the variance in redundancy by job type imply that the
> cost-to-replace reflects the nature of the job more than it reflects the
> nature of the person?

Yes. As jobs become more specialized, the cost of automating all work goes up.

> Why will "the primary function of AI ... be to make humans more
> productive, at least initially"? Shouldn't the AI handle
> productivity/production more or less unilaterally, and make humans more
> happy/eudaimonic?

Because AGI is expensive and people want a return on their investment. So they will put their money into automating work and increasing productivity, which happens to require solving hard problems in language, vision, and modeling human behavior.

>> I agree. Turing was aware of the problem in 1950 when he gave an
>> example of a computer taking 30 seconds to give the wrong answer to
>> an arithmetic problem. I proposed text compression as one
>> alternative. http://mattmahoney.net/dc/rationale.html

> That seems like a pretty good definition, but I'm not convinced that a
> gigabyte of Wikipedia is _the best_ possible corpus. In particular,
> Wikipedia is very thin on fiction. I want AI to be able to grok the arts.

It's not the ideal corpus; that corpus doesn't exist yet. But compressing it is very similar to the problem we want to solve.
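
For a concrete feel for the measurement, here is a minimal sketch using off-the-shelf compressors from the Python standard library. The file name is a placeholder for any large plain-text sample, and zlib/bz2 are far weaker than the models used in serious benchmarks, but the number they produce (bits per character of human text) is the quantity the test is based on:

    # Minimal sketch: a compressor's output size on human-written text,
    # measured in bits per character. "corpus.txt" is a placeholder;
    # zlib and bz2 stand in for much stronger models.
    import bz2
    import zlib

    def bits_per_char(data: bytes, compress) -> float:
        return 8.0 * len(compress(data)) / len(data)

    with open("corpus.txt", "rb") as f:
        data = f.read()

    print("zlib:", round(bits_per_char(data, lambda d: zlib.compress(d, 9)), 2), "bits/char")
    print("bz2: ", round(bits_per_char(data, lambda d: bz2.compress(d, 9)), 2), "bits/char")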

>>>>> C->D[ ] Develop an automated comparison test that returns the
>>>>> more intelligent of two given systems.
 
>>>> How? The test giver has to know more than the test taker.
 
>>> Again, this seems more a criticism of C than of D.
 
>> It depends on what you mean by "intelligence". A more general
>> definition might be making more accurate predictions, or making them
>> faster. But it raises the question of how the evaluator can know the
>> correct answers unless it is more intelligent than the evaluated. If
>> your goal is to predict human behavior (a prerequisite for
>> friendliness), then humans have to do the testing.

> This still sounds like you're talking about step C.

> Step D says, "Assuming we already have a formal definition of
> intelligence, develop a computable comparison test for intelligence". I
> don't see why the comparison test requires greater-than-tested
> intelligence _in addition to_ whatever level of intelligence the formal
> definition created in C constitutes.

Suppose the test is text compression. The test works because the correct answer to the question "what is the next bit?" was decided by humans who are smarter than the best predictors we now have. When these predictors get as good as humans, what will you use for your test then?
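
To make the comparison in step D concrete, here is a sketch that scores two next-symbol predictors by the code length they assign to held-out human-written text and returns the better one. The two toy models and the file name are placeholders; the point is that the answer key is text humans already wrote:

    # Sketch of step D under the compression definition: whichever
    # predictor assigns fewer bits to held-out human text wins.
    import math
    from collections import Counter

    def code_length(predict, text: str) -> float:
        """Total bits = sum of -log2 P(next symbol | history)."""
        counts = Counter()
        bits = 0.0
        for i, ch in enumerate(text):
            bits += -math.log2(max(predict(counts, i, ch), 1e-12))
            counts[ch] += 1
        return bits

    def uniform(counts, n, ch):   # knows nothing: 256 equally likely symbols
        return 1.0 / 256

    def order0(counts, n, ch):    # adaptive symbol frequencies, Laplace smoothed
        return (counts[ch] + 1) / (n + 256)

    with open("heldout.txt", encoding="utf-8") as f:
        text = f.read()

    scores = {"uniform": code_length(uniform, text),
              "order0": code_length(order0, text)}
    print("winner:", min(scores, key=scores.get))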

> The incomputability of Kolmogorov complexity is likely to be the largest
> obstacle in step D.

No. It's that knowledge can't come from nowhere. If it were possible for an agent in a closed system to test proposed modifications of itself for intelligence, then it could recursively self-improve. But if intelligence means knowing more, then clearly that can't happen.

>> Nonhuman intelligence = human extinction.

> What about intelligence that is a proper superset of the human?

Now you're talking about superhuman intelligence, as in AI that knows what all humans know. Then it becomes a question of whether the AI "is" us. It's a purely philosophical question, because there is no test to tell whether a program that simulates you "is" you.

>> Define "Friendly" in 10^17 bits or less.

> Cheating answer: shares my value system.

What if the AI can reprogram your value system?

Suppose your value system rejects the idea of your value system being reprogrammed. But you also value being absolute dictator of the world. The AI has a model of your brain and knows this. It simulates a world where you can have everything you want. What do you think will happen to you? What happens to any reinforcement learner that receives only positive reinforcement no matter what it does?
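
If it helps to picture that last question, here is a toy sketch: a bandit-style learner (the action names are made up) whose reward is +1 no matter what it does. Every action value drifts to the same number, so its choices become arbitrary:

    # Toy illustration: a learner rewarded +1 regardless of action.
    import random

    actions = ["work", "explore", "shut down"]
    value = {a: 0.0 for a in actions}
    alpha, epsilon = 0.1, 0.1

    for step in range(10000):
        if random.random() < epsilon:
            a = random.choice(actions)        # occasional exploration
        else:
            a = max(value, key=value.get)     # otherwise act greedily
        reward = 1.0                          # positive no matter what
        value[a] += alpha * (reward - value[a])

    print(value)   # all values near 1.0: nothing it does makes any difference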

-- Matt Mahoney, matmahoney@yahoo.com


