Re: [sl4] to-do list for strong, nice AI

From: Pavitra (celestialcognition@gmail.com)
Date: Sun Oct 18 2009 - 16:18:16 MDT


> I define "best" to mean the result that an ideal secrecy-free market
> would produce. Do you have a better definition, for some definition
> of "better definition"?

Yes: the result that an ideal secrecy-free dictatorship run by me would
produce. (Secrecy-free in this context means that there are no secrets
from the dictator. I can choose what to divulge to which of the rest of
the citizens.)

>>> One human knows 10^9 bits (Landauer's estimate of human long term
>>> memory). 10^10 humans know 10^17 to 10^18 bits, allowing for
>>> some overlapping knowledge.
>
>> Again, where are you obtaining your estimates of
>> degree-of-compressibility?
>
> The U.S. Dept. of Labor estimates it costs on average $15K to replace
> an employee. This is about 4 months of U.S. per capita income, or
> 0.5% of life expectancy. This means, on average, that nobody else
> knows more than 99.5% of what you need to know to do your job. It is
> reasonable
> to assume that as the economy grows and machines do our more mundane
> tasks, that jobs will become more specialized and that the fraction
> of shared knowledge will decrease. It is already the case that higher
> paying jobs cost more to replace, e.g. 1-2 years.
>
> Turnover cost is relevant because the primary function of AI will be
> to make humans more productive, at least initially. Our interest is
> in the cost of work-related knowledge.

This feels wrong in several ways.

Why is redundancy in employment utility a good indicator of redundancy
in the aspects of human experience that we will care about preserving
through the Singularity?

Doesn't the variance in redundancy by job type imply that the
cost-to-replace reflects the nature of the job more than it reflects the
nature of the person?

Why will "the primary function of AI ... be to make humans more
productive, at least initially"? Shouldn't the AI handle
productivity/production more or less unilaterally, and make humans more
happy/eudaimonic?
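
To be clear, my objection isn't to the arithmetic, which is easy to
check. A quick back-of-envelope sketch in Python, using my own assumed
figures of roughly $45K/year U.S. per capita income and a 78-year life
expectancy (neither number is from your post):

    # Sanity check of the quoted figures (assumed inputs).
    replacement_cost = 15_000       # USD, the quoted Dept. of Labor estimate
    per_capita_income = 45_000      # USD/year, my assumed ballpark
    life_expectancy_years = 78      # my assumed U.S. figure

    months_of_income = replacement_cost / per_capita_income * 12
    fraction_of_life = months_of_income / (life_expectancy_years * 12)
    print(round(months_of_income, 1), round(fraction_of_life, 4))
    # -> 4.0 months, ~0.0043, i.e. roughly 0.5% of a lifetime

    # Landauer-style total: 10^10 people x 10^9 bits each = 10^19 bits raw.
    # If only 1-10% of each person's knowledge is non-overlapping, the
    # unique total is 10^17 to 10^18 bits, matching the quoted range.
    raw_total_bits = 1e10 * 1e9
    print(raw_total_bits * 0.01, raw_total_bits * 0.1)  # 1e+17 1e+18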

>> The Turing test is probably not suitable.
> ...
>> Chatterbots have been found to improve their Turing Test
>> performance significantly by committing deliberate errors of
>> spelling and avoiding topics that require intelligent or coherent
>> discourse.
>
> I agree. Turing was aware of the problem in 1950 when he gave an
> example of a computer taking 30 seconds to give the wrong answer to
> an arithmetic problem. I proposed text compression as one
> alternative. http://mattmahoney.net/dc/rationale.html

That seems like a pretty good definition, but I'm not convinced that a
gigabyte of Wikipedia is _the best_ possible corpus. In particular,
Wikipedia is very thin on fiction. I want AI to be able to grok the arts.
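
For concreteness, that kind of compression test is easy to sketch.
This is only my own illustration, using Python's standard bz2 module
as a stand-in compressor and a placeholder corpus path, not the actual
benchmark code from your page:

    import bz2

    def compression_score(corpus_path):
        # Smaller compressed size means a better statistical model of
        # the text, which is the proxy for intelligence in a
        # compression-based test.
        with open(corpus_path, "rb") as f:
            data = f.read()
        return len(bz2.compress(data, compresslevel=9))

    # e.g. compression_score("enwik8")  # placeholder corpus file

Nothing in that setup is tied to Wikipedia, so swapping in a corpus
with more fiction in it looks straightforward.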

>>>> C->D[ ] Develop an automated comparison test that returns the
>>>> more intelligent of two given systems.
>
>>> How? The test giver has to know more than the test taker.
>
>> Again, this seems more a criticism of C than of D.
>
> It depends on what you mean by "intelligence". A more general
> definition might be making more accurate predictions, or making them
> faster. But it raises the question of how the evaluator can know the
> correct answers unless it is more intelligent than the evaluated. If
> your goal is to predict human behavior (a prerequisite for
> friendliness), then humans have to do the testing.

This still sounds like you're talking about step C.

Step D says, "Assuming we already have a formal definition of
intelligence, develop a computable comparison test for intelligence". I
don't see why the comparison test requires greater-than-tested
intelligence _in addition to_ whatever level of intelligence the formal
definition created in C constitutes.

The incomputability of Kolmogorov complexity is likely to be the largest
obstacle in step D.

The page you linked to above is likely about as close as we'll ever
get to completing steps C and D.
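
To make that concrete: if step C's formal definition is, say,
"compresses a fixed corpus into fewer bits" (a computable stand-in for
Kolmogorov complexity, which a real compressor can only upper-bound),
then the step-D comparison test is just a mechanical comparison of two
scores and needs no intelligence of its own. A minimal sketch, with the
two systems modeled as arbitrary compress functions (my illustration,
not anything from your page):

    import bz2, zlib

    def more_intelligent(compress_a, compress_b, corpus):
        # Return whichever system compresses the corpus into fewer bytes.
        # The evaluator only compares two integers; it does not need to
        # be smarter than either system it is testing.
        size_a, size_b = len(compress_a(corpus)), len(compress_b(corpus))
        return compress_a if size_a <= size_b else compress_b

    # Example with two standard-library compressors as the "systems":
    winner = more_intelligent(bz2.compress, zlib.compress, b"corpus " * 1000)

The incomputability worry then attaches to how good the proxy
definition is, not to whether the comparison itself can be run.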

>> The whole point of Singularity-level AGI is that it's a nonhuman
>> intelligence. By hypothesis, "humanity" ⊉ "intelligence".
>
> Nonhuman intelligence = human extinction.

What about intelligence that is a proper superset of the human?
Molecular matter != destruction of all atoms: the larger structure
contains the smaller one rather than abolishing it.

> I don't mean this in a good or bad way, as "good" and "bad" are
> relative to whatever populates the world after humans are gone. It
> might be your goal to have these agents preserve human memories, but
> it might not be *theirs*, and it's their goals that count. They
> might rationally conclude that if your memories were somebody
> else's, you wouldn't notice.

It's my goals that count right now, because I'm the one deciding my
actions. Hopefully, current human goals can shape what the goals of the
future-beings will be.

> You could argue that with somebody else's memories you wouldn't be
> "you". But what are you arguing? If a machine simulates you well
> enough that nobody can tell the difference, is it really you? Would
> you kill yourself and expect your soul to transfer to the machine?

I agree that it may (depending on the nature of the Singularity) be
largely irrelevant what my goals are going to be after the rise of the
Machines. But it matters very much what my goals are now, because the
Machines will be created by pre-Singularity humans.

>> The goal, then, would be to ensure that the Singularity will be
>> Friendly.
>
> Define "Friendly" in 10^17 bits or less.

Cheating answer: shares my value system.




