From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Thu May 08 2008 - 13:26:18 MDT
And, if we want to throttle the speed of intelligence increase, all we need
to do is limit the input.
Ross
On Thu, May 8, 2008 at 7:23 PM, Krekoski Ross <rosskrekoski@gmail.com>
wrote:
> Actually, now that I think about it-- this precludes a fast take-off with
> no end.
>
> No intelligent system can meaningfully increase its own complexity
> without substantial input, that is, without assimilating external
> complexity and adding it to its own.
>
> Suppose I have an AI of a given complexity; let's call it takeoff(n),
> where n is the number of iterations we would like it to run for,
> conceivably a very large number. takeoff() itself has a specific
> complexity.
>
> However, without a large amount of input, takeoff() cannot increase its
> own complexity, since the state it reaches at any point in time is fully
> describable by its own source code together with a specific integer as
> an argument.
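>
> A minimal sketch of that bound, in Python. The names improve and
> takeoff and the hash-based rewrite rule are purely illustrative
> stand-ins, not a real self-improvement step:
>
>     def improve(state):
>         # A deterministic, closed self-modification step: the new
>         # state is a pure function of the old one, with no input.
>         return hash(state) % (10 ** 9)  # stand-in for a rewrite rule
>
>     def takeoff(n, seed=0):
>         # Run n iterations of closed self-improvement.
>         state = seed
>         for _ in range(n):
>             state = improve(state)
>         return state
>
>     # Whatever takeoff(n) produces is fully described by the fixed-size
>     # source above plus the integer n, so its description (Kolmogorov)
>     # complexity is bounded by |program| + log2(n) + O(1), no matter
>     # how large n gets.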
>
> Any AI can therefore, at most, assimilate the complexity of the sum of
> all human knowledge. Without the ability to interact meaningfully with
> its environment and assimilate information that is otherwise external
> to the sum of human knowledge, it will plateau.
>
> Ross
>
>
>
>
> On Thu, May 8, 2008 at 6:43 PM, Krekoski Ross <rosskrekoski@gmail.com>
> wrote:
>
>>
>>
>> On Thu, May 8, 2008 at 8:59 AM, Stuart Armstrong <
>> dragondreaming@googlemail.com> wrote:
>>>
>>>
>>>
>>> What makes you claim that? We have little understanding of
>>> intelligence; we don't know how easy or hard increases in intelligence
>>> will turn out to be; we're not even certain how high the advantages of
>>> increased intelligence will turn out to be.
>>>
>>> It could be a series of increasing returns, and the advantages could
>>> be huge - but we really don't know that. "Most likely scenario" is
>>> much too strong a thing to say.
>>>
>>
>>
>> Yes.
>>
>> I personally don't have a strong opinion on the probability of either
>> scenario, simply because there are so many unknowns, and we have an
>> effective sample size of 1 (ourselves) on which to base all of our
>> understanding of intelligence. But I think we should realize one thing:
>> is it only by incredible coincidence that our intelligence is at a
>> level where we can understand the formal properties of our brain, yet
>> just below some 'magical' threshold that would allow us to mentally
>> simulate the differences in subjective experience and intelligence that
>> a slight change in our architecture would entail, while just above the
>> threshold at which it is possible to do so for 'lower' entities?
>>
>> I've mentioned this before in various forms, but in general I think it's
>> a fairly under-addressed topic: can an intelligent system of complexity
>> A perfectly emulate (perform a test run of) an intelligent system of
>> complexity A? (For fairly obvious reasons it cannot emulate one of
>> higher complexity.) It seems possible that an intelligent system of
>> complexity A can emulate one of complexity A - K, where K is the output
>> of some function that describes some proportion of A. (We don't know
>> specifically how complexity in an intelligent system affects
>> intelligence, except that in a perfectly designed machine an increase
>> in complexity entails an increase in intelligence.) I think that,
>> because of natural systemic overhead, it is impossible for any
>> perfectly designed intelligent system to properly model another system
>> of equal complexity (and indeed there would be no effective way to
>> evaluate the model even if it could).
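>>
>> A toy illustration of the overhead claim, using a fixed resource budget
>> as a crude stand-in for "complexity" (the budget and the overhead
>> function here are invented for illustration only):
>>
>>     HOST_BUDGET = 1024  # total resource units available to the host
>>
>>     def simulation_overhead(guest_size):
>>         # Hypothetical bookkeeping cost of a faithful emulation.
>>         return 64 + guest_size // 8
>>
>>     def can_emulate(guest_size):
>>         # The guest plus the bookkeeping must fit in the host's budget.
>>         return guest_size + simulation_overhead(guest_size) <= HOST_BUDGET
>>
>>     print(can_emulate(1024))  # False: no room for a full copy of itself
>>     print(can_emulate(768))   # True: A can emulate roughly A - K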
>>
>> This has implications for the rate at which any AI can self-improve: if
>> K is a reasonably significant proportion of A, even a godlike AI would
>> have difficulty improving its own intelligence efficiently and rapidly.
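>>
>> As a rough worked example under the same toy model: if K were, say,
>> 0.3 * A, then the largest design a system of complexity A could fully
>> emulate is 0.7 * A, and any candidate successor larger than that would
>> have to be accepted without a complete test run.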
>>
>> This is also why evolution by random mutation is a slow but actually
>> quite efficient way of increasing intelligence: we don't want a
>> progressively larger but structurally homogeneous system (which is not
>> an efficient increase in complexity, only in size). We want structural
>> diversity in an intelligent system, and it's not clear how a system can
>> 'invent' novel structures that are completely foreign to it. Many of
>> our own advances in science, by analogy, arise from mimicry of, for
>> example, non-human biological systems.
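>>
>> A minimal sketch of that search style, with the mutation operator and
>> fitness function as arbitrary placeholders:
>>
>>     import random
>>
>>     def mutate(genome):
>>         # A blind structural change: flip one element. The point is
>>         # that the variation is not designed by the system itself.
>>         i = random.randrange(len(genome))
>>         return genome[:i] + [1 - genome[i]] + genome[i + 1:]
>>
>>     def fitness(genome):
>>         return sum(genome)  # placeholder objective
>>
>>     best = [0] * 32
>>     for _ in range(1000):
>>         candidate = mutate(best)
>>         if fitness(candidate) > fitness(best):
>>             best = candidate  # selection keeps the rare useful variants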
>>
>> Ross
>>
>>
>>
>>>
>>> Stuart
>>>
>>
>>
>