Re: [sl4] Rolf's gambit revisited

From: Benja Fallenstein (benja.fallenstein@gmail.com)
Date: Tue Jan 06 2009 - 17:08:35 MST


Hi Peter,

Peter de Blanc wrote:
> Benja Fallenstein wrote:
>>> I think "X simulates Y, therefore K(X) > K(Y)," is pretty unambiguous in
>>> its meaning, and it's wrong.
>>
>> Well, given the context, I did read it as equivalent to your "it
>> wouldn't be possible to pick out just one program of high Kolmogorov
>> complexity, and only pay attention to it, while you yourself have low
>> Kolmogorov complexity" :-)
>
> He never said anything like that,

I meant that I did read it (almost) that way, so I can't agree that it
*unambiguously* means something else. (Ok, I didn't read it *exactly*
as your quote; closer to, "if X picks out just one program Y and pays
special attention to it, then K(X) > K(Y)." Which, as I said, is
wrong, but afaict is correct for >=.) But enough, I think, about what
Matt did or didn't mean; I see a more interesting topic to reply to
:-)
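
One quick aside on that ">=" claim before moving on: the toy picture
I have in mind -- entirely my own illustration, with made-up names,
not a proof -- is that if X's source literally contains the one
program Y it pays special attention to, then X's source plus a
constant-size extractor is already a description of Y, so
K(Y) <= K(X) + O(1), i.e. K(X) >= K(Y) - O(1). Something like:

    Y = "print('hello from Y')"              # the singled-out program

    # X embeds Y verbatim, simulates it, then goes on to other work:
    X = "y = " + repr(Y) + "\nexec(y)\n# ...consider other hypotheses...\n"

    def extract_y(x_source):
        """Constant-size wrapper: recover Y from X's source alone."""
        return eval(x_source.splitlines()[0][len("y = "):])

    assert extract_y(X) == Y
    # The overhead |X| - |Y| is a fixed wrapper cost, (essentially)
    # independent of which Y was embedded.
    print(len(X) - len(Y))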

> and if he did, then I don't see how it
> applies to an AI. Wouldn't an AI consider lots and lots of hypotheses,
> rather than just one pre-programmed, complex hypothesis?

Well--

Matt's quote of Norman's scenario, the one he was replying to, ended with:

> The AI, being a maverick, doesn't give a flip what the programmers
> intended, but it's curious about what would have happened. So, it
> runs a simulation of the alternate AI, which we'll call AI(pi). It
> sees AI(pi) turning galaxies into computronium, in search of
> messages hidden in the infinite digits of pie, messages which in
> all likelihood don't exist.
>
> And then it sees AI(pi) run a simulation of ITSELF, of AI(pie). And
> it thinks "uh oh, which of us is at the TOP of the simulation
> chain?"

Clearly, AI(pie) is considering more hypotheses than just the
hypothesis "AI(pi)." But just as clearly, AI(pie) is paying special
attention to AI(pi) -- in the sense that, based on its simulation, it
is able to conclude that AI(pi) being at the top of the simulation
chain has a significant probability, say >= 10%. Assuming that
AI(pie) assigns consistent probabilities, this means it cannot
consider more than ten hypotheses "like this." Question: Can a Turing
machine X, based on any criteria at all, form a computable
probability distribution that assigns probability > 1/n to a specific
hypothesis Y such that K(Y) > K(X) + K(n)? The answer isn't clear to
me, although my intuition is that it is "no."
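
To make the counting intuition behind that "no" concrete, here is a
toy Python sketch -- entirely my own construction, with invented
names like "reconstruct", and with binary strings standing in for
programs since real Kolmogorov complexity is uncomputable. The point:
if X's distribution puts probability > 1/n on Y, then Y is one of
fewer than n such hypotheses, so "X's code, plus n, plus Y's index
among them" is a description of Y of length roughly
K(X) + K(n) + log(n).

    from itertools import count, product

    def hypotheses():
        """Canonical enumeration of all binary strings
        (a stand-in for an enumeration of programs)."""
        yield ""
        for length in count(1):
            for bits in product("01", repeat=length):
                yield "".join(bits)

    def reconstruct(p, n, i):
        """Return the i-th hypothesis (in canonical order) to which
        the computable distribution p assigns probability > 1/n.

        Since p sums to at most 1, fewer than n hypotheses can have
        p(h) > 1/n, so i < n, and this way of describing Y costs
        roughly |code of p| + |n| + |i| ~ K(X) + K(n) + log(n) bits."""
        seen = 0
        for h in hypotheses():
            if p(h) > 1.0 / n:
                if seen == i:
                    return h
                seen += 1

    # Toy distribution "computed by X": all its mass on three strings.
    def p(h):
        return {"0": 0.5, "10": 0.3, "110": 0.2}.get(h, 0.0)

    print(reconstruct(p, 10, 2))  # -> "110", the third hypothesis with p > 10%

The loose end, and the reason I only call it an intuition, is that
extra log(n) plus the usual additive constants, which is why the
strict K(Y) > K(X) + K(n) in the question isn't obviously ruled out.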

Of course, if X is an AI that finds the source of Y in its
environment -- as I think Nick may have been suggesting upthread --
then in a certain sense the answer is trivially "yes" if we interpret
K(X) as the complexity of the AI's source. But then we simply replace
K(X) with K(X) + K(input so far), and the question stays meaningful.
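
For what it's worth, here is the kind of minimal sketch I have in
mind for that trivial "yes" -- again my own toy, with invented names
and markers, not anything Nick actually proposed: X itself is simple,
but it copies an arbitrarily complex Y out of its observations and
concentrates its probability mass on exactly that Y. Counting only
K(X), Y can be as complex as the environment likes; counting
K(X) + K(input so far), it can't, because Y is recoverable from the
pair (X's code, input so far).

    def posterior(observations):
        """A deliberately simple X: whatever program source it has
        seen between <src>...</src> markers gets probability 0.9.
        (Toy only -- the remaining 0.1 isn't distributed at all.)"""
        start = observations.find("<src>")
        end = observations.find("</src>")
        if start != -1 and end != -1:
            y = observations[start + len("<src>"):end]
            return lambda h: 0.9 if h == y else 0.0
        return lambda h: 0.0

    # The environment hands X an arbitrarily complex program:
    obs = "...noise...<src>" + "some very long, incompressible program" + "</src>"
    p = posterior(obs)
    print(p("some very long, incompressible program"))   # -> 0.9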

Any thoughts?

Thanks,
- Benja


