From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Mon Jul 14 2008 - 21:25:27 MDT
I have two concerns. The first is one I've voiced before: can any
machine emulate itself perfectly? My worry is that if a machine
appears able to emulate itself, then it has free capacity and is not
completely emulating itself (it would have to emulate itself
emulating itself, which of course eventually runs into problems).
It is important here to keep in mind whether a machine with spare
capacity would behave in the same way as a machine running at close
to maximum capacity. I suspect that two machines of similar
complexity and capacity would run into a parallel concern.
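(To make the regress concrete, here is a toy sketch in Python -- my
own illustration, with made-up names and numbers, not a model of any
real machine. A machine with a fixed step budget that must emulate a
copy of itself spends part of that budget at every nested level of
emulation, so a complete self-emulation never finishes within the
budget.)

    # Hypothetical sketch: a machine with a fixed step budget tries to
    # emulate a copy of itself. Each level of emulation must in turn
    # emulate another copy, so the budget is exhausted before the
    # emulation is ever complete.
    def emulate(budget, depth=0):
        """Emulate a copy of this machine using at most `budget` steps."""
        if budget <= 0:
            return "ran out of capacity at nesting depth %d" % depth
        # Reserve some capacity for our own operation; the remainder goes
        # to the emulated copy, which must emulate *its* own copy, etc.
        overhead = 1
        return emulate(budget - overhead, depth + 1)

    print(emulate(budget=10))  # -> ran out of capacity at nesting depth 10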
Secondly, if the first concern turns out not to be a problem, there
is the obvious concern that the source code contributes a fairly
negligible component of the overall complexity of the SI in question
(analogous to our DNA's contribution). Sharing the source code would
perhaps yield some insight into the behaviour of that particular
machine, but it would not lead to superrationality. We would also
need to convey the sum total of all input the machine has received
since activation. Perhaps we could consider this to be part of the
'source code', but then of course it's not really source code in the
way we normally construe it -- again, this is a distinction I've
raised here before -- are our common metaphors 'memory',
'processing', 'storage', and 'code' even suitable here?
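(Again a toy illustration of my own, purely hypothetical: two agents
running identical 'source code' diverge in behaviour because of the
different inputs each has accumulated since activation, so knowing
the code without the input history says little about what either
will actually do.)

    # Two agents share identical source code, but their behaviour is
    # determined by the inputs received since activation, not by the
    # code alone.
    class Agent:
        def __init__(self):
            self.memory = []  # sum total of inputs since activation

        def observe(self, event):
            self.memory.append(event)

        def act(self):
            # Behaviour depends on accumulated history, not just the code.
            return "cooperate" if self.memory.count("betrayal") == 0 else "defect"

    a, b = Agent(), Agent()   # same source code
    a.observe("greeting")
    b.observe("betrayal")
    print(a.act(), b.act())   # -> cooperate defect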
Ross
On Tue, Jul 15, 2008 at 6:14 AM, Wei Dai <weidai@weidai.com> wrote:
> A couple of months ago Lee Corbin and Eliezer Yudkowsky had a conversation
> about superrationality on the extropy-chat mailing list [1]. Eliezer took
> the position that two dissimilar SIs may be capable of superrationality when
> they know each other's source code. He wrote:
>
>> Yes, but in this case a *motive* exists to *deliberately* correlate
>> your behavior to that of your opponent, if the opponent is one who
>> will cooperate if your behaviors are highly correlated and defect
>> otherwise. You might prefer to have the opponent think that your
>> behaviors are correlated, and then defect yourself; but if your
>> opponent knows enough about you to know you are thinking that, the
>> opponent knows whether your behaviors are really correlated or not.
>>
>> I'm thinking here about two dissimilar superintelligences that happen
>> to know each other's source code.
>
> Putting aside the issue of superrationality for now, I wonder if anyone else
> finds it plausible that two dissimilar SIs can know each other's source
> code. If we assume that they start out without such knowledge, but each
> wants the other to gain it, what can they do? Can one SI prove to another what
> its source code is? Or, is there some other argument for why SIs might know
> each other's source code (beyond "we don't know what SIs might be capable
> of, so we can't rule it out")?
>
> [1] http://lists.extropy.org/pipermail/extropy-chat/2008-May/043362.html
>