Re: The GLUT and functionalism

From: Lee Corbin (lcorbin@rawbw.com)
Date: Thu Mar 13 2008 - 23:07:55 MDT


Stathis writes

> Lee wrote:
>
>> But isn't it true that if you follow Putnam in his "Representation
>> and Reality" (extremely un-recommended by yours truly),
>> then you must suppose that any given rock performs the
>> calculations making up Stathis just as well as your organic
>> body does? In other words, if I have a choice of using
>> the Tsar Bomba (50 megatons) on the rock or on your
>> own person, if you come to visit me, then why do you care
>> whether I totally destroy the rock or totally destroy
>> Stathis's everyday human person?
>
> Yes, it's an obvious point. But the idea that any computation can be
> implemented by any system is just the starting point.

Well, that's hardly any way to deal with my "obvious point".
<Insert humorous analogy here about following a "ridiculous"
assumption to its thereby ridiculous consequences.> :-)
Of course, as I said, so many luminaries join you in believing
that a timeless object (a frozen state) can implement computation,
implement consciousness, and implement information flow, that
I'm not really saying "ridiculous". How about just
"suspicious"? :-)

> From this it follows that every computation is implemented
> necessarily by virtue of its status as a Platonic object....

I inquired

> [Lee wrote]
>> [Stathis wrote]
>> > and (b) that it doesn't result in information flow between
>> > the states. But I don't think it's obviously absurd, and I
>> > see the lack of information flow (or inability to handle
>> > counterfactuals) as just making it impossible for us as
>> > external observers to use the system for computation.
>>
>> Could you explain a bit more to me about this? Between
>> perhaps my not using "counter-factual" correctly, and whether
>> it makes a fig of difference about "external observers"
>> (it doesn't), I'm not sure I'm following you. Perhaps an
>> example distinct from the Monday/Tuesday one would
>> help me.
>
> I agree with you to an extent about the significance of causality in
> computation. Suppose there are steps in a computation which don't
> follow from the preceding step, but just happen to occur correctly *as
> if* they followed from the preceding step.

Yes, well put. It does seem a tad suspicious to refer to that
as computation. But I do note you just said "I agree with you
to an extent..." :-) Somehow, I don't think this is going
to degenerate into a flame war.

> For example, imagine a machine M1 into which you input "6*7", gears
> and levers and so forth go clickety-clack, and after 100 steps it
> outputs "42". Next, consider another identical machine, M2, into which
> you input "6*7", but at the 73rd step you destroy it. The next day on
> the other side of the world, by fantastic coincidence, someone else
> builds a machine, M3, which just happens to be in identical
> configuration to M1 (and hence M2, had it not been destroyed) at the
> 73rd step. M3 then goes clickety-clack through steps 74 to 100 and
> outputs "42".
>
> I would agree with you that even though the activity of M2/M3 seen in
> combination might look the same as the activity of M1, they are not
> equivalent computational systems. This is because M1 would
> appropriately handle a counterfactual, but M2/M3 would not: if the
> input to M1 had been "4*5" the output would have been "20", whereas if
> the input to M2 had been "4*5" the output from M3 would have still
> been "42", as the lack of a causal link between M2 and M3 means there
> is no way for the input of M2 to influence the output of M3.

Thanks for the clarification on the usage of terms. It seems I was
using "counter-factual" correctly in our context here after all.
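
Let me try to pin your machines down with a toy sketch (Python;
the function and state names are mine, purely for illustration):

    # Toy "clickety-clack" multiplier: each state follows causally
    # from the last, and after step 100 the accumulator holds a*b.
    def run(state, start_step, stop_at=100):
        for step in range(start_step, stop_at + 1):
            state["acc"] = state["a"] * state["b"] * step // 100
            state["step"] = step
        return state

    # M1: one intact machine, steps 1 through 100.
    m1 = run({"a": 6, "b": 7, "acc": 0, "step": 0}, start_step=1)
    print(m1["acc"])   # 42

    # M2: an identical machine, destroyed at step 73.
    m2 = run({"a": 6, "b": 7, "acc": 0, "step": 0}, start_step=1,
             stop_at=73)

    # M3: hard-coded, by sheer coincidence, to M2's step-73
    # configuration; nothing from M2 actually flows into it.
    coincidence = {"a": 6, "b": 7, "acc": 30, "step": 73}
    m3 = run(dict(coincidence), start_step=74)
    print(m3["acc"])   # 42, the same output as M1

    # The counterfactual: give M2 the input "4*5" instead. M3's
    # coincidental state is untouched, so M2/M3 still ends at 42.
    m2b = run({"a": 4, "b": 5, "acc": 0, "step": 0}, start_step=1,
              stop_at=73)
    m3b = run(dict(coincidence), start_step=74)
    print(m3b["acc"])  # still 42, not 20: no causal link M2 -> M3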

> The obvious significance of this is that M2/M3 is useless as a
> computational device. It could be made useful by introducing reliable
> information transfer between the two machines, say by an operator
> passing M2's final state to be used as M3's initial state. The new
> M2/M3 system is then equivalent to the intact M1, albeit a bit slower
> and more cumbersome.

Right.
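
And in the toy sketch above, that reliable transfer is just seeding
M3 with M2's actual final state, which restores the counterfactual
behaviour:

    # Reliable transfer: M3 starts from M2's real final state, so
    # the M2/M3 pair now answers counterfactuals exactly as M1 does.
    m3c = run(dict(m2b), start_step=74)
    print(m3c["acc"])  # 20 for the input "4*5", just as M1 gives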

> Now, let's suppose that implementation of the computation 6*7 = 42
> is associated with a primitive moment of consciousness, and for
> simplicity that this is the case only if the computation is implemented in full.
> We would then both agree that M1 and M2/M3 with reliable information
> transfer would give rise to consciousness. You would argue that M2/M3
> without reliable information transfer would not give rise to consciousness.

Yes, I would so argue.

> But what if the information transfer doesn't fall into the all or none category?
> For example, what if the operator transfers the right information some of the
> time based on whim, but never reveals to anyone what he decides? The
> M2/M3 system (plus operator) would again be useless as a computation
> device to an external observer, but on some runs, known only to the
> operator, there will definitely be a causal link.

Very clear.
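
If I follow you, your whimsical operator drops straight into the
same sketch, with a random draw standing in for his private whim
(again, all names here are mine):

    import random

    # The operator passes M2's real state with probability p_honest;
    # otherwise M3 gets only its coincidental configuration.
    def whimsical_operator(m2_state, coincidental_state, p_honest):
        if random.random() < p_honest:
            return dict(m2_state)        # genuine causal link this run
        return dict(coincidental_state)  # mere coincidence this run

    seed = whimsical_operator(m2, coincidence, p_honest=0.00001)
    m3d = run(seed, start_step=74)
    print(m3d["acc"])  # 42 either way on the input "6*7"; no outside
                       # observer can tell on which runs a link held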

> Does consciousness occur on those runs or not?

I would say consciousness occurs only on those runs where, yes,
true causality obtained.

> Does it make a difference if the operator lies 99.999% of
> the time or 0.001% of the time? Does the computation know when he's
> lying, or does it know the proportion of time he intends to lie so
> that it can experience fractional consciousness at the appropriate
> level?

My conjecture is that if the operator lies 99.999% of the
time, then consciousness is actually absent that fraction
of the time, and likewise for any other proportion.

> You will have a hard time defining criteria (let alone a mechanism)
> whereby a computation "knows" that there is a causal link.

I would suggest that a computation merely *is*, and doesn't need
to know anything. But I realize I'm being unfair by putting it so
harshly. In fact, I would put it the other way around: if there is
a causal link, then there can be consciousness, but only then.

> It is simpler to assume that consciousness occurs purely as a result
> of the right physical states being implemented, while the presence of
> a recognisable causal link only determines whether the system can be
> used by an external observer for useful computation.

It may (or may not) be simpler, as you suggest, to suppose that all
that is necessary is that the right physical states occur or are
implemented somehow. I doubt very much that there is a logical
flaw in your suggestion. On the other hand, I doubt that there is
any insoluble problem with mine, just a bit of awkwardness,
e.g., why is a 3+1 dimensional creature conscious, and a 2+1
dimensional creature conscious (as in Flatland or the Life Board),
but a 3 dimensional frozen block that is *completely* isomorphic
to the 2+1 structure not conscious? Basically, I just have to hem
and haw, and assert even more forcefully that I am a time-chauvinist.

Your "awkwardness", on the other hand, is that you cannot really
give (so far as I know) any reason why I should choose to detonate
the Tsar Bomba next to the Stathis guy in Australia, or a rock I
pick up at random. They both emulate my friend Stathis, right?

Lee


