From: Stathis Papaioannou (firstname.lastname@example.org)
Date: Fri Mar 14 2008 - 05:55:48 MDT
On 14/03/2008, Lee Corbin <email@example.com> wrote:
> > Now, let's suppose that implementation of the computation 6*7 = 42
> > is associated with a primitive moment of consciousness, and for
> > simplicity that this is the case only if the computation is implemented in full.
> > We would then both agree that M1 and M2/M3 with reliable information
> > transfer would give rise to consciousness. You would argue that M2/M3
> > without reliable information transfer would not give rise to consciousness.
> Yes, I would so argue.
> > But what if the information transfer doesn't fall into the all or none category?
> > For example, what if the operator transfers the right information some of the
> > time based on whim, but never reveals to anyone what he decides? The
> > M2/M3 system (plus operator) would again be useless as a computation
> > device to an external observer, but on some runs, known only to the
> > operator, there will definitely be a causal link.
> Very clear.
Thank you for following the thought experiment so closely so far.
However, I think I have made an error by writing "there will
definitely be a causal link" above. In the extreme case, the operator
might transfer every possible state in sequence, knowing but not
saying which of these is the right one to implement the computation.
Does that count as a causal link on the run in which this occurs? As
far as you can tell by observing him, the operator is no more
knowledgeable than an ignorant person trying out every possible state.
Could the computation possibly divine his mental state in order to
decide whether there is a causal link and thereby become conscious?
> It may (or may not) be simpler, as you suggest, to suppose that all
> that is necessary is that the right physical states occur or are
> implemented somehow. I doubt very much that there is a logical
> flaw in your suggestion. On the other hand, I doubt that there is
> any insoluble problem with mine---just a bit of awkwardness,
> e.g., why is a 3+1 dimensional creature conscious, a 2+1 dimensional
> creature conscious (as in Flatland or the Life Board), but a 3 dimensional
> frozen block that is *completely* isomorphic to the 2+1 structure
> not conscious?
How can you be so sure about that last point?
> Your "awkwardness", on the other hand, is that you cannot really
> give (so far as I know) any reason why I should choose to detonate
> the Tsar Bomba next to the Stathis guy in Australia, or a rock I
> pick up at random. They both emulate my friend Stathis, right?
If a rock emulates anything then blowing it up isn't going to make any
difference, since the point is that it doesn't matter what the rock's
atoms are doing. On the other hand, if you blow up the physical
Stathis, that would mean that at least some branches of the
computations in Platonia simulating me come to an abrupt end. So, even
though whatever will be will be, I prefer that you blow up the rock.
-- Stathis Papaioannou
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:25 MDT