From: Tennessee Leeuwenburg (firstname.lastname@example.org)
Date: Thu Jul 21 2005 - 22:39:38 MDT
Apologies for bad grammar -- pub lunch today. I think the arguments
stand, but sometimes I forget to correct my grammar when I change things.
On 7/22/05, Michael Wilson <email@example.com> wrote:
> Tennessee Leeuwenburg wrote:
> > I agree with him when he says, "This should be obvious; memorising
> > detailed instructions on how to ride a bicycle does not immediately
> > grant you the ability to ride a bike competently, because you cannot
> > deliberatively modify your neural circuitry with an act of will."
> > But that is precisely what is interesting. A human cannot understand
> > logically everything that they can learn, nor can they describe with
> > physics everything that is immanent (loosely, "real") to them. This
> > is, per se, interesting. This is where the debate lies.
> Why are you intent on glamourising this relatively straightforward
> cognitive architecture limitation with metaphysics?
In answering that question, I would be implicitly accepting your
belief that I *am* doing that. I reject said belief. I truly believe
it to be a genuine possibility that an artificial intelligence might
have no consciousness, or awareness of immanence.
I am intent on making that claim because I believe it to be true. The
intention is not glamour, but argument and truth. If that sounds
pompous, it is only because I am afraid that you actually mean what
you say, and so I am responding as such.
Exactly what is invalid about metaphysical questions, and why are they
irrelevant to questions of cognitive architecture?
> > She can make new qualitative predictions. Even if I were to accept
> > (which I don't), that minds are reducible to brains, perfect physical
> > knowledge could still only make predictions at the physical level.
> This statement is incorrect. If you accept that a brain could be
> simulated to an arbitrary degree of accuracy, then we can look at
> exactly what is going on in the simulation; we can work out what
> the human would report and what the internal sensations would be at
> any desired level of abstraction, in any desired system of
> categorisation/quantisation. We can give the human a verbal
> description that we calculate (via more modelling) will generate the
> closest approximation to what they'd actually experience, and in
> principle we can use invasive methods to directly cause the human
> to experience the relevant sensations, bypassing any irrelevant lower
> sensory areas.
Firstly, while I'm happy to accept your tone as being argumentatively
efficient, the blanket claim "this statement is incorrect" is hardly
the kind of thing that is uncontroversial or proven.
Let me accept, temporarily, that the brain is capable of perfect
simulation (and here's the important qualifier) at the physical level.
All predictions are similarly restricted to the physical level.
Meaning is not predicted -- only brain state. If the predicting being
does not understand the meaning of its prediction of physical state,
then it is, by construction, a meaningless prediction.
I am happy for the purposes of further argument to accept that brains
can be perfectly simulated, although in truth I am not convinced that
physics is truly deterministic. As such, I believe that true
randomness may introduce errors into any prediction, even though the
brain response prediction might be perfect.
> We can already do a few of these things, crudely, yet dualists
> persist in ignoring the evidence. I look forward to the wails of
> anguish that will emanate from them after we develop the capability
> to do truly impressive brain-modelling and self-modification.
I am not a dualist. I believe that mental states do arise from the
physical nature of the brain, and furthermore that other kinds of
machines are capable of hosting minds. But I also believe the following:
1) That other kinds of machines are capable of mimicking mental
behaviour without a mind;
2) That qualia are real, and that physics as such does not capture the
full meaning of state.
Perhaps that is a limitation of my imagination, but I believe I can
argue that I am not otherwise mistaken.
> > Physics, for example, doesn't enable to me understand what language
> > means, nor does merely understanding the grammar and syntax and
> > symbolism of a language allow me to use it.
> This is a limit of your inferential capability, not any flaw in the
> materialist position.
Possibly true. Care to point out the specific error? Or do you just
mean that another person *could* use physics to understand etc etc.
Let me broaden the claim: physics, in principle, allows no being, or
potential being, to understand etc etc, where physics is the study of
matter and its behaviour.
> > If consciousness is our inner life, and qualia is what that
> > consciousness is like, then a machine without qualia is a machine
> > without an inner life.
> 'Inner life' is a near-meaningless term for characterising cognitive
> architectures. An AGI might and probably will lack the kind of
> reflective shortcomings that make human sensation so mysterious;
> whether this translates to a lack of something fundamental and
> important that human sensation has I can't say yet. I agree that
> snuffing out the illusion of qualia /might/ be a really bad thing
> from the standpoint of human morals, and thus may be an ethical
> issue for transhumans.
Indeed -- by definition. I would simply argue that it is important to
humans that meaning be preserved.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT