From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Tue Jul 26 2005 - 08:03:58 MDT
Looking at the responses, I think several people haven't actually
seriously considered a materialist analysis of subjective sensation.
Yes, they have the general notion that it's all biology and hence
physics, but they haven't considered what sort of intermediate detail
there is or what the corresponding causal mechanisms might look like.
I admit that if I hadn't thought through this level of detail, the idea that
subjective experience is something special/nonphysical/noncomputable
would seem more plausible.
A lot of the problems here are the result of sticking stubbornly to
folk psychology terms and failing to decompose to a finer resolution.
Ok, we don't have the details to decompose all the way, but we can do
a lot better than generic amateur philosophising. A case in point is
the question 'does actually seeing a colour for the first time bring
new knowledge'. 'Knowledge' here is not well-defined, yet the word is
so familiar we don't even notice. If the question is rephrased in
terms of cognitive capabilities, things become clearer; the human
didn't acquire any new axioms for deductive reasoning (though they
might be able to extract some from their updated sensory subsystem
via indirect reflection), but acquired some other capabilities (e.g.
the ability to immediately recognise the colour, the ability to
visualise objects of that colour). Worse problems come when people
start to talk about 'meaning'; the 'meaning as an ineffable substance'
metaphor seems to persist despite commendable efforts by Eliezer and
others to stamp it out. You can't reasonably discuss the cognitive
basis of subjective experience until you know exactly what your terms
mean (or at least where the areas of ambiguity are). Again 'meaning'
can be translated into specific effects and capabilities (and
information, though 'information' itself is a moderately complex term
to adequately define).
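To make that capability decomposition concrete, here is a toy Python
sketch (all the names and the 'red' test are invented, purely
illustrative, not a claim about real cognition): plugging in a new
sensory classifier changes what the agent can do without adding
anything to its stock of deductive axioms.

  # Toy illustration: 'learning what red looks like' as a capability
  # change rather than a new declarative axiom. Everything is invented.

  class Agent:
      def __init__(self):
          self.axioms = {"red is a colour", "ripe tomatoes are red"}
          self.classifiers = {}     # perceptual capabilities
          self.generators = {}      # visualisation capabilities

      def experience_colour(self, name, detector, imaginer):
          # Seeing the colour installs new sensory machinery...
          self.classifiers[name] = detector
          self.generators[name] = imaginer
          # ...but adds nothing to the set of deductive axioms.

      def can_recognise(self, name):
          return name in self.classifiers

      def can_visualise(self, name):
          return name in self.generators

  mary = Agent()
  axioms_before = set(mary.axioms)

  # First actual exposure to red: plug in a detector and an imaginer.
  mary.experience_colour(
      "red",
      detector=lambda rgb: rgb[0] > 200 and max(rgb[1:]) < 80,
      imaginer=lambda: (255, 0, 0))

  assert mary.axioms == axioms_before     # no new axioms
  assert mary.can_recognise("red")        # new capability
  assert mary.can_visualise("red")        # new capability
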
Tennessee Leeuwenburg wrote:
>>> But that is precisely what is interesting. A human cannot
>>> understand logically everything that they can learn, nor can they
>>> describe with physics everything that is immanent (loosely, "real")
>>> to them.
>>
>> Why are you intent on glamourising this relatively straightforward
>> cognitive architecture limitation with metaphysics?
>
> In answering that question, I would be implicitly accepting your
> belief that I *am* doing that. I reject said belief. I truly
> believe it to be a genuine possibility that an artificial
> intelligence might have no consciousness, or awareness of
> immanence.
You've switched arguments. I said that you're making the human
inability to access everything they 'know' in declarative form
unnecessarily mysterious. The question of whether an AI can lack
conscious experience is related but distinct, and I do in fact agree
that consciousness and subjective experience in the human sense are
not necessary in an AGI, or indeed desirable in an FAI.
> Firstly, while I'm happy to accept your tone as being
> argumentatively efficient, the blanket claim "this statement is
> incorrect" is not really the kind of thing which is uncontroversial
> or proven.
I untrained myself from using the 'I believe', 'In my opinion', 'I
am fairly certain of' etc. prefixes on SL4 because frankly people here
are generally competent enough to infer them correctly, and spelling
them out wastes a lot of time and attention in extended arguments.
> Let me accept, temporarily, that the brain is capable of perfect
> simulation (and here's the important qualifier) at the physical
> level. All predictions are similarly restricted to the physical
> level. Meaning is not predicted -- only brain state. If the
> predicting being does not understand the meaning of its prediction
> of physical state, then it is a meaningless prediction.
What /exactly/ do you want to know that a perfect physical simulation
wouldn't tell you? Obviously the simulation includes a complete
record of the person's behaviour for the period simulated. Usually
when people talk about 'meaning' in this context they're referring to
some sort of compressed description that they find convenient for
recall and inference. I would guess that the 'meaning' you want is a
high-level (i.e. highly lossy, but extremely efficient) description
of the person's brain state defined in terms of other high level
concepts you already know and understand, so that you can easily infer
what that person's past, future or otherwise subjunctive behaviour
might be using your own mental toolkit. Amusingly enough the procedure
for brute-forcing the problem is quite straightforward; we simply
simulate /your/ brain after telling you every plausible combination of
words that might be a concise description of the subject's brain, and
then simulate both the subject's brain reacting to various related
situations and your brain trying to guess how they will react.
Whichever description string produced the most predictive success was
the most meaningful for you (by this definition of meaning, but I
challenge you to think up a definition I can't brute force given an
arbitrary but finite amount of computing power). In practice, of course,
an AGI would extract 'meaning' by performing pattern/compressibility
analysis of the physical level simulation, or more likely skip the
low-level simulation altogether and run a multi-level simulation
extending to whatever bottom level of detail is required for the
desired inferential accuracy.
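To be explicit about what that brute-force procedure looks like, a
minimal Python sketch (the simulate_* arguments stand in for whole-brain
simulations, which are of course the entire difficulty; everything here
is invented):

  # Toy sketch of 'meaning as predictive success'. The two simulate_*
  # callables stand in for whole-brain simulations; names are invented.

  def most_meaningful(descriptions, situations,
                      simulate_subject, simulate_you_predicting):
      """Return the description that best lets 'you' predict the
      subject's reactions, i.e. the most meaningful one under the
      predictive-success definition."""
      def score(description):
          hits = 0
          for situation in situations:
              actual = simulate_subject(situation)
              predicted = simulate_you_predicting(description, situation)
              hits += (predicted == actual)
          return hits
      return max(descriptions, key=score)

  # Trivial stand-ins for the two simulations:
  subject = lambda s: "flinch" if s == "loud noise" else "ignore"
  you = lambda desc, s: ("flinch" if "jumpy" in desc and s == "loud noise"
                         else "ignore")

  print(most_meaningful(
      ["the subject is jumpy", "the subject likes cheese"],
      ["loud noise", "quiet room"],
      subject, you))
  # -> 'the subject is jumpy'
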
> I am not a dualist. I believe that mental states do arise from the
> physical nature of the brain...
>
> 2) That qualia are real, and that physics as such does not capture
> the full meaning of state.
This is what you need to pin down. Most materialists do not claim that
the laws of physics include rules about qualia, they claim that qualia
exist as regularities within the structure of the universe as
determined by physics. They are higher-order consequences of physics
in the same way as raindrops, galaxies, natural selection and futures
trading. 'Meaning' isn't even that tangible; it's a human cognitive
construct that's convenient for reasoning about some things but that
has no coherent set of physical referents.
Given a complete physical description of everything you want to
predict, it is /always/ possible to produce the most accurate possible
prediction given enough computing power. 'Meaning' only comes into it
when we can't afford such extravagant expenditures, and have to reason
about regularities in physics rather than physics itself. AIXI doesn't
even need 'meaning' to be superintelligent, because it has infinite
computing power. Any practical AGI would have to use highly compressed
representations, but hopefully it wouldn't be as confused about the
process as humans tend to be.
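A toy illustration of that trade-off (the 'dynamics' below is a made-up
linear rule, so the compressed summary happens to be exact; real
regularities are lossy): with unbounded computing power you just step
the full description forward, with a budget you reason from a
compressed regularity instead.

  # Toy sketch: exact low-level prediction vs a compressed regularity.
  # The micro-dynamics here is a fake linear rule; only the cost versus
  # accuracy trade-off matters for the point.

  def exact_prediction(state, steps):
      # Brute force: step every element of the full description.
      for _ in range(steps):
          state = [x * 1.1 for x in state]
      return sum(state)

  def compressed_prediction(summary, steps):
      # 'Meaning'-level shortcut: one number plus one regularity.
      return summary * (1.1 ** steps)

  state = [float(x) for x in range(10000)]
  summary = sum(state)      # lossy in general, exact for this toy rule

  print(exact_prediction(state, 50))         # expensive
  print(compressed_prediction(summary, 50))  # cheap, same answer here
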
>>> Physics, for example, doesn't enable me to understand what
>>> language means, nor does merely understanding the grammar and
>>> syntax and symbolism of a language allow me to use it.
>>
>> This is a limit of your inferential capability, not any flaw in the
>> materialist position.
>
> Possibly true. Care to point out the specific error? Or do you just
> mean that another person *could* use physics to understand etc etc.
'Your' was a bad (ambiguous) choice of word; 'human' would be better,
as no single human could take a printout of the details of every
atom in your body and work out the rules of English grammar from it.
Maybe an entire civilisation of transhumans with computers far in
advance of ours could do so, as a major public science project. Maybe
a Power could do it in a few milliseconds as a background task. It is
possible in principle, but not in practice; the important point is
/why/ it is not possible in practice. It's not ineffability, it's
simple intractability.
> Let me broaden the claim :: physics, in principle, allows no being
> or potential being, to understand etc etc, where physics is the
> study of matter and its behaviour.
This statement is definitely incorrect, assuming you are not barring
the reasoner from finding more compact descriptions of (i.e.
inferentially useful regularities in) their initial data.
> Indeed -- by definition. I would simply argue that it is important
> to humans that meaning be preserved.
'Meaning' in the sense of compressed descriptions isn't going to be
under threat while computing power is reasonably bounded, and isn't
really what we want anyway. 'The illusion of qualia' is another issue
and one I am actually concerned about, though I acknowledge that this
may seem terribly quaint and silly to far transhumans. Actually the
term 'illusion' is somewhat unfortunate, as I do acknowledge that
qualia can usefully be given a physical (neuroarchitectural) referent.
What's illusory is all the mystery and seeming irreducibility involved
that causes people to believe that qualia are ontological primitives,
simply because our brain wasn't designed to answer the question 'so
what /is/ blue?'. A (contemporary) AGI would get the answer 'high
values of byte 3 in RGB format, mapped to flux density of photons in
frequency range X to Y, as transcribed by sensory mechanism Z...'
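Something along these lines, say (the wavelength band and channel
thresholds below are rough illustrative numbers, not real colorimetry):

  # Toy version of the deflationary answer to 'what is blue?'.
  # Wavelength band and channel thresholds are illustrative only.

  BLUE_BAND_NM = (450, 495)   # rough photon wavelength range for 'blue'

  def is_blue_pixel(rgb):
      """'Blue' at the representation level: high values of byte 3
      (the B channel) relative to the other channels."""
      r, g, b = rgb
      return b > 150 and b > r + 50 and b > g + 50

  def is_blue_photon(wavelength_nm):
      """'Blue' at the physical level: flux in the corresponding band,
      as transcribed by whatever sensory mechanism is in front of it."""
      low, high = BLUE_BAND_NM
      return low <= wavelength_nm <= high

  print(is_blue_pixel((30, 40, 220)))   # True
  print(is_blue_photon(470))            # True
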
Note that it's fairly pointless to make statements about how building
one sort of AGI or another would be a bad thing for humanity unless
you actually intend to do something about it, either by building an
AGI yourself first or, more realistically, by convincing people who you
believe are most likely to build an AGI that your ideas are right.
While I disagree with Russell Wallace's domain protection scheme and
his criticisms of CV, I do appreciate the fact that he is trying to
follow up on his principles by convincing at least a few of the
relevant people that he is right.
> I think it is a dangerous leap of faith to assume that *all* good
> physical modelling programs will be conscious just because
> they are imbued with goals.
Aargh. 'Conscious' != 'has qualia'. Do dogs have qualia? Do salmon
have qualia? Do spiders? Are dogs or salmon or spiders conscious?
Do you think 'self aware' == 'conscious' or is it something different?
See the futility of trying to do precise reasoning with the crude,
blunt instruments that are folk psychology concepts?
AIs are not 'self-aware' unless they have a sophisticated self-model
with a list of specific capabilities, the extent of which is a subject
for active debate. Qualia are a necessary component of a much smaller
set of capabilities, assuming that you can have qualia without having
the ability to argue about 'the true nature of blue' (a description
that applies to quite a few humans, never mind dogs or AIs).
Mitchell Porter wrote:
> However, epistemologically, qualia are the starting point, and
> the material universe is the thing posited.
I read this section as: 'qualia are what we call the primitive data
structures from which all of our consciously accessible sensory data
is constructed'. Note that a self-modifying AGI can create new
'qualia' simply by adding some new sensory processing code.
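In code terms the point is roughly this (a minimal sketch, names
invented): adding a sensory modality just registers another processing
module whose outputs the rest of the system consumes as primitives.

  # Toy sketch: 'qualia' as the primitive outputs of sensory processing
  # modules. Adding a module adds a new kind of primitive; the reasoning
  # layer only ever sees the labelled outputs. All names are invented.

  class Mind:
      def __init__(self):
          self.sensory_modules = {}

      def add_modality(self, name, processor):
          # The self-modification step: new sensory processing code.
          self.sensory_modules[name] = processor

      def perceive(self, name, raw_input):
          # The reasoning layer gets the processed primitive, not the
          # machinery that produced it.
          return (name, self.sensory_modules[name](raw_input))

  mind = Mind()
  mind.add_modality("colour",
                    lambda rgb: "blue" if rgb[2] > 150 else "other")
  mind.add_modality("magnetic_north", lambda heading: round(heading) % 360)

  print(mind.perceive("colour", (10, 20, 200)))   # ('colour', 'blue')
  print(mind.perceive("magnetic_north", 93.7))    # a brand new 'quale'
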
> I ask you to conceive of point particles possessing a specified
> mass, a specified charge, a specified location, and no other
> properties. Sprinkle them about in space as you will, you will not
> create a 'sensation of color'. Equip them with a certain dynamics,
> and you may be able to construct an 'environment with properties'
> and a 'stimulus classifier'; name some of those environmental
> properties 'colors' and some of the classifier's states 'sensations
> of color', and you may be able to mimic the apparent causal
> relations between our environment and our sensations of color; but
> the possible world you have thereby specified does not contain
> sensations of color as we know them, and therefore cannot be the
> world we are inhabiting.
I'm not clear if you're building a classifier out of the particles
and embedding it in the universe, or tacking something on to the base
physics. I agree that the latter wouldn't work anything like our
universe, where secondary properties are entirely higher-level
regularities in the group of particles that constitute the perceiver.
But in either case, if you imitate /all/ the causal properties of
the human concepts of colour (including intermediate sensory
processing details that affect our conscious reasoning but which we
can't clearly describe), how is that not sensations of color?
Substrate independence includes both the internal details of
black-box lower level algorithms (assuming they don't systematically
affect the output) and the details of whatever physics you are
implemented in. The more coherent proposals of the people who want
qualia to be part of physics are superficially plausible because in
principle qualia could really work as ontological primitives; it's
just that there's overwhelming evidence that our universe does not
work like that.
> We are faced, not just with a self-denying sensibility which wishes
> to assert that colorless matter in motion is all that exists (in
> which case the secondary properties - the qualia - are either
> mysteriously identical with certain unspecified conjunctive
> properties of large numbers of these particles, or even more
> mysteriously do not exist at all),
People (well, most of them) don't go around saying that 'thoughts do
not exist' because they're happy with the idea that thoughts are
inside their head, and that they exist as patterns that the basic
elements of their brain (e.g. synapses, activation spikes) adopt. The
confusion about qualia exists because people intuitively believe that
qualia are 'out there' rather than 'in here'. This is sensible for
normal reasoning; it saves a dereference to just store 'the bus is
red' rather than 'there is an object that reliably causes the red
detector to fire'. As usual, though, direct intuition is worse than
useless - actively misleading - when trying to unravel how human
cognition works.
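In data structure terms the shortcut looks something like this (purely
illustrative; the reflectance numbers are invented):

  # Toy illustration of why 'out there' is the cheap representation:
  # caching the property on the object saves a level of indirection
  # through the perceiver's own detector.

  # What people effectively store:
  bus = {"colour": "red"}                 # 'the bus is red'

  # What is actually going on, spelled out:
  def red_detector(surface):
      return surface["long_wavelength_reflectance"] > 0.6

  bus_surface = {"long_wavelength_reflectance": 0.8,
                 "short_wavelength_reflectance": 0.1}
  causes_red_experience = red_detector(bus_surface)

  # Same practical conclusions, but the second form keeps the perceiver
  # in the causal chain; the first quietly drops it.
  print(bus["colour"] == "red", causes_red_experience)
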
> Mathematical physics, as we know it, is both an apex and a dead
> end. No amount of quantitative predictive progress through better
> model-building is going to explain consciousness, because the
> models in question exclude certain aspects of reality *by
> construction*.
I disagree, assuming you allow the search for and use of higher level
regularities (and hence all parts of the unified causal model other
than the physical foundation). I have yet to hear any coherent
question about consciousness on this list which cannot be irrefutably
answered given enough research and computer time (though the amount
required may well be large enough that it won't be answered any time
soon).
> But I do regard it to be a kind of arrow to be shot at physical
> reductionists - which is to say people who believe that talking
> about brain states is the same thing as talking about mental states.
So why isn't this just a question of level of description and
reflective accessibility?
> There is something which pain is like which is not described by
> physics equations, even if physics equations can account for the
> progress of the state of the world.
People are going to keep saying this until we can in fact tell you
exactly how pain works, what structures it affects, and the entire
causal chain from stubbing your toe to saying 'pain sucks, why can't
we engineer it out' at the most convenient abstraction level. Right
now people see physics, see that their mental representation of
physics is nothing like their mental representation of pain, and
automatically disregard the possibility that one could include the
other. Actually some people will keep saying it anyway, but that's
the perverse attraction of the unknowable for you. I realise that it's
difficult to accept 'yes, we can describe it' before we actually have
the description, but that conclusion is itself a fairly direct logical
consequence of the science we already have.
> A stock example of an inexact or vague predicate is baldness. You're
> not bald when you have a full head of hair; you are bald if you have
> none; but what if you have one hair, ten hairs, a thousand hairs;
> where is the dividing line between bald and not-bald? There is no
> reason to think that there is any way to answer that question
> without arbitrary stipulation that, say, 1000 hairs is the most you
> can have and still be bald.
I don't think you need to be so concerned about using exact
predicates. Fuzzy predicates are a direct consequence of the fact that
tractable classifiers are usually unreliable, particularly on human
neural hardware; the probability of returning 'bald' on seeing a head
varies smoothly from 'nearly 1' below some lower threshold of hair to
'nearly 0' beyond a higher threshold. I agree that exact predicates
are usually preferable when constructing a precise theory, but we have
a /long/ way to go before we're at that point with subjective
sensation. Right now the terms people are using have much more serious
clarity issues than being based on simple probabilistic classifiers.
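For instance, the 'bald' classifier could be sketched as a logistic
curve over hair count (the midpoint and steepness are invented
numbers):

  import math

  # Toy probabilistic 'bald' classifier: the verdict falls off smoothly
  # with hair count instead of flipping at a stipulated boundary.
  # Midpoint and steepness are made-up parameters.

  def p_bald(hair_count, midpoint=20000, steepness=0.0005):
      return 1.0 / (1.0 + math.exp(steepness * (hair_count - midpoint)))

  for hairs in (0, 1000, 20000, 50000, 100000):
      print(hairs, round(p_bald(hairs), 3))
  # ~1.0 near zero hairs, 0.5 at the midpoint, ~0.0 for a full head.
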
> But if we consider all possible distributions of electrons
> throughout a transistor, there will clearly be marginal cases.
We solve this by adding a third 'undefined' state. We set the
thresholds for '0' and '1' such that all states in those categories
result in predictable behaviour. All marginal cases, and in practice
some predictable states whose distance to the threshold is within the
range of measurement error, go in the 'undefined' category.
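In pseudo-hardware terms (threshold voltages and the error margin are
made-up numbers):

  # Toy three-way read-out: everything near a boundary goes to
  # 'undefined' rather than being forced into an exact 0/1 split.

  V_LOW, V_HIGH = 0.8, 2.0    # volts: below -> 0, above -> 1
  MARGIN = 0.1                # allowance for measurement error

  def read_bit(voltage):
      if voltage < V_LOW - MARGIN:
          return 0
      if voltage > V_HIGH + MARGIN:
          return 1
      return "undefined"      # marginal cases never reach the logic

  print([read_bit(v) for v in (0.2, 0.75, 1.4, 2.5)])
  # [0, 'undefined', 'undefined', 1]
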
> You have to draw an exact boundary in configuration space and say,
> these are the ones, these are the zeroes; these are the blue qualia,
> these are the green qualia; this is the thought of an apple, this is
> the thought of a red apple.
In the real world, probabilistic inference is generally Good Enough.
Though I admit that determined dualists might try to hide behind any
remaining margin of error; on the plus side, that will drive efforts
towards ever-increasing accuracy. :)
> I take it as given, not just that there are qualia, but that there
> is awareness of qualia, and an associated capacity to reason about
> their nature, and that this is what makes phenomenological
> reflection possible in human beings.
Since the human brain is an evolved structure, do you believe that
natural selection discovered qualia or invented them? Either way, if
you think qualia are indivisible primitives, please answer the
question 'what use is half a quale?'
Ben Goertzel wrote:
> I tend to think that if one builds a software program with the
> right cognitive structures and dynamics attached to it, the qualia
> will come along "for free". Qualia don't need to be explicitly
> engineered as part of AI design, but this doesn't make them any
> less real or any less important.
True on two counts. Firstly, 'qualia' in the sense of sensory data
structures exist whenever there is a sensory modality feeding the
reasoning system of a general intelligence. Secondly qualia with
something like human sensation's mysterious primitiveness are likely
to result from any cognitive architecture which uses an opaque design
(or rather, transparent enough at the high level to not be entirely
brittle, but opaque at the lower sensory processing levels), and Ben
generally goes for fairly opaque designs. That said you're not going
to get closely humanlike sensation from anything other than a close
copy of the human sensory processing stack. Again, think of the causal
mechanisms that produce inference and behaviour, and how sensitive
the conclusions of (particularly reflective) inference and resulting
behaviour are to changes in those mechanisms.
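A minimal sketch of why that opacity produces the 'mysterious
primitiveness' (everything here is invented): the reasoning layer can
use and compare the sensory module's outputs, but any attempt to
introspect on how they were produced bottoms out.

  # Toy sketch: a reasoning layer fed by an opaque sensory module. It
  # can compare the module's outputs but cannot look inside them, which
  # is roughly the human situation when asked 'so what /is/ blue?'.

  def _opaque_colour_processing(rgb):
      # Imagine this compiled away / not reflectively accessible.
      return max(range(3), key=lambda i: rgb[i])   # dominant channel

  class ReasoningLayer:
      def __init__(self):
          self._percepts = []

      def see(self, rgb):
          self._percepts.append(_opaque_colour_processing(rgb))

      def same_colour(self, i, j):
          return self._percepts[i] == self._percepts[j]   # usable

      def what_is_it_like(self, i):
          return "..."    # introspection bottoms out at a primitive

  mind = ReasoningLayer()
  mind.see((10, 10, 220))
  mind.see((5, 30, 190))
  print(mind.same_colour(0, 1))     # True: the capability is there
  print(mind.what_is_it_like(0))    # '...': the explanation isn't
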
Norm Wilson wrote:
> What level of modeling is necessary and sufficient to build a
> program with the "right cognitive structures and dynamics"?
A sufficiently flexible layering system to draw a wide range of
possible inferences from a limited set of object 'secondary
properties'.
> Is it sufficient to model the brain at the level of neurons, atoms,
> quantum mechanics (e.g., Penrose-Hameroff), or sub-quantum
> features of reality?
All of those are progressively massive levels of overkill for the
problem; I know Ben at least strongly suspects this, otherwise he
wouldn't believe that his much-more-abstracted-than-NNs AGI design
would have a chance of producing 'qualia'.
> How can we (or an AI) know, let alone prove, that a sufficient
> model for consciousness has been created? Can we define a
> "Turing Test" for qualia?
Yes, and in principle we can look at the functional structures in the
AGI and compare them to a detailed functional analysis of the human
brain (once we have one).
> I'd like to reformulate the hard problem in purely materialistic
> terms as one of "completeness", in which the burden is on the
> materialists to demonstrate that his or her particular model of
> consciousness is complete.
Generally I put the burden of proof on whoever can't formulate a
well-defined question or answer. If the model can generate answers
that are well-defined enough to be verifiably correct for all the
well-defined questions people can think of, then it's the best model
we have. Right now you want the materialists to do your homework for
you in translating wishy-washy stuff such as 'what is meaning' into
something concrete; we hope to be able to say 'this is the structure
in your brain for the concept of 'meaning', here are the kinds of
things it matches, here is what those things do, here is what life
would be like if you didn't have that concept, here is why you
generate the answers you do to questions about meaning'. If you want
more than that you're going to have to specify the question better,
lest I just get frustrated and ask my Oracle which 27 words will
instantly and reliably transform you into a qualified materialist ;)
* Michael Wilson