RE: Edge.org: Jaron Lanier

From: Colin (chales1@bigpond.net.au)
Date: Sun Nov 30 2003 - 04:13:47 MST


Perry E. Metzger
>
> "Colin" <chales1@bigpond.net.au> writes:
> > A model of a thing is not a thing!
>
> Is a thought of a unicorn a real thought?
>
> Or, to throw the spear straight at Mr. Searle, is a
> simulation of an addition somehow different from "really"
> adding two numbers?
>
> (Presumably a Chinese room made up of neurons in a small
> dense volume following a deterministic program can't be
> conscious because none of those neurons, when interviewed,
> experience qualia individually. :)
>
> Sorry to single out one sentence among many for assault -- I
> just see red when Searle's bizarroid argument gets mentioned
> even indirectly.
>
> As for myself, I don't believe we'll solve the problem of
> consciousness -- and we won't care. The problem of producing
> a synthetic construct that passes (or more to the ultimate point,
> surpasses) the Turing Test is not a problem of producing a
> consciousness -- it is a problem of producing a black box
> that has a particularly observable external behavior.
>
> (Indeed, one might easily argue that, from the point of view
> of the Friendly AI people, it is unnecessary that the god
> they wish to create be conscious so long as it acts as though
> it has a conscience, whether it is "aware" of it or not.)
>
>
> Perry
>

> Is a thought of a unicorn a real thought?

Yes, if you mean the subjective visual imagery associated with
reflective thinking about a unicorn.
Yes, if you mean the auditory subjective experience of the phonemes of
the word 'unicorn' (which you just had when reading the word).
Yes, if you mean the subjectively void associative processes that
generate behaviour in respect of unicorns, such as the muscle triggers
used in speech generation when talking about unicorns.

Brain matter does all of these things. They are all real even if the
unicorn is not. It's time to dump the simplistic linguistic traps set by
philosophers and move on.

> Or, to throw the spear straight at Mr. Searle, is a
> simulation of an addition somehow different from "really"
> adding two numbers?

Do you mean the role of the subjective experience of the phonetics in
learning addition? Or the subjective experience of the visual
representation as it relates to the abstractions of quantity and
operators? Or the relatively experienceless process of habituated
addition?

Again, too simplistic. Not a basis for taking any position one way or
the other.

You fall into the trap that I spent so much time delineating in the
previous post - discipline blindness. You go anti-Searle without being
able to prove conclusively that the subjective experiences are
unimportant in intelligence. How can you do that? Nobody has done it
yet. Has anyone written anything on the proposal that the quale is/is
not the brain's solution to the symbol grounding problem? No. Has
computer science, pumping squillions into AI on the tacit assumption
that the answer is no, proven it one way or the other? No. Has computer
science proven that creating algorithmic models of measurements of a
'thing' captures and produces a subjective experience of 'thingness',
and/or that this 'thingness' experience is/is not optional in relation
to understanding objects with the property of 'thingness'? No!

The solution to the conundrum I delineated in my previous email is to
find the place where the viewpoints of these disparate disciplines will
be found to be both true and false in some way, when viewed in
retrospect from a position of knowledge of the final solution. To find
that solution you have to look at the real evidence and say after me:

"I/We/They am/are _BOTH_ right and wrong in some way not yet understood.
I must cease alliances with bandwagons, drop dogma and question every
assumption, every convention, every expression. Despite all my attempts
my view of brain matter (maybe all matter) is missing an important
ingredient and this ingredient's importance in what I am doing is not
known".

Nobody can possibly take a totally provable stance for or against Searle
or anyone else!

> The problem of producing
> a synthetic construct that passes (or more to the ultimate point,
> surpasses) the Turing Test is not a problem of producing a
> consciousness -- it is a problem of producing a black box
> that has a particularly observable external behavior.

> (Indeed, one might easily argue that, from the point of view
> of the Friendly AI people, it is unnecessary that the god
> they wish to create be conscious so long as it acts as though
> it has a conscience, whether it is "aware" of it or not.)

OK. Again...

That the functionalist/computationalist approach is the one true path to
this 'god' is the tacit assumption of all computer science.
But...

Where is the proof it will/will not 'understand' what it is like to be
us?
Where is the proof it will/will not have a conception of 'friendly'?
Where is the proof it will even know it is there?
Where is the proof this approach is/is not just a sophisticated version
of the ascription children practise when playing with dolls? For that
is what the Turing test is/is not, as the case may be.

These proofs do not exist. Yet somehow you bow to the great Turing test
as if it were all that is needed to be completely satisfied that real
understanding exists in an artefact. Why would anyone attempt to create
such a creature on the basis of such a level of ignorance? How can
anyone present any one position as proven to an investor? I know you
have to start somewhere. But that somewhere is sitting on axiomatic
clouds, and in 2003 it is clearly not producing results, and yet
advocates cling to it like a life buoy.

Computer science/AI always assumes, again tacitly, that the humble quale
is optional and/or emergent from representational complexity in any
form. This is blinkered thinking, and I hope squadrons of Jaron Laniers
line up to poke the whole of computer science, and any other form of
discipline blindness, in the eye to get them to wake up and see there is
a problem.

If I remember correctly, Searle has backed off from the 'biology only'
stance (of the Chinese room era) to a non-biological
matter-as-computation stance. Brain matter is only proven sufficient,
not necessary. This is a reasonable position when surrounded by so much
evidence that there is a subjective experience and that it comes from
brain matter in a fashion without any explanation (and with even less
explanation for models of brain matter run on virtual machines in
manufactured silicon rocks).

Is "computer science" even science? The way it acts leaves me with
doubts. It has hallmarks of religion born of a desperate need to
abstract and virtualise away from the real world. This will work really
well until the real world becomes a mandatory component. Something that
an industry based on abstraction is likely to sail by in blissful
ignorance.

The Jaron Laniers of the world are welcome signs of a rising tide of
necessary questioning and we need to wake up and listen. Put models and
Turing tests and virtual machines aside for a moment and really consider
the act of being in the universe and what that tells you about matter.
Clarity will emerge.

Before then any position taken is just dogma. Hume and Kant showed us
how to throw off the shackles of dogmatism centuries ago. What the hell
is wrong with us? Have we learned nothing?

Colin Hales
*ok I'm done :-) *


