Re: Edge.org: Jaron Lanier

From: Perry E. Metzger (perry@piermont.com)
Date: Sun Nov 30 2003 - 08:49:45 MST


"Colin" <chales1@bigpond.net.au> writes:
>> Or, to throw the spear straight at Mr. Searle, is a
>> simulation of an addition somehow different from "really"
>> adding two numbers?
>
> Do you mean the role the subjective experience of the phonetics in
> learning the addition? Or the subjective experience of the visual
> representation as it relates to the abstractions of quantity and
> operators? Or the relatively experienceless process of habituated
> addition?

No. I mean if I simulate the addition of two numbers, am I really
adding them?

Searle keeps saying things like "a simulated hurricane doesn't blow
anything over". Fair enough. But if I simulate adding two things (that
is, have a non-human construct follow some algorithms humans use for
adding and add them), is the output just "simulated" addition, or
actual addition?

(If you think I'm being disingenuous in asking this, of course I am.)
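
To make the question concrete, here is a minimal sketch of such a
construct -- my own illustration, in Python, not anyone's canonical
definition. It "adds" by blind lookup in a single-digit rulebook,
following the grade-school carry procedure, and understands quantity
no more than the man in the room understands Chinese:

    # Rulebook: (digit, digit, carry-in) -> (carry-out, digit-out).
    # Written out by someone who does know arithmetic, just as the
    # room's rulebook was written by someone who does know Chinese.
    RULEBOOK = {(a, b, c): divmod(a + b + c, 10)
                for a in range(10) for b in range(10) for c in (0, 1)}

    def simulated_add(x, y):
        """'Add' two non-negative integers by blind table lookup on
        their decimal digits, least significant digit first."""
        xs, ys = str(x)[::-1], str(y)[::-1]
        digits, carry = [], 0
        for i in range(max(len(xs), len(ys))):
            a = int(xs[i]) if i < len(xs) else 0
            b = int(ys[i]) if i < len(ys) else 0
            carry, d = RULEBOOK[(a, b, carry)]
            digits.append(str(d))
        if carry:
            digits.append("1")
        return int("".join(digits[::-1]))

simulated_add(476, 589) hands back 1065, exactly what "real" addition
gives -- which is precisely the question at hand.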

I note that you cut out my other comment. I'll bring it back.

   (Presumably a Chinese room made up of neurons in a small dense volume
   following a deterministic program can't be conscious because none of
   those neurons, when interviewed, experience qualia individually. :)

All of Searle's arguments add up (pardon the expression -- I know I
can't really add, merely simulate addition) to a big question he keeps
begging but is never willing to face. If he's right, what makes us
think people are conscious, either? After all, if the man in the
Chinese room (and his pencil and paper and rulebook) following
instructions blindly doesn't have the "Subjective Experience" of
knowing Chinese, why is it any different for our neurons? Hell, it is
even worse -- the man in the Chinese room presumably is conscious
himself, but I strongly suspect none of my neurons are individually
conscious at all.

So if Searle is right, why are people conscious at all?

(In some sense, of course, Dennett takes up this very argument in
"Consciousness Explained"...)

> You fall into the trap that I spent so much time delineating in the
> previous post - Discipline blindness - You go anti-Searle without being
> able to prove conclusively that the subjective experiences are
> unimportant in intelligence.

Hardly. I make no claims at all. I don't even claim you are
conscious. In fact, I defy you to prove that you are.

But as I'm discipline blind, perhaps you, as a person who has
discipline sight, would care to guide a poor misguided wanderer into
the direction of a more objective understanding of consciousness?

> "I/We/They am/are _BOTH_ right and wrong in some way not yet understood.
> I must cease alliances with bandwagons, drop dogma and question every
> assumption, every convention, every expression. Despite all my attempts
> my view of brain matter (maybe all matter) is missing an important
> ingredient and this ingredient's importance in what I am doing is not
> known".
>
> Nobody can possibly take a totally provable stance for or against Searle
> or anyone else!

Indeed, I would agree with you that there is no way to prove or
disprove Searle's stance. It is what some people call
"non-falsifiable". Are you familiar with what that implies?

>> a synthetic construct that passes (or more to the ultimate point,
>> surpasses) the Turing Test is not a problem of producing a
>> consciousness -- it is a problem of producing a black box
>> that has a particularly observable external behavior.
>
>> (Indeed, one might easily argue that, from the point of view
>> of the Friendly AI people, it is unnecessary that the god
>> they wish to create be conscious so long as it acts as though
>> it has a conscience, whether it is "aware" of it or not.)
>
> OK. Again...
>
> That the functionalist/computationalist approach is the one true path to
> this 'god' is the tacit assumption by all computer science.
> But..........

I said nothing of the sort. I merely said that the Friendly AI folks
seek function in their construct. Whether that function requires
certain things (like consciousness in the Friendly AI) is irrelevant
to their goal, which is not to construct a conscious god but to
construct a god.

> Where is the proof it will/will not 'understand' what it is like to be
> us?
> Where is the proof it will/will not have a conception of 'friendly'?
> Where is the proof it will even know it is there?

Who said it would or would not? I merely noted that it would be
irrelevant to the goal if it did or did not.

> These proofs do not exist. Yet somehow you bow to the great Turing test
> as if it was what is needed to be completely satisfied that real
> understanding exists in an artefact.

I think you miss my point completely. I did not say (in this
discussion) that a Turing test proves anything about internal
experience at all -- merely that if something passes a Turing test it
is externally/functionally behaving as though it were intelligent. I
leave alone the question of the construct's internal experience
entirely.
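
To put the same point in engineering terms, here is a sketch -- names
and shapes my own invention, nothing canonical -- of what
"externally/functionally behaving" means. The test sees only the
input/output mapping, so two constructs realizing the same mapping are
indistinguishable to it, whatever is or is not going on inside:

    # The entire interface the Turing test can observe: text in, text out.
    from typing import Callable

    Interlocutor = Callable[[str], str]

    def transcript(candidate: Interlocutor, probes: list[str]) -> list[str]:
        # The judge sees only replies. A conscious mind and a blind
        # rulebook that realize the same mapping yield the same transcript.
        return [candidate(p) for p in probes]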

Have you ever read "The Unfortunate Dualist" by Smullyan, by the way?

> Computer science/AI always assumes, again tacitly, that the humble quale
> is optional and/or emergent from representational complexity in any
> form.

I don't believe any such assumption is "always" made. Perhaps you
could provide us with a proof of this?

> If I remember correctly Searle has backed off from the 'biology only'
> stance (Chinese room era) to a non-biological matter-as-computation
> stance. Brain matter is only proven sufficient, not necessary.

But he's already proven so well that nothing is sufficient, hasn't he?
I mean, there is no way that you could possibly be conscious, either,
if he's correct.

> Is "computer science" even science?

Can one write haiku about toothbrushes?

Perry


