From: Woody Long (ironanchorpress@earthlink.net)
Date: Mon Oct 31 2005 - 01:54:49 MST
Yes indeed, let's delve into this a little more.
First, a little android humor: "Android Builders do it with the lights on."
[Original Message]
> From: Olie Lamb <olie@vaporate.com>
>
> >
> I was never trying to disprove the Dual Sound Source Experiment. The
> experiment is interesting, and "valid".
>
> I was saying that the inferences that you draw from it are bogus.
>
> >For the musical experience, my subjective experience is strongly
> >otherwise.
> Let us consider for a moment that the minimum duration of experience is
> 1/10th of a second - 100ms. */this seems to be a reasonably widely held
> idea, but the best source I've found is in Tononi/Edelman's "A Universe
> of Consciousness", and is not supported by research or footnoting. Does
> anyone know any sources on this idea? It would be extremely helpful for
> my PhD thesis!/*
> Now, I can play scales in contrary motion at 6 notes per second (170ms
> per note). My mother can play contrary motion scales 4:3 at 120 bpm,
> which is 125ms and 170ms. When I play these scales, I make a very
> deliberate effort not to concentrate on either scale; I think of the two
> scales together moving against each other. Furthermore, I definitely
> think of two notes as I play them. If one accepts the 100ms minimum
> experience duration, there isn't enough time to switch from one note to
> the other.
> What the hell my mother's brain does to play 4 against 3 contrary
> motion, I don't know.
> >
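For reference, the note-duration arithmetic in the quoted passage works out
as follows. This is only a minimal Python check of the quoted figures; the
100ms minimum-experience duration is taken as an assumption from the quoted
text, not something established here.

# Inter-onset intervals for the scale passages quoted above, compared
# against the hypothesized 100ms minimum duration of experience.
MIN_EXPERIENCE_MS = 100  # assumption taken from the quoted text

def interval_ms(notes_per_beat: float, bpm: float = 120.0) -> float:
    """Time between successive note onsets, in milliseconds."""
    beat_ms = 60_000.0 / bpm          # one quarter-note beat at this tempo
    return beat_ms / notes_per_beat   # evenly spaced notes within the beat

six_per_second = 1000.0 / 6   # ~166.7ms per note (the text rounds to 170ms)
four_stream = interval_ms(4)  # 125.0ms per note in the 4-stream at 120 bpm
three_stream = interval_ms(3) # ~166.7ms per note in the 3-stream at 120 bpm

for label, iv in [("6 per second", six_per_second),
                  ("4-against-3, 4-stream", four_stream),
                  ("4-against-3, 3-stream", three_stream)]:
    status = "at or above" if iv >= MIN_EXPERIENCE_MS else "below"
    print(f"{label}: {iv:.1f}ms per note ({status} the {MIN_EXPERIENCE_MS}ms minimum)")

Each interval is longer than 100ms, but not long enough to fit two
successive 100ms experiences, which is the point the quoted argument turns
on.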
OK, that's disputing the experimental data, not my inferences, so you ARE
trying to subjectively disprove the experimental results. My approach is to
accept that a scientific experiment has been conducted, and that the
resulting objective data shows the subject could only focalize on and
remember one sound source at a time. It was impossible to attend to both.
Then my task is to understand what these experimental results mean for
android engineering, where the goal is to build a human-shaped,
human-functioning robot. In building these androids, the design task is to
mimic natural human functioning as closely as possible, not to build
dual-processing non-human consciousness systems. That is not our field.
These are the inferences of my Long Artificial Self Theory --
Experimental data: The subject could only focalize on and remember one
sound source at a time. It was impossible to attend to both.
Inference 1. The subject is acting as a focalizing agent, focalizing on and
remembering one sound source at a time, never able to attend to both.
Inference 2. This human focalizing agent is generally termed the self.
Inference 3. The root of consciousness, being the source of the memories, was
this focalizing agent, or self, which had no option but to switch its
focalizing attention to one single source or the other.
Inference 4. But therein arises the phenomenon of choice, of human willed
behavior. The subject has no option but to switch to one source or the
other, but within this unwilled, deterministic system, the subject has the
choice to attend to the right ear or the left ear, i.e., the ability to form
the willed behavior of attending to one or the other. And so the mystery of
human willed behavior is finally revealed.
Inference 5. In accordance with the experimental data, all human-functioning
androids, by industry standards, should possess an artificial self,
functioning as the root of artificial consciousness.
---------------------------------
These inferences require no stretch of the imagination. In the android
industry this is all we are interested in - making androids as human-like
as possible, based on experimental knowledge about humans. Androids
absolutely will include an artificial self autonomously driving the
artificial consciousness and tasks of the android. An android with a self,
referring to itself as I, having a personal memory of its experiences, of
its life, and having a self-awareness, is, if nothing else, simply a more
interesting and human-like android, and will ultimately be included as a
standard for this reason alone. The thing is, this conscious, motivationally
autonomous, cognitive artificial-self standard for androids can't be built
without infringing on my patent.
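To make the serial focalizer idea concrete, here is a minimal, purely
illustrative Python sketch. The class and method names are my own invention
for illustration, not the patented design or anyone's actual architecture:
an agent that can focalize on exactly one input stream at a time, remembers
only what it attended to, yet freely chooses which stream to switch to.

# Illustrative sketch of a serial "artificial self": it attends to only one
# source per moment, stores only attended experience in personal memory, and
# exercises choice over which single source to attend to next.
class ArtificialSelf:
    def __init__(self, sources):
        self.sources = sources  # e.g. {"left_ear": [...], "right_ear": [...]}
        self.focus = None       # exactly one source attended at any moment
        self.memory = []        # personal memory of attended experience

    def choose_focus(self, source_name):
        """Willed behavior: select which single source to attend to."""
        if source_name not in self.sources:
            raise ValueError(f"unknown source: {source_name}")
        self.focus = source_name

    def experience_moment(self, t):
        """Only the focalized source enters memory; the other is lost."""
        if self.focus is None:
            return
        stream = self.sources[self.focus]
        if t < len(stream):
            self.memory.append((t, self.focus, stream[t]))

# Usage: two simultaneous sound sources; the self must pick one per moment.
sources = {"left_ear": ["do", "re", "mi"], "right_ear": ["la", "ti", "do"]}
android_self = ArtificialSelf(sources)
android_self.choose_focus("left_ear")
android_self.experience_moment(0)
android_self.choose_focus("right_ear")  # switching is possible; attending to both is not
android_self.experience_moment(1)
print(android_self.memory)  # [(0, 'left_ear', 'do'), (1, 'right_ear', 'ti')]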
>
> Just say that an intelligence can pass an elaborate Turing test in
> such a way that it becomes difficult to deny that it is exhibiting
> conscious awareness. "It Passes." This happens to be a transparent
> digital AI on a parallel platform, and we can retrospectively see its
> computation. (I think most SL4ers won't have too many problems with
> such a scenario.) Now just say that we slow down its mental
> processes (clock speed) so that it has few to no spare operations, and
> ask it to carry on two conversations at once, and it is able to, and we
> examine the computational states, and find that the computational
> processors are not sequentially switching, but rather parallel
> structures are working simultaneously on the two different conversations.
>
As I said, I'm not disputing that a non-human dual-processing consciousness
can be built. That is called switching the subject. My subject here is
human consciousness and its computer implementation, not the subject of
non-human consciousness implementations. I never made a single argument
about non-human consciousness systems.
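The distinction at issue - time-sliced serial switching versus genuinely
parallel processing - can be sketched in a toy way. This is purely
illustrative Python with made-up names; it makes no claim about how such an
AI would actually be examined or built.

import threading

def serial_switching(conversations):
    """Human-like focalization: attend to one utterance at a time, switching."""
    log = []
    for turn in zip(*conversations):       # interleave: A1, B1, A2, B2, ...
        for utterance in turn:
            log.append(f"attending: {utterance}")
    return log

def parallel_processing(conversations):
    """Dual processing: separate structures work on the conversations simultaneously."""
    log = []
    lock = threading.Lock()

    def worker(conversation):
        for utterance in conversation:
            with lock:                     # lock only protects the shared log
                log.append(f"processing: {utterance}")

    threads = [threading.Thread(target=worker, args=(c,)) for c in conversations]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log

conversations = [["A1", "A2"], ["B1", "B2"]]
print(serial_switching(conversations))    # deterministic interleaving by switching
print(parallel_processing(conversations)) # ordering varies from run to run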
Ken Woody Long