From: Colin Hales (firstname.lastname@example.org)
Date: Wed Jan 01 2003 - 21:43:38 MST
Ben Goertzel wrote:
> Colin Hales wrote:
> > Imagine that inside our heads, intimately attached to and
> > driven by the
> > neural/glial activity we observe, is an as yet unspecified
> > emergent effect.
> > <<Not pinching Penrose/Hameroff stuff either :-)>>.
> I don't agree with this, and I suspect most folks on this
> list do not...
If you mean the Penrose/Hameroff (QM) thing, me neither. IMO it's way too
complicated, assumes this weird proto-quantum-consciousness stuff
(ultimately), and is probably too computationally brittle, although my mind
is open to the possibility. It does, however, give the list of potential
'phenomena X' a length of at least 1.
If you mean 'physicalist' solutions in general, then I'm not convinced yet.
I can't rule them out; not enough work has been done on them by physicists.
The philosophers have given us colourblind Mary, red Fred, duplicate Earths,
zombies and a Chinese room. Fun, but nothing I can build!
The other main physicalist candidate X is the 'electromagnetic theory' (EM).
That makes the X-list length 2, and it will stay at 2 unless someone can see
something else in there. The former (QM) uses the cytoskeleton; the latter
(EM) uses everything that isn't the cytoskeleton. Kind of limits our list a
little, doesn't it? Nothing is left except maybe a grab-bag of mysterianism.
If both QM and EM are wrong, we are left with the computational
(knowledge/functionalism) emergence option.
My bet? If I have to choose a physicalist X, it's X = EM. I am unable to
rule out EM yet.
Any other candidates? Anyone?
> I do agree that there is a lot we don't understand about
> consciousness & its connection to physical reality.
> However, I am not at all sure that we need to fully
> understand the nature of
> consciousness, in order to create an AGI.
> Similarly, we don't need to fully understand the nature of
> energy to create
> an engine. And we don't need to fully understand the nature
> and origin of life to create an artificial organism
> (a project Craig Venter and his team are now working on).
Assuming X is real and needed, I'm sure the lack of a full description of X
will not stop anybody making something that has it. Like the engine: we
built a lot of them while working out how they worked. We just made really
bad ones for a while.
> In another decade, you and I may be chatting on this list with an AGI
> system, mutually pondering the mysterious and beautiful nature of the
> awareness we all share...
I am so looking forward to that day! Let's make it a WILL, not a MAY. :-)
> Having said that, I do have my own speculations regarding
> which I'll briefly share. I stress that these speculations
> are only loosely related to my own practical AGI work.
> Firstly: As for the "X-factor" underlying consciousness, my
> money is on
> randomness, which I believe must be considered subjectively
> -- "randomness
> with respect to a particular observer." [Roughly speaking, X
> is random with
> respect to observer O if O cannot produce a better program
> for computing X
> than "list the components of X". This has been extensively
> formalized in algorithmic information theory.]
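Ben's observer-relative randomness can be made slightly more concrete with a
toy sketch. Here a general-purpose compressor stands in for the observer O:
the compressed form is the observer's best 'program' for reproducing X, and
if that program is barely shorter than X itself, X is (approximately) random
with respect to that observer. This is only a crude stand-in for the
algorithmic-information-theory formalism; the compressor-as-observer framing
is my gloss, not Ben's:

```python
import os
import zlib

def randomness_wrt_zlib(x: bytes) -> float:
    """Length of zlib's best 'program' for x, relative to x itself.
    Ratios near (or above) 1.0 mean x looks random to this observer."""
    return len(zlib.compress(x, 9)) / len(x)

patterned = b"abc" * 1000    # highly regular: a short 'program' exists
noise = os.urandom(3000)     # incompressible as far as zlib can tell

print(randomness_wrt_zlib(patterned) < 0.1)   # True: far from random
print(randomness_wrt_zlib(noise) > 0.9)       # True: random w.r.t. zlib
```

The point of the "with respect to observer O" clause shows up immediately:
a smarter observer than zlib could find structure in strings zlib cannot
compress, so the same X can be random for one observer and ordered for
another.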
OK, so we have a new possible X. When I look at it, however, the qualia
remain a functionalist emergence pumped by physical randomness. The qualia
are not expressed directly by the X, so it doesn't really fit the
requirements for being an X. Qualia are still a computational emergence,
albeit one sitting on a physical phenomenon (randomness). The solution is,
from the point of view of the production of qualia, a functionalist one.
Have I characterised it correctly? If so the number of physical Xs remains
2, but we have another "arrow in the functionalist quiver".
> More specifically...
> The mind of a physical system is a fuzzy set whose elements
> are drawn from
> the set of patterns in that system. The set of patterns in
> the system form
> an "emergent dynamical system" related to, but different
> from, the dynamical
> system of the physical system itself.
> The dynamics of the mind-system associated with a physical
> system, may be to
> some extent unpredictable (i.e. random) with respect to *the
> system itself*. This leads toward the phenomenon of consciousness.
> The *intensity* of consciousness has to do with the rate of
> flux between the
> random and ordered realms, occurring in a mind-system. The
> emergence of
> patterns in portions of the mind-system that were previously
> opaque to the
> mind-system, and the descent into opacity of portions of the
> mind-system that
> were previously patterned (in the mind-system's own perspective).
> This is yet another one of those interesting lines of thought
> that I've put
> on the back-burner for a while, focusing instead on revising
> the in-process
> Novamente book, and on Novamente-based product engineering...
> But if this line of thinking is at all on-target, then
> consciousness is a systemic, emergent phenomenon, not tied to any
> physical phenomenon.
The semantics of the word 'consciousness' have to be carefully crafted here.
You and I probably think different things. In my little parochial analytical
world, consciousness is a process, not a phenomenon. When it's fully running
we are awake :-). It has contents which are 'awarenesses', some of which are
coloured/labelled with qualia. I suspect that is not what you mean by
consciousness. Messy one. I think we understand each other, though. :-)
That said, it comes down to whether you would have pure computational
artifacts express qualia or have physical phenomenon X express them. The
former makes it a whole bunch easier for a computer scientist to do an AGI.
I agree with you that if the qualia are a pure computational artifact,
randomness could play a significant role in driving the patterns. A little
computational universe with qualia forming and precipitating out of
nothingness. Equivalent to particles from quantum froth, yes?
Try as I might, though, I can't make myself buy into the pure computational
emergence solution yet. The EM world has a lot of undiscovered country.
The telling thing will be in the early AGI models. Remember the 3rd or 4th
'Alien' movie? There was a single remnant 'synthetic' that survived a
product recall. The rest were trashed because they 'felt too much' and
became useless as slave/drones. They had better moral backbone than humans!
They truly felt existential angst and acted on it so much that it stopped
them being useful. Humans preferred their Homer Simpson drones. :-) It may
actually turn out a bit like that.
If the X phenomenon is truly needed, early AGI work that excludes it will
produce a sort of greyed out, not quite there creature that we'll all think
just doesn't get it. However, as a product maybe that's what we want. As
long as it shuts up and does the vacuuming, who cares :-). I think there's
room for all outcomes. Why would an AGI drone that maintains a road
intersection and lives in a hole in the ground want a fully fledged
sentience with all the trimmings of a human?
If there is no physical phenomenon X, then we'll get our artificial poet
with computational emergence, somehow. The AGI will have to 'see' and react
to patterns in the shape of its executing code (some of which is created by
sensing) - from outside the code - then feed the 'experience' back inside
the code. This is the nature of the required computational emergence, isn't
it? (It's the emergence you described above, expressed slightly
differently.) Exactly analogous to the physical X version, but without a
physical carrier in which the 'experience' is expressed - a 'virtual
carrier' of qualia is created, if you like. As long as there is no human
involvement (coding/prescription) in the loops... out pops qualia: pain,
red, cold...
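The loop just described - code observing a pattern in its own execution
from 'outside', then feeding the observation back in - can at least be
caricatured in a few lines. This shows only the *shape* of the loop, not
any emergence, and every name in it is invented for illustration:

```python
import hashlib

def observe(fn, state):
    """The 'outside the code' view: extract a pattern from the
    executing code (its bytecode) plus the current state."""
    raw = fn.__code__.co_code + repr(state).encode()
    return hashlib.sha256(raw).hexdigest()[:8]

def update(state, observation):
    """Feed the 'experience' back inside: the next state depends on
    the observation of the code itself. Keep a short rolling history."""
    return (state + [observation])[-5:]

state = []
for _ in range(3):
    state = update(state, observe(update, state))
print(len(state))  # → 3
```

Of course, the observation here is prescribed by a human (me), which is
exactly the involvement the paragraph above says must be absent.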
Hmmm. I can't see this happening in traditional software at all. A
hardware'd functional language - a CA, maybe. Hardware Haskell? Definitely
not in an 'imperative' type language in any form.
Thinking out loud again. Sorry.
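For what it's worth, the CA idea at least has concrete toy instances. Rule
110, a one-dimensional cellular automaton with an eight-entry update table,
is known to be Turing-complete. A minimal sketch (illustrative only - nobody
is claiming qualia fall out of it):

```python
# Toy 1-D cellular automaton: Rule 110, a simple local rule whose
# iterated behaviour is complex (it is known to be Turing-complete).
RULE = 110

def step(cells):
    """Apply the rule to every cell, with fixed zero boundaries."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        # Encode the 3-cell neighbourhood as a number 0-7 and look up
        # the corresponding bit of the rule number (Wolfram encoding).
        neighbourhood = (left << 2) | (cells[i] << 1) | right
        out[i] = (RULE >> neighbourhood) & 1
    return out

# Start from a single live cell and watch structure emerge.
cells = [0] * 40
cells[-1] = 1
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The interesting (and, for this thread, telling) property is that nothing in
the eight-entry table hints at the structures the iteration produces.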
If an X is needed and not obvious, the 'functionalist emergence' solution
may actually have to be tried and fail in order to force us to accept one of
the X candidates. What a gut-wrenching experience that would be! I'd better
find and prove an X then, and quickly, eh?
> In fact, I suspect that a correct understanding of consciousness as a
> systemic, emergent phenomenon *may* be helpful in our
> understanding of the
> physical universe -- the quantum theory of measurement, Grand
> Unified Field
> Theory, etc. etc. Rather than looking to physics to save cognitive
> psychology and AGI, I'm more inclined to look to cog psych
> and AGI to save physics ;-)
> -- Ben Goertzel
When we get qualia sorted out, I think we'll get emergence sorted as well
and a new version of 'reality' thrown into the bargain.
Bruno et al on the 'everything' list have physics, at its very highest
limits, looking a bit like psychology. Ben, you may get your wish yet!
The year is afoot! Gotta go play. Good luck with your physicalist phenomena,
whatever they are.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT