From: Ben Goertzel (ben@goertzel.org)
Date: Mon Oct 10 2005 - 20:03:04 MDT
Mungojelly,
You wrote this:
> One last thing: The shape of emergence of computer intelligence is not
> going to be anything like our intuitions that we draw from the emergence
> of biological intelligence. Our biological intelligence emerged very
> slowly by incremental changes, methodically exploring not the whole space
> of what's possible to compute, but rather the space of what computations
> are sufficient to keep a monkey alive long enough to fuck. Electronic
> intelligence is not going to emerge gradually through generations of
> whole cohesive systems. It's not a baby.
I wrote something somewhat pertinent to this earlier today. It was part of
a longer essay on immortality, but, even out of context, it may be
comprehensible.
This passage occurs right after some text discussing the neuroscience
arguments that free will, the phenomenal self and the continuity of
consciousness are all basically illusions constructed by the mind in its
attempt to rationalize and understand itself, and generally to make its
life easier...
"
One can perceive the preservation unto eternity of the human illusions of
free will, self and continuity of consciousness as a good thing - or one can
view it as a burden, like the preservation unto eternity of stomachaches and
bad tempers and pimples. An equally valid, alternate perspective holds that
human-style individual minds, ridden with illusions as they are, are merely
an intermediary phase on the way to the development of really interesting
cognitive dynamics.
Among humans, illusions like will, self and consciousness-continuity are
just about inevitably tied in with intelligence. Highly rigorous long-term
routines like Zen meditation practice are able to whittle away the
illusions, but they seem to have other costs - I don't know of any Zen
masters who make interesting contributions to science or mathematics, for
example. Among humans, the reduction of these illusions on a practical
day-to-day basis seems to require so much effort as to absorb almost the
entire organism to the exclusion of all else. Yet the same will not
necessarily be the case for superhuman AI's, or enhanced human uploads, or
posthuman humans with radical brain improvements. These minds may be able
to carry out advanced intellectual activity without adopting the illusions
that are built into the human mind courtesy of our evolved brains.
A mind without the illusions of self, free will or continuity of
consciousness might not look much like a "mind" as we currently conceive
it - it would be more of a "complex, creative, dynamical system of
inter-creating patterns." FutureBen and FutureBush, as envisioned above,
are actually fairly unadventurous as prognostications of the future of
mind - as described above, they're still individuals, with individual
identities and histories; but it's not at all clear that this is what the
future holds in store. If one's value system favors general values like
freedom, growth and joy (Goertzel, 2004a), rather than primarily valuing
humanity as such, such a posthuman relatively-illusion-free mind may be
considered superior to human minds, and the prospect of immortality in
human form may appear like a kind of second-rate "booby prize."
All these issues center around one key philosophical point: What is the goal
of immortality? What is the goal of avoiding involuntary death? Is it to
keep human life as we know it around forever? That is a valid, respectable,
non-idiotic goal. Or is it to keep the process of growth alive and
flourishing beyond the scope painfully and arbitrarily imposed on it by the
end of the human life?
Human life as it exists now is not a constant; it's an ongoing growth
process; and for those who want it to be, human life beyond the current
maximum lifespan and beyond the traditional scope of humanity will still be
a process of growth, change and learning. Fear of death will largely be
replaced by more interesting issues like the merit of individuality and
consciousness in its various forms -- and other issues we can't come close
to foreseeing yet.
It may be that, when some of us live long enough and become smart enough, we
decide that maintaining individuality and the other human illusions unto
eternity isn't interesting, and it's better to merge into a larger posthuman
intelligent dynamical-pattern-system. And it may be that others of us find
that individuality still seems interesting forever. Resource wars between
superhuman post-individuals and human individuals can't be ruled out, but
nor can they be confidently forecast -- since there will likely be so many
resources available at the posthuman stage, and diversity may still seem
like an interesting value to superhuman post-individuals (so why not let the
retro human immortal individuals stick around and mind their own business?).
These issues are fairly hard to "feel out" right now, stuck as we are in
this human form with its limited capacity for experience, intelligence and
communication. For me, the quest for radical life extension is largely
about staying around long enough, and growing enough, to find out more about
intriguing (philosophically, scientifically and personally fundamental)
issues like these.
"
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:23:04 MST