From: Chris Capel (pdf23ds@gmail.com)
Date: Wed Jul 20 2005 - 07:56:17 MDT
On 7/20/05, pdugan <pdugan@vt.edu> wrote:
> >We have access to consciousness through introspection. Can we identify
> >which elements of consciousness are arbitrary, and which are not? To
> >put it another way - can we identify which elements of ourselves might
> >be preserved, or perhaps even necessarily must be preserved, in
> >another kind of mind?
>
> Before you answer that question you have to consider through introspection
> this question: to what extent does human wetware cognition preserve
> non-arbitrary components? For instance, I have rational structures into which
> I plug symbolic data gleaned from my sensory modalities; if my sensory modalities
> were to change, say in a simulated (or subjectively real) universe with
> different physics regarding just photon dynamics, would my symbolic
> interpretations become radically different from all prior earthly ontologies?
> Would my rational structures cease to be useful and be discarded? Would I
> enter a cognitive dimension where Bayes' lost all meaning? Would this
> transition be temporary or permanent? Would this transition make me crazy or
> enlightened? Or both? My inclination is that these questions are undecidable,
> leading me to conclude that I can't identify any non-anthropomorphic value
> worth keeping.
Whether we would be completely overwhelmed by a radically new
environment depends on how much our brain is able to change to
accommodate that new environment. If the brain is able to find some
foothold in the new world that enables it to make sense of at least
some of its input, and can gradually build up from there, it's
possible that the "human" mind, greatly changed, could have some
meaningful interaction with such a world.
My own inclination is that it's fairly likely that the human cognitive
adaptations would completely fail us in these circumstances, as the
brain has evolved not as a general-purpose computing device (except
inefficiently at the very highest levels) but as a device with a
number of processes specifically designed to process information from
our particular subjective universe. To what extent is it meaningful to
say "if my sensory modality were to change"? Modalities determine what
format information about the world will be stored in by our memories,
and brain modules geared toward processing some specific aspect of our
cognitive model of reality are often (always, in some combination)
reused for new purposes when we think abstractly. So we would fail in
this other world, because too much is hard-wired.
Would it be useful for a FAI to have more generality in the way its
modalities function? Well, not at the beginning. Since we don't know
of any worlds where Bayes' theorem doesn't hold, there's not much point in
trying to build an AI that would work in one. If the AI is able to find
such a world verself, ve can self-modify if ve decides ve wants to
bother with it.
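
(To be concrete about what I mean by "Bayes holding": just the ordinary
posterior update, which doesn't depend on any particular sensory modality
or wetware. A toy sketch in Python, with made-up numbers, not a claim
about how any actual AI would implement it:

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        # P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded over H and not-H
        p_evidence = (p_evidence_if_true * prior
                      + p_evidence_if_false * (1 - prior))
        return p_evidence_if_true * prior / p_evidence

    # A sensor that fires 90% of the time when the hypothesis is true and
    # 5% of the time when it's false, applied to a 1% prior:
    print(bayes_update(0.01, 0.90, 0.05))   # ~0.154

A world in which that arithmetic stopped being the right way to weigh
evidence is a world I don't know how to reason about at all.)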
> >Is emotion, for example, a natural byproduct of the combination of
> >intelligence, consciousness and experience? Perhaps it is not - but
> >perhaps there are some identifiable examples.
>
> Intelligence, as we've discussed, can be thought of as a utility function or
> optimization process, consciousness is a neural feedback loop (though a
> mysterious one indeed) and experience is sense data compressed to symbolic
> autopoiesis and highly selective memory. Emotions are neuro-chemical functions
> which interact with these mental components. I don't think this implies that a
> chemical or "emotional" context to electronic cognition is inherently
> incompatible with Turing computation. If we could get the kinks out of fluid
> quantum computing, this would be an engineering option worth considering.
Eh? What about emotion is so special that it would require anything
more than a Turing machine to implement as part of a GAI? (That raises
the question of whether it's even desirable for Friendliness; the answer
to that one seems to be an emphatic NO.) How would quantum computing help
anything?
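
(To illustrate the point, not as a claim about how emotion actually
works: anything whose job is just to bias decision-making can be an
ordinary state variable in an ordinary program, so a Turing machine is
plenty. A toy Python sketch, with "fear" as a made-up scalar:

    from dataclasses import dataclass

    @dataclass
    class Agent:
        fear: float = 0.0   # hypothetical stand-in for an "emotion"

        def choose(self, actions):
            # Each action is (expected_value, risk); higher "fear"
            # penalizes risky actions more heavily.
            return max(actions, key=lambda a: a[0] - self.fear * a[1])

    # Agent(fear=1.0).choose([(10, 8), (6, 1)]) picks the safe (6, 1),
    # while Agent(fear=0.0) picks the risky (10, 8).

Nothing chemical or quantum is doing any work there.)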
> I'm a proponent of the notion that irrationality is rationality if construed
> in an autopoietic system with different underlying rules and axioms. As I
> suggested above, a mind privy to worlds with utterly different ontologies
> might not give much of a damn for human logic. Whether this translates into our
> annihilation or the gentle amusement of the AI is the six billion person
> question.
I don't quite understand what kind of threat you see in an AI suddenly
understanding a different ontology and going crazy. How likely would
that be?
On the other hand, the question of what "attitudes" to instill in an
AI seems to depend on a conception of "attitudes" that applies
meaningfully to a particular AI design, which in turn presumes
considerable knowledge of that design.
Chris Capel
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennet)