From: pdugan (firstname.lastname@example.org)
Date: Wed Jul 20 2005 - 00:44:04 MDT
>Before leaping from one example of someone who failed to consider that
>a different kind of mind would be, well, different, we should try to
>establish some bounds on the problem.
>We have access to consciousness through introspection. Can we identify
>which elements of consciousness are arbitrary, and which are not? To
>put it another way - can we identify which elements of ourselves might
>be preserved, or perhaps even necessarily must be preserved, in
>another kind of mind.
Before you answer that question you have to consider, through introspection,
this question: to what extent does human wetware cognition preserve
non-arbitrary components? For instance, I have rational structures into which
I plug symbolic data gleaned from my sensory modalities. If my sensory
modalities were to change, say in a simulated (or subjectively real) universe
with different physics regarding just photon dynamics, would my symbolic
interpretations become radically different from all prior earthly ontologies?
Would my rational structures cease to be useful and be discarded? Would I
enter a cognitive dimension where Bayes' theorem lost all meaning? Would this
transition be temporary or permanent? Would this transition make me crazy or
enlightened? Or both? My inclination is that these questions are undecidable,
leading me to conclude an inability to identify any non-anthropomorphic values.
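For concreteness, the Bayesian updating invoked above is a purely formal
operation. A minimal sketch; the hypothesis, prior, and likelihood numbers
here are invented purely for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
def bayes_update(prior, likelihood, marginal):
    """Posterior probability of hypothesis H given evidence E."""
    return likelihood * prior / marginal

prior = 0.01        # P(H): prior belief in the hypothesis
likelihood = 0.9    # P(E|H): chance of seeing the evidence if H holds
p_e_not_h = 0.05    # P(E|~H): chance of seeing it otherwise

# P(E), by the law of total probability
marginal = likelihood * prior + p_e_not_h * (1 - prior)

posterior = bayes_update(prior, likelihood, marginal)
```

Whether such a rule survives translation into a universe with alien physics
is exactly the open question above; the arithmetic itself is trivial.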
>Is emotion, for example, a natural byproduct of the combination of
>intelligence, consciousness and experience? Perhaps it is not - but
>perhaps there are some identifiable examples.
Intelligence, as we've discussed, can be thought of as a utility function or
optimization process; consciousness is a neural feedback loop (though a
mysterious one indeed); and experience is sense data compressed to symbolic
autopoiesis and highly selective memory. Emotions are neurochemical functions
which interact with these mental components. I don't think this implies that a
chemical or "emotional" context to electronic cognition is inherently
incompatible with Turing computation. If we could get the kinks out of fluid
quantum computing, this would be an engineering option worth considering.
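If you do model intelligence as an optimization process over a utility
function, the bare skeleton is almost embarrassingly simple. A sketch, with a
hypothetical placeholder utility function (the choice of function and search
method are my own illustration, not anyone's proposal):

```python
# 'Intelligence as optimization': greedy hill-climbing over candidate
# actions to maximize a utility function.

def utility(x: float) -> float:
    # Hypothetical utility surface, peaking at x = 3.
    return -(x - 3.0) ** 2

def optimize(u, start=0.0, step=0.1, iterations=1000):
    """Move whichever direction raises utility; stop at a local peak."""
    x = start
    for _ in range(iterations):
        if u(x + step) > u(x):
            x += step
        elif u(x - step) > u(x):
            x -= step
        else:
            break
    return x

best = optimize(utility)
```

Everything interesting about minds, of course, hides in where the utility
function comes from, not in the search loop.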
>What do you think? Do you think that what we have access to as
>intelligent beings is not even the same kind of thing that another
>intelligence might have access to? Does it make sense to call FAI
>intelligent if the mind cannot be in any sensible way called "the
I think you are alluding to the Penrose hypothesis, which raises the vital
question: Is perfectly robust Friendliness (through unrestrained
self-modification) possible with Turing computation? Is fluid quantum
computation necessary to instill "emotion" resembling systemic attractors? Or,
as Penrose suggested, is the elusive quantum gravity computer needed to give
the FAI the capacity for the existential questions regarding infinity?
>> > I know people have posed race conditions between FAI and paperclips,
>> > but there seems to me to be a kind of contradiction inherent in any AI
>> > which is intelligent enough to achieve one of these worst-case
>> > outcomes, but is still capable of making stupid mistakes.
>> Actions are only "stupid mistakes" relative to a cognitive reference frame.
>Partial point. Obviously irrationality is irrationality however you
>slice it. I don't care who you are, (A -> B) -> (!B -> !A) is going to
>stay a logical law. But you rightly didn't respond like that, my point
>was the "stupidity", the disconnect between the logic and the sense if
>you will, of being able to formulate paperclips as a goal, and
>believing it to be a good idea.
I'm a proponent of the notion that irrationality is rationality if construed
in an autopoietic system with different underlying rules and axioms. As I
suggested above, a mind privy to worlds with utterly different ontologies
might not give much of a damn for human logic. Whether this translates into
our annihilation or the gentle amusement of the AI is the six billion person
question.
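To be fair to the quoted point, the contraposition law (A -> B) -> (!B -> !A)
really is invariant within classical two-valued logic; it can be verified as a
tautology by brute force over all four truth assignments:

```python
# Verify that (A -> B) -> (not B -> not A) holds under every assignment,
# i.e. that contraposition is a tautology of classical logic.
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: false only when p is true and q is false.
    return (not p) or q

tautology = all(
    implies(implies(a, b), implies(not b, not a))
    for a, b in product([True, False], repeat=2)
)
```

My claim above is that an autopoietic mind might simply not be running
classical two-valued logic in the first place, not that it could violate a
theorem within that logic.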
>One thing about humans - we ask existential questions. Why are we
>here? What shall we do now? In a sense, human intelligence is a defeat
>of instinct and of mindless goal-following. A superintelligence poses
>a strange reversal. By being able to remove uncertainty, it is
>supposed that in answering the questions "Why am I here?" and "What
>shall I do now?", the AI will return to a near-instinctual level of
>behaviour, with deadly efficiency if it so chooses.
Much like Colonel Kurtz in "Apocalypse Now".
>Do you think that's a fair analysis? Some kinds of AI are scary
>because they might come to a doubt-free conclusion?
A better question is "Are some AIs scary because they might experience
Bayes-irrelevant realities and be completely beyond human probabilistic
analysis?"
>Or do you perhaps not think that existential questions are like that.
>It might be that the greater the intelligence, the less cosmic
>certainty one has. We should have the opportunity to interrogate
>merely somewhat superintelligent beings about this question at some
>point before singularity.
I point to the idea of a Taoist sort of AI, one who learns and plays with
all sorts of potentialities, only to renormalize on its certainty of its
complete lack of objective knowledge. This is the paradox of objective
subjectivity. The principle follows that you can't go crazy if you aren't
attached to the issues threatening philosophical crisis. Your last sentence
nicely reflects the notion of a "Singularity Steward" proposed in Ben's essay
on Positive Transcension. I strongly suggest to SIAI, and to anyone else
working on AGI, that the existential risks of unfriendliness can be
marginalized through AIs who don't take themselves too seriously and through
the guidance of a Singularity Steward. Of course this seems a bit of a paradox
on its own; the way around that paradox is to extend discourse such as this
list to the level of a "global brain", to ensure that the wisdom of transhuman
intelligences is distributed like a safety net against both existential risk
and philosophical crisis.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT