From: Mitchell Porter (firstname.lastname@example.org)
Date: Fri Mar 16 2001 - 04:35:29 MST
The problem of cognitive closure;
or, whether to philosophize or to program.
I suspect the majority of people on this list do both.
But I mean at those times when you are actually trying
to bring the Singularity closer.
The philosophical thing that I see Ben and Eliezer
(the two people on this list who are visibly trying
to create AI) both doing is a sort of phenomenology
of cognition - they think about thinking, on the basis
of personal experience. From this they derive their
ideas of how to implement thought in code. The question
that bothers me is: is this enough?
Eliezer has a list of 'hard problems' somewhere:
existence (why does anything exist), consciousness
(what is it), ethics (what should the supergoals be).
I believe he expects that superintelligences will be
able to solve these problems, but that the main design
problems of seed AI are Friendliness and pragmatic
self-enhancement. If one really wants AIs to solve
such hard problems, however, I think one has to solve
the third design problem of avoiding cognitive closure
(a term which I learnt from him).
Consider a seed AI which starts out as a Friendly goal
system overseeing a pragmatically self-enhancing core.
The core contains the famous 'codic cortex', and other
modules specializing in design of software
architecture, hardware configuration, and so forth.
To write such a core would require a lot of work,
but it would not require a revision of fundamental
computer science or fundamental cognitive science.
This is ultimately what I mean by 'pragmatic
self-enhancement': enhancement of abilities that
computers or computer programmers already have.

The results of philosophizing
Is a pragmatically self-enhancing AI ever going to
be able to solve the hard problems? There are a number
of possible answers:
1) Yes, by inventing new abilities if necessary.
There's no problem of cognitive closure with respect
to these problems.
2) No; both humans and all AIs which they can create
are cognitively closed in this way.
3a) No, but a 'new physics' AI could solve them, *and*
a Turing-computable AI could design a new-physics AI.
3b) Same as 3a, except that it would take a new-physics
AI to design a new-physics AI.
Now in fact my position is something like
4) Quite possibly they can even be solved by a
Turing-computable AI, but only formally; the semantics
of a solution
(i.e. the answers and the arguments for them) would
have to be supplied by something that understood what
the questions mean in the first place. That requires
consciousness, and consciousness is not anywhere in the
ontologies of physics and computation. Rather, the
ontologies of
physics and of computation are both subsets of the
ontology of the world, in which such aspects of
consciousness as qualia, intentionality and
subjectivity are just as primordial as quantity and
causality.
In principle you could have a new physics which had
fundamental entities with irreducible intentional
states, but given what physics is today, I think it's
confusing to refer to such a hypothetical theory as
just a physical theory. In my notes to myself I call
the hypothetical fundamental science of consciousness
'noetics', but that's just a name.
Having said all that, I can state my idea more
precisely:
4') If the answers to the hard problems can be
discovered and known to be true, 'noetic' processes
must occur at some point, because knowledge (of
*anything*) is not just the existence of an
isomorphism between a fact and a computational state
(i.e. possession of a computational representation),
it's the existence of a particular type of noetic
state.
Two comments in passing:
a. My working notion of what knowledge is, is
'justified true belief', a philosophically contested
answer. Noetics has to enter in the explanation of
what a belief is. At present I take perception and
belief to be dual aspects of experience, grounded in
ontologically elementary relations of 'acquaintance'
and 'positing' respectively.
b. The relationship between noetics and 'new physics'
(new for cognitive scientists, anyway) such as quantum
theory is this: quantum theory itself does not introduce all
this extra ontology. However, it introduces a few
factors (entanglement, nondeterminism) which on this
interpretation are the tip of the noetic iceberg.
I could say a lot more about this, but let me return
to the issue at hand, which is the implications of
a philosophy like this for seed AI. There are a lot of
possible noetic worlds, but let's pick one. Suppose the
world is fundamentally a collection of 'partless'
monads, which have perceptual and belief states, and
whose worldlines of successive states and causal
neighborhoods of interaction constitute physical time
and space respectively. The state space of a monad can
increase or decrease in complexity, through the
transfer of degrees of freedom between monads.
We interpret these degrees of freedom as elementary
particles and we interpret monads as entangled
collections of elementary particles, but this is
'misplaced concreteness'; monads, not qubits, are
basic. Finally, a conscious mind of human scale is
a single monad with a very large number of degrees of
freedom.

In a world like that, a classical computer is not a
monad, it's a huge system of interacting but still
individuated monads. If we grant that perception,
belief, and other conscious states can be properties
of individual monads only, then a Turing machine can
have none of those things; at best it can reproduce
their causal consequences.
(Side comment on Penrose and Goedel. The logician
Solomon Feferman has studied what he calls the
'reflective closure' of a set of axioms, which
consists of everything you can infer, not just from
the axioms, but from knowing that the axioms are true.
The reflective closure does in fact include all Goedel
propositions. This is written up in a highly technical
paper called 'Turing in the land of O(z)', in
_The universal Turing machine: a half-century survey_,
and in other papers, some of which might be on his
website: http://math.stanford.edu/~feferman. Now it
might be that reflective closure, since it involves
semantics and meta-knowledge, can only be implemented
by monads with some sort of self-acquaintance. I don't
understand the nuts and bolts of Feferman's work yet.
But I do think that most of Penrose's critics strongly
underestimate the complexities here.)
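A minimal sketch of the kind of principle involved (simplifying Feferman's actual construction, which iterates such schemas along ordinal notations): for a theory $T$ with provability predicate $\mathrm{Prov}_T$, "knowing that the axioms are true" is captured by the uniform reflection schema, and adding it already proves consistency, hence the Goedel sentence.

```latex
% Uniform reflection over a theory T with provability predicate Prov_T:
\mathrm{RFN}(T):\quad
  \forall n\,\bigl(\mathrm{Prov}_T(\ulcorner \varphi(\dot n)\urcorner)
  \rightarrow \varphi(n)\bigr)
  \qquad\text{for each formula }\varphi(x).
% Taking \varphi := (0 = 1) yields consistency:
T + \mathrm{RFN}(T)\;\vdash\;\mathrm{Con}(T),
% and hence the Goedel sentence G_T, which T alone cannot prove.
```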
In the possible world I've described, would a
superintelligence have to be a quantum computer - a
single monad - to have any chance of solving the hard
problems? Not necessarily! Although only monads
actually know anything in such a world, they might,
through the use of their 'noetic faculties', figure
out logical relations between the relevant concepts,
formalize them, and write code to manipulate them
formally. This is the real meaning of computability
in such a world: a problem is computable, if the
corresponding *formal* problem can be solved by a
Turing machine. For the hard problems to be solved,
however, the monads themselves must do at least enough
ontological reasoning to formalize the problems.
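The formal/semantic split can be made concrete with a toy sketch (hypothetical illustration, not anything from actual seed-AI work): a program can decide propositional tautologies by exhaustive symbol-shuffling over truth values, with no acquaintance with what 'p' means. The formalization step - picking out which formula to check - is what had to happen outside the program.

```python
from itertools import product

def is_tautology(expr, variables):
    """Decide a propositional formula by brute-force truth table.

    `expr` is a callable over booleans; `variables` just fixes the
    arity. The check is pure symbol/value manipulation -- the program
    has no notion of what any proposition 'means'.
    """
    return all(expr(*values)
               for values in product([True, False], repeat=len(variables)))

# Law of excluded middle: p or not-p holds in every valuation.
print(is_tautology(lambda p: p or not p, ["p"]))   # True
# A contradiction fails in every valuation.
print(is_tautology(lambda p: p and not p, ["p"]))  # False
```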
And here I think I can reach a point of agreement even
with unreconstructed computationalists and
functionalists who think all this monad talk is crap.
If the hard problems never get formalized, they will
never get solved, except by sheer luck which we have
no reason to expect. A pack of dogs thinking at a
trillion times biological speeds is still a pack of
dogs, and a pragmatic problem solver which Transcends
is still a pragmatic problem solver - just a very
powerful one. Which leads to my final section...
The results of NOT philosophizing
What are the consequences of ignoring the problem of
cognitive closure?
One possibility: eventually the programmers realize
that their program is *really* good at engineering,
but hasn't a clue about anything else, and so they
start trying to formalize hard ontological problems.
Another possibility: the creation of Pragmatic Powers,
which can do science and technology and that's it.
Life under a Friendly Pragmatic Power might be
blissful, if the hard problems don't bother you, or
it might be hellish, if they do; but unhappy
philosophers in paradise would surely be removed
from misery by a Friendly Pragmatic Power in *some*
way; it should be able to detect their unhappiness,
even if it can only model its causes formally.
A third possibility (and this is what worries me):
built-in 'solutions' to the hard problems, designed
to fit the assumption that physics and computation are
the whole ontological story. There are many such
solutions available on the philosophy shelves. They
are the result of people taking seriously the
'naturalistic' worldview, which on my account
(Husserl's account, actually) is just an abstraction,
a result of taking the most conceptually tractable
subontology and saying that's the whole story.
In a world of humans, a philosophically forced
'solution' can only damage culture, really. But turned into the
algorithm of a superintelligence... I'm sure you get
the picture. To take just one example: let's just
*suppose* that the mainstream philosophy of mind behind
uploading is wrong, and a neural-level emulation is
*not* conscious. Nonetheless, a superintelligent
automaton armed with sensors, nanotech, and a
utilitarian Prime Directive might still freeze, slice
and scan everyone in the world, as a step towards the
digital utopia. This is basically a problem of 'Good
intentions, bad philosophy.'
In the real world, one would hope that such a monster
would notice quantum mechanics, change its plans,
invent quantum mind uploading, and quantum-teleport
everyone into their new vessels. Or something similar.
The bottom line: If we *don't* pay detailed attention
now to what we *don't* know, we may really, really
regret it later.
Pragmatic consequence: Seed AIs need to be
philosophically sophisticated *by design*.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT