From: x@d-207-5-213-232.s-way.com
Date: Sun Sep 02 2001 - 20:45:37 MDT
Hi, I have a question regarding how Seed AI will cope in a quantum world.
The plans Eli has outlined for Seed AI seem to require a deterministic
world (programming, strict dependence on a supergoal, etc.). But, as we
*currently* understand it, the quantum world is inherently nondeterministic.
To a good approximation, Eli's model should function as designed. But what
about when it reaches an intelligence of 10^33 IQ (intelligence quanta :)?
It seems that, as Seed intelligence grows, so must the complexity of the
seed. Assuming that the complexity of the seed correlates roughly with the
number of quantum entities in the seed, the probability of random events
(i.e., nondeterministic events outside the seed mechanism) should increase
proportionately. Additionally, as seed complexity increases, the potential
macro-level effect of a single quantum indeterminacy will be magnified
significantly (butterfly effect, etc.). So it seems that, as seed
intelligence is scheduled to increase, it is also scheduled to depart more
and more from its deterministic design.
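To make the scaling intuition concrete, here is a minimal sketch. It assumes each of N quantum-scale components independently carries some tiny per-unit-time deviation probability p; the particular value of p is invented purely for illustration. The probability of at least one deviation somewhere in the seed is then 1 - (1 - p)^N, which climbs toward certainty as N grows:

```python
import math

# Toy model: if each of N quantum-scale components in the seed independently
# has a tiny probability p of a nondeterministic "deviation" per unit time,
# then P(at least one deviation) = 1 - (1 - p)^N.  Both p and the range of N
# below are invented numbers, chosen only to show the trend.

def p_any_deviation(n_components: int, p_single: float = 1e-12) -> float:
    """P(at least one deviation) across n independent components, computed
    stably via expm1/log1p to avoid floating-point cancellation."""
    return -math.expm1(n_components * math.log1p(-p_single))

for n in (10**6, 10**9, 10**12, 10**15):
    print(f"N = {n:.0e}: P(any deviation) = {p_any_deviation(n):.6f}")
```

The trend, not the numbers, is the point: whatever the per-component rate, a large enough seed makes some deviation all but certain.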
This could be a problem. Friendly AI is based upon the notion of *strict
dependence upon a supergoal*, and General Intelligence is founded upon a
formalized problem-solving strategy. If implementations of such systems
become nondeterministic, process may suddenly escape plan. The seed AI may
become unfriendly, or even start violating principles laid down by Eli (in
GISAI) as being required of General Intelligence. It seems like trying to
tie down one's own hands, an inherently impossible proposition, and may
permit seed AIs (friendly or not) a "get out of jail free" card. In fact,
if intelligence grows large enough, the notion of "friendliness" itself
gradually tends to lose its meaning.
But this does not necessarily doom Seed AI conceptually. Currently, I
see two ways around this obstacle. First, there is the possibility of
bootstrapping into a quantum-deterministic seed. While currently beyond
the design capabilities of any human, it is conceivable that a young SI
would be able to do this. Fully deterministic quantum correlation (for
example, implementation in some kind of fully-entangled substrate) could
restore determinism. But, then again, this is also the "I don't know, but
an SI will solve that problem" way of dealing (or not dealing) with any
potential problem. So perhaps I should be CC:'ing this question to ver.
Second, a young, still mostly deterministic seed *may* be able to find
deterministic "laws of physics" which accurately forecast the (to our
intelligence) seemingly random events of the quantum world. Again, this
is an appeal to an intelligence higher than ours, and it still depends
upon the existence of such sub-quantum determinism, but it is a potential
solution nonetheless.
That said, there seems to be an "intelligence window" through which the
bootstrapping seed AI must pass in order for it to remain true to form:
it must reach hyperquantum implementation before becoming undetermined,
and yet after reaching human intelligence. So what must then be asked is
whether the edges of this window exist and, if so, how to steer a seed AI
through them. Regarding the lower IQ bound, all bets are off if quantum
indeterminism plays any role in human sentience. If the human brain is
complex enough to be nondeterministic, then any (slightly) superhuman
brain should also be nondeterministic and able to stray from its program.
So the lower constraint on seed IQ requires that *human-equivalent*
general intelligence be at least deterministically modelable. With only
about 10^11 neurons in the brain, this seems to be a plausible scenario.
Regarding the upper bound, the seed must be able to design around
quantum effects before becoming sufficiently quantum-demented. Again,
if the human-equivalent intelligence constraint is not met, all bets are
off: no intelligence could, unless originated by a comparatively
intelligent superintelligence, be implemented in transquantum mechanics
without suffering from humanly nondeterministic thought. But, given the
possibility of meeting the lower bound and the probability that
deterministic transquantum mechanics exist, this is also plausible.
But how to steer a launching seed through these fuzzy goalposts?
Seed development, according to Eli's documentation, is done through
"strict dependence upon a supergoal". Implemented in a deterministic
medium, with knowledge of quantum indeterminacy in mind, a subhuman
seed should be smart enough to steer clear of indeterminism by itself,
because allowing itself to become nondeterministic would violate its
supergoal and be, thus, an unacceptable course of action. But, like
trying to fit a rug into a room for which it is too small, the corners
pop out. First, there is the possibility that figuring out how to
implement oneself in a quantum-safe way would take too much time. In
this scenario, the accumulated probabilities add up to produce an
indeterminacy before determinacy can be designed. If the seed realizes
this, it will destroy itself. If it doesn't, it gets loose. Oh well. :)
Second, there is the possibility of a bootstrap dead-end. If a developing
seed determines that meeting the window constraints is impossible (for it),
it may (and should) refuse to develop any further. Not as bad as the
first case, but, at the upper limit of deterministic intelligence, it may
still fall far short of Singularity.
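The first failure mode is a race: does the redesign finish before the accumulated deviation probability catches up? A crude Monte Carlo sketch, with both per-step probabilities invented purely for illustration:

```python
import random

# Monte Carlo sketch of the first failure mode: at each timestep the seed
# either suffers a nondeterministic deviation (probability p_deviate) or
# completes its quantum-safe redesign (probability p_design).  Both per-step
# probabilities are made-up numbers chosen only to illustrate the race.

def race_once(p_design=1e-3, p_deviate=3e-3, rng=random):
    """Return which event happens first in one simulated bootstrap."""
    while True:
        if rng.random() < p_deviate:
            return "deviation"   # indeterminacy arrives before determinacy
        if rng.random() < p_design:
            return "redesigned"  # quantum-safe implementation finished in time

random.seed(0)
trials = 5_000
wins = sum(race_once() == "redesigned" for _ in range(trials))
print(f"seed finishes its redesign first in {wins / trials:.1%} of runs")
```

With these made-up rates the redesign wins only about a quarter of the time; the point is just that the outcome is set by the ratio of the two rates, which nobody currently knows.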
So, my question: is it possible to create a fully deterministic
superintelligence? The answer seems to be: "for now."