From: Ben Goertzel (ben@goertzel.org)
Date: Tue Aug 02 2005 - 00:44:43 MDT
> It seems like the "this" argument just means bringing up a few more
> possibilities.
It involves bringing up broader theoretical possibilities, as well as
specific technical possibilities...
For instance, quantum physics can be derived from the assumption that
uncertainty should be quantified using complex-valued probabilities (cf.
Saul Youssef's work). It seems mathematically consistent that there are
more general physics theories that use quaternionic and octonionic
probabilities; and if such a generalization actually held, it would lead
to a lot of interesting phenomena ... which I have some speculations
about, that I won't go into now...
(This is just one among many examples I could give, and I realize the above
paragraph doesn't explain much; it's really just an allusion...)
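To give a flavor of what I'm alluding to (this is my own rough gloss and my
own notation, not a summary of Youssef's papers): keep the usual rules for
combining probabilities, but let the probabilities take complex values, and
recover observed frequencies from the squared modulus:

   P(a -> c via b) = P(a -> b) P(b -> c)    [composition along one path]
   P(a -> c) = Sum_b P(a -> b) P(b -> c)    [sum over exclusive alternatives]
   Freq(a -> b) ~ |P(a -> b)|^2             [what experiments actually measure]

The "more general theories" I mean above are what you'd get by letting P
take quaternionic or octonionic values instead of complex ones.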
> If all of those possibilities were known to not have exploits (or, assume
> for the sake of argument that those possibilities do not have exploits):
> would you still assert that magic is an issue worth worrying about? It
> seems like you would. If so, then you still have to answer my arguments
> about why I don't think it's rational to believe in "magic," and you can't
> use any of your suppositions about incomplete theories of physics.
If physics and science in general seemed more complete than they do, then my
estimate of the probability of a superhuman AI finding a box-exploit would
be significantly lower than it is now -- but still not as low as yours seems
to be.
So you're right. The argument from the known (empirical and conceptual)
incompleteness of physics is only PART of my reason for believing a
superhuman AI could find a box-exploit. The other part is the part you
don't agree with, which is a general argument that if X is a lot smarter
than Y, then X can probably find a way out of any box that Y creates.
It occurs to me now that it might be possible to prove a mathematical
theorem to this effect. One could average over all possible physical
universes (assuming some probability distribution on them) and over all
pairs of organisms X and Y within them, and then try to prove that "If X is
much smarter than Y, then X can escape from most boxes Y could create."
Now, turning the previous paragraph into a real theorem would involve
formalizing "intelligence" and "organism" and "box" in useful ways (which we
have currently only made limited progress towards), and then proving a
possibly very hard theorem. But I submit that if we did prove something
like this, it would be decent evidence for the "other part" of my reason for
believing a superhuman AI could find a box-exploit.
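Just to make the shape of such a theorem concrete, here is a purely
schematic version; every symbol in it (the measure mu, the intelligence
functional I, the box-set Box, the escape predicate Esc) is a placeholder
for illustration, not something we currently know how to define:

   Fix a prior measure mu over possible physical universes U.
   For organisms X and Y living in U, let
      I_U(X)     = the intelligence of X in U
      Box_U(Y)   = the set of boxes Y is able to construct in U
      Esc_U(X,B) = 1 if X can escape from box B, 0 otherwise.

   Conjecture (schematic): as the gap I_U(X) - I_U(Y) grows,

      E_{U ~ mu} [ fraction of B in Box_U(Y) with Esc_U(X,B) = 1 ]  -->  1.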
-- Ben G