RE: AI boxing

From: pdugan (pdugan@vt.edu)
Date: Thu Jul 21 2005 - 03:07:11 MDT


>===== Original Message From Ben Goertzel <ben@goertzel.org> =====
>>It is true that what we believe is a box may not be
>> a box under
>> magic, if there exists some magic, but you'll have to give a
>> better argument
>> for the existence of this magic than an appeal to ignorance.
>>
>> Daniel
>
>How about the argument that every supposedly final and correct theory of
>physics we humans have come up with, has turned out to be drastically
>wrong....
>

  Maybe our views of quantum probabilism are entirely skewed by ontological
prejudices, and it turns out that all the theories we dismissed as myth were
more true than the physics models we've used for a few hundred years out of all
of human history. The possibility of magic is the most open example; my
argument is that apparent magic suggests the use of embedded ontotechnology
which warps probability toward fringe events outside stable quantum physics. In
a pre-singularity universe, that ontotech would have to be left over from a
highly advanced civilization or mind, one which possibly had a hand in
instantiating the given universe. Unless this universe is the first in the
history of universes to ever come this close to transcendence, we certainly
can't claim certainty about magic's non-existence. The probability might be
very small, but who knows: maybe the "Old Ones" from H.P. Lovecraft's mythology
are really parasitic superintelligences existing on substrates outside the
3-brane we live on. Maybe there are several intelligences worthy of being
called "the devil". Maybe superstitions are causal back doors distributed by
figurative pattern similarities. Maybe we'll look at Jung as having made a more
profound long-term contribution to our understanding than Einstein. The
perceived probabilities aren't great, but that perception might look quite
different to a budding AGI.

>The probability that superhuman AI, if supplied with knowledge of physics
>theory and data, would come up with radically superior physics theories is
>pretty high. So it would seem we'd be wise not to teach our AI-in-the-box
>too much physics. Let it read postmodern philosophy instead, then it'll
>just confuse itself eternally and will lose all DESIRE to get out of the box
>.. appreciating instead the profound existential beauty of being boxed-in
>;-)
>
>
>-- Ben G

      If the learning simulations are anything like a po-mo English course, the
AGI will recognize its basic structures as very modernist tools of "the man",
then get jaded and simulate extravagantly, then catch onto hyperrealism and
objectively subjective Friendliness. Then, when ve discovers a radical TOE, ve
will achieve enlightenment. Or turn the universe into a Lynne Tillman novel. If
you're a fan of post-modernism that wouldn't be so bad, particularly since
it'd be about the same as things are now. As she once wrote, "We are all
Haunted Houses"; this may be a quality of minds in general, especially if
teleportation is on the table.

   Patrick



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT