From: James Rogers (jamesr@best.com)
Date: Sun Sep 08 2002 - 12:52:01 MDT
Hi folks,
I've been very busy with a number of (interesting) things of late, and
just got back from a week in Canada. While I was there, I had a very
interesting insight (while taking a shower, where else?) regarding the
problem of AI sandboxing.
In short, I had an idea for a mathematically strong sandboxing protocol
that exploits some of the theoretical asymmetries of the problem space.
Under the assumption that an AI program starts out in such a sandbox, it
should be possible to keep it there indefinitely despite its best efforts
to get out. Implementation would not be particularly cheap or trivial,
even though the idea itself isn't particularly complex. I've never seen
anything similar suggested on this forum or elsewhere, so it should at
least make some interesting fodder for discussion. Obviously, my analysis
hasn't been excruciatingly rigorous, though I have thought about it quite
a bit.
I just got home, so I don't have time to elucidate at the moment, but unlike
Fermat I *will* post an explanation of the idea shortly (maybe later today).
:-)
Cheers,
-James Rogers
jamesr@best.com