From: Durant Schoon (firstname.lastname@example.org)
Date: Wed Aug 01 2001 - 11:50:12 MDT
CORRECTION (to my previous post):
> All we have is tactical battlefield analysis and SecureCYC, which
> sounds like an automated AI hacking program (or was it CYCSecure?)
Sorry, I was thinking "cracking" and typed "hacking". I meant it sounds
like a system designed to break into other networks, perhaps a scaled
down version of a serious "cracking" program that is being used to turn
a profit. If you have common sense and a vague idea of human
psychology, brute force might not be necessary. Guessing birthdays?
Mathematicians like certain numbers. Trying "Gandalf".
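The kind of "common sense" guessing described above can be sketched as a tiny dictionary attack. Everything here (the function names, the candidate list, the toy check) is my own illustrative assumption, not any real cracking tool:

```python
# Hypothetical sketch of "common sense" guessing instead of brute force.
# All names and the toy check below are illustrative assumptions.
from datetime import date

def commonsense_guesses(first_name="gandalf", birth_year=1970):
    """Yield passwords a human would plausibly pick."""
    yield first_name                    # favorite fictional character
    yield first_name + str(birth_year)  # name + year is a classic
    # birthdays in MMDDYY form for a small range of years
    for year in range(birth_year, birth_year + 3):
        yield date(year, 1, 1).strftime("%m%d%y")
    # numbers people (mathematicians included) are fond of
    for n in (314159, 271828, 1729, 1234):
        yield str(n)

def try_guesses(check, guesses):
    """Return the first guess accepted by check(), or None."""
    for g in guesses:
        if check(g):
            return g
    return None

# Toy target: the victim used their favorite wizard plus birth year.
print(try_guesses(lambda p: p == "gandalf1970", commonsense_guesses()))
```

A list of a few dozen candidates like this runs in microseconds, which is the whole point: the attacker's model of the human does the work, not the CPU.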
Someone told me that Poland had cracked an early version of the Enigma
with an Enigma machine (replica?) and transmissions from the current
day and the day before. Then they worked on a simple assumption that
the operator turned the rotors only once because that was the easiest
thing to do. Could be folklore. Anyone else heard this before?
I'd still like to know if CYC succeeds as a common sense reasoning system.
Here's a puzzle: Could we (anyone on the outside) determine if CYC has
succeeded in the following two cases:
1) CYC works as its creators had hoped, but *applying* CYC properly
is incredibly difficult. RESULT: looks like it fails.
2) CYC works as its creators had hoped, but they don't want anyone
to know, so they present a disabled version to the public. RESULT:
looks like it fails.
This conundrum of being on the outside and trying to determine
"success" applies to any other AI project, like WebMind's or
Here's one illegal way to finance WebMind. From what I've heard,
the financial community uses neural nets to look for aberrations
in trading to catch illegal activities. One might hire an expert
in this field and use something like the financial-savvy WebMind1.0
to try to manipulate stocks outside the tolerance of detection.
Obviously this risks far too much for someone like Ben G. who
might (ought to!) be able to finance his project legitimately.
For the record, I'll state that I think he's too smart to do
something as silly and risky as that.
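For what it's worth, the "tolerance of detection" idea can be sketched with a toy outlier test. This is my assumption about the flavor of check such surveillance nets perform (flag anything too many standard deviations from the baseline), not their actual method:

```python
# Toy "aberration" detector: flag a day whose trading volume sits more
# than `threshold` standard deviations away from the other days.
# A stand-in for real market surveillance, which I don't have details on.
from statistics import mean, stdev

def flag_aberrations(volumes, threshold=3.0):
    """Return indices of days that fall outside the detection tolerance."""
    flagged = []
    for i, v in enumerate(volumes):
        rest = volumes[:i] + volumes[i + 1:]   # baseline excludes day i
        mu, sigma = mean(rest), stdev(rest)
        if sigma > 0 and abs(v - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

normal = [100, 104, 98, 101, 99, 103, 97, 102]
print(flag_aberrations(normal + [500]))  # blatant spike gets flagged
print(flag_aberrations(normal + [106]))  # stays inside the tolerance
```

The manipulator's game, in these terms, is keeping every move inside the threshold while still extracting value, which is exactly why the amounts have to stay small.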
But it does raise the interesting specter of non-detectability.
Let's suppose someone unknown to all of us beats Ben and Eli to
the punch and develops an AI first. A young AI learns to siphon
money from the world's monetary systems undetectably. Early on
this system might make mistakes, but, ever-paranoid, it gets better
and better at it, so that even a naive AI wouldn't be able to detect it.
This parasitic AI just siphons off a small enough amount of money
to support its creators on a private island populated with visiting
super models and exotic albino flamingos (they just like albino
flamingos, ok?). Also this PAI slowly and increasingly non-detectably
looks for all the hardware overhang (think of all the idle PCs in
the world), ever increasing its bases, er, I mean resources.
Now let's say that either Eli or BenG or Peter create an AI that
starts exploring the world on the web. Our Loki-esque PAI is on
the lookout for any new AI that might come along and spoil its
secret domination, and is sitting ready and waiting to confuse and
mislead it. Its goal might be to secure its position by
secretly preventing any new AI from bringing the world to singularity
by stalling nascent AIs.
Ok, it's time to ask if this scenario is more than a bad Sci-Fi
plot. I think it relates to a class of problems such as:
1) Should an SI spend time looking for other hidden AIs as soon as possible?
2) Should an SI spend time looking for ET?
3) Should an SI spend time trying to determine if it is already
in a simulation?
Of course only an SI should decide these things, but we as humans
already pour resources into #2. Maybe the DOD spends money on #1.
And #3...well, that's deliciously undetectable isn't it ;-)
-- Durant Schoon
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT