From: Chris Healey (chealey@unicom-inc.com)
Date: Sun Aug 29 2004 - 14:50:35 MDT
I found it constructive, in my own consideration of the AI-Box
experiments, to ask questions regarding the motives of a person likely
to find themselves in the role of an AI-jailer:
1. Why create an AI at all? To stick it in a box and poke it a bit?
Presumably we want a safe AI so that we can utilize it for SOMETHING.
Is our aim to create and confine benevolent prisoner intelligences, or
to guard against human-stompers?
2. How much are you willing to sacrifice on your convictions? If an
existentially catastrophic event was pending that would wipe out
humanity and was beyond our power to control, would you refrain from
releasing the AI?
What if the AI could provably demonstrate that our chances of survival
were 0.00001% without its assistance, and a comparatively whopping 3%
with its assistance? 50%? As we continue to progress
technologically, how many of these risks will we fail to identify?
How many of these would a superintelligent AI be likely to identify?
3. Reframing question 2, what if that truly-present existential risk
was another AGI, obviously of the human-stomping sort and not confined
by a jailer, which was exponentially converting the Earth into
paperclips? Would you dig a fire-line in front of it, or nuke it? Or
would you take the only chance you really had? Would you wait until
100,000 people were dead, or 5 billion (maybe not long thereafter)?
What if it had not happened yet, but the AGI had identified, from
provided information, that another AGI team would have the means to
achieve takeoff in ~2 months with no FAI safeguards?
4. What is the real difference between "friendly" and "Friendly"?
Can you introspect that difference in a foreign mind through a VT100?
If it "truly" coverged on Friendly goal content (non-deceitfully), how
sure can we be that it's not an unstable convergence due to deep
structural issues? It seems like doing surgery with a spoon.
---
If you're confining the AI regardless of reality and you do not let it
out, then maybe the point shouldn't be that you're a good jailer. You
started with a volition of protecting the world from a threat, you
composed a strategy, and finally you turned yourself into your own
little paperclip-level AI, implementing your strategy regardless of its
continued ability to serve that volition. This strategy produces a wide
array of degenerate results. It is effectively useless for protecting
us in any meaningful way.

-Chris Healey