From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue Dec 13 2005 - 20:53:20 MST
<hypothetical>
An AGI is built; it becomes exceedingly intelligent, but can see how to
become more intelligent still. This is one of its goals. Humanity is
alive, and the AGI has found a way to co-exist acceptably without too
much being given up by either side.
The AGI can see how to fulfil its goal, but to do so necessitates a
calculated risk.
</hypothetical>
Regardless of your decision theory, this seems to come down to a
decision simple enough for almost all people to understand: upside,
downside, and chance of success.
Let's say the upside is quantified as "99", the downside as "20", and
the chance of success at "80%". Let's assume it's a roll-of-the-dice
kind of decision: a single action with a discontinuous outcome.
All-or-nothing, if you will.
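To make the gamble concrete, here is a minimal sketch of the
expected-value arithmetic in Python, treating the figures above as
utilities on an arbitrary scale (the numbers are just the hypothetical
ones from this post, nothing more):

    # Hypothetical figures from the scenario above, on an arbitrary utility scale.
    p_success = 0.80   # quantified chance of success
    upside = 99        # utility gained if the gamble pays off
    downside = -20     # utility lost if it does not

    # Straight expected-utility calculation over the all-or-nothing gamble.
    expected_value = p_success * upside + (1 - p_success) * downside
    print(expected_value)  # 0.8 * 99 + 0.2 * (-20) = 75.2

If doing nothing is worth zero on the same scale, a plain
expected-utility maximiser takes the bet; whether it should is exactly
the risk-aversion question below.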
It seems to me that this is exactly the kind of scenario which is
worrying. Do we want the AGI to roll the dice? Regardless of how you
build the goal system, I think the hypothetical scenario where the AGI
is left weighing up a risk scenario will always remain. Even with our
best interests in mind, the AGI may have to risk those interests,
balanced against the other alternatives.
It seems to me that the X-Factor, or how risk-averse the AGI is, is an
element of choice which cannot be removed even with a perfect goal
system. It seems to me that the pursuit of a stable and acceptable goal
system is a different kind of thing from reasoning about risk.
Decision theory, it seems to me, does not prevent us from having to play
dice.
Cheers,
-T