Re: [agi] Future AGI's based on theorem-proving

From: Jef Allbright
Date: Wed Feb 23 2005 - 13:00:51 MST

Ben Goertzel wrote:

>The purpose of ITSSIM is to prevent such decisions. The purpose of the
>fancy "emergency" modifications to ITSSIM is to allow it to make such a
>decision in cases of severe emergency.
>
>A different way to put your point, however, would be to speak not just about
>averages but also about extreme values. One could say "The AI should act in
>such a way as to provably increase the expected amount of puppy-niceness,
>and provably not increase the odds that the probability of puppy-niceness
>falls below 5%." That would be closer to what ITSSIM does: it tries to
>guard against the AI taking risks in the interest of maximizing expected
>benefit.
>
>The problem is that this relies really heavily on the correct generalization
>of puppy-niceness. Note that ITSSIM in itself doesn't rely on
>generalization very heavily at all -- the only use of generalization is in
>the measurement of "amounts of knowledge." I think that a safety mechanism
>that relies heavily on the quality of generalization is, conceptually at
>least, less safe than one that doesn't. Of course this conclusion might be
>disproven once we have a solid theoretical understanding of this type of
>generalization. I see no choice but to rely heavily on generalization in
>the context of "emergency measures", though, unfortunately...
>
>-- Ben
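For concreteness, the dual criterion described above (raise the expected amount of puppy-niceness while not raising the odds of it falling below a floor) can be sketched as a toy admissibility check. This is purely an illustration, not ITSSIM itself: the function name, the equally-weighted sample model of outcomes, and the 5% floor constant are all assumptions taken from the wording of the discussion.

```python
NICENESS_FLOOR = 0.05  # the "5%" floor mentioned above; an assumed constant

def permitted(baseline_outcomes, action_outcomes, floor=NICENESS_FLOOR):
    """Admit an action only if it (1) raises expected niceness relative to
    the baseline and (2) does not raise the probability that niceness
    falls below the floor. Outcomes are equally weighted samples in [0, 1]."""
    def mean(xs):
        return sum(xs) / len(xs)

    def tail_risk(xs):
        # fraction of outcomes that land below the floor
        return sum(1 for x in xs if x < floor) / len(xs)

    return (mean(action_outcomes) > mean(baseline_outcomes)
            and tail_risk(action_outcomes) <= tail_risk(baseline_outcomes))

baseline = [0.4, 0.5, 0.6, 0.5]
risky    = [0.9, 0.9, 0.9, 0.01]  # higher mean, but a fatter lower tail
safe     = [0.5, 0.6, 0.6, 0.5]   # modestly higher mean, no new tail risk

print(permitted(baseline, risky))  # False: the tail-risk clause vetoes it
print(permitted(baseline, safe))   # True: both clauses are satisfied
```

The point of the toy is that the second clause can veto an action the first clause would happily accept, which is exactly the risk-taking behavior being discussed.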
When a proposed system design turns out to require fancy emergency
patches and somewhat arbitrary set points to achieve part of its
function, perhaps that's a hint that it's time to widen back and
re-evaluate the concept at a higher level.

- Jef

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT