**From:** Jef Allbright (*jef@jefallbright.net*)

**Date:** Wed Feb 23 2005 - 13:00:51 MST

**Next message:** Ben Goertzel: "RE: [agi] Future AGI's based on theorem-proving"
**Previous message:** Ben Goertzel: "RE: [agi] Future AGI's based on theorem-proving"
**In reply to:** Ben Goertzel: "RE: [agi] Future AGI's based on theorem-proving"
**Next in thread:** Ben Goertzel: "RE: [agi] Future AGI's based on theorem-proving"
**Reply:** Ben Goertzel: "RE: [agi] Future AGI's based on theorem-proving"

Ben Goertzel wrote:

> The purpose of ITSSIM is to prevent such decisions. The purpose of the
> fancy "emergency" modifications to ITSSIM is to allow it to make such a
> decision in cases of severe emergency.
>
> A different way to put your point, however, would be to speak not just
> about averages but also about extreme values. One could say "The AI
> should act in such a way as to provably increase the expected amount of
> puppy-niceness, and provably not increase the odds that the probability
> of puppy-niceness falls below 5%." That would be closer to what ITSSIM
> does: it tries to mitigate against the AI taking risks in the interest
> of maximizing expected gain.
>
> The problem is that this relies really heavily on the correct
> generalization of puppy-niceness. Note that ITSSIM in itself doesn't
> rely on generalization very heavily at all -- the only use of
> generalization is in the measurement of "amounts of knowledge." I think
> that a safety mechanism that relies heavily on the quality of
> generalization is, conceptually at least, less safe than one that
> doesn't. Of course this conclusion might be disproven once we have a
> solid theoretical understanding of this type of generalization. I see
> no choice but to rely heavily on generalization in the context of
> "emergency measures", though, unfortunately...
>
> -- Ben

When a proposed system design turns out to require fancy emergency

patches and somewhat arbitrary set points to achieve part of its

function, perhaps that's a hint that it's time to step back and

re-evaluate the concept at a higher level.

- Jef


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT