RE: AGI Philosophy

From: Christopher Healey (CHealey@unicom-inc.com)
Date: Wed Jul 27 2005 - 11:43:29 MDT


>Phillip Huggan wrote:
>
> It would be nice to have an AGI which only offered suggestions of actions a set of human participants could take to realize optimal scenarios, instead of the AGI being an active player in forcing ver utopia. Once this AGI is achieved, it would be nice if the actions proposed by ver excluded any further input or activity from any AGI-ish entity in effecting each discrete suggestion. Seems we'd be a little safer from being steamrolled by the AGI in this regard; us humans could decide what we'd specifically like to preserve, at the risk of sacrificing some degree of efficiency in the grand scheme of things. FAI needs to enact the "Grandfathering Principle" for it to be friendly towards us.

I think this suggestion might be offering a false sense of security, or at best only weak security. Assuming we couldn't have come up with a good sequence of actions toward some end without the AGI's help, does implementing the AGI's steps A..Z ourselves really put us in control of the outcome? I suppose we could require a strong proof that these steps accomplish *exactly* what we have asked, and nothing more, but how realistic is that?

The trade-off of efficiency you mention could be minor or extreme, depending on your standards of proof. And it would seem to me that any trade-off in the efficiency of implementing the "solution" necessarily limits the sphere of possible solutions, due to time dependencies in those actions, and would break many solutions outright. For example, what if the AGI pumps out a plan to avert a pending planetkill projected in 3 days? What do you do then? Do you spend 2 days verifying the causality of the plan, and then 5 days implementing it, when only 3 remain? Or do you let it out of the "soft box" to which it has been constrained? Then you're left with the same questions: Is it an FAI or not? Can it be trusted? Has it been built to be trusted?

This idea does not give me warm fuzzies.

-Chris Healey




