RE: AGI Philosophy

From: Phillip Huggan
Date: Wed Jul 27 2005 - 13:20:42 MDT

Christopher Healey wrote:

>Phillip Huggan wrote:
>It would be nice to have an AGI which only offered suggestions of actions a set of human participants could take to realize optimal scenarios, instead of the AGI being an active player in forcing ver utopia. Once this AGI is achieved, it would be nice if the actions proposed by ver excluded any further input or activity from any AGI-ish entity in effecting each discrete suggestion. Seems we'd be a little safer from being steamrolled by the AGI in this regard; us humans could decide what we'd specifically like to preserve, at the risk of sacrificing some degree of efficiency in the grand scheme of things. FAI needs to enact the "Grandfathering Principle" for it to be friendly towards us.

I think this suggestion might be offering a false sense of security; certainly only weak security. Assuming we haven't come up with a good sequence of actions toward some end without the AGI's help, does implementing the AGI's steps A..Z ourselves really put us in a position of controlling the outcome? I suppose we could require a strong proof that these steps accomplish *exactly* what we have asked and nothing more, but how realistic is this?

The trade-off of efficiency you mention could be minor or extreme, depending on your standards of proof. And it would seem to me that any trade-off in the efficiency of implementing the "solution" necessarily limits the sphere of possible solutions, due to time dependencies in those actions, and would break many solutions outright. For example, what if the AGI pumps out a plan to avoid a pending planetkill projected in 3 days? What do you do then? Do you spend 2 days verifying the causality of the plan, and then spend 5 days implementing it? Or do you let it out of the "soft box" to which it has been constrained? And then you're left with the same questions: Is it an FAI or not? Can it be trusted? Has it been built to be trusted?


The problem is that humans are evil/unfriendly. Any AGI which acts to invasively alter us, or to create conscious entities of vis own, will almost certainly modify humanity out of existence to free up resources for entities which will likely not preserve our memories or identities. I didn't mean to suggest "grandfathering" as a safeguard against a deceptive AGI, but as part of the actual framework of an operating FAI. An AGI should only tile volumes of the universe unlikely to otherwise come under the jurisdiction of future human (or ET) civilizations. This would mean no computronium until ve sprints away to co-ordinates where the cosmological constant ensures only non-rival resources are being consumed by the AGI. AGI territory would be a hollow sphere expanding outwards in this cosmological topography. Same idea here on earth. If an AGI suggests the re-education of an unwilling Amish community to catch up with post-singularity realities, the advice should not be taken and a "wildlife preserve" should be constructed around the community.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT