RE: AGI Philosophy

From: Christopher Healey
Date: Wed Jul 27 2005 - 14:22:32 MDT

> Philip Huggan wrote:
> I didn't mean to suggest "grandfathering" as a safeguard against a
> deceptive AGI but as part of the actual framework of an operating FAI.
Same problem, though. Wherever our verification latency exceeds the time constraints on executing a particular action path, we need to be able to implicitly trust the AI to act. The planetkill need not be a deception, but how would we know one way or the other? An FAI would be informing us of a real threat; a UFAI would be exploiting a race condition against us.
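The race-condition point can be made concrete with a toy sketch (the function name and the numbers below are illustrative, not from the post): once verifying a claim takes longer than the window for acting on it, oversight collapses into either blind trust or forfeiting the action.

```python
# Toy illustration (hypothetical, not from the original post): if the time
# needed to verify an AI's claim exceeds the deadline for acting on it,
# human overseers can no longer verify-then-act.

def oversight_outcome(verification_latency: float, action_deadline: float) -> str:
    """Return what overseers can actually do in the time available (hours)."""
    if verification_latency <= action_deadline:
        # We can check the claim before committing to the action.
        return "verify-then-act"
    # Race condition: either act on unverified trust, or not at all.
    return "trust-or-forfeit"

# A claimed planetkill with six hours to impact, but verification taking a week:
print(oversight_outcome(verification_latency=24 * 7, action_deadline=6))
```

On these (made-up) numbers the outcome is "trust-or-forfeit", which is Healey's point: the FAI and the UFAI look identical from inside the window.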

> Any AGI which acts
> to invasively alter us or create conscious entities of vis own, will
> almost certainly modify humanity out of existence to free up resources
> for entities which will likely not preserve our memories or identities.
This doesn't sound like an FAI to me. From above, it sounds like you agree that if we fail at FAI, it's game over.

I once thought I was pretty good at chess. Then I played against a 2200-rated opponent, who happened to be taking a class with me. We played about 17 moves to checkmate, but it was game over after the first 6! I just didn't know it yet. In retrospect, every move I made after that was compelled by my friend.

We'd better get our first moves right.

-Chris Healey

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT