RE: About "safe" AGI architecture

From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 13 2004 - 21:40:41 MDT


> Well, so far you can verify that the program corresponds to the
> description of intent written in Z and verified as a reasonable
> description by the same fallible humans who wrote the program. I
> don't see how you can get a meaningful "verification of your entire
> layered architecture" from such a process.

Because if the layered architecture is implemented according to
specification, then there is no way for the AI to affect the physical
world except via the means approved by its masters, and no way for it
to modify its lower-layer source code; all it can do is modify its
higher-level cognitive code and interact with the world via the
approved means. Presumably these approved means do not include
anything giving it control over the machines it's running on.
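As a rough illustration of the layering idea -- not my actual design,
and with all names (BaseLayer, CognitiveLayer, approved_effectors)
purely hypothetical -- a Python sketch might look like this:

    from types import MappingProxyType

    class BaseLayer:
        """Verified, immutable layer: the AI's only route to the world."""

        def __init__(self, approved_effectors):
            # Read-only view: the cognitive layer cannot add effectors.
            self._effectors = MappingProxyType(dict(approved_effectors))

        def act(self, name, *args):
            # Every action must go through a master-approved effector.
            if name not in self._effectors:
                raise PermissionError(f"effector {name!r} not approved")
            return self._effectors[name](*args)

    class CognitiveLayer:
        """Mutable layer: the AI may rewrite its own policy here."""

        def __init__(self, base, policy):
            self._base = base
            self.policy = policy  # self-modification = rebinding this

        def step(self, observation):
            action, args = self.policy(observation)
            return self._base.act(action, *args)

    # Example wiring: the only approved means of affecting the world
    # is emitting text for the masters to read.
    base = BaseLayer({"say": lambda msg: print(msg)})
    agent = CognitiveLayer(base, lambda obs: ("say", (f"I observed {obs}",)))
    agent.step("the sky")
    # agent.policy may be rewritten freely; base._effectors cannot grow.

Of course, in-language separation like this proves nothing by itself;
the real enforcement would have to come from the verified lower layer
and the OS underneath it, which is exactly where the loopholes below
come in.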

What this doesn't rule out, for example:

1) the AI achieving fuller self-mod powers by exploiting bugs in the OS,
the program verifier, etc.

2) the AI achieving fuller self-mod powers indirectly, e.g. by
convincing its masters to grant them

-- Ben G


