From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Thu Jul 17 2008 - 02:38:08 MDT
> The trusted third party could be in control of and inform you about the
> hardware.
> http://en.wikipedia.org/wiki/Trusted_Computing#Remote_attestation
That seems eminently hackable.
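To make that concrete: attestation only ever vouches for whatever the
measurement chain reports, so a compromised chain can report anything
it likes. A toy sketch of the check involved (all names here are
hypothetical, and an HMAC stands in for the TPM's asymmetric signing
key; this is an illustration, not the real protocol):

    import hashlib, hmac

    # Illustrative only: a shared secret standing in for a real
    # attestation key held by the trusted third party.
    ATTESTATION_KEY = b"secret-shared-with-trusted-third-party"

    def quote(platform_state: bytes) -> tuple[bytes, bytes]:
        # Attester side: measure the platform, authenticate the measurement.
        measurement = hashlib.sha256(platform_state).digest()
        tag = hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()
        return measurement, tag

    def verify(measurement: bytes, tag: bytes, expected: bytes) -> bool:
        # Verifier side: check the tag, then compare against the
        # known-good state. If the measuring code itself is hacked,
        # this check passes while the hardware lies to you.
        good_tag = hmac.new(ATTESTATION_KEY, measurement, hashlib.sha256).digest()
        return (hmac.compare_digest(good_tag, tag)
                and hmac.compare_digest(measurement, expected))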
> I don't see why the history would be necessary, although knowing that a
> hostile agent tampered with the source could make you more careful of
> obfuscated nastiness.
Precisely. But knowing, for example, that the SI was a complicated
evolved entity, or was designed by an entity with a certain reputation
- that is relevant. (In my GodAI idea, I detail a method of checking
an AI's trustworthiness; the process could only work because we know
the origins of the AI and are confident that there are no deliberate
poison pills. A generic AI could not be tested that way.)
> Either way, though, you'd want to rigorously prove
> cooperation if possible.
Probably not possible. The tampering might be in the hardware; the
core instructions of the AI might be set to degrade over time and be
replaced by others (many other variants are possible).
It might be a requirement that you (or a trusted third party) get to
copy the SI onto new hardware. That way you can be fairly certain
there are no hardware hacks, and you can inspect the full data as you
copy it. If software cooperation can then be proved, you can trust.
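A rough sketch of what that migration might look like, assuming the
SI's state can be streamed as raw blocks (read_block, write_block and
inspect are stand-ins for whatever the real transfer and audit tools
would be):

    import hashlib

    def copy_and_inspect(read_block, write_block, inspect, block_size=1 << 20):
        # Migrate the SI's full state onto fresh hardware, auditing
        # every block in transit and fingerprinting exactly what gets
        # installed on the new machine.
        digest = hashlib.sha256()
        while True:
            block = read_block(block_size)
            if not block:
                break
            inspect(block)         # audit the data as it passes through
            digest.update(block)
            write_block(block)
        return digest.hexdigest()  # hash of exactly what now runs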
Of course, all this assumes that terms like hardware and software
have clearly separate meanings for the SI.
Stuart