From: Johnicholas Hines (firstname.lastname@example.org)
Date: Wed Oct 14 2009 - 16:00:50 MDT
Hi. To some extent, I think this list is an appropriate place to discuss
the current state of the art in software safety.
One important strategy is to stay small - for example, I've read that
Ontario Hydro used a nuclear power plant shutdown system of about 6,000
lines of code.
It's sometimes possible to leverage this by building systems around
small "kernels" - components intended to assure safety even though the
system as a whole contains untrusted code.
The recent seL4 kernel is an example.
It's also a standard strategy in automated theorem proving - to have a
tiny, trusted proof checker that checks the output of the rest of the
system. Appel has a proof checker that is about 2,700 lines of code.
McCune has another, called Ivy.
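To make the pattern concrete, here is a toy sketch of the idea (my own
illustration, not Appel's or McCune's actual checkers): an untrusted
prover emits a proof object, and a small trusted checker verifies every
step before the conclusion is believed. The proof format and function
names here are invented for the example.

```python
def check_proof(axioms, proof, goal):
    """Verify a Hilbert-style proof.

    Each proof step is either ('axiom', formula), which must be a member
    of `axioms`, or ('mp', i, j), which applies modus ponens to earlier
    lines i and j: if line i is A and line j is ('imp', A, B), derive B.
    Returns True only if every step checks and the last line is `goal`.
    """
    lines = []
    for step in proof:
        if step[0] == 'axiom':
            formula = step[1]
            if formula not in axioms:
                return False
        elif step[0] == 'mp':
            i, j = step[1], step[2]
            if not (0 <= i < len(lines) and 0 <= j < len(lines)):
                return False
            major = lines[j]
            # The major premise must be an implication whose
            # antecedent matches line i exactly.
            if not (isinstance(major, tuple) and major[0] == 'imp'
                    and major[1] == lines[i]):
                return False
            formula = major[2]
        else:
            return False  # unknown step kind: reject
        lines.append(formula)
    return bool(lines) and lines[-1] == goal
```

The point is that only this small function needs to be trusted; the
prover that searched for the proof can be arbitrarily large and buggy,
since any invalid proof it produces is simply rejected:

```python
axioms = {'p', ('imp', 'p', 'q')}
proof = [('axiom', 'p'), ('axiom', ('imp', 'p', 'q')), ('mp', 0, 1)]
check_proof(axioms, proof, 'q')   # valid proof of q
```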
To some extent, we can think of a prediction market as a safety
mechanism - if you're unfamiliar with the concept, it's a market where
participants bet on future outcomes, so that prices aggregate their
beliefs into probability estimates.
Suppose there were a piece of widely available software that acted as a
prediction market for multiple reinforcement-learning AI agents. AI
projects might use it to get "wisdom of crowds" aggregate performance
from multiple agents.
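A minimal sketch of how such aggregation could work, using Hanson's
logarithmic market scoring rule (LMSR), a standard automated-market-maker
mechanism for prediction markets. This is my own illustration under
assumed names and parameters, not a description of any existing system:

```python
import math

class LMSRMarket:
    """Toy LMSR market maker. Agents buy shares in outcomes; the
    implied prices form an aggregate probability estimate."""

    def __init__(self, n_outcomes, b=10.0):
        self.q = [0.0] * n_outcomes  # outstanding shares per outcome
        self.b = b                   # liquidity parameter

    def _cost(self, q):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def prices(self):
        """Current consensus probabilities implied by the market state."""
        z = sum(math.exp(x / self.b) for x in self.q)
        return [math.exp(x / self.b) / z for x in self.q]

    def buy(self, outcome, shares):
        """An agent buys `shares` of `outcome`; returns the cost paid."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost
```

Each RL agent would trade according to its own prediction, and the
market prices would be the crowd's aggregate - agents that predict well
accumulate budget and so get more influence over time, which is the
"wisdom of crowds" effect at work:

```python
market = LMSRMarket(n_outcomes=2)
market.prices()         # starts at [0.5, 0.5]
market.buy(0, 5.0)      # one agent bets on outcome 0
market.prices()         # price of outcome 0 now exceeds 0.5
```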
Would this increase or decrease existential risk?
This archive was generated by hypermail 2.1.5 : Sat May 18 2013 - 04:01:14 MDT