From: Johnicholas Hines (johnicholas.hines@gmail.com)
Date: Wed Oct 14 2009 - 16:00:50 MDT
Hi. To some extent, I think this list is an appropriate place to
discuss the current state of the art in software safety.
One important strategy is to stay small - for example, I've read that
Ontario Hydro used a nuclear power plant shutdown system of roughly
6,000 lines of code.
http://www.safeware-eng.com/system%20and%20software%20safety%20publications/High%20Pressure%20Steam%20Engines.htm
It's sometimes possible to leverage this strategy by building systems
around small "kernels" - components intended to assure safety even
though the system as a whole contains untrusted code.
The recently verified seL4 microkernel is an example:
http://www.nicta.com.au/news/current/world-first_research_breakthrough_promises_safety-critical_software_of_unprecedented_reliability
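To make the pattern concrete, here is a rough Python sketch (the names
and limits are invented for illustration; this is not how seL4 or the
Ontario Hydro system actually works): a large untrusted planner
proposes actions, and a tiny trusted kernel is the only code with
authority to let them through.

def untrusted_planner(state):
    # Stands in for a large, unverified codebase.
    return {"rod_position": state["rod_position"] - 5}

class SafetyKernel:
    # Small enough to inspect exhaustively; the only code that
    # must be trusted for the safety property to hold.
    LIMITS = {"rod_position": (0, 100)}

    def check(self, action):
        for key, value in action.items():
            if key not in self.LIMITS:
                return False   # unknown actuators are rejected outright
            lo, hi = self.LIMITS[key]
            if not lo <= value <= hi:
                return False
        return True

def step(state, kernel):
    action = untrusted_planner(state)
    if kernel.check(action):
        state.update(action)
    else:
        state["shutdown"] = True   # fail safe rather than fail operational
    return state

print(step({"rod_position": 3}, SafetyKernel()))

Only the kernel's dozen-odd lines need to be correct for the safety
property to hold, no matter how large the planner grows.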
It's also a standard strategy in automated theorem proving: keep a
tiny, trusted proof checker that verifies the output of the rest of
the system. Appel has a proof checker that is about 2,700 lines of code:
http://www.ingentaconnect.com/content/klu/jars/2003/00000031/F0020003/05255627
McCune has another, called Ivy:
http://www.cs.unm.edu/~mccune/papers/ivy/
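Here is a toy checker in the same spirit (purely illustrative - it is
not Appel's or McCune's actual design): the untrusted prover emits a
proof as a list of steps, each either an axiom or an application of
modus ponens to earlier lines, and the small checker replays every step.

def check_proof(axioms, proof, goal):
    # A step is ("axiom", formula) or ("mp", i, j), applying modus
    # ponens to earlier lines i and j. Implications are written as
    # the tuple ("->", antecedent, consequent).
    lines = []
    for step in proof:
        if step[0] == "axiom":
            formula = step[1]
            if formula not in axioms:
                return False
        elif step[0] == "mp":
            _, i, j = step
            if not (0 <= i < len(lines) and 0 <= j < len(lines)):
                return False   # may only cite earlier lines
            a, b = lines[i], lines[j]
            # b must be an implication whose antecedent is a
            if not (isinstance(b, tuple) and b[0] == "->" and b[1] == a):
                return False
            formula = b[2]
        else:
            return False
        lines.append(formula)
    return bool(lines) and lines[-1] == goal

axioms = {"p", ("->", "p", "q")}
proof = [("axiom", "p"), ("axiom", ("->", "p", "q")), ("mp", 0, 1)]
print(check_proof(axioms, proof, "q"))   # True

However clever or buggy the prover is, a proof only counts if these
few lines accept it.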
To some extent, we can think of a prediction market as a safety
mechanism. If you're unfamiliar with the concept, see:
http://hanson.gmu.edu/ideafutures.html
Suppose there were a piece of widely available software that acted as
a prediction market for multiple reinforcement-learning AI agents. AI
projects might use it to get "wisdom of crowds" aggregate performance
from their agents.
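As a crude illustration of why the aggregation helps (this
wealth-weighted pool only approximates a real market mechanism such as
Hanson's market scoring rules, and all the numbers are invented):

def pooled_forecast(beliefs, wealth):
    # The market's consensus probability, weighted by each
    # agent's accumulated wealth.
    total = sum(wealth)
    return sum(p * w for p, w in zip(beliefs, wealth)) / total

def settle(beliefs, wealth, outcome):
    # Pay each agent in proportion to the probability it assigned
    # to what actually happened (a simple proper-scoring payout).
    return [w * (p if outcome else 1 - p) for p, w in zip(beliefs, wealth)]

wealth = [1.0, 1.0, 1.0]
rounds = [
    ([0.9, 0.6, 0.1], True),    # agent 0 is well calibrated here
    ([0.8, 0.5, 0.2], True),
    ([0.2, 0.5, 0.9], False),
]
for beliefs, outcome in rounds:
    print("pooled:", round(pooled_forecast(beliefs, wealth), 3))
    wealth = settle(beliefs, wealth, outcome)
print("final influence:", [round(w, 3) for w in wealth])

Agents that keep mispredicting lose wealth and therefore influence
over the pooled forecast, which is the sense in which the market acts
as a safety mechanism.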
Would this increase or decrease existential risk?
Johnicholas