From: Bradley Thomas (brad36@gmail.com)
Date: Wed Oct 14 2009 - 16:35:58 MDT
> Suppose there was a piece of widely available software that acted as a
> prediction market for multiple reinforcement-learning AI agents. AI projects
> might use it, in order to get "wisdom of crowds" aggregate performance from
> multiple agents.
> Would this increase or decrease existential risk?
It may decrease risk if there is an economic incentive for AIs to be plugged
into the market and therefore (presumably) regulated.
Brad Thomas
www.bradleythomas.com
Twitter @bradleymthomas, @instansa
-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Johnicholas
Hines
Sent: Wednesday, October 14, 2009 6:01 PM
To: sl4@sl4.org
Subject: [sl4] prediction markets
Hi. To some extent, I think this list is appropriate for discussing the
current state of the art for software safety.
One important strategy is to stay small - for example, I've read that
Ontario Hydro used a roughly 6000-LOC nuclear power plant shutdown system.
http://www.safeware-eng.com/system%20and%20software%20safety%20publications/High%20Pressure%20Steam%20Engines.htm
It's sometimes possible to leverage this by building systems containing
small "kernels" - components that are intended to assure safety, despite the
system as a whole containing untrusted code.
The recent seL4 kernel is an example:
http://www.nicta.com.au/news/current/world-first_research_breakthrough_promises_safety-critical_software_of_unprecedented_reliability
It's also a standard strategy in automated theorem proving - to have a tiny,
trusted proof checker that checks the output of the rest of the system.
Appel has a proof checker that is 2700 LOC:
http://www.ingentaconnect.com/content/klu/jars/2003/00000031/F0020003/05255627
McCune has another, called Ivy: http://www.cs.unm.edu/~mccune/papers/ivy/
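The "tiny trusted checker" strategy can be sketched in a few lines of Python. This is an illustrative toy, not code from seL4, Appel's checker, or Ivy: an untrusted component does an expensive search (here, factoring), and a small trusted kernel independently verifies the claimed answer, so correctness rests only on the checker.

```python
def untrusted_factor(n):
    """Untrusted component: searches for a nontrivial factorization of n.
    This could be arbitrarily complex or buggy without compromising safety."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return (d, n // d)
    return None

def trusted_check(n, claim):
    """Trusted kernel: a few lines that verify the claim independently.
    Only this function needs to be correct for the system to be safe."""
    if claim is None:
        return False
    a, b = claim
    return 1 < a < n and 1 < b < n and a * b == n

claim = untrusted_factor(91)
print(trusted_check(91, claim))  # True: 91 = 7 * 13, and the kernel confirms it
```

The design point is the asymmetry: checking a factorization (one multiplication) is far cheaper and simpler than finding one, just as checking a proof is simpler than discovering it.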
To some extent, we can think of a prediction market as a safety mechanism -
if you're unfamiliar with the concept:
http://hanson.gmu.edu/ideafutures.html
Suppose there was a piece of widely available software that acted as a
prediction market for multiple reinforcement-learning AI agents. AI projects
might use it, in order to get "wisdom of crowds" aggregate performance from
multiple agents.
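As a concrete illustration of the mechanism, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR) for a two-outcome market. The agent beliefs and liquidity parameter are made-up values, and the simplification that each risk-neutral trader moves the price all the way to its own belief is an assumption of the sketch (with budget limits, prices would blend the agents' beliefs instead):

```python
import math

B = 100.0  # LMSR liquidity parameter (illustrative value)

def cost(q):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return B * math.log(sum(math.exp(x / B) for x in q))

def prices(q):
    """Market probabilities implied by the outstanding shares q."""
    z = sum(math.exp(x / B) for x in q)
    return [math.exp(x / B) / z for x in q]

def trade_to_belief(q, belief):
    """Buy/sell outcome-0 shares until prices equal the agent's belief.
    Returns the new share vector and what the trade cost the agent."""
    p0, p1 = belief
    new_q = [q[1] + B * math.log(p0 / p1), q[1]]
    return new_q, cost(new_q) - cost(q)

q = [0.0, 0.0]  # two outcomes, uniform starting prices
for belief in [(0.7, 0.3), (0.6, 0.4), (0.9, 0.1)]:
    q, paid = trade_to_belief(q, belief)

print([round(p, 2) for p in prices(q)])  # [0.9, 0.1]
```

The scoring-rule structure is what makes the market a candidate safety mechanism: agents only profit by moving prices toward outcomes that actually occur, so honest reporting is incentive-compatible.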
Would this increase or decrease existential risk?
Johnicholas
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT