Re: [sl4] prediction markets

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Fri Oct 16 2009 - 14:56:13 MDT


Regarding the seL4 kernel, it took 25-30 person-years to prove about 7,500 lines of code correct.
http://ertos.nicta.com.au/research/l4.verified/numbers.pml

A global, friendly AI that encompasses all human knowledge would be equivalent to about 10^16 lines of code
(10^10 people x 10^9 bits per person x 3% non-overlap x 0.03 lines per bit).
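
For the record, here is that arithmetic spelled out. Every factor is a
rough assumption, not a measured quantity; the 10^9 bits figure is in
the ballpark of Landauer's estimate of human long-term memory.

    # Back-of-envelope check of the 10^16 lines-of-code estimate.
    # All four factors are rough assumptions, not measurements.
    people = 1e10          # world population, order of magnitude
    bits_per_person = 1e9  # long-term memory (Landauer-style estimate)
    non_overlap = 0.03     # fraction of knowledge unique to each person
    lines_per_bit = 0.03   # lines of code to encode one bit of knowledge

    total_lines = people * bits_per_person * non_overlap * lines_per_bit
    print("%.1e lines" % total_lines)  # ~9.0e+15, i.e. about 10^16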

A distributed development effort will certainly be required. My AI proposal is a P2P network of agents that trade data as if in an economy where information has negative value. Intelligence comes from lots of narrow specialists and a distributed index that routes messages to the right experts. Market forces reward local intelligence and good behavior. http://mattmahoney.net/agi2.html
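
To make the routing idea concrete, here is a toy Python sketch. The
Agent class and the keyword-overlap scoring rule are hypothetical
illustrations, not the actual protocol in the proposal above, and the
economic side (payments attached to information) is omitted.

    # Toy sketch: a distributed index routes a message to the narrow
    # specialists whose vocabulary best matches it. Hypothetical code,
    # not the protocol from the proposal.
    class Agent:
        def __init__(self, name, topics):
            self.name = name
            self.topics = set(topics)  # the agent's narrow specialty

        def score(self, message):
            # crude relevance: fraction of message words in the specialty
            words = set(message.lower().split())
            return float(len(words & self.topics)) / max(len(words), 1)

    def route(message, agents, k=2):
        # forward the message to the k most relevant specialists
        ranked = sorted(agents, key=lambda a: a.score(message), reverse=True)
        return [a.name for a in ranked[:k]]

    agents = [Agent("weather", ["rain", "forecast", "storm"]),
              Agent("stocks", ["price", "market", "shares"]),
              Agent("cooking", ["recipe", "bake", "oven"])]
    print(route("what is the forecast for rain tomorrow", agents))
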
 -- Matt Mahoney, matmahoney@yahoo.com

----- Original Message ----
From: Johnicholas Hines <johnicholas.hines@gmail.com>
To: sl4@sl4.org
Sent: Wed, October 14, 2009 6:00:50 PM
Subject: [sl4] prediction markets

Hi. To some extent, I think this list is appropriate for discussing
the current state of the art in software safety.

One important strategy is to stay small - for example, I've read that
Ontario Hydro used a roughly 6,000-line nuclear power plant shutdown system.
http://www.safeware-eng.com/system%20and%20software%20safety%20publications/High%20Pressure%20Steam%20Engines.htm

It's sometimes possible to leverage this by building systems
containing small "kernels" - components that are intended to assure
safety, despite the system as a whole containing untrusted code.

The recent seL4 kernel is an example:
http://www.nicta.com.au/news/current/world-first_research_breakthrough_promises_safety-critical_software_of_unprecedented_reliability

It's also a standard strategy in automated theorem proving to have a
tiny, trusted proof checker that checks the output of the rest of the
system. Appel has a proof checker that is about 2,700 lines of code:
http://www.ingentaconnect.com/content/klu/jars/2003/00000031/F0020003/05255627
McCune has another, called Ivy:
http://www.cs.unm.edu/~mccune/papers/ivy/
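
To make the pattern concrete, here is a toy checker for proofs built
from premises and modus ponens. It is a hypothetical miniature, nothing
like Appel's or McCune's actual systems, but it shows why the trusted
part can stay tiny: the untrusted prover may emit anything, and only
proofs this one function accepts are believed.

    # Toy trusted checker. Formulas are atom strings like "P" or
    # implications ("->", A, B). A proof is a list of steps:
    #   ("premise", i) -- cite premises[i]
    #   ("mp", j, k)   -- steps j: A->B and k: A yield B
    def check(premises, proof):
        derived = []
        for step in proof:
            if step[0] == "premise":
                derived.append(premises[step[1]])
            elif step[0] == "mp":
                impl, ante = derived[step[1]], derived[step[2]]
                if not (isinstance(impl, tuple) and impl[0] == "->"
                        and impl[1] == ante):
                    return None  # reject: not a valid modus ponens
                derived.append(impl[2])
            else:
                return None  # reject: unknown rule
        return derived[-1] if derived else None

    # Example: from P and P -> Q, an untrusted prover derives Q.
    premises = ["P", ("->", "P", "Q")]
    proof = [("premise", 0), ("premise", 1), ("mp", 1, 0)]
    print(check(premises, proof))  # Q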

To some extent, we can think of a prediction market as a safety
mechanism - if you're unfamiliar with the concept:
http://hanson.gmu.edu/ideafutures.html

Suppose there were a piece of widely available software that acted as a
prediction market for multiple reinforcement-learning AI agents. AI
projects might use it to get "wisdom of crowds" aggregate performance
from multiple agents.
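
To make that concrete, here is a minimal sketch of one standard
mechanism, Hanson's logarithmic market scoring rule (LMSR), for a
binary question. The agents and their trades below are made up for
illustration; the aggregate price is the market's "crowd" probability.

    import math

    b = 10.0        # liquidity parameter
    q = [0.0, 0.0]  # outstanding shares for outcomes [yes, no]

    def cost(q):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return b * math.log(sum(math.exp(x / b) for x in q))

    def price(q, i):
        # current market probability of outcome i
        return math.exp(q[i] / b) / sum(math.exp(x / b) for x in q)

    def buy(q, i, shares):
        # a trader pays the market maker the change in the cost function
        before = cost(q)
        q[i] += shares
        return cost(q) - before

    # Three hypothetical agents bet according to their private beliefs.
    for outcome, shares in [(0, 5.0), (0, 3.0), (1, 4.0)]:
        paid = buy(q, outcome, shares)
        print("bought %.0f of outcome %d for %.2f" % (shares, outcome, paid))

    print("aggregate P(yes) = %.2f" % price(q, 0))  # about 0.60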

Would this increase or decrease existential risk?

Johnicholas


