Re: [sl4] A hypothesis for what our world might simulate

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Jan 12 2009 - 08:56:23 MST


--- On Mon, 1/12/09, Aleksei Riikonen <aleksei@iki.fi> wrote:

> Suppose that we don't really learn how to build FAI before we learn
> e.g. to scan human brains and construct simulated humans in simulated
> universes that we can run at huge subjective speedups.
>
> What would be the safest (realistic) thing to do?

Ban AI (not that I advocate that position).

> One option would be to run a rather large number of simulations of a
> rather large number of humans (and various modifications of humans),
> observe which simulated humans/tweaks appear to be the most reliable
> "good guys", and let those out into the real world as weakly
> superhuman AIs.

One possibility is that if you simulate a superhuman AI, it might figure out that it is in a simulation and behave nicely for a while. You wouldn't know if it did, because it would be smarter than you. Also (by assumption) we have not solved AGI, so you can't just look into its mental state and know what it is thinking.

-- Matt Mahoney, matmahoney@yahoo.com


