Re: [sl4] A hypothesis for what our world might simulate

From: Vladimir Nesov (robotact@gmail.com)
Date: Tue Jan 13 2009 - 04:22:10 MST


On Tue, Jan 13, 2009 at 10:03 AM, Krekoski Ross <rosskrekoski@gmail.com> wrote:
>
> yes, I said infinite with the implication that our nested simulated
> researchers would possibly create nested simulated researchers of their own
> ad infinitum. well actually ad until our nested researchers spread computing
> resources so thinly that we run out of RAM or something. We still might not
> get our FAI though. It's not clear to me actually how creating a series of
> nested reality simulations actually "solves" anything other than having too
> much computing resources...
>

The premise is that a group of FAI researchers has a chance to
succeed, depending on happenstance and on their ability to work for a
long enough time. Simulation is a way to improve the chances of
success, compared to a setting where the researchers continue to work
in the real world. First, simulation speeds things up, which lowers
the risk of running into the external end of the world. Second, using
selected individuals to do the research in the simulation should make
the risk of unFriendly AI originating from the simulation lower than
the same risk in the outside world. Peer review should lower the risk
of unFriendly AI further, even as the number of simulated researchers
increases. Different combinations of researchers increase diversity,
which may make it possible to find a solution faster where any one
combination would fail. And finally, nested simulations are mentioned
only in the context of starting to implement the AI, at which point
they can serve as intelligent firewalls that somewhat mitigate the
risk of the AI unexpectedly developing not according to plan, so that
upper-level simulation FAI programmers would be able to terminate
simulations on the lower levels. No single measure gives a solution to
FAI; each only contributes to increasing the chances of success (and
of course, this whole scenario should be allowed to reflectively
reorganize itself, if the simulated researchers agree it's a good
idea).

-- 
Vladimir Nesov
robotact@gmail.com
http://causalityrelay.wordpress.com/


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT