Re: [sl4] A hypothesis for what our world might simulate

From: Krekoski Ross (rosskrekoski@gmail.com)
Date: Tue Jan 13 2009 - 17:36:29 MST


On Tue, Jan 13, 2009 at 7:22 PM, Vladimir Nesov <robotact@gmail.com> wrote:

>
> The premise is that a group of FAI researchers has a chance to
> succeed, depending on happenstance and the ability to work for a long
> enough time. Simulation is a way to improve the chances of success,
> compared to a setting where the researchers continue to work in the
> real world. First, simulation speeds things up, which lowers the risk
> of running into the external end of the world. Second, using selected
> individuals to do the research in the simulation should make the risk
> of unFriendly AI originating from the simulation lower than the same
> risk in the outside world. Peer review should lower the risk of
> unFriendly AI further, even as the number of simulated researchers
> increases. Different combinations of researchers increase diversity,
> which may make it possible to find the solution faster where one
> combination wouldn't succeed. And finally, nested simulations are
> mentioned only in the context of starting to implement the AI, at
> which point they can serve as intelligent firewalls that somewhat
> mitigate the risk of the AI unexpectedly developing not according to
> plan, so that upper-level simulation FAI programmers would be able to
> terminate simulations on the lower levels. No single measure gives a
> solution to FAI; each only contributes to increasing the chances of
> success (and of course, this whole scenario should be allowed to
> reflectively reorganize itself, if the simulated researchers agree
> it's a good idea).
>

1. The researchers ARE AIs themselves; how do we know their definition of
Friendly is the same as ours?
2. Do the researchers know they are simulated? If not, then I don't see how
this eliminates my scenario of silly recursion: we can consider ourselves
to be researchers now, and we are proposing to make a set of simulated
researchers to solve this problem. I don't think our simulated researchers
can have any idea, beyond what is hard-wired, of how clock cycles affect
their own cognition, and I can conceive of situations where they create
simulated researchers of their own that run for 1000 ticks for every tick
they themselves run. To us this is inefficient, but it is subjectively
sped up for our researchers, since they have no concept of time outside
their simulation (a toy sketch after point 4 makes this concrete).

3. What if they decide to recursively improve themselves in order to speed
up making the FAI?
4. Since it's subjectively sped up, if the "intelligent firewall" gets
compromised, it's still over before we can do anything about it.
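
To make the tick-ratio arithmetic in point 2 concrete, here is a toy
Python sketch (entirely my own illustration; the 1000:1 ratios and the
flat per-tick cost are assumptions, not anything Vladimir proposed). Each
nested level that runs R ticks per tick of its parent gets R times more
subjective time per parent tick, and costs the real hardware R times more
compute per parent tick:

def nested_simulation_costs(tick_ratios, ops_per_tick=1.0):
    # tick_ratios[i] = ticks level i+1 executes per tick of level i
    # ops_per_tick = assumed real operations to simulate one tick
    speedup = 1.0
    for depth, ratio in enumerate(tick_ratios, start=1):
        speedup *= ratio
        print("level %d: %.0f subjective ticks per top-level tick, "
              "%.0f real ops per top-level tick"
              % (depth, speedup, speedup * ops_per_tick))

# Example: our simulated researchers spawn inner researchers at 1000:1,
# who in turn spawn a further level at 1000:1.
nested_simulation_costs([1000, 1000])

At the level-2 figures this prints (a million subjective ticks per
top-level tick), a one-tick reaction window at the top level is a million
ticks of head start for whatever is running inside, which is the sense of
point 4: a breach of the "intelligent firewall" is over before anyone
outside can respond.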


