From: Aleksei Riikonen (aleksei@iki.fi)
Date: Mon Jan 12 2009 - 07:26:48 MST
Suppose that we don't learn how to build FAI before we learn,
e.g., to scan human brains and construct simulated humans in
simulated universes that we can run at huge subjective speedups.
What would be the safest (realistic) thing to do?
One option would be to run a rather large number of simulations of a
rather large number of humans (and of various modified humans),
observe which simulated humans/tweaks appear to be the most reliable
"good guys", and let those out into the real world as weakly
superhuman AIs.
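To make the shape of that filtering step concrete, here is a minimal
toy sketch in Python. Every name in it is hypothetical, and the
genuinely hard parts (actually simulating human-level minds, and
scoring what counts as a "good guy") are stubbed out with a
random-number placeholder; it only illustrates the selection logic
itself: simulate many candidates in many tempting scenarios, keep
those with the cleanest track record.

    # Toy sketch only. All names are hypothetical; simulate_scenario
    # stands in for the (unsolved) problem of running and scoring a
    # simulated human in a temptation/corruption scenario.
    import random

    def simulate_scenario(candidate, seed):
        """Placeholder: run one simulated scenario and return an
        integrity score in [0, 1]."""
        rng = random.Random(candidate["id"] * 1_000_003 + seed)
        return min(1.0, max(0.0, rng.gauss(candidate["integrity"], 0.1)))

    def track_record(candidate, n_scenarios=10_000):
        """Average integrity over many pseudo-historical scenarios."""
        scores = [simulate_scenario(candidate, s) for s in range(n_scenarios)]
        return sum(scores) / len(scores)

    def select_good_guys(candidates, threshold=0.99):
        """Keep only candidates whose simulated history clears the bar."""
        return [c for c in candidates if track_record(c) >= threshold]

    candidates = [{"id": i, "integrity": random.random()} for i in range(100)]
    released = select_good_guys(candidates)
    print(f"{len(released)} of {len(candidates)} candidates pass the bar")

The point the sketch makes is just that the bar can be set very high,
since subjective speedup lets you buy an arbitrarily long track record
per candidate before releasing anyone.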
I mean, if we don't have FAI, we will in any case need to place
imperfect humans (or non-humans) in positions of power. Instead of a
real human, I would much rather vote for a simulated being on whom I
have thousands of years of pseudo-historical data showing how it has
acted in simulated situations where it was tempted to become corrupt,
etc.
Could it be argued that if we are in an ancestor simulation, a
simulation of the above kind is comparatively probable? That sounds
like one of the better reasons to run ancestor simulations.
PS: I'd be glad to hear if I'm actually not saying anything new here.
-- Aleksei Riikonen - http://www.iki.fi/aleksei