From: Stuart Armstrong (firstname.lastname@example.org)
Date: Mon Jan 12 2009 - 08:25:03 MST
One problem that springs to mind is that if we want to know how humans
would react as superhuman AIs, we need to run them as superhuman AIs
(a nice neighbour is not necessarily a nice supreme president in charge
of the world). Thus we'd need to ensure the simulation was believable
enough to deceive a superhuman AI (very tricky, as we can't use
superhuman AIs ourselves) and take the usual precautions to keep the AI
in the box (the simulated universe) - which is nearly impossible (and
recall that a superhuman AI that merely suspects it might be in a box
may divert a small part of its energies to getting out of it - enough
to convince us, anyway).
2009/1/12 Aleksei Riikonen <email@example.com>:
> Suppose that we don't really learn how to build FAI before we learn
> e.g. to scan human brains and construct simulated humans in simulated
> universes that we can run at huge subjective speedups.
> What would be the safest (realistic) thing to do?
> One option would be to run a rather large number of simulations of a
> rather large number of humans (and various modifications of humans),
> observe which simulated humans/tweaks appear to be the most reliable
> "good guys", and let those out into the real world as weakly
> superhuman AIs.
> I mean, if we don't have FAI, we will anyway need to have imperfect
> humans (or non-humans) in positions of power. Instead of a real human,
> I would much rather vote for a simulated being on whom I have thousands
> of years of pseudo-historical data showing how it has acted in simulated
> situations where it was tempted to be corrupted, etc.
> Could it be argued that, if we are in an ancestor simulation, a
> simulation of the above kind is comparatively probable? That sounds
> like one of the better reasons to run ancestor simulations.
> PS: I'd be glad to hear if I'm actually not saying anything new here.
> Aleksei Riikonen - http://www.iki.fi/aleksei