From: Charles Hixson (firstname.lastname@example.org)
Date: Mon Jan 12 2009 - 14:36:14 MST
Aleksei Riikonen wrote:
> Suppose that we don't really learn how to build FAI before we learn
> e.g. to scan human brains and construct simulated humans in simulated
> universes that we can run at huge subjective speedups.
> What would be the safest (realistic) thing to do?
> One option would be to run a rather large amount of simulations of a
> rather large amount of humans (and various modifications of humans),
> observe which simulated humans/tweaks appear to be the most reliable
> "good guys", and let those out into the real world as weakly
> superhuman AIs.
> I mean, if we don't have FAI, we anyway need to have imperfect
> humans(/non-humans) in positions of power. Instead of a real human, I
> would much rather vote for a simulated being on whom I have thousands
> of years of pseudo-historical data of how it has acted in simulated
> situations where it was tempted to be corrupted etc.
> Could it be argued that if we are in an ancestor simulation, one of
> the above kind is of a comparatively high probability? Sounds like one
> of the better reasons to run ancestor simulations.
> PS: I'd be glad to hear if I'm actually not saying anything new here.
I consider that implausible, because I believe that by the time there is
enough computing power to run multiple simultaneous mini-universe
simulations (Earth would probably be enough; there's no reason to
simulate the rest of the galaxy, just fake the perceptions of it that
would be observable), either an FAI or a non-FAI will already have been
created.
However, given the scenario...
That does seem like a good reason to run the simulations, but you are
assuming that the person in charge is a good guy who can be trusted with
that kind of power. In which case, why not use him as your first upload?
Given human political organization, the only things that keep the
power-hungry psychopaths from scrambling to be an upload are:
1) They wouldn't see the upload as themselves, and neither would the
upload see itself as them.
2) They don't believe it's possible anyway.
3) They don't understand how much power a computer running the country
(under nominal external direction) would have.
If such a thing happens, and we're very lucky, the first upload will be
someone like Craig Venter. He's an egomaniac, but not a psychopath.
Simulations such as you propose would be a good rational solution. I just
don't see them as the kind of thing our political systems are good at.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT