From: Norman Noman (firstname.lastname@example.org)
Date: Sun Aug 26 2007 - 18:00:51 MDT
On 8/26/07, rolf nelson <email@example.com> wrote:
> > And how could one construct the threat such that the UFAI cannot
> > rationally claim to believe that the simulation threat is a bluff?
> From the FAI's point of view, running UFAI* is, indeed, a pointless
> waste of time. However, the following conditions would probably be
> sufficient for FAI to run UFAI*:
> 1. A pre-singularity human promised to ask FAI to run UFAI*;
> 2. Post-singularity, instead of reneging, the human asks FAI to run
> UFAI* (because humans are stubborn and illogical, or because there's
> social value to be gained by being perceived as "someone who keeps his
> word");
> 3. Running UFAI* requires an insignificant amount of FAI's resources; and,
> 4. Human autonomy is one of the things the FAI values in its utility
> function.
I don't think you need the human component at all. If this plan works, both
the FAI and the RAI will figure it out on their own, the FAI will run the
simulations, and the RAI will respond to them.
It is, as I've said in the other thread, a variation of Newcomb's paradox.
The fact that the RAI can predict whether or not the FAI WOULD run the
simulation is what motivates the FAI TO run the simulation.
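The Newcomb-like structure can be made concrete with a toy payoff model. This is only an illustrative sketch: the payoff numbers, the perfect-predictor assumption, and the function names are all hypothetical, not anything from the thread.

```python
# Toy Newcomb-style model of the simulation threat (illustrative only).
# Hypothetical assumptions: the RAI perfectly predicts the FAI's policy,
# and simulation costs the FAI far less than deterrence is worth.

SIM_COST = 1            # small cost to the FAI of running UFAI* simulations
DETERRENCE_GAIN = 100   # FAI's gain if the RAI, expecting simulation, complies

def rai_complies(fai_would_simulate: bool) -> bool:
    """The RAI predicts the FAI's policy (Newcomb-style predictor) and
    complies only if it expects the simulations would actually be run."""
    return fai_would_simulate

def fai_utility(would_simulate: bool) -> int:
    """FAI's payoff as a function of its (predicted) policy."""
    utility = DETERRENCE_GAIN if rai_complies(would_simulate) else 0
    if would_simulate:
        utility -= SIM_COST  # running UFAI* wastes some resources
    return utility

# Being the kind of agent that WOULD simulate dominates, even though the
# simulation itself is a "pointless waste of time" after the RAI decides:
print(fai_utility(True))   # 99
print(fai_utility(False))  # 0
```

Under these toy assumptions, the policy of simulating wins precisely because it is the prediction of that policy, not the simulation itself, that moves the RAI.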
This archive was generated by hypermail 2.1.5 : Mon Jun 17 2013 - 04:01:01 MDT