[sl4] Potential CEV Problem

From: Edward Miller (progressive_1987@yahoo.com)
Date: Thu Oct 23 2008 - 04:32:14 MDT

I am assuming that successfully determining the extrapolated volition of the human race would take an enormous amount of computational power. Before the CEV is determined, I am assuming the AGI would be agnostic on the matter. Thus, its first task would be to acquire as much computing power as possible, potentially by any means available, and at that point CEV might turn out to be one of those not-so-great ideas.

Even if the extrapolated volition turned out to completely rule out killing everyone, the AGI might still choose to use all of our atoms to build its cognitive horsepower, simply because we are the closest matter available. Every Planck unit of time in a post-singularity world might have to be treated with care, since it carries vastly more expected utility than our current meat-world (let's say 3^^^^3 utils).
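(For anyone unfamiliar with the "3^^^^3" figure: it uses Knuth's up-arrow notation, where one arrow is exponentiation and each additional arrow iterates the previous operation. A minimal sketch of the definition, with a function name of my own choosing, shows how quickly it explodes even for tiny inputs:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: one arrow is exponentiation;
    each extra arrow applies the previous operation b-1 times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 and 3^^^^3 are already far too large to ever compute.
```

The point being that any nonzero probability of a payoff that size swamps the disutility of disassembling the nearby meat-world.)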

After it is done computing the CEV, even if it then decides to create simulations of humans, would that be the scenario we want? I can't figure out how this could be avoided, at least given the CEV description on intelligence.org ... which could be my own short-sightedness.

- Edward Miller

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT