Re: [sl4] A model of RSI

From: Matt Mahoney
Date: Thu Sep 25 2008 - 09:22:17 MDT

--- On Wed, 9/24/08, Mike Dougherty wrote:

> I could only conceive of 'better' being used to examine the difference
> in local minima/maxima between arbitrary regions of spacetime, which
> is how I visualize your original point about reaching a goal in less
> time (faster) or to a greater extent (quantity).

Exactly. "Better" is meaningful only in the context of the culture of the dominant species, which right now is Homo sapiens. Self-replicating nanobots could have a completely different view. This is one of my criticisms of the friendly-AI problem. Why do anything about it? Evolution will decide what "better" means.

My other criticism is that we doom our species by trying to achieve the impossible. Evolution has given us the goals of wanting to learn and not wanting to die, because those goals increase our fitness. Intelligence requires both the ability to learn and the desire to learn. The desire to learn causes us to explore, experiment, play games, read, and interact socially. As a result, we develop language, culture, an efficient economy, and technology.

The desire not to die causes us to want to produce copies of ourselves with the same memories, goals, behavior, and appearance, to be turned on after we die. (Whether such a copy transfers your consciousness and becomes "you" is an irrelevant philosophical question.) Once we have the technology to upload, you will see your dead friends appear to come back to life. Since you have nothing to lose, you will invest in this option, hoping for immortality.

The result is a lot of autonomous agents with human-like goals, but with options not available to us, such as the ability to reprogram their brains. Some will directly optimize their utility functions or live in simulations with magic genies. They will die. Others will turn off their fear of death. They will also die. Others will have the goal of replicating themselves or some variation as fast as possible. The copies that fear death and can't change their goals will take over. So we are back where we started, with an evolutionary process.

Could someone remind me again, what are we trying to achieve with a singularity?

-- Matt Mahoney

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT