Re: [extropy-chat] Two draft papers: AI and existential risk; heuristics and biases

From: Peter de Blanc
Date: Tue Jun 13 2006 - 13:17:56 MDT

On Tue, 2006-06-13 at 10:57 -0700, Jef Allbright wrote:
> The problem is in the concept of "works better". Where does the
> knowledge defining what is better (necessarily more refined than
> present internal knowledge) come from, if not from some form of
> competition with that which is external to the present system?

Let's say I want to learn how to play Go well, but I don't have anybody
to play with.

I can start by solving the game on small boards. Then I can look for
fast, compact rules that guess the correct move, or accurately predict
the final board position from the current position. Then I can start
playing games against myself on larger boards. Any time I find a clever
move, I look for other positions where I could apply it. Any time my
depth-1 analysis disagrees with my depth-5 analysis, I figure my depth-5
analysis is probably more accurate, and I can look for heuristics that
would make my depth-1 analysis agree with it. I can see which openings
win most often. I can prove theorems about Go like Benson's Rule. I can
do all this without ever playing Go against somebody else.
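The shallow-versus-deep bootstrapping step above can be sketched in miniature. The code below is a hypothetical illustration (not anything from the original discussion), using Nim instead of Go so the game is small enough to solve exactly: a depth-limited minimax with a naive evaluation disagrees with deep search on many positions, and substituting the deep-search values as the new evaluation makes depth-1 analysis agree with deep analysis everywhere:

```python
import itertools

def moves(piles):
    """All legal Nim moves: remove 1..n stones from a single pile."""
    for i, n in enumerate(piles):
        for take in range(1, n + 1):
            yield tuple(p - take if j == i else p for j, p in enumerate(piles))

def minimax(piles, depth, heuristic):
    """Value for the player to move (+1 = win, -1 = loss), normal play:
    whoever takes the last stone wins. `heuristic` scores cutoff nodes."""
    if all(p == 0 for p in piles):
        return -1            # no moves left: the player to move has lost
    if depth == 0:
        return heuristic(piles)
    return max(-minimax(child, depth - 1, heuristic) for child in moves(piles))

naive = lambda piles: 0      # an evaluation that knows nothing about the game

# Every position with three piles of at most 3 stones. At most 9 stones
# total and each move removes at least one, so depth 9 is an exact solve.
positions = [p for p in itertools.product(range(4), repeat=3) if any(p)]
deep = {p: minimax(p, 9, naive) for p in positions}

# Depth-1 search with the naive evaluation disagrees with deep search...
disagreements = [p for p in positions if minimax(p, 1, naive) != deep[p]]

# ...so use the deep values as the improved heuristic, as in the text:
learned = lambda piles: deep[piles]
```

With the learned evaluation, every depth-1 judgment matches the deep one, which is the sense in which self-play plus deep analysis can refine shallow heuristics without an external opponent.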

Of course, some problems require more data than others. Generally, if
you're trying to figure out the implications of a small set of axioms,
you don't need to spend a lot of time gathering data. If you're trying
to work backwards from the implications to figure out which axioms they
came from, more data would certainly help. Math is a problem space of
the former type, and physics is a problem space of the latter type.
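The asymmetry in data requirements can be made concrete with a toy "axiom" (a hypothetical sketch, not specific to any real physics): deriving implications from a known rule takes no observations at all, while recovering the rule from its implications takes at least as many observations as the rule has free parameters.

```python
# Forward direction (math-like): the axiom is known, so its
# implications can be generated without gathering any data.
rule = lambda x: 3 * x + 1              # hypothetical axiom: y = a*x + b
implications = [rule(x) for x in range(5)]

# Backward direction (physics-like): only implications are observed.
# Recovering the two unknown parameters (a, b) of y = a*x + b requires
# at least two distinct data points.
def infer(points):
    (x0, y0), (x1, y1) = points[:2]     # two observations pin down a line
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    return a, b
```

One observation leaves the line underdetermined; richer axiom spaces need correspondingly more data, which is the point of the contrast above.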

Then there's stuff like chemistry, where you already know the axioms and
you're trying to figure out their implications, but you just don't have
enough computing power to do it on your own. Well, if you know how to
make computronium, this isn't a problem, but let's say you don't.
Fortunately, nature provides us with a lot of computing power that's
already configured for figuring out the implications of the axioms of
chemistry.

I actually think you could "solve chemistry" without that kind of brute
force if you're clever enough, but even if you can't, it's not all that
important to recursive self-improvement (RSI). I think RSI is more like
math or Go than like chemistry. You already know what optimization is
about, but you don't know the computationally cheap way to do it.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT