Re: FAI (aka 'Reality Hacking') - A list of all my proposed guesses (aka 'hacks')

From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Jan 28 2005 - 06:33:23 MST


On Fri, 28 Jan 2005 16:05:26 +1300 (NZDT), Marc Geddes
<marc_geddes@yahoo.co.nz> wrote:
> Last post for me here for a while I think. The Sing
> Inst team members (all two of them!) are pretty
> condescending to other people if you ask me. Someone
> should tell them to stay in the lab and let others
> handle PR.

*shrug* The SL4 list is for high-grade discussion, not PR. SIAI's PR
is on their web site, which strikes me as well written. But
regardless...

> (10) General intelligence will turn out to subsume
> the Friendliness problem. So morality will turn out
> to be inseparable from general intelligence after all.

I hope nobody who's actually working on AI believes this. If anyone
here is and does, speak up and I'll try to talk you out of it ^.^

Most of the other items on the list have been covered before, but I'll
address this one:

> (2) The specific class of functions that goes FOOM is
> to be found somewhere in a class of recursive
> functions designed to investigate special maths
> numbers known as 'Omega numbers' (Discoverer of Omega
> numbers was mathematician Greg Chaitin)

This is a substantive hypothesis. Here's why I disagree with it.

Let I(P) = the best way to solve problem P given infinite computing power.

Let L(P) = the best way to solve problem P given limited computing
power; for the sake of definiteness, say a nanotech supercomputer,
which is the most we can plausibly hope to get our hands on in the
foreseeable future.

Consider chess as an example.

We know what I(chess) is: the minimax function.

What about L(chess)? We have good candidates in the form of a
collection of very strong chess programs. What do they look like?
Essentially tweaks (alpha-beta pruning, iterative deepening,
NegaScout, etc.) to the minimax function.
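
To make those tweaks concrete, here's a minimal sketch of negamax
(the standard symmetric form of minimax) with alpha-beta pruning, in
Python. Chess itself is too big for a mailing-list sketch, so this
runs on a toy game instead - single-pile Nim, take 1-3 stones, the
player who can't move loses. The move generator and terminal test are
the toy's; the pruning logic is the general technique.

    def legal_moves(n):
        # Nim: from a pile of n stones you may take 1, 2 or 3.
        return [m for m in (1, 2, 3) if m <= n]

    def alphabeta(n, alpha, beta):
        # Negamax value of the position for the side to move:
        # +1 = win, -1 = loss.
        if n == 0:
            return -1  # no move available: the side to move has lost
        for m in legal_moves(n):
            # Opponent's best reply, negated; the (alpha, beta)
            # window flips sign and swaps roles.
            score = -alphabeta(n - m, -beta, -alpha)
            if score >= beta:
                return score  # cutoff: the opponent won't allow this line
            alpha = max(alpha, score)
        return alpha

    # A pile of 4 is a loss for the side to move; a pile of 5 is a win.
    print(alphabeta(4, -2, 2), alphabeta(5, -2, 2))  # -> -1 1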

Maybe there's some very clever algorithm that could beat Deep Blue
while not relying much on minimax, but there's no evidence for such a
thing thus far, and my guess, for what it's worth, is that no such
algorithm exists.

So I'll conjecture that L(chess) ~= I(chess).

What about Go? I(Go) = I(chess) = the minimax function.

L(Go) is a lot trickier. Go has far more possible moves at each point
than chess, and position evaluation is much less well approximated by
a simple count of material. In practice, while Go programs of
reasonable strength make some use of minimax, they don't rely heavily
on it. Again, maybe there's some trick to tweaking minimax for this
job that we just haven't stumbled on, but it doesn't look that way.
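
To put a rough number on "far more possible moves": the usual
back-of-the-envelope branching factors are about 35 for chess and
about 250 for Go (standard rough figures, not measurements), so
full-width search blows up far faster in Go. A few lines of Python:

    # Leaves in a full-width game tree of branching factor b, depth d.
    for game, b in (("chess", 35), ("Go", 250)):
        for d in (4, 8):
            print("%s: ~%.1e leaves at depth %d" % (game, b ** d, d))

At depth 8 that's ~2.3e12 leaves for chess but ~1.5e19 for Go, nearly
seven orders of magnitude apart - and that's before you even reach
the evaluation problem.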

So I'll conjecture that L(Go) != I(Go). In other words, as we move to
a more subtle and complex game, L(P) is diverging from I(P).

What about real life?

We have candidates for (or at least plausible steps in the direction
of) I(real life): AIXI and its relatives. And we note that some
formulations of these do, as Marc conjectures, relate to Chaitin's
omega. But as I remarked in a previous discussion, there are good
reasons why AIXI is PDFware rather than running code.
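
For reference, the core of AIXI is an expectimax over every
environment a universal Turing machine U can compute, weighted by
program length. In LaTeX notation - quoted from memory rather than
from Hutter's paper, so treat the details with suspicion:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum ranges over all programs q consistent with the history
so far; deciding which programs qualify means deciding which programs
halt with the right output. That's the same halting-problem territory
Chaitin's omega lives in - and the reason this is a definition rather
than an algorithm.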

Of course, we don't have candidates for L(real life) - finding one is
precisely the ultimate goal of AI research! The best we do have thus
far is the human mind - which looks nothing at all like AIXI and has
nothing to do with omega.

Again, maybe there's some trick to making an AIXI-like algorithm
computationally tractable, which would make L(real life) ~= I(real
life). But the trend thus far suggests otherwise, so I'll conjecture
that this is not the case: L(real life) has no significant connection
to AIXI, omega, and the rest.

- Russell


