**From:** Mitchell Porter (*mitchtemporarily@hotmail.com*)

**Date:** Sat Feb 10 2001 - 05:12:40 MST

**Next message:** Eliezer S. Yudkowsky: "Re: Learning to be evil"
**Previous message:** Gordon Worley: "Re: Learning to be evil"
**Maybe in reply to:** Mitchell Porter: "Six theses on superintelligence"
**Next in thread:** Anders Sandberg: "Re: Six theses on superintelligence"

I said

> 2. Self-enhancement: It seems likely to me that there is
> an optimal strategy of intelligence increase which
> cannot be bettered except by luck or by working solely
> within a particular problem domain, and that this
> strategy is in some way isomorphic to calculating
> successive approximations to Chaitin's halting
> probability for a Turing machine given random
> input.

Anders said

> Why is this isomorphic to Chaitin approximations? I
> might have had too little sleep for the last nights,
> but it doesn't seem clear to me.

If you know the halting probability for a Turing machine, you can solve the halting problem for any program on that machine. ("... knowing Omega_N [first N bits of the halting probability] enables one to solve the halting problem for all N-bit programs" -- http://www.cs.umaine.edu/~chaitin/nv.html)
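To make that concrete, here is a toy sketch, not Chaitin's actual construction: I invent a contrived prefix-free machine whose programs are the strings 1^k 0, where program k halts after 2^k steps exactly when k is even. Its halting probability is Omega = 2/3 = 0.101010... in binary, so we can hand the decision procedure the true first N bits and watch it settle halting for every program of length <= N:

```python
from fractions import Fraction

# Toy prefix-free machine (invented for illustration): the valid programs
# are '1'*k + '0', and program k halts after 2**k steps iff k is even.
# Its halting probability is Omega = sum over even k of 2**-(k+1) = 2/3,
# whose binary expansion is 0.101010...

def halts_within(k, steps):
    """Does program 1^k 0 halt within `steps` steps on the toy machine?"""
    return k % 2 == 0 and steps >= 2 ** k

def solve_halting(n_bits):
    """Given the first n_bits of Omega, decide halting for every program
    of length <= n_bits (i.e. k <= n_bits - 1)."""
    # First n_bits of Omega = 0.1010... as an exact fraction.
    omega_n = Fraction(int(('10' * n_bits)[:n_bits], 2), 2 ** n_bits)

    # Dovetail: run all programs for more and more steps, accumulating a
    # lower bound on Omega from the programs seen to halt so far.
    halted, lower, steps = set(), Fraction(0), 1
    while lower < omega_n:
        for k in range(steps + 1):
            if k not in halted and halts_within(k, steps):
                halted.add(k)
                lower += Fraction(1, 2 ** (k + 1))  # program 1^k 0 has length k+1
        steps += 1
    # Once the lower bound reaches the truncated Omega, any program of
    # length <= n_bits still running can never halt: its contribution of
    # at least 2**-n_bits would push Omega past omega_n + 2**-n_bits.
    return {k: (k in halted) for k in range(n_bits)}

print(solve_halting(4))  # {0: True, 1: False, 2: True, 3: False}
```

On a genuine universal machine the same dovetailing argument works, but of course no computation hands you the bits of Omega in the first place.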

The idea is that a superintelligence would have a 'computational core' which spends its time approximating Omega, and modules which take general problems, encode them as halting problems, and look them up in Approximate Omega.

What I don't know yet is the most rapid way of approximating Omega. Chaitin says somewhere that you cannot know how rapidly you are converging; that's another aspect of the noncomputability. I think the Resource-Bounded Probability method of induction (http://world.std.com/~rjs/isis96.html) might amount to an Omega-approximation strategy, but I'm not sure yet.

> I'm not as certain as you are that there exists a
> unique optimal strategy. Without working within a
> certain problem domain the no-free-lunch theorems get
> you. Taking the problem domain to be 'the entire
> physical universe' doesn't really help, since you also
> have to include the probability distribution of the
> environment, and this will be very dependent not just
> on the interests but also the actions of the being.

I think approximating Omega is precisely the sort of task where a no-free-lunch theorem is likely to apply. The optimal strategy probably involves nothing more intelligent than simulating all possible programs, and incrementing Approximate Omega appropriately when one is seen to terminate. The no-free-lunch theorem might be: even if you have an approximation strategy which outperforms blind simulation in calculating some finite number of Omega bits, its asymptotic performance can't beat blind simulation.

Even if you decide to approximate Omega by blind simulation, you still have decisions to make: you can't let all the nonterminating programs run forever. If there's no free lunch, that might mean that even if you cull them randomly, you'll still be converging on Omega as fast as possible.
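That blind-simulation loop can be sketched on the same contrived prefix-free machine as above (programs 1^k 0, halting after 2^k steps iff k is even, true Omega = 2/3); a real universal machine is what makes this uncomputable rather than merely slow:

```python
from fractions import Fraction

# Toy prefix-free machine standing in for a universal one: program
# '1'*k + '0' halts after 2**k steps iff k is even, so the true
# halting probability is Omega = 2/3.

def approximate_omega(stages):
    """Blind simulation: dovetail every program against every step bound,
    adding 2**-length to the lower bound whenever a program is first
    seen to halt."""
    halted, lower = set(), Fraction(0)
    for steps in range(1, stages + 1):
        for k in range(steps + 1):  # programs 1^k 0 for k = 0..steps
            if k not in halted and k % 2 == 0 and steps >= 2 ** k:
                halted.add(k)
                lower += Fraction(1, 2 ** (k + 1))  # program length is k+1
    return lower

# The bound climbs monotonically toward 2/3 -- but on a real universal
# machine you could never certify how close you are.
for stages in (1, 4, 16, 64):
    print(stages, float(approximate_omega(stages)))
```

The approximation only ever rises and never reaches Omega, which is the sense in which nonterminating programs can always be culled and resumed later without losing convergence.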

> > 3. If this is so, then once this strategy is known,
> > winning the intelligence race may after all boil down
> > to hardware issues of speed and size (and possibly to
> > issues of physics, if there are physical processes
> > which can act as oracles that compute trans-Turing
> > functions).
>
> What if this strategy is hard to compute efficiently,
> and different choices in initial conditions will
> produce noticeable differences in performance?

If the No-Free-Omega Hypothesis :) is correct, then such differences in performance will disappear asymptotically (assuming hardware equality, and assuming no-one pursues a *sub*optimal strategy).

> > 5. Initial conditions: For an entity with goals or values,
> > intelligence is just another tool for the realization
> > of goals. It seems that a self-enhancing intelligence
> > could still reach superintelligence having started with
> > almost *any* set of goals; the only constraint is that
> > the pursuit of those goals should not hinder the process
> > of self-enhancement.
>
> Some goals are not much helped by intelligence beyond
> a certain level (like, say, gardening), so the
> self-enhancement process would peter out before it
> reached any strong limits.

Only if self-enhancement were strictly a subgoal of the gardening goal. But perhaps this is more precise: self-enhancement will not be hindered if it is a subgoal of an open-ended goal, or a co-goal of just about anything.

Ben Goertzel said

> My intuition is that there's going to be a huge diversity of possible ways
> to achieve intelligence increase by self-enhancement, each one with its own
> advantages and disadvantages in various environments.

This is surely true. Assuming that calculating Omega really is a meta-solution to all problems, the real question is then: what's more important - solving environment-specific problems which Approximate Omega can't yet solve for you, by domain-specific methods, or continuing to calculate Omega? My guess is that in most environments, even such a stupid process as approximating Omega by blind simulation and random culling always deserves its share of CPU time.

(Okay, that's a retreat from 'You don't have to do anything *but* approximate Omega!' But this is what I want a general theory of self-enhancement to tell me: in what sort of environments will you *always* need domain-specific modules that do something more than consult the Omega module? Maybe this will even prove to be true in the majority of environments.)



*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:35 MDT*