**From:** Anders Sandberg (*asa@nada.kth.se*)

**Date:** Sat Feb 10 2001 - 14:26:08 MST

**Next message:** Gordon Worley: "Re: Learning to be evil" **Previous message:** Eliezer S. Yudkowsky: "Re: Learning to be evil" **Maybe in reply to:** Mitchell Porter: "Six theses on superintelligence" **Next in thread:** Durant Schoon: "Codic Cortex WAS: Six theses on superintelligence" **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

"Mitchell Porter" <mitchtemporarily@hotmail.com> writes:

> Anders said
>
> > Why is this isomorphic to Chaitin approximations? I might have
> > had too little sleep for the last nights, but it doesn't seem
> > clear to me.
>
> If you know the halting probability for a Turing
> machine, you can solve the halting problem for
> any program on that machine. ("... knowing Omega_N
> [first N bits of the halting probability] enables
> one to solve the halting problem for all N-bit
> programs" -- http://www.cs.umaine.edu/~chaitin/nv.html)

Aha!
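Chaitin's trick can be sketched concretely on a mock machine. Everything below is illustrative: the five "programs" and their halting times are invented, the weights are arbitrary dyadic fractions, and the exact Omega of this finite toy world stands in for an N-bit prefix of the real, uncomputable one.

```python
from fractions import Fraction

# Mock "machine": maps a program name to the number of steps it runs
# before halting, or None if it never halts. In reality this table is
# unknowable; it stands in for simulating a universal machine.
PROGRAMS = {"p1": 3, "p2": None, "p3": 7, "p4": None, "p5": 12}
WEIGHT = {p: Fraction(1, 2 ** (i + 1)) for i, p in enumerate(sorted(PROGRAMS))}
OMEGA = sum(WEIGHT[p] for p, s in PROGRAMS.items() if s is not None)

def halts_given_omega(target):
    """Dovetail all programs, accumulating a lower bound on Omega.
    Once the bound reaches Omega itself, any program still running can
    never halt -- its weight would push the sum past the known value."""
    lower, halted, t = Fraction(0), set(), 0
    while lower < OMEGA:
        t += 1  # advance the dovetailer by one step
        for p, steps in PROGRAMS.items():
            if p not in halted and steps is not None and steps <= t:
                halted.add(p)
                lower += WEIGHT[p]
    return target in halted
```

Here `halts_given_omega("p2")` returns False after finitely many steps, which is exactly what no oracle-free procedure can do in general.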

> The idea is that a superintelligence would have
> a 'computational core' which spends its time
> approximating Omega, and modules which take general
> problems, encode them as halting problems, and look
> them up in Approximate Omega.

Ah, that's where the rub is: how do you convert a general problem into a halting problem in an efficient way? For example, how does "What actions will give me the largest probability of having more than one million dollars within ten years?" or "How do I build a nanoassembler?" convert into halting problems?
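One classical family of problems does convert cleanly: any claim of the form "no counterexample exists" becomes the halting question for a program that searches for one. A sketch using Goldbach's conjecture (the function names are mine):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def violates_goldbach(n):
    """True if even n >= 4 is not a sum of two primes."""
    if n < 4 or n % 2:
        return False
    return not any(is_prime(p) and is_prime(n - p)
                   for p in range(2, n // 2 + 1))

def goldbach_searcher():
    """Halts iff Goldbach's conjecture is false: deciding whether this
    program halts decides the conjecture. Do not actually call this."""
    n = 4
    while True:
        if violates_goldbach(n):
            return n
        n += 2
```

Whether open-ended questions like the million-dollar one admit any such crisp encoding, let alone an efficient one, is exactly the open issue.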

I would guess that the total amount of work often remains constant in the general case: the work needed to encode a problem into a form solvable by an algorithm, plus the work of running the algorithm, tends to stay roughly fixed. Intelligence is about finding ways of getting around this by exploiting patterns that make the problem non-general, as in mathematical tricks where a simple transformation makes a hard problem simple.

> > I'm not as certain as you are that there exists an unique
> > optimal strategy. Without working within a certain problem
> > domain the no free lunch theorems get you. Taking the problem
> > domain to be 'the entire physical universe' doesn't really
> > help, since you also have to include the probability
> > distribution of the environment, and this will be very
> > dependent not just on the interests but also actions of the
> > being.

> I think approximating Omega is precisely the sort of
> task where a no-free-lunch theorem is likely to apply.
> The optimal strategy probably involves nothing more
> intelligent than simulating all possible programs, and
> incrementing Approximate Omega appropriately when one
> is seen to terminate. The no-free-lunch theorem might
> be: even if you have an approximation strategy which
> outperforms blind simulation in calculating some finite
> number of Omega bits, its asymptotic performance can't
> beat blind simulation.

Sounds possible.
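Blind simulation is easy to make concrete. The two-instruction toy machine below is my own invention: bit 0 increments a counter; bit 1 decrements it and jumps back two positions, or just moves on when the counter is zero; running off the end counts as halting. Since this machine is not prefix-free, I weight length-L programs by 4^-L so the total mass stays below 1 — a stand-in for Chaitin's 2^-|p| over a prefix-free machine.

```python
from fractions import Fraction
from itertools import product

def runs_to_halt(bits, max_steps):
    """Simulate the toy machine for at most max_steps.
    Returns True iff the program halted within the budget."""
    ip, counter, steps = 0, 0, 0
    while 0 <= ip < len(bits):
        if steps == max_steps:
            return False  # budget exhausted, verdict unknown
        if bits[ip] == 0:
            counter += 1
            ip += 1
        elif counter > 0:
            counter -= 1
            ip = max(ip - 2, 0)
        else:
            ip += 1
        steps += 1
    return True

def approximate_omega(stages):
    """Blind dovetailing: at stage t, run every program of length <= t
    for t steps; credit its weight the first time it is seen to halt.
    The result is a monotone lower bound on the toy machine's Omega."""
    halted, omega = set(), Fraction(0)
    for t in range(1, stages + 1):
        for length in range(1, t + 1):
            for bits in product((0, 1), repeat=length):
                if bits not in halted and runs_to_halt(bits, t):
                    halted.add(bits)
                    omega += Fraction(1, 4 ** length)
    return omega
```

Successive stages only ever add mass, matching the "incrementing Approximate Omega" picture; what no finite stage can do is certify the still-running programs as divergent.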

> > What if this strategy is hard to compute efficiently, and
> > different choices in initial conditions will produce
> > noticeable differences in performance?
>
> If the No-Free-Omega Hypothesis :) is correct, then
> such differences in performance will disappear
> asymptotically (assuming hardware equality, and assuming
> no-one pursues a *sub*optimal strategy).

Ah, egalitarian transcendence! I wonder what we libertarians on the list should make of it :-)

> > Some goals are not much helped by intelligence beyond a
> > certain level (like, say, gardening), so the self-enhancement
> > process would peter out before it reached any strong limits.
>
> Only if self-enhancement was strictly a subgoal of
> the gardening goal. But perhaps this is more precise:
> self-enhancement will not be hindered if it is a
> subgoal of an open-ended goal, or a co-goal of just
> about anything.

Beings with closed goals will eventually run out of expansion, I think. Only beings with open-ended goals will be motivated to grow and persist indefinitely. Playing Carse's "infinite games" might be a survival trait for posthumans.

> (Okay, that's a retreat from 'You don't have to do
> anything *but* approximate Omega!' But this is what
> I want a general theory of self-enhancement to tell me -
> in what sort of environments will you *always* need
> domain-specific modules that do something more than
> consult the Omega module? Maybe this will even prove
> to be true in the majority of environments.)

A very interesting question. I'll have to think hard on that one; it seems to relate to some of my own issues with how to set learning parameters depending on the information learned from the environment.

--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT