**From:** Durant Schoon (*durant@ilm.com*)

**Date:** Tue Feb 13 2001 - 12:43:16 MST

**Next message:** Michael LaTorra: "RE: How does one publish a short book?" **Previous message:** Spudboy100@aol.com: "Re: How does one publish a short book?" **In reply to:** Mitchell Porter: "Six theses on superintelligence" **Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

Responding specifically to:

--------------------------

> From: Anders Sandberg <asa@nada.kth.se>
> Date: 10 Feb 2001 22:26:08 +0100
>
> Ah, that's where the rub is: how do you convert a general problem into
> a halting problem in an efficient way? For example, how does "What
> actions will give me the largest probability of having more than one
> million dollars within ten years?" or "How do I build a
> nanoassembler?" convert into halting problems?
>
> I would guess that the sum of work often remains constant in the
> general case: the amount of work needed to encode a problem into a
> form solvable by an algorithm and the amount of work in using the
> algorithm tend to be fairly constant. Intelligence is about finding
> ways of getting around this by exploiting patterns that make the
> problem non-general, such as in mathematical tricks where a simple
> transformation makes a hard problem simple.

What a perfect segue into the topic of the Codic Cortex! This is a fascinating thread, but a bit too abstract, so let's concretify it:

Suppose I have a specific problem in a specific domain: I want to assign teachers to classrooms in some efficient way. There are a limited number of classrooms and each teacher has a particular schedule (the actual details of this example aren't too important).

Let's also imagine that we have an SI which devotes some fraction of its time "approximating Omega" and has "modules which take general problems, encode them as halting problems, and look them up in Approximate Omega" (see thread snippets below).
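As a minimal sketch of what "encode them as halting problems" might mean (the function name and the brute-force strategy here are my own invention, not anything from the thread): a decision problem becomes a program that halts exactly when a witness exists, so an oracle that predicts halting answers the original question.

```python
import itertools

def halts_iff_colorable(edges, n_nodes, n_colors):
    """Hypothetical halting-problem encoding of 'is this graph
    properly colorable?': the program halts iff a witness exists,
    so knowing whether it halts answers the original question."""
    for colors in itertools.product(range(n_colors), repeat=n_nodes):
        if all(colors[u] != colors[v] for u, v in edges):
            return colors          # witness found: halt
    while True:                    # no witness: loop forever
        pass
```

Graph colorability is of course decidable, so the infinite loop is artificial; the point is only the shape of the encoding.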

As some of you probably know, solving the scheduling problem for teachers is isomorphic to coloring a graph; that is, one can translate the problem of teachers, classrooms and schedules into nodes, arcs, and colors (not necessarily in that order). You can "color" the graph in the right way and then translate that colored graph solution back into the domain of scheduling teachers.
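A toy version of that translation, with made-up teachers and schedules (every name and number here is my own illustration): teachers become nodes, an edge joins any two teachers who teach during the same period (and so can never share a room), and colors become classrooms.

```python
# Hypothetical input: each teacher's set of teaching periods.
schedules = {
    "Ada":   {1, 2},
    "Boole": {2, 3},
    "Curie": {1, 3},
    "Dirac": {4},
}

# Translate into a graph: nodes are teachers, edges are conflicts.
teachers = list(schedules)
edges = [(a, b) for i, a in enumerate(teachers) for b in teachers[i + 1:]
         if schedules[a] & schedules[b]]

neighbors = {t: set() for t in teachers}
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

# "Color" the graph greedily: colors are classroom numbers.
room = {}
for t in teachers:
    taken = {room[n] for n in neighbors[t] if n in room}
    room[t] = next(c for c in range(len(teachers)) if c not in taken)

# Translate the colored graph back into the scheduling domain.
for t, r in room.items():
    print(f"{t} -> classroom {r}")
```

Greedy coloring isn't optimal in general, but it shows the round trip: domain problem to graph, graph to coloring, coloring back to room assignments.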

So now, let's say we have a module called a Codic Cortex (this is Eliezer's term, of course). What does this Codic Cortex do? (I'm hoping for corrections wherever I misstep.) In the same way that one's visual cortex processes visual information and recognizes traits, the codic cortex processes codic (code, as in programming "code") information and also recognizes traits.

My understanding of the Codic Cortex concept is that it can look at some code and "intuitively" (in the same sense that my visual cortex can "intuitively" recognize my mother's face) know what it does and what kind of problem it is, so that it might be optimized.

We have a problem and a problem solver. Let's go through the steps!

Somehow the symbols of teachers, classrooms and schedules are all grounded, that is, attached to a vast database of deep meaning (e.g. Cyc?). Excess information is thrown away (we don't care that "most teachers are older than 13 years old"). That is, we abstract only the salient features.
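A trivial sketch of that abstraction step (the record fields are invented for illustration): keep only the features the target encoding needs and discard the rest of the grounded knowledge.

```python
# Hypothetical grounded record: many true facts, few salient ones.
teacher_record = {
    "name": "Ada",
    "age": 36,                # true, but irrelevant to room assignment
    "older_than_13": True,    # the fact we explicitly don't care about
    "schedule": {1, 2},       # salient: drives the graph encoding
}

SALIENT = {"name", "schedule"}

def abstract(record, salient=SALIENT):
    """Throw away everything except the salient features."""
    return {k: v for k, v in record.items() if k in salient}
```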

Let's say we also just happen to have solved the class of problems characterized by graph-coloring and can look up the right bit of code in our Approximate Omega Index (am I actually making sense?).

Everyone should see that we have 1) a problem encoded in terms we recognize and 2) a solution (sitting in our Omega Index) which we can apply if we are only clever enough to recognize that they go together.

Here's my question (which is actually asking for a clarification): Is it also the Codic Cortex which recognizes that the teacher scheduling problem can be mapped to the graph coloring problem, which has a known solution?

If that is not the module, then what module does the mapping? (That is the module that interests me most personally.) What do the details of this module look like?

This module might do such things as take a signal processing problem expressed in the time domain and recognize that it can be translated into the frequency domain (where it has a known solution involving Fourier transforms and simple pointwise arithmetic - an important feature being that the solution can be remapped into the original problem space).
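A small, self-contained sketch of that map/solve/unmap pattern (naive pure-Python DFT; a real system would use an FFT library): circular convolution is awkward in the time domain, but after mapping to the frequency domain it becomes a pointwise product, and the answer maps straight back.

```python
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform; sign=+1 gives the inverse
    up to a factor of 1/N."""
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def circular_convolve(a, b):
    """Map to the frequency domain, solve there (pointwise multiply),
    then map the solution back into the original problem space."""
    N = len(a)
    A, B = dft(a), dft(b)
    C = [x * y for x, y in zip(A, B)]          # the 'easy' domain
    return [v.real / N for v in dft(C, sign=+1)]
```

Convolving with a delta shifted by one position, for example, just rotates the signal - a quick check that the round trip through the frequency domain preserved the answer.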

Note that these problem spaces and solution spaces are pretty big (right?). Finding matches might involve *combinations* of solution transformations, which take longer to find the first time but can fortunately be remembered once found... in fact, can we encode this search (this mapping of problem spaces) as a halting problem, and look for it in Approximate Omega?
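One way to picture that search (everything here - the catalogue, the space names, the depth limit - is my own invention): a depth-limited search over chains of known transformations, memoized so that a chain, once found, is remembered.

```python
from functools import lru_cache

# Hypothetical catalogue of one-step transformations between problem
# spaces, plus the spaces we already know how to solve directly.
TRANSFORMS = {
    "teacher-scheduling": ("graph-coloring",),
    "graph-coloring":     ("SAT",),
    "time-domain":        ("frequency-domain",),
}
SOLVED = {"SAT", "frequency-domain"}

@lru_cache(maxsize=None)          # remember chains once found
def route_to_solution(space, depth=4):
    """Depth-limited search for a chain of transformations ending in
    a space with a known solution; returns the chain or None."""
    if space in SOLVED:
        return (space,)
    if depth == 0:
        return None
    for target in TRANSFORMS.get(space, ()):
        rest = route_to_solution(target, depth - 1)
        if rest is not None:
            return (space,) + rest
    return None
```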

So now let's say I have a fixed amount of work to do and a highly tantalizing thread on sl4 is obstructing my path...

Related Thread Snippets:

-----------------------

> From: "Mitchell Porter" <mitchtemporarily@hotmail.com>
> Date: Fri, 09 Feb 2001 02:49:35

[snip - 5 other interesting theses]

> 4. I haven't at all addressed how to apply superintelligence
> in the abstract to specific problems. I would guess that
> this is a conceptual problem (having to do with grounding
> the meanings of inputs, symbols, and outputs) which only
> has to be solved once, rather than something which itself
> is capable of endless enhancement.

> From: "Mitchell Porter" <mitchtemporarily@hotmail.com>
> Date: Sat, 10 Feb 2001 05:12:40
>
> The idea is that a superintelligence would have
> a 'computational core' which spends its time
> approximating Omega, and modules which take general
> problems, encode them as halting problems, and look
> them up in Approximate Omega.

[...]

> Assuming that calculating Omega
> really is a meta-solution to all problems, the real
> question is then: What's more important - solving
> environment-specific problems which Approximate Omega
> can't yet solve for you, by domain-specific methods,
> or continuing to calculate Omega? My guess is that in
> most environments, even such a stupid process as
> approximating Omega by blind simulation and random
> culling always deserves its share of CPU time.
>
> (Okay, that's a retreat from 'You don't have to do
> anything *but* approximate Omega!' But this is what
> I want a general theory of self-enhancement to tell me -
> in what sort of environments will you *always* need
> domain-specific modules that do something more than
> consult the Omega module? Maybe this will even prove
> to be true in the majority of environments.)

-- Durant x2789


*This archive was generated by hypermail 2.1.5: Wed Jul 17 2013 - 04:00:35 MDT*