**From:** Richard Loosemore (*rpwl@lightlink.com*)

**Date:** Wed Feb 22 2006 - 07:54:43 MST

**Next message:** Philip Goetz: "Knowledge considered harmful (was Re: Faith-based thought vs thinkers)"
**Previous message:** Ben Goertzel: "Re: ESSAY: Program length, Omega and Friendliness"
**In reply to:** Ben Goertzel: "Re: ESSAY: Program length, Omega and Friendliness"
**Next in thread:** Eliezer S. Yudkowsky: "Re: ESSAY: Program length, Omega and Friendliness"
**Messages sorted by:** [ date ] [ thread ] [ subject ] [ author ] [ attachment ]

I want to expand on Ben's comments (even though he may or may not agree that this constitutes an expansion).

Correct me if I'm wrong, but the whole point of the "Halting Problem" is that we can ask a slew of interesting questions about whether certain programs can ever be proved to run to completion and stop. All well and good for programs that are supposed to compute something useful and then stop, but what if the program is not designed to stop? An AGI program is such a program: it does not compute a final result and then terminate, it just does certain kinds of things indefinitely.
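For readers who want the "don't hold your breath" part made concrete: the classic reason no general halting test can exist is Turing's diagonalization argument. The sketch below is mine, not anything from the thread, and the oracle `halts` is a deliberately unimplementable stand-in:

```python
# Sketch of Turing's diagonalization argument. Assume a perfect
# halting oracle existed; feeding a "contrary" program to itself
# forces the oracle to be wrong either way. `halts` is hypothetical
# -- no total implementation of it can exist, which is the point.

def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts."""
    raise NotImplementedError("provably impossible in general")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:        # loop forever if it "would halt"
            pass
    return None            # halt at once if it "would loop"

# If halts(contrary, contrary) returned True, contrary(contrary)
# loops forever; if False, it halts immediately. Either answer is
# wrong, so no such oracle exists.
```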

An AGI program can be structured (and probably would be) so that it could routinely add to or subtract from its own code, doing so in a way that was sensitive to its interactions with the environment, so that these changes were non-deterministic. So now we have a program that is not only not designed to halt, but which spends its life doing something that looks like a random walk through the space of programs of arbitrary length (this is a variant of what Ben was saying, below and in other posts). The concept of "halting" is obviously being stretched well beyond its breaking point here: the structure of the program is time-dependent and non-deterministic.
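The "random walk through program space" picture can be sketched in a few lines. This toy of mine is emphatically not an AGI; it just shows a program whose own rule set grows and shrinks non-deterministically in response to its inputs, with no halting state intended:

```python
import random

# Toy illustration: a "program" represented as a list of small
# rewrite rules that is mutated non-deterministically on every
# interaction with the environment -- a random walk through the
# space of programs, with no final result and no halting state.

def step(rules, observation, rng):
    """One interaction: maybe add a rule, maybe drop one."""
    if rng.random() < 0.5:
        # add a new rule derived from the current observation
        return rules + [("echo", observation)]
    if len(rules) > 1:
        # drop a randomly chosen existing rule
        rules = rules[:]
        del rules[rng.randrange(len(rules))]
    return rules

rng = random.Random(0)
rules = [("seed", None)]
for obs in range(20):        # in an AGI this loop would never end
    rules = step(rules, obs, rng)
print(len(rules))            # program size after 20 interactions
```

The loop is truncated at 20 steps only so the sketch terminates; the point of the paragraph above is precisely that the real thing would not.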

But of course, none of us really cares if this kind of program "halts", so this is all fine, and nobody disagrees...

But now we seem to be discussing another question about these programs: we are trying to find out if they are "Friendly". Oh, and by the way, they must also be "Intelligent". We are looking for ways that a given example of an Intelligent system can be proved to be Friendly, and not to depart from being Friendly even though it might be able to self-modify.

The concept of "Friendly" is vague. The concept of "Intelligent", ditto. I am not even sure someone could *ever* come up with a clear definition of either of these concepts that we would all feel compelled to subscribe to.

The concept of "Halting", on the other hand, is extremely well defined - but even *it* does not seem applicable to the general class of AGI programs.

So here we have the story:

1) There are some programs for which the concept of Halting is perfectly sensible, and of these, Godel, Turing and Chaitin say: don't hold your breath, folks, because you will not always be able to prove that they actually do halt.

2) Then there are some programs (like the AGI class) for which the concept of Halting is not even meaningful. Godel, Turing and Chaitin, at this point, are heading out the door saying "Too rich for us: we're outta here."

3) Then there are some properties of programs, like Intelligent and Friendly, that we cannot even define in a crisp way, and maybe never will be able to define. G, T and C are now cracking each other up in some other coffee house across town, picking random words from a dictionary - Hermeneutic? Smelly? Virginal? Obstreperous? - and asking if programs provably have these properties.

*****

Joking aside, folks, I think the concept of Friendliness is interesting, but not if it is treated as something provable, like Halting.

Richard Loosemore

Ben Goertzel wrote:

> On 2/22/06, Jeff Medina <analyticphilosophy@gmail.com> wrote:
>> William Pearson wrote:
>>> So the maximum number of bits of Phi and hence number of known or
>>> provably friendly programs is bounded by the length of your starting
>>> program.
>>
>> As Eliezer mentioned, the number is infinite, and hence unbounded
>
> It seems more interesting to me to observe that the maximum
> algorithmic information of any program in the set of provably friendly
> programs is finite, and is bounded by the length of your starting
> program. (Here "provably" is taken to mean "provably by your starting
> program".)
>
> (The algorithmic information of X is defined, loosely, as the length
> of the shortest program for computing X.)
>
> On the other hand, if the starting program is allowed to increase its
> algorithmic information over time via adding new physical compute
> resources, then things get more complicated -- but this doesn't get
> around the basic problem and obvious point I've cited above.
>
> Chaitin has summarized Godel's theorem nicely as "You can't use a ten
> pound formal system to prove a twenty pound theorem."
>
> The application of this to Friendly AI would seem to be: "A ten pound
> intelligence can't prove the Friendliness of a twenty pound
> intelligence."
>
> Furthermore, it seems to me that "A ten pound intelligence can't prove
> any solid probabilistic bounds on the Friendliness of a twenty pound
> intelligence" either....
>
> This suggests that *perhaps* mathematical proof is not the right
> paradigm for us to be using to think about these issues... (Remember,
> "mathematical proof" is just an abstract construct found useful by us
> humans in particular situations we have encountered so far...)
>
> -- Ben
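An editorial aside on Ben's parenthetical definition above: the algorithmic information of X is uncomputable exactly, but any off-the-shelf compressor gives an upper bound, since a fixed-size decompressor plus the compressed bytes is *a* program for X, just rarely the shortest one. A minimal sketch of that upper-bound idea, assuming zlib as the stand-in compressor (my choice, not anything from the thread):

```python
import random
import zlib

# Upper-bound surrogate for algorithmic information: the length of
# the zlib-compressed form of the data. True algorithmic information
# (shortest program computing X) is uncomputable; compression only
# ever overestimates it.

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

regular = b"ab" * 5000                       # 10,000 highly regular bytes
rng = random.Random(1)
incompressible = bytes(rng.randrange(256) for _ in range(10000))

print(compressed_size(regular))         # small: a short program suffices
print(compressed_size(incompressible))  # near 10,000: no visible structure
```

The regular string compresses to a tiny fraction of its length, while the pseudorandom one barely compresses at all - which is the sense in which "maximum algorithmic information bounded by the starting program's length" is a real constraint.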


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT