Re: Changing the value system of FAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed May 10 2006 - 10:54:46 MDT


Ben Goertzel wrote:
> Eliezer,
>
> You wrote:
>
>> The problem I haven't been able to solve is *rigorously* describing how
>> a classical Bayesian expected utility maximizer would make changes to
>> its own source code, including the source code that changes the code;
>> that is, there would be some set of code that modified itself.
>> Classical decision theory barfs on this due to an infinite recursion.
>
> I'd like to get a clearer idea in my mind of the precise question
> you're thinking about...

Schmidhuber's Gödel machine doesn't actually solve any of the questions
it poses, but it does pose a fairly good version of this one - not
exactly the question I use, but a useful one.

> Is it like this?
>
> Suppose we have a computer C with a finite memory (containing a
> program of size N). Suppose that the computer has the power to
> write down a program of size N in its hard drive, and then push a
> button causing this new program to be implemented in its main memory,
> overwriting the current contents. (I.e., the computer has complete
> self-modification power on the software level.)
>
> Suppose that at time 0, the computer has a certain state, which
> embodies a program P doing Bayesian expected utility maximization.
>
> Specifically, suppose the program P solves the problem: given a
> computer C1 with memory size N1, and a utility function U (with
> algorithmic information <= N1), figure out the optimal state for
> the computer C1's memory to be written into, in terms of optimizing
> the expected utility over time.

I would emphasize that the utility function is over external outcomes.
Thus, maximizing expected utility deals in the external consequences of
internal code; the utility function is not over internal code.
Schmidhuber's Gödel machine also (allegedly) has this property, though
Schmidhuber doesn't go into any of the questions I find interesting from
an FAI perspective, such as how the AI derives an external model on the
basis of sensory data and then applies the utility function to the
external model.
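
To make this concrete, here is a minimal Python sketch of the shape I
have in mind; all the names (Code, Outcome, predict, utility) are just
illustrative stand-ins, not anything out of Schmidhuber's paper:

    # Utility is defined over external outcomes, never over internal code.
    # Code is scored only through the outcomes it is predicted to cause:
    #   EU(code) = sum over outcomes o of P(o | code) * U(o)
    from typing import Callable, Iterable, Tuple

    Code = bytes       # stand-in for an internal program
    Outcome = str      # stand-in for a description of external reality

    def expected_utility(
        code: Code,
        predict: Callable[[Code], Iterable[Tuple[Outcome, float]]],
        utility: Callable[[Outcome], float],
    ) -> float:
        # 'predict' plays the role of the external model derived from
        # sensory data; 'utility' is applied only to that model's outcomes.
        return sum(p * utility(o) for o, p in predict(code))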

Also, I am not thinking in terms of discovering optimal, maximizing
states, but in terms of discovering satisficing states, or relative
improvements that are probably good and knowably noncatastrophic.
Schmidhuber talks about probably good improvements.
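
Continuing the same toy sketch, the acceptance criterion looks roughly
like the following; the p_catastrophe oracle and the eps bound are my
illustrative stand-ins for an actual verified guarantee:

    def accept_rewrite(
        old: Code,
        new: Code,
        predict: Callable[[Code], Iterable[Tuple[Outcome, float]]],
        utility: Callable[[Outcome], float],
        p_catastrophe: Callable[[Code], float],  # stand-in for a verified bound
        eps: float = 1e-6,
    ) -> bool:
        # Satisficing, not maximizing: accept a relative improvement that
        # is probably good and knowably noncatastrophic, rather than
        # searching for the expected-utility optimum.
        probably_good = (expected_utility(new, predict, utility)
                         > expected_utility(old, predict, utility))
        knowably_safe = p_catastrophe(new) <= eps
        return probably_good and knowably_safe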

> There is then the question whether P can be applied to C ... or
> rather, in what generality can P be applied to C ... ?
>
> Is this close to the question you are thinking about?

To be honest, it doesn't feel at all similar to the way I think about
question. For me the sticking point is relating the consequences of
internal code to external reality, and proving the safety of changes to
the code that proves the safety.
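
In toy form (again, my framing, not a real architecture), the
circularity looks like this: the checker that approves rewrites is
itself part of what gets rewritten, so approving a candidate means
trusting the judgments its new checker will make later:

    class Agent:
        # Hypothetical bundle of everything rewritable, including the
        # checker whose safety judgments the system relies on.
        def __init__(self, checker, source: Code):
            self.checker = checker
            self.source = source

        def consider(self, candidate: "Agent") -> "Agent":
            # The current checker passes judgment on code that includes
            # the candidate's *new* checker, whose future judgments will
            # in turn be trusted; classical decision theory has no clean
            # account of this regress.
            if self.checker.approves(candidate.source):
                return candidate
            return self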

Btw, I'm preparing for the Summit, and when I'm done with that, for your
own AGI conference, so my answers may be slowed for the next couple of
weeks.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

