Re: Changing the value system of FAI

From: Ben Goertzel
Date: Wed May 10 2006 - 11:14:25 MDT


> > Specifically, suppose the program P solves the problem: given a
> > computer C1 with memory size N1, and a utility function U (with
> > algorithmic information <= N1), figure out the optimal state for
> > the computer C1's memory to be written into, in terms of optimizing
> > the expected utility over time.
> I would emphasize that the utility function is over external outcomes.
> Thus, maximizing expected utility deals in the external consequences of
> internal code; the utility function is not over internal code.

I was assuming that the utility function contained a bunch of data
regarding the external environment.

Otherwise, one can assume the computer C1 contains a utility function
U, and also a dataset D giving info regarding the external
environment.
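One way to sketch the problem as quoted above, purely as an illustration and in my own notation (s ranges over candidate memory states, e_t is the environment state at time t, gamma is a discount factor, and K(.) is algorithmic information; none of these symbols appear in the thread):

```latex
s^{*} \;=\; \arg\max_{s \,\in\, \{0,1\}^{N_1}}
\mathbb{E}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, U(e_t)
\;\Big|\; \mathrm{mem}(C_1) = s \,\right],
\qquad K(U) \le N_1 .
```

Here the expectation is over the environment's dynamics given that C1's memory is initialized to s, so the utility is evaluated on external outcomes rather than on the internal code itself.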

> Also I am not thinking in terms of discovering optimal, maximizing
> states, but of discovering satisficing states, or relative improvements
> that are probably good and knowably noncatastrophic.
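The satisficing criterion in the quote above could be written, again with illustrative symbols of my own (s_0 the current state, delta a confidence level, epsilon a catastrophe bound, none from the thread):

```latex
\text{find } s \;\;\text{s.t.}\;\;
\Pr\!\big[\, \mathbb{E}[U \mid s] \,\ge\, \mathbb{E}[U \mid s_0] \,\big] \ge 1 - \delta
\quad\text{and}\quad
\Pr\!\big[\, \text{catastrophe} \mid s \,\big] \le \epsilon .
```

That is, rather than seeking the global maximizer, one accepts any state that is probably an improvement and provably keeps catastrophe risk below a known bound.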


> > Is this close to the question you are thinking about?
> To be honest it doesn't feel at all similar to the way I think about the
> question.

Which of course doesn't mean it's not mathematically closely related
to your question, though...

> For me the sticking point is relating the consequences of
> internal code to external reality, and proving the safety of changes to
> the code that proves the safety.

At some point, when you find the time, it would be interesting if you
would post a purely mathematical formulation of the question you are
thinking about. I am interested in such things, and I believe that
Shane Legg, a PhD student working with Juergen Schmidhuber (and a
former Webmind and A2I2 staff member), is also on this list (if not,
I'm sure he's on the AGI list); and Shane is even more interested in
thinking about such things...

-- Ben G

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT