Re: [agi] A difficulty with AI reflectivity

From: Christian Szegedy
Date: Thu Oct 21 2004 - 04:23:35 MDT

Eliezer Yudkowsky wrote:

> For example, suppose that we start with a Godel Machine that,
> meta-level aside, starts with an object-level program that repeatedly
> presses a button delivering the equivalent of a nasty electrical
> shock. If the meta-level speeds up the object level so that we get
> nasty electrical shocks twice as fast, this is not an expected utility
> improvement.

It depends on your utility function. If the utility function is simply the
number of electrical shocks received, then it would indeed be a good idea
to press the button as fast as possible.

Typically, people working on algorithm theory want to create programs that
produce solutions to well-defined problems as quickly as possible, where
the objective function is usually easy to compute. This, and nothing else,
is the goal of the Goedel machine: find solutions to hard problems as fast
as possible. The utility function may vary in its details, but it is always
about performance (speed, memory consumption, quality of the solution, etc.).
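To make the point concrete, here is a minimal sketch of a performance-style utility function of the kind described above: it scores a candidate program by solution quality, speed, and memory use. All names here (Candidate, utility, the weights) are illustrative assumptions, not anything from the Goedel machine papers.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    solution_quality: float  # higher is better, e.g. fraction of test cases solved
    runtime_seconds: float   # lower is better
    memory_mb: float         # lower is better

def utility(c: Candidate,
            w_quality: float = 1.0,
            w_time: float = 0.1,
            w_mem: float = 0.01) -> float:
    """Weighted trade-off between solution quality and resource use.

    A self-rewrite would only be accepted if it provably increases
    this expected utility.
    """
    return (w_quality * c.solution_quality
            - w_time * c.runtime_seconds
            - w_mem * c.memory_mb)

# Same quality, half the runtime: the faster candidate scores higher.
slow = Candidate(solution_quality=0.9, runtime_seconds=10.0, memory_mb=50.0)
fast = Candidate(solution_quality=0.9, runtime_seconds=5.0, memory_mb=50.0)
print(utility(fast) > utility(slow))  # True
```

Under a utility like this, speeding up the object-level program is always an improvement; the question of whether the behavior being sped up is desirable lives entirely in how the utility function is defined.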

This may become clearer to you if you read the page about the example.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:49 MDT