Re: Maximize the renormalized human utility function!

From: Michael Anissimov
Date: Thu Aug 10 2006 - 06:27:03 MDT


> How's that for a slogan? That is - is that an acceptable synopsis of what we
> want the first superintelligence to do, or is there a better way to put it?

I'm not sure that we want to maximize things - for example, would we
want the world instantly transformed into the world we would want if
we were Jupiter Brains, or do we want the world around us to change
incrementally, as we do? I'd want the first superintelligence to
satisfice the renormalized human utility function, not maximize it.
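The maximize/satisfice distinction can be made concrete. A minimal sketch, assuming a finite set of candidate world-states and a scalar utility function (all names and utility values here are hypothetical illustrations, not anything from this thread): a maximizer always takes the argmax, while a satisficer accepts the first option clearing a threshold.

```python
# Sketch contrasting maximizing vs. satisficing a utility function
# over a finite set of candidate world-states. All values are toy
# illustrations chosen for this example.

def maximize(options, utility):
    """Return the option with the highest utility (argmax)."""
    return max(options, key=utility)

def satisfice(options, utility, threshold):
    """Return the first option whose utility meets the threshold;
    fall back to the best available if none does."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return max(options, key=utility)

# An incremental change (utility 0.7) clears a modest threshold,
# so a satisficer picks it even though an instant transformation
# (utility 1.0) would maximize.
worlds = ["status quo", "incremental change", "instant transformation"]
u = {"status quo": 0.2,
     "incremental change": 0.7,
     "instant transformation": 1.0}.get

print(maximize(worlds, u))        # instant transformation
print(satisfice(worlds, u, 0.5))  # incremental change
```

The design point: a satisficer's choice depends on the threshold and the order in which options are considered, which is one way of formalizing a preference for incremental change over an immediate jump to the optimum.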

Also, we obviously have to keep in mind that there is a near-infinite
number of possible renormalizations of the human utility function -
some we might want a superintelligence to maximize, others only to
satisfice.
Michael Anissimov
Lifeboat Foundation

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT