From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 24 2006 - 15:34:16 MDT
Scott Yokim wrote:
>
> Not only does one need to (learn how to) specify goal preservation in
> decision theory, but also to learn how to preserve more than one
> (conflicting!) goal at a time (do no harm to the human race, fulfill the
> universe's destiny, etc.).
The normative way to do this is a utility function that specifies tradeoffs
between commensurable goals. The human idiom is more complex; I think
these days it goes by the name of "aspiration adaptation": you tweak
your goal-parts one at a time through qualitative brackets, which does
not require that the goal-parts be commensurable.
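A toy sketch of the contrast (my own illustration, not anything from the original post; all names and numbers are hypothetical): a scalar utility puts the tradeoff in the weights over commensurable goal values, while aspiration adaptation adjusts one goal's aspiration level at a time in discrete steps, with no common scale.

    # Illustrative only: contrasts a weighted utility over commensurable
    # goals with one-goal-at-a-time aspiration adjustment.
    from typing import Dict, List

    def utility(values: Dict[str, float], weights: Dict[str, float]) -> float:
        """Commensurable goals: the tradeoffs live in the weights of one scalar score."""
        return sum(weights[g] * values[g] for g in values)

    def adapt_aspirations(values: Dict[str, float],
                          aspirations: Dict[str, float],
                          priority: List[str],
                          step: float = 1.0) -> Dict[str, float]:
        """No common scale: adjust a single goal-part's aspiration level per
        round, in discrete ("qualitative") steps, scanning goals in priority order."""
        new = dict(aspirations)
        for goal in priority:
            if values[goal] < new[goal]:
                new[goal] -= step   # this goal-part is unmet: retreat on it alone
                return new
        new[priority[0]] += step    # all aspirations met: raise the top goal's level
        return new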
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence