From: Christian Rovner (cro1@tutopia.com)
Date: Mon Sep 27 2004 - 23:32:50 MDT
Eliezer Yudkowsky wrote:
>
> If it's not well-specified, you can't do it.
I understand that an AI steers its environment towards an optimal state
according to some utility function. You seem to be saying that this
function must be well-specified if we want the AI to be predictably
Friendly.
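For concreteness, here is a minimal sketch of what I mean by "steering
towards an optimal state according to a utility function". The state
space, action set, and quadratic utility are hypothetical illustrations
of mine, not anything from CFAI:

    # Minimal sketch of utility maximization (illustrative only).
    # The utility function below stands in for whatever well-specified
    # "Friendly" utility function the AI is supposed to optimize.

    def utility(state: float) -> float:
        # Hypothetical well-specified utility: prefer states near 10.
        return -(state - 10.0) ** 2

    def choose_action(state: float, actions: list[float]) -> float:
        # Steer the environment: pick the action whose successor state
        # has the highest utility.
        return max(actions, key=lambda a: utility(state + a))

    state = 0.0
    for _ in range(20):
        state += choose_action(state, actions=[-1.0, 0.0, 1.0])
    print(state)  # converges to the optimum at 10.0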
This contradicts what I understood from CFAI, where you proposed the
creation of an AI that would improve its own utility function according
to the feedback provided by its programmers. This feedback implies a
meta-utility function which is unknown even to the programmers, and
which would gradually be made explicit thanks to the AI's superior
intelligence. This sounded like a good plan to me. What exactly is
wrong with it?
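A toy version of the scheme I read into CFAI might look like the
following. The probe-and-shift update and the scalar "target" parameter
are my own hypothetical framing, not anything you specified:

    # Toy sketch of a utility function refined by programmer feedback
    # (my own illustrative framing, not CFAI's actual proposal).
    # The programmers cannot write down their target explicitly, but
    # they can grade outcomes; the AI's estimate tracks their grades.

    true_target = 7.0  # implicit in the programmers' judgments

    def programmer_feedback(state: float) -> float:
        # Programmers score how good a state looks to them.
        return -(state - true_target) ** 2

    target_estimate = 0.0
    for step in range(100):
        state = target_estimate  # the AI steers to its current optimum
        probe = state + 0.1      # and probes a nearby state
        # If the probe scores better with the programmers, shift the
        # estimate that way: the feedback gradually makes the implicit
        # meta-utility explicit.
        if programmer_feedback(probe) > programmer_feedback(state):
            target_estimate += 0.1
    print(target_estimate)  # approaches the implicit target, 7.0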
--
Christian Rovner
Volunteer Coordinator
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/