From: Christian Rovner (cro1@tutopia.com)
Date: Tue Sep 28 2004 - 16:58:25 MDT
Eliezer Yudkowsky wrote:
>
> (...)
>
> *But* - and this is the key point - Fred has got to *know* all this
> stuff. He has got to know the nature of the problem in order to solve
> it. Fred may be able to build an FAI without a complete printout of
> U-h(x) in hand. Fred can't possibly build an FAI without knowing that
> he is an "expected utility maximizer" or that there *is* a U-h(x) behind
> his choices.
>
> This is what I mean by saying that you cannot build an FAI to accomplish
> an end for which you do not have a well-specified abstract description.
Aha, right. We don't need a well-specified utility function, but a
well-specified problem description. There's no conflict with CFAI's
general approach (though it may have become obsolete in many details). My
question was pretty dumb in retrospect. Oh well, aren't they all? Thanks
for the clarification.
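
For concreteness, here is a minimal sketch (not from Eliezer's message) of
what "expected utility maximizer" means in that paragraph: an agent whose
choices are explained by some utility function, even if nobody has a
printout of it. The actions, probabilities, and utilities below are made
up purely for illustration, and U_h just stands in for the U-h(x) he
mentions.

    # Hypothetical actions mapping to (probability, outcome) pairs.
    ACTIONS = {
        "write_code": [(0.6, "working_FAI"), (0.4, "nothing")],
        "do_nothing": [(1.0, "nothing")],
    }

    # Hypothetical utility function over outcomes, standing in for U-h(x).
    U_h = {
        "working_FAI": 100.0,
        "nothing": 0.0,
    }

    def expected_utility(action):
        """Sum of probability-weighted utilities for an action's outcomes."""
        return sum(p * U_h[outcome] for p, outcome in ACTIONS[action])

    # The maximizer simply picks the action with the highest expected utility.
    best = max(ACTIONS, key=expected_utility)
    print(best, expected_utility(best))  # -> write_code 60.0

The point, as I read it, is that Fred's behavior can be described this way
whether or not Fred can write the table down; what he can't do without is
the abstract description of the problem itself.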
--
Christian Rovner
Volunteer Coordinator
Singularity Institute for Artificial Intelligence
http://www.intelligence.org/