From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 16 2002 - 12:34:32 MDT
James Higgins wrote:
> Eliezer S. Yudkowsky wrote:
>>
>> If that's true, it doesn't change the fact that Ben Goertzel has
>> posted a mathematical definition of how he would either ask an AI
>> to optimize the world according to his goal system C, or else
>> create a population of entities with goal systems N such that the
>> population-level effect would be to optimize C. Now this can be
>> argued as moral or immoral.
>
> You know, actually, I don't remember ever having seen such a post by
> Ben. For a period of some months I didn't read many of the posts on
> SL4, so I'm guessing this is why. Could you refer me to the
> post/thread in question? I would very much like to read that
> thread...
Oops, I got Ben's variables wrong. Serves me right for working from
memory. Ben's goal system is Y, the population's goal system is X.
http://sysopmind.com/archive-sl4/0206/0747.html
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence