Re: The Future of Human Evolution

From: Maru (marudubshinki@yahoo.com)
Date: Sun Sep 26 2004 - 19:29:01 MDT


I think I'm misunderstanding something here: if you can't construct an AI to 'accomplish an end for which you do not possess a well-specified abstract description', then what is the point of Collective Volition? Isn't the idea there to have the AI make the decision you would have made if you had understood what you actually wanted — which sounds like exactly the sort of thing that is not well-specified?
~Maru

Eliezer Yudkowsky wrote:

The root mistake of the TMOL FAQ was in attempting to use clever-seeming
logic to manipulate a quantity, "objective morality", which I confessedly
did not understand at the time I wrote the FAQ. It isn't possible to
reason over mysterious quantities and get a good answer, or even a
well-formed answer; you have to demystify the quantity first. Nor is it
possible to construct an AI to accomplish an end for which you do not
possess a well-specified abstract description.

-- 
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:48 MDT