From: Norm Wilson (web64486@programmar.com)
Date: Thu Jun 03 2004 - 11:09:59 MDT
Metaqualia wrote:
> If increased intelligence and knowledge made opinions on
> morality converge to a certain configuration space, the
> AI itself could just do whatever it wanted without
> asking our opinion since it is the most knowledgeable
> and intelligent system on earth. Why try to analyze us
> individually and make the opinions converge by
> extrapolating individual mental progress?
Because morality is an abstract concept that affects human behavior but is not itself physically measurable by the FAI. The FAI cannot (so far as we know) directly "perceive" morality, so it treats humans as the only available measuring devices and assumes that smarter humans who know more are better at measuring (or at least describing, or behaving in accordance with) the concept of morality. Removing humans from the process would be analogous to throwing out the thermometer and extrapolating the current temperature from past readings. By teaching us more, the FAI would effectively be turning us into better "morality thermometers".
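To make the analogy concrete, here is a toy sketch (mine, not anything from the paper; the latent value, the noise model, and all the numbers are hypothetical): treat each human as a noisy sensor of a quantity the FAI cannot read directly, let the sensor noise shrink as knowledge grows, and average the readings. Better thermometers yield a tighter consensus; with no thermometers at all, there is nothing left to average.

    import random

    TRUE_MORALITY = 0.7  # hypothetical latent value the FAI cannot observe directly

    def human_reading(knowledge):
        # A "morality thermometer": a noisy estimate whose error
        # shrinks as the human's knowledge/intelligence grows.
        return TRUE_MORALITY + random.gauss(0, 1.0 / (1.0 + knowledge))

    def consensus(n_humans, knowledge):
        # Average many extrapolated readings; better "thermometers"
        # (higher knowledge) pull the average toward TRUE_MORALITY.
        return sum(human_reading(knowledge) for _ in range(n_humans)) / n_humans

    random.seed(0)
    for k in (0, 5, 50):
        print("knowledge=%2d: consensus ~ %.3f" % (k, consensus(1000, k)))

The only point of the exercise is that the estimate comes from the sensors; discard them and the FAI is back to guessing from old readings.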
Of course, I may well be reading something into Eliezer's paper that wasn't there...