From: Norm Wilson (firstname.lastname@example.org)
Date: Thu Jun 03 2004 - 11:09:59 MDT
> If increased intelligence and knowledge made opinions on
> morality converge to a certain configuration space, the
> AI itself could just do whatever it wanted without
> asking our opinion since it is the most knowledgeable
> and intelligent system on earth. Why try to analyze us
> individually and make the opinions converge by
> extrapolating individual mental progress?
Because morality is an abstract concept that influences human behavior but is not itself physically measurable by the FAI. The FAI cannot (so far as we know) directly "perceive" morality, so it treats humans as the only available measuring devices and assumes that smarter humans who know more are better at measuring (or at least describing, or behaving in accordance with) the concept of morality. Removing humans from the process would be analogous to throwing out the thermometer and extrapolating the current temperature from past readings. By teaching us more, the FAI would effectively be turning us into better "morality thermometers".
Of course, I may well be reading something into Eliezer's paper that wasn't there...
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT