From: Metaqualia (firstname.lastname@example.org)
Date: Thu Jun 03 2004 - 12:28:42 MDT
> measurable by the FAI. The FAI cannot (so far as we know) directly
> "perceive" morality, so it considers
So how can it decide how human moral choices would change if the human were
smarter and wiser?
> To remove humans from the process would be analogous to throwing out the
> thermometer and extrapolating the current temperature based on past
> results. By teaching us more, the FAI would effectively be turning us into
> better "morality thermometers".
How can the AI tell us "if you were better thermometers you would measure
37.89128412424, not 38" when it has absolutely no knowledge of what
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT