Re: another objection

From: Norm Wilson (web64486@programmar.com)
Date: Thu Jun 03 2004 - 14:07:16 MDT


Metaqualia wrote:

> How long does it need to look at humans to know how to
> compute morality? Once it's done computing it why does
> it need to keep looking at humans?

I think it would be a mistake to ever remove humans entirely from the process. Instead, the FAI should always assume that it has an *approximation* of morality, and never replace the "territory" with a "map of the territory", no matter how accurate that map seems.
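To make this concrete, here's a toy sketch in Python (the running-average scheme and every name in it are illustrative assumptions on my part, not a claim about how an FAI would actually represent morality). The point is just that the model's answer is always an (estimate, uncertainty) pair, refined by an open-ended stream of human judgments and never declared final:

    import math

    class ProvisionalMap:
        """A 'map of the territory' that never stops being an approximation."""

        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0  # running sum of squared deviations (Welford's method)

        def update(self, human_judgment):
            # Each new human data point refines the map.
            self.n += 1
            delta = human_judgment - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (human_judgment - self.mean)

        def estimate(self):
            # The map never collapses to a bare point value: it always
            # carries an uncertainty term, so the territory stays in charge.
            if self.n < 2:
                return self.mean, float("inf")
            stderr = math.sqrt(self.m2 / (self.n - 1)) / math.sqrt(self.n)
            return self.mean, stderr

    model = ProvisionalMap()
    for judgment in [38.0, 37.9, 38.1, 37.95]:
        model.update(judgment)
    print(model.estimate())  # roughly (37.99, 0.04): an approximation, never the final word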

> how can the AI tell us "if you were better thermometers
> you would measure 37.89128412424 not 38" when it has
> absolutely no knowledge of what temperature is?

I don't think the AI should extrapolate beyond the data that it used to formulate its computations (or, at a minimum, it should assign exponentially decreasing confidence the further its extrapolations extend beyond that data). The AI might extrapolate a convergence towards 37.89128412424, but it should treat that result as a hypothesis to be tested by further interaction with humans. IMO, convergences indicate promising places to look for additional data about morality, but should not be confused with morality itself.
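As a hedged sketch of that decay scheme (the decay rate and the one-dimensional distance measure below are arbitrary illustrative choices, not a spec): confidence stays at 1.0 while interpolating inside the observed data range, and falls off exponentially with distance beyond it.

    import math

    def extrapolation_confidence(x, data_min, data_max, decay_rate=10.0):
        # Full confidence while interpolating within the observed data...
        if data_min <= x <= data_max:
            return 1.0
        # ...and exponentially decaying confidence the further the
        # extrapolation extends beyond it.
        distance = (data_min - x) if x < data_min else (x - data_max)
        return math.exp(-decay_rate * distance)

    # Human "thermometer" readings cluster near 38; an extrapolated
    # convergence toward 37.89128412424 lies just outside them, so it's
    # a hypothesis to test against further human data, not a settled fact.
    readings = [37.95, 38.0, 38.1, 38.2]
    hypothesis = 37.89128412424
    print(extrapolation_confidence(hypothesis, min(readings), max(readings)))
    # about 0.56 with these toy numbers: promising enough to explore,
    # too uncertain to act on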

 


