Re: Question about CEV

From: Matt Mahoney (matmahoney@yahoo.com)
Date: Mon Oct 29 2007 - 15:39:04 MDT


--- Thomas McCabe <pphysics141@gmail.com> wrote:

> CEV is a metamorality system. It doesn't say XYZ is good or bad: it
> defines a procedure for how to determine if XYZ is good or bad. Apples
> and oranges.

From the paper: "...our coherent extrapolated volition is our wish if we knew
more, thought faster, were more the people we wished we were, had grown up
farther together; where the extrapolation converges rather than diverges,
where our wishes cohere rather than interfere; extrapolated as we wish that
extrapolated, interpreted as we wish that interpreted." (I hope this isn't
taken out of context).

My objection is that "were more the people we wished we were" makes CEV
undefined if we allow the AI to reprogram our motivational systems. The AI
could make us want to be the kind of person that allows the AI to tell us what
we want. But if we disallow reprogramming the motivational system, then we
could not treat many mental illnesses. I gave examples where neither the
person's wish before nor after the change would be a reliable indicator of what
a rational person would wish for. The only remaining alternative would be a
complicated rule that we would probably get wrong.
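
To make the circularity concrete, here is a toy model of my own (nothing in it
comes from the CEV paper, and all the names are made up). A person is reduced
to a value vector, and "extrapolation" is just a function of that vector. Once
the AI may rewrite the vector before extrapolation runs, every rewrite yields
a self-endorsing answer, so the definition does not pick out a unique volition:

  # Toy model of the circularity, in Python. A "person" is a value
  # vector; extrapolate() stands in for CEV's extrapolation step.
  def extrapolate(values):
      # What a person with these values would wish for. Trivially the
      # values themselves here, since this is only an illustration.
      return values

  def ai_rewrite(values, target):
      # If reprogramming motivational systems is allowed, the AI can
      # map any starting values onto any target it likes.
      return target

  original = {"autonomy": 1.0, "defer_to_ai": 0.0}
  rewritten = ai_rewrite(original, {"autonomy": 0.0, "defer_to_ai": 1.0})

  # Both runs are self-consistent: "the people we wished we were"
  # endorse the result in each case, so nothing in the definition
  # selects one answer over the other.
  assert extrapolate(original) == original
  assert extrapolate(rewritten) == rewritten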

-- Matt Mahoney, matmahoney@yahoo.com


