Re: Measuring (quantifying) morality?

From: Patrick Crenshaw (patrick.crenshaw@gmail.com)
Date: Tue Aug 15 2006 - 14:41:14 MDT


The idea (which I obviously didn't go into) is that an agent ought to
choose the action that can be expected to increase the final total
Value of the universe by the greatest amount. This requires only
regular old inference. Also, while the near- and medium-term changes
in Value can fluctuate, in the long term those fluctuations would be
damped, and they would also be smeared out by the uncertainty in the
effect of any particular action, so in most cases the agent would be
left with a function that is easier to reason about.
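
To make that rule concrete, here is a rough Python sketch (mine, not
from the thread): simulate_final_value(), the finite horizon, and the
Monte Carlo averaging are all stand-ins I am assuming for "regular old
inference" and for t = infinity.

  # Hedged sketch of the decision rule described above. My assumptions:
  #  - simulate_final_value(state, action, horizon) samples one possible
  #    future and returns the total Value of the universe at the horizon;
  #  - a large finite horizon stands in for t = infinity, on the argument
  #    that long-run fluctuations are damped and smeared out by uncertainty.

  def expected_final_value(state, action, simulate_final_value,
                           horizon, samples=1000):
      """Estimate E[total Value at the horizon | taking `action` now]."""
      total = 0.0
      for _ in range(samples):
          total += simulate_final_value(state, action, horizon)
      return total / samples

  def choose_action(state, actions, simulate_final_value, horizon):
      """Pick the action whose estimated expected final Value is greatest."""
      return max(actions,
                 key=lambda a: expected_final_value(
                     state, a, simulate_final_value, horizon))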

There are certain cases where a short-term drop in Value leads to a
long-term increase (e.g., killing someone who is about to kill a bunch
of people).
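
In symbols (this is only my rough restatement of the criterion from my
earlier message, quoted below, with T standing in for t = infinity and
b for some baseline action such as doing nothing):

  M(a) = E[ \int V(x, T) dx | a ] - E[ \int V(x, T) dx | b ]

The killing example is just the case where M(a) is positive even though
the action's near-term contribution to V is strongly negative.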

On 8/15/06, Philip Goetz <philgoetz@gmail.com> wrote:
> On 8/15/06, Patrick Crenshaw <patrick.crenshaw@gmail.com> wrote:
> > I've done some thinking about this.
> >
> > The first conclusion I came to is that the morality of an action has
> > to do with the amount that it changes the integral of some Value
> > function over all space at t=infinity. If you give me any moral
> > system, I can give you a Value function like this that would describe
> > it.
>
> This is one of the problems: these functions typically have no
> integral over infinity, or at least we can expect that they don't,
> because the total value over time can oscillate wildly. And if you
> count the greatest good for the greatest number, then the moral value
> of an action depends more on its impact hundreds of years in the
> future than on its impact at present.
>
> Almost any "bad" deed of the past now has obvious good consequences,
> and vice-versa.
>

-- 
Patrick

