Re: Morality simulator

From: Jef Allbright (jef@jefallbright.net)
Date: Tue Nov 06 2007 - 10:58:28 MST


On 11/6/07, Joshua Fox <joshua@joshuafox.com> wrote:

> Under "I'm-sure-someone-must-have-done-this-before":
>
> What about the idea of a morality simulator?

I've been rather persistently suggesting for several years that we
must implement moral decision-making via a framework modeling the
interaction of (1) increasingly coherent (subjective) human values and
(2) increasingly effective (objective) principles for the promotion of
those values into the future. In overly simple terms, increasing
morality corresponds to increasing agreement on values over increasing
context, promoted by implementation of principles effective over
increasing scope.
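
To make that concrete in toy form: if we represent each agent's values
as a vector and take mean pairwise similarity as a crude stand-in for
"agreement", we can watch coherence as the context of agents widens.
A minimal Python sketch along these lines -- the vector representation
and the cosine measure of agreement are assumptions made purely for
illustration, not part of the framework itself:

import math
import random

def cosine(a, b):
    """Cosine similarity between two value vectors (toy 'agreement')."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def coherence(agents):
    """Mean pairwise agreement across a set of agents' value vectors."""
    pairs = [(i, j) for i in range(len(agents)) for j in range(i + 1, len(agents))]
    if not pairs:
        return 1.0
    return sum(cosine(agents[i], agents[j]) for i, j in pairs) / len(pairs)

random.seed(0)
population = [[random.random() for _ in range(5)] for _ in range(32)]

# Widen the "context": coherence measured over nested, growing groups.
for n in (2, 8, 32):
    print(f"context of {n:2d} agents -> coherence {coherence(population[:n]):.3f}")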

> Just as computer models of
> weather or car crashes -- however imperfect -- allow researchers to test
> their assumptions, why not do this for morality?

Yes, but I will highlight the crucial but popularly misconceived point
that it's not about testing and refining for a particular desired
outcome, but about testing and refining the essentially scientific
model that facilitates increasingly reliable prediction. The actual
target is orthogonal to this part of the process, and relative to
necessarily (inter)subjective values.

> You would use various simulated worlds, whether a symbolic model, simple 2D
> worlds, and/or full Second-Life-style shared online worlds.

I'm afraid you've begun to over-simplify.

> You could assign closed-form morality-evaluation functions directly, or
> take an implicit function from the input of users in shared online worlds.

Any closed-form representation of morality is a recipe for failure in
an evolving context. This gives rise to some well-known paradoxes of
utilitarian ethical theory, and is related to how we (sometimes)
recognize the importance of implementing the spirit, rather than the
letter, of the law.

As for taking "an implicit function from the input of users", this is
closer to what will work, but what we really need is a model of their
latent **values**, and the moral function will emerge as a result of
the simulation, testing the promotion of the evolving and increasingly
coherent set of values via an evolving and increasingly coherent model
of "reality". Wash, rinse, and repeat. It may be useful here to
point out that the moral function delivered by this framework will
easily exceed the conceptual capacities of any individual.
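
That loop can be sketched in toy form: infer latent values from noisy
observed behavior, score candidate principles by how well they promote
the current estimate, keep the best, gather more behavior, and repeat.
Everything below -- the vectors, the noise model, the dot-product
scoring -- is an assumption made purely for illustration, not a design
anyone has proposed:

import random

random.seed(1)
DIM = 4
latent = [random.uniform(-1, 1) for _ in range(DIM)]   # hidden user values

def observe():
    """One noisy behavioral observation of the latent values."""
    return [v + random.gauss(0, 0.5) for v in latent]

def promotes(principle, values):
    """Crude score: how strongly a candidate principle advances the values."""
    return sum(p * v for p, v in zip(principle, values))

observations = []
for step in range(1, 6):
    # 1. Gather more behavior; refine the model of latent values.
    observations.extend(observe() for _ in range(20))
    estimate = [sum(o[i] for o in observations) / len(observations)
                for i in range(DIM)]
    # 2. Test candidate principles against the current estimate; keep the best.
    candidates = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(50)]
    best = max(candidates, key=lambda c: promotes(c, estimate))
    err = sum((e - v) ** 2 for e, v in zip(estimate, latent)) ** 0.5
    print(f"round {step}: value-model error {err:.3f}, "
          f"best principle score {promotes(best, latent):.3f}")

The point of the toy is only the shape of the loop: the value model
and the principle-selection step improve together, with neither fixed
in closed form.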

It's all about improving the process of decision-making in the social
domain: delivering increasingly rational outcomes that promote the
increasingly coherent values of an increasing context of agents over
an increasing scope of consequences, by actions therefore seen as
increasingly moral.

Thanks for keeping our attention on this.
</soapbox>

- Jef
