RE: Understanding morality (was: SIAI's flawed friendliness analysis)

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 10 2003 - 17:24:43 MDT


> > What I envision coming out of the process you describe is a kind of
> > practical formal deductive system dealing with morality. A priori value
> > judgments may be provided as axioms, and the practical moral judgments
> > based on them emerge as derivations from the axioms.
>
> You're pretty much dead on (except that it gets a lot more complicated
> when the system isn't provided with clean input and needs to deal with
> conflicting axioms, incomplete reasoning, and the rest of the real world)
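
To make that concrete, here is a toy sketch (purely illustrative, with
invented predicate names, not anyone's actual system) of value judgments
as axioms and moral judgments as derivations, done as naive forward
chaining in Python:

# Toy forward chaining over moral "axioms" (hypothetical example).
# Facts are plain strings; each rule derives a new fact from old ones.

axioms = {"causes_suffering(lying)", "wrong(causing_suffering)"}

# Each rule is (premises, conclusion); the names are invented.
rules = [
    ({"causes_suffering(lying)", "wrong(causing_suffering)"},
     "wrong(lying)"),
    ({"wrong(lying)"}, "avoid(lying)"),
]

def derive(facts, rules):
    """Apply rules until no new facts appear (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(derive(axioms, rules)))
# ['avoid(lying)', 'causes_suffering(lying)',
#  'wrong(causing_suffering)', 'wrong(lying)']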

So the "derivation" aspect of your proposed system requires something like
Novamente's logical inference system, which deals with incompleteness,
uncertainty and inconsistency in a mostly-probabilistic context. That
sounds like fun to work on ;)
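
For a flavor of the probabilistic part: a deduction rule in this style
assigns each statement a strength in [0,1] and chains uncertain
implications. The sketch below is a generic independence-based
reconstruction, not Novamente's actual formulas, which also track
confidence and handle many more cases:

# Toy probabilistic deduction (a generic reconstruction, not Novamente
# code). Given strengths for A->B and B->C plus base strengths for B
# and C, estimate A->C under an independence assumption.

def deduce(s_ab, s_bc, s_b, s_c):
    if s_b >= 1.0:
        return s_bc  # degenerate case: B is certain
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)

# "lying -> causes_suffering" (0.8), "causes_suffering -> wrong" (0.9),
# with base strengths P(causes_suffering)=0.3 and P(wrong)=0.4:
print(deduce(0.8, 0.9, s_b=0.3, s_c=0.4))  # ~0.757 for "lying -> wrong"

A real system would also clamp the result to [0,1] and propagate a
confidence value alongside the strength.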

> At any rate, I don't think that I'd want to push it as a moral
> decision-making system. I mainly used that as a hook for this group.
> It's a group decision-making under uncertainty system which could be used
> to solve a problem that a Friendly AI desperately needs solved.
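
For comparison, one standard baseline from existing decision-support
practice is the linear opinion pool: a weighted average of each member's
probability estimate. This generic sketch is only an illustration; the
quoted system isn't specified in enough detail here to reproduce:

# Linear opinion pool: a standard way to aggregate group judgments
# under uncertainty (generic illustration, not the system quoted above).

def linear_pool(estimates, weights):
    """estimates: each member's P(proposition); weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * p for p, w in zip(estimates, weights))

# Three members judge "policy X is beneficial", weighted by track record:
print(linear_pool([0.9, 0.6, 0.3], [0.5, 0.3, 0.2]))  # 0.69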

Could you summarize how your proposed system differs from existing decision
support tools?

Ben G


