Re: Understanding morality (was: SIAI's flawed friendliness analysis)

From: Mark Waser (mwaser@cox.net)
Date: Sun May 11 2003 - 09:55:38 MDT


Ben said:
> So the "derivation" aspect of your proposed system requires something like
> Novamente's logical inference system, which deals with incompleteness,
> uncertainty and inconsistency in a mostly-probabilistic context. That
> sounds like fun to work on ;)

No, actually I was initially planning on focusing on two pieces. The first is a good, very user-friendly "my world view" input system that helps novice users get their opinions and reasoning (on a bounded issue) into a solid Bayesian format (and, incidentally, maybe teaches them some Bayesian logic as they go). The second is some sort of "resolution" system that allows world views to be combined in an extremely structured format/process: it indicates where the views agree and where they differ, and walks the users through a process of further supporting their views where they differ. The system should ensure that a given person's views are internally consistent, but it must allow multiple people to agree to disagree (though it would be interesting to see how far down people are willing to explain their views -- i.e., go back and support their initial axioms with further supporting axioms/facts).
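
To make that a bit more concrete, here is a rough Python sketch of the kind of thing I mean. It is only an illustration, not a design: the names (WorldView, Belief, diff_views), the flat claim/probability structure, and the "disputed if probabilities differ by more than a tolerance" rule are all my own placeholder assumptions, standing in for a real Bayesian representation and resolution process.

# Illustrative sketch only: one person's "world view" as probability-weighted
# claims plus the claims offered in support, and a diff of two such views to
# find where structured debate should focus.

from dataclasses import dataclass, field

@dataclass
class Belief:
    claim: str                                     # the proposition itself
    probability: float                             # holder's subjective probability
    supports: list = field(default_factory=list)   # claims offered as support

@dataclass
class WorldView:
    holder: str
    beliefs: dict = field(default_factory=dict)    # claim -> Belief

    def add(self, claim, probability, supports=()):
        self.beliefs[claim] = Belief(claim, probability, list(supports))

def diff_views(a: WorldView, b: WorldView, tolerance: float = 0.1):
    """Return (shared, disputed, unique_to_a, unique_to_b) claim lists.

    "Disputed" means both parties state the claim but their probabilities
    differ by more than `tolerance`; those are the points the resolution
    process would walk the users back through, asking each side to supply
    further supporting claims.
    """
    shared, disputed = [], []
    for claim in a.beliefs.keys() & b.beliefs.keys():
        if abs(a.beliefs[claim].probability - b.beliefs[claim].probability) <= tolerance:
            shared.append(claim)
        else:
            disputed.append(claim)
    unique_a = sorted(a.beliefs.keys() - b.beliefs.keys())
    unique_b = sorted(b.beliefs.keys() - a.beliefs.keys())
    return sorted(shared), sorted(disputed), unique_a, unique_b

if __name__ == "__main__":
    alice = WorldView("Alice")
    alice.add("Speed limits save lives", 0.9, supports=["Crash data shows fewer deaths"])
    alice.add("Crash data shows fewer deaths", 0.85)

    bob = WorldView("Bob")
    bob.add("Speed limits save lives", 0.4, supports=["Drivers compensate for limits"])
    bob.add("Drivers compensate for limits", 0.7)

    shared, disputed, only_a, only_b = diff_views(alice, bob)
    print("Agreed:", shared)
    print("Disputed (debate focuses here):", disputed)
    print("Only Alice asserts:", only_a)
    print("Only Bob asserts:", only_b)

The real system would obviously need the full conditional structure (and the internal-consistency checking) rather than point probabilities, but the diff-then-drill-down loop is the part I care about.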

I am certainly not proposing to try to invent anything like a fully
automated machine process. All my knowledge of machine learning and machine
reasoning says
that we aren't anywhere near being able to do that yet (though many people,
including you, are making good progress along that path). What I want is a
good formal process that can become well-researched, debugged, and accepted
and "plugged-in" as appropriate later. In particular, I very much feel the
need to make this a social and/or community process that can tolerate very
different world views and facilitate reasoned debate. I am VERY nervous
about the idea of even a fully Friendly AI with a single "correct" world
view. While I understand the math/reasoning for why this isn't truly the
case with a correctly designed AI, there are going to be far too many people
in both the political and programming arenas who WILL either latch onto that
meme or try to make it a reality.

> Could you summarize how your proposed system differs from existing
> decision support tools?

I haven't been able to find many current decision support tools that start
with the concept of resolving multiple world views through structured debate
and mutual exploration, other than a lot of "groupware" semi-solutions that
really don't provide enough structure. Could you (or anyone else) give me
pointers to anything you think I've overlooked? Thanks.

        Mark
