From: Michael Roy Ames (firstname.lastname@example.org)
Date: Sun Nov 24 2002 - 01:03:32 MST
Dear Ben Goertzel (and SL4)
> Michael, as I see it, ethical/moral values cannot be tested
> according to "how usefully they describe reality."
> They are prescriptive rather than descriptive.
There are two separate questions here that can become confused if
mixed. First, there is the question of whether ethical/moral values
can be tested against reality. Second, there is the question of
whether they are prescriptive or descriptive.
For the first... I am positing that moral Rightness is an *absolute*
quality of the universe, therefore if you are correct in saying that
ethical/moral values cannot be tested against reality, then my whole
argument falls. Indeed, if you are correct, then *any* argument that
suggests there is absolute right and wrong will fall, and all
morality is arbitrary - merely a random side-effect of human social
consciousness. I suggest that any action taken by a sentience can be
judged against this absolute value, by the sentience itself, viewed
through a window of ver own intelligence and understanding of the
situation. This implies that a variety of Rightness judgements made
by any given set of independent sentients, starting off with
differing 'windows', will gradually converge as intelligence and
situational-knowledge increase. This 'convergence' is mentioned in
Eliezer's work (peripherally) and I have been uncomfortable with the
idea. However, if we are to create a self-enhancing machine that will
outstrip us in every way, including in ethical and moral values, then
it is simple due diligence to attempt to understand in what direction it
will develop *eventually*, and give it a good head-start in that
direction. Thus: Friendly AI. The thing is, Friendly AI has to be
Right... not just for us, but for whatever we might become in the
future as well. It has to be Right, not only for our biosphere, but
for any other life we find in the universe also. It's a tall order, to
*have* to be Right. But there is no point shying away from the
question... that won't help anyone. Sorry for the melodrama... but
sometimes things have to be said - so they are not forgotten.
For the second... moral values can be both prescriptive (thou shalt
not murder) and descriptive (murder hurts the community). I have no
interest in prescriptive morality... I'll leave that to religions and
lawmakers. The descriptive kind of morality is what I'm interested
in. What morals work to achieve our goals? How is a given ethic
useful? These are more interesting and pertinent questions to me.
> Of course, some ethical/moral systems could be logically
> inconsistent -- that is one way of narrowing down the set
> of all possible ethical/moral systems ... iff one believes
> that ethical/moral systems *should* be logically
> consistent. Most human ethical/moral systems don't seem
> to me to be very logically consistent...
If you are correct, then it is HIGH TIME that we made a logically
consistent and empirically verifiable ethical/moral system. If
Friendly AI isn't it, then I'm betting on the wrong horse.
Michael Roy Ames