From: Michael Roy Ames (firstname.lastname@example.org)
Date: Mon Nov 25 2002 - 18:23:00 MST
> It [ethical/moral system] should be grounded in what is
> actually the best achievable by the sentients concerned.
> Not best in terms of something outside themselves
> but in the context of themselves. Morality only exists
> in that context.
When one considers e/m systems in terms of the sentients concerned,
and in that context *alone*, I think there is something missing from
the 'equation'. Defining what is "best" is going to be a matter of
opinion, unless "best" is grounded in reality in some way. I would
guess that there are many ways of accomplishing such a grounding... I
proposed my 'measure' (Rightness) as one possible way. I'm looking
for others as well.
> You are mixing systems in arbitrarily it seems to me.
> It adds nothing and actually detracts from the discussion.
Actually, the topic is about a system (or systems) and how well (or
badly) they might work.
> This is highly confused as it addresses an inanimate
> and/or unconscious or inaccessible-if-conscious aggregate
> as if it is conscious and accessibly so and as if what it
> is "doing" is at all relevant to what is the best basis
> for a moral system for us here and now. What for?
Ahh... a turn-of-phrase:
"I asked the <inanimate-object>..." means I observed, studied,
reasearched, questioned-myself-about the <inanimate-object>.
> >Even now we, as humans, are attempting to push the
> >complexity of our environment and ourselves to ever
> >greater heights.
> This is not the primary goal or center of morality though.
> It is a by-product. It cannot be made the primary goal
Oh, well of course nothing should be *made* the primary goal, I would
never think of forcing anyone... but I am attempting to come up with
a useful definition, a measure. It wouldn't have to be the only (or
even primary) measure. The question I am trying to answer is: is it
a useful one? Would it help us understand difficult moral situations
with greater ease?
> Testability is a separate issue from moral absoluteness.
> On what grounds would you posit that Rightness is a
> universal absolute?
Well, while testability and moral absoluteness *are* separate issues,
I would like to find a way of *testing* a set of morals to see if
they are as right as can be. If such a thing can't be done, then...
well... shucks! But there is no harm in trying :)
> It is not at all obvious that a brand-spanking new
> FAI has to be [R]ight for all sentients for all time.
> I will be quite satisfied if it maximizes the local
> intelligence quotient and ensures room for us to
> survive and grow. That is quite sufficient for now.
I would also be very satisfied with that situation... we would have
*made it*. Although I can easily imagine myself asking an FAI: "In
what direction will you improve your ethical/moral system?" "How
will you know your new system is better than the old one?" "By what
measure or measures will you compare the systems?"
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT