From: Samantha Atkins (samantha@objectent.com)
Date: Sat Nov 30 2002 - 00:07:30 MST
Michael Roy Ames wrote:
> Dear Samantha,
>
> You wrote:
>
>>It [ethical/moral system] should be grounded in what is
>>actually the best achievable by the sentients concerned.
>>Not best in terms of something outside themselves
>>but in the context of themselves. Morality only exists
>>in that context.
>>
>
>
> When one considers e/m systems in terms of the sentients concerned,
> and in that context *alone*, I think there is something missing from
> the 'equation'. Defining what is "best" is going to be a matter of
> opinion, unless "best" is grounded in reality in some way. I would
> guess that there are many ways of accomplishing such a grounding... I
> proposed my 'measure' (Rightness) as one possible way. I'm looking
> for others as well.
>
>
Well yes and no. But the most important part of that grounding
is the context of the sapients the e/m system is for. You
cannot define "best" in any meaningful way without that context.
Reality is what it is, including the real nature and needs of
said sapients. There are no ways of grounding such systems that
don't start with and build on that reality. "Rightness" is not
a standalone or context-free quality or existent.
>
>>This is highly confused as it addresses an inanimate
>>and/or unconscious or inaccessible-if-conscious aggregate
>>as if it is conscious and accessibly so, and as if what it
>>is "doing" is at all relevant to what is the best basis
>>for a moral system for us here and now. What for?
>>
>
>
> Ahh... a turn-of-phrase:
> "I asked the <inanimate-object>..." means I observed, studied,
> researched, questioned-myself-about the <inanimate-object>.
>
But you use language about said inanimate object that is only
appropriate for systems capable of conscious choice.
>
>
>>>Even now we, as humans, are attempting to push the
>>>complexity of our environment and ourselves to ever
>>>greater heights.
>>>
>>
>>This is not the primary goal or center of morality though.
>>It is a by-product. It cannot be made the primary goal
>>meaningfully.
>>
>
>
> Oh, well of course nothing should be *made* the primary goal, I would
> never think of forcing anyone... but I am attempting to come up with
> a useful definition, a measure. It wouldn't have to be the only (or
> even primary) measure. The question I am trying to answer is: is it
> a useful one? Would it help us understand difficult moral situations
> with greater ease?
>
Complexity as such is no meaningful measure of "rightness" at
all. It certainly will not help us make moral decisions,
difficult or otherwise.
>
>
>>Testability is a separate issue from moral absoluteness.
>>On what grounds would you posit that Rightness is a
>>universal absolute?
>>
>
>
> Well, while testability and moral absoluteness *are* separate issues,
> I would like to find a way of *testing* a set of morals to see if
> they are as right as can be. If such a thing can't be done, then...
> well... shucks! But there is no harm in trying :)
>
Sounds like a search for the Philosopher's Stone to me. But
hey, best of luck. :-) The only way I know to test sets of
morals is in the context of the parts of reality relevant to
judging moral systems, which very much includes the sapients
affected and their attempts to use the e/m system to guide
their choices.
>
>
>>It is not at all obvious that a brand-spanking new
>>FAI has to be [R]ight for all sentients for all time.
>>I will be quite satisfied if it maximizes the local
>>intelligence quotient and ensures room for us to
>>survive and grow. That is quite sufficient for now.
>>
>
>
> I would also be very satisfied with that situation... we would have
> *made it*. Although, I can easily imagine myself asking an FAI: "In
> what direction will you improve your ethical/moral system?" "How
> will you know your new system is better than the old one?" "By what
> measure or measures will you compare the systems?"
>
That seems reasonable.
- samantha
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT