From: Jef Allbright (jef@jefallbright.net)
Date: Wed Apr 26 2006 - 08:55:07 MDT
BillK provided some useful feedback. Yes, it does come across like
idealistic, college-age, save-the-world thinking. I need to work on
that, and consider whether it is even practical to convey properly in
an email discussion format. Yes, I'm idealistic, but that doesn't mean
I'm naïve.
Fundamentally, I'm saying that with regard to morality, evolutionary
selection prevails (and there's nothing intrinsically nice about that),
and that we have reached a level of development where subjective
agents can actively and intentionally contribute to the process.
I am also saying that commonly accepted metaethical theories based on
such ideas as the "greatest happiness/good for the greatest number,"
Kant's categorical imperative, the Golden Rule, or even "First, do no
harm" are all incomplete: they fail to account for evolutionary
aspects.
Very recently, work has begun along the lines of social Darwinism and
evolutionary ethics, but it seems to me (and my own studies are far
from complete) that these efforts are missing, or at least not giving
proper emphasis to, the essential dual components of morality: it
requires both an expanding subjective element (values) and an
expanding objective element (what works).
When I say "increasingly shared values that work over increasing scope
are increasingly seen as good," that's shorthand for saying that:
* Values are necessarily subjective; they require a "Self".
* Values are necessarily local; they evolve through a process of
selection within a competitive environment.
* Values "that work" (in other words, persist and spread) are
necessarily considered good (within that local environment).
* Values that work over increasing scope (of interactees, types of
interactions, and time) are necessarily seen as increasingly good
(within that local, but expanding, environment).
So, for example, when we consider the conflicting values of Islam
versus the West, there is interaction between the two systems, there
is competition, and there is a *tendency* for the values that work
best over increasing scope to persist and grow. This can be seen as
competition at one level, and as cooperation and growth from a more
evolved level. Yes, there will most certainly be losers at one level,
but the tendency is toward a more effective system overall (for any
given environment).
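As a purely illustrative aside (not part of the original argument),
that "persist and spread" tendency can be sketched with discrete
replicator dynamics. Everything below is an invented assumption: the
two hypothetical value systems, and the fitness numbers standing in
for how well each one "works" in a given environment.

def replicator_step(shares, fitness):
    # One discrete replicator update: each value system's share of
    # the population grows in proportion to its fitness relative to
    # the population average.
    average = sum(s * f for s, f in zip(shares, fitness))
    return [s * f / average for s, f in zip(shares, fitness)]

# Two hypothetical value systems starting with equal shares; the
# fitness numbers (1.1 vs 1.0) are invented for illustration only.
shares = [0.5, 0.5]
fitness = [1.1, 1.0]
for generation in range(100):
    shares = replicator_step(shares, fitness)
print(shares)  # the values that "work" better come to dominate

Note that nothing about the outcome is intrinsically nice; the sketch
just shows selection favoring whatever persists and spreads in that
environment.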
On a very practical level, it concerns me when I see intelligent and
inquiring minds suggesting faith in concepts such as stagnant or
static happiness; moral decision-making based on some narrow,
isolated, or extrapolated model of human values; democratic thinking
that assumes every agent's ideas have equal "value" without regard to
fitness; libertarian thinking that assumes effective action can be
truly independent; and so on.
For increasingly effective decision-making, leading to actions that
will be seen as increasingly good over increasing context, we need to
be increasingly aware of our subjective values and increasingly aware
of what increasingly objective principles apply to promoting our
evolving values into the future. It's all about evolutionary growth,
but we've just reached the point where we can play an intentional role
in the process.
[If someone can suggest how to factor out the "increasingly's" without
losing the meaning of growth, I would like to know it.]
We are now at the point where we have the technology to achieve
broader and more detailed awareness, based on human values, but
surpassing the capabilities of any individual human.
- Jef