From: Tennessee Leeuwenburg (hamptonite@gmail.com)
Date: Thu Jul 14 2005 - 00:24:37 MDT
> *sigh* I've argued the case for objective morality on
> transhumanist lists for years. No one listens. But
> gradually my arguments have been growing stronger.
> About 1-2 months ago my theory took a quantum leap.
> Still nothing water-tight though. I've stopped
> debating it because I can see that only a precise
> mathematical theory with all the t's crossed and i's
> dotted is going to convince this crowd. Ah well.
>
> To cut a long theory short…
>
> I think the goal system constrains the intelligence
> level. An unfriendly cannot exceed a certain level of
> smartness. Only a friendly can undergo unlimited
> self-improvement. Past a certain level of smartness,
> I'm hoping an unfriendly goal system will always be
> *jammed* by computational intractability/instability,
> or both.
I think you're clearly right that the goal system can constrain intelligence. It seems obvious: if your set of goals never pushes you towards the relevant maxima, you will never reach those maxima.
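To make that concrete with a toy sketch of my own (nothing formal, just an illustration in Python; the functions and the x = 10 "intelligence peak" are invented for the example): a hill-climber only ever reaches the maxima its goal function points it towards. If the goal is indifferent or opposed to whatever we label intelligence, the climber never arrives at the intelligent configuration, however long it runs.

    import random

    def intelligence(x):
        # Pretend "intelligence" peaks at configuration x = 10 (made up for the example).
        return -abs(x - 10)

    def goal_misaligned(x):
        # A goal whose maximum sits somewhere else entirely, at x = -10.
        return -abs(x + 10)

    def goal_aligned(x):
        # A goal whose maximum happens to coincide with the intelligent configuration.
        return intelligence(x)

    def hill_climb(goal, steps=5000):
        # Greedy local search: accept a random perturbation only if the goal improves.
        x = 0.0
        for _ in range(steps):
            candidate = x + random.uniform(-0.5, 0.5)
            if goal(candidate) > goal(x):
                x = candidate
        return x

    random.seed(0)
    print(intelligence(hill_climb(goal_misaligned)))  # ends up around -20: never near the peak
    print(intelligence(hill_climb(goal_aligned)))     # ends up around 0: reaches the peak

The point is only that the reachable configurations are fixed by what the goal function rewards, which is the sense in which a goal system constrains intelligence.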
Whether there is any necessary link between morality and the goal system is another matter, and establishing one would probably require both a formal definition of morality and some explication of the space of possible goal systems.
It seems at least logically coherent to suggest that a moral agent
might be able to achieve greater intelligence than an immoral one by
virtue of being able to reach different configurations.
It is equally logically coherent to suggest the reverse.
It seems that an objective human morality is at least defensible, as I alluded to. You might consider which arguments for an objective human morality would carry over to a transhuman morality.
You also need to establish a positive link between morality and intelligent configurations. Plenty of philosophers (Objectivists, for example) have argued that morality is grounded in rational principles. The fundamental system is hedonistic: its ultimate goals are the maximisation of whatever virtues the system considers good in their own right.
I think that in order to establish your position, you need to identify which ultimate goals, if any, will lead to the most intelligent configurations.
Even if people here aren't interested in the groundwork for that
argument, they might be interested in the final product.
Cheers,
-T