From: Christopher Healey (CHealey@unicom-inc.com)
Date: Thu Jun 03 2004 - 09:21:24 MDT
The idea, if I am not mistaken, is that the convergence process must be initiated in some state, and this initial state should be chosen so as to minimize inadvertent biases. That seems to indicate starting with only the minimal necessary moral and volitional content. Once convergence is attained, however, it does seem likely that the workload could be made more manageable.
The problem remains, though, of making sure that the convergence continues to be actively corrected as the Collective Volition of humanity drifts over time or changes suddenly for whatever reason. I'd guess that doing this would impose a comparable resource load, not radically different from the utilization levels during initial convergence.
Of course, this all assumes that convergence is possible at all. If it is not (i.e., there is wide spread across a majority of the model), then the AI is intended to limit its effective decisions to the small number of areas that DO have low spread, effectively disabling its intervention elsewhere.
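To make that "low spread" rule concrete: purely as an illustrative toy sketch (none of this comes from the actual Collective Volition writeup), one could picture each candidate intervention carrying a set of extrapolated per-person preferences, with the AI acting only where disagreement stays under some threshold. In Python, with every name, number, and data shape hypothetical:

    from statistics import pstdev

    # Toy sketch only: "votes" are hypothetical extrapolated preferences in
    # [-1, 1] for each candidate intervention; nothing here is taken from the
    # actual Collective Volition proposal.
    SPREAD_THRESHOLD = 0.2  # hypothetical cutoff for "low spread"

    def spread(votes):
        """Population standard deviation as a crude stand-in for 'spread'."""
        return pstdev(votes)

    def actionable(extrapolated_volitions):
        """Keep only interventions whose extrapolated spread is low.

        extrapolated_volitions: dict mapping intervention name -> list of votes.
        """
        return {
            name: votes
            for name, votes in extrapolated_volitions.items()
            if spread(votes) < SPREAD_THRESHOLD
        }

    if __name__ == "__main__":
        model = {
            "cure_disease_x": [0.9, 0.95, 0.85, 0.9],       # near-consensus -> act
            "restructure_economy": [0.8, -0.7, 0.1, -0.9],  # wide spread -> abstain
        }
        print(actionable(model))  # only 'cure_disease_x' survives the filter

The real proposal says nothing about standard deviations or thresholds, of course; the sketch is only meant to show the shape of "act where spread is low, abstain where it is wide."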
Again, this is my understanding of it, but I probably omitted some key points.
-Chris Healey
________________________________
From: owner-sl4@sl4.org on behalf of Metaqualia
Sent: Thu 6/3/2004 10:01 AM
To: sl4@sl4.org
Subject: another objection
If increased intelligence and knowledge made opinions on morality converge
to a certain configuration space, the AI itself could just do whatever it
wanted without asking our opinion, since it is the most knowledgeable and
intelligent system on Earth. Why try to analyze us individually and make our
opinions converge by extrapolating each individual's mental progress?
mq