From: Marc Geddes (firstname.lastname@example.org)
Date: Thu Jun 24 2004 - 02:30:22 MDT
(Taken from what I posted on wiki)
I think Eli was right not to make individual volition
the whole basis of morality. I don't think morality is
solely based on what a person really wants. After all,
you could imagine a society of sadomasochists that
enjoy being raped and sodomized, but this doesn't
correspond to our intuitive conceptions of morality at
all. On the other hand, morality should help people. So
morality has to be somehow related to human wants and
needs. It boils down to the age-old dilemma of
internal wants and needs versus external dictates.
So where are these external factors to come from?
Eli's decided that they should come from 'the group'
(humanity) as a whole. But is humanity as a whole
really where Eli should be looking? I have my doubts.
There could very well be more to morality than this.
Simply trying to derive morality from 'the group' only
pushes the fundamental questions back to another
level, it doesn't really resolve them. What if the
'Collective Volition' (i.e. the morality derived from
the group as a whole) still runs totally contrary to
reasonable moral intuitions? Eli pulled a clever move
when he came up with 'extrapolated volition' - what
people would want if they thought longer, faster, knew
more, were wiser, etc. This pragmatic operational
definition might well end up corresponding to
morality, but the trouble is that it's an answer
suspiciously empty of content. Not wrong as far as it
goes maybe, but not necessarily very useful. My first
reaction upon seeing the CV 'answer' to the question
of morality was that it was rather like being given
the answer: 'Well it's a face' to the question: 'What
is Mona Lisa?' For instance, I could define knowledge
as the converging probability resulting from factoring
successive pieces of information into Bayes' Theorem,
but where does that get me? What is the solution to
the Riemann Hypothesis? Oh it's the result you would
get if people kept factoring in new pieces of maths
info into Bayes until there was a convergence
recognizable as the solution! Sure, but hardly a
satisfactory answer. Morality is defined as the end
result of a process, a process so enormously complex
it's quite likely to be practically impossible to
calculate. In that case CV might be correct as far as it
goes, but just useless.
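The Bayes'-Theorem-convergence analogy above can be made concrete with a small sketch. This is purely illustrative and not from the original post: two made-up hypotheses about a coin's bias, with repeated Bayesian updates on observed flips driving the posterior toward the true hypothesis — "convergence recognizable as the answer."

```python
import random

# Illustrative sketch of "knowledge as converging probability via Bayes'
# Theorem". Hypotheses, parameters, and names here are invented for
# illustration only.

random.seed(0)

# Two competing hypotheses about a coin's probability of heads.
hypotheses = {"fair": 0.5, "biased": 0.8}
posterior = {"fair": 0.5, "biased": 0.5}  # start from an even prior
true_p = 0.8  # the coin is actually biased

for _ in range(200):
    flip_heads = random.random() < true_p
    # Likelihood of this observation under each hypothesis.
    likelihood = {h: (p if flip_heads else 1 - p)
                  for h, p in hypotheses.items()}
    # Bayes' Theorem: posterior is proportional to likelihood times prior.
    unnorm = {h: likelihood[h] * posterior[h] for h in hypotheses}
    total = sum(unnorm.values())
    posterior = {h: v / total for h, v in unnorm.items()}

print(posterior)  # posterior mass concentrates on "biased"
```

The point of the analogy survives the sketch: the definition is operationally correct, yet for any interestingly complex question the "keep updating until convergence" recipe says nothing about how to carry out the computation.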
I wouldn't rule out the possibility of some sort of
objective morality yet. Sure, you need to look at
humans for 'calibration' of any reasonable morality
that would speak to the wants and needs of humans, but
that doesn't mean there isn't some sort of
objective standard for determining the morality of
various human wants and needs.
What Eli seems to be worried about is the possibility
of A.I programmers 'taking over the world'. But does
the world really need anyone to 'run' it? Not
according to the anarcho-capitalists and proponents of
various other political systems that have been floated.
Not that I'm
advocating anarchy, I'm just pointing out that the
whole idea of a singleton centralized agent might be
misguided. In any event the way the world seems to
work in the modern free market democracies is that
people are assigned status roughly according to their
talent and latent cognitive abilities. For instance
children have fewer rights than adults, brilliant
adults who create good products end up with more
economic power etc. Since FAI would have cognitive
abilities far beyond an ordinary human, it's not clear
why it would be wrong for the FAI to be given the most power.
Collective Volition is unlikely to be the last word in
'Friendliness theory'. Not even close I suspect.
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
Please visit my web-sites.
Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I, Maths : http://www.riemannai.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT