From: m.l.vere@durham.ac.uk
Date: Sat May 13 2006 - 07:18:27 MDT
Quoting micah glasser <micahglasser@gmail.com>:
> Have you ever read "The Moral
> Animal<http://www.scifidimensions.com/Mar04/moralanimal.htm>"
> by Robert Wright? This book looks at human morality and its evolution
> through the lens of evolutionary psychology. It's a fantastic read IMO and
> while it may not refute moral nihilism it certainly does add dimension to
> the issue.
No, I'll look it up.
> Also what do you think of Nietzsche's moral philosophy?
> Nietzsche saw himself as a nihilist while a young man but eventually saw his
> way out of that darkness.
Nietzsche's moral philosophy is very interesting. Originally the devoutest of
Christians, when he saw the flaws in his religion it led him to question and
find fault with morality as well. IMO this prior Christianity is his
flaw - he saw moral nihilism as a great darkness because it had taken the
comforting father figure of God and the certainty of absolute morality from
him (on which IMO he had based much of his personality) and he could not
replace them. I don't think he ever really got over this.
On his 'superman' philosophy of constructing one's own morality from moral
nihilism - perhaps. But it's something I'd want to do post-singularity, when I
have an IQ of 5000000000000 - as opposed to now, when doing so could possibly
restrict said posthumanity.
> Unlike some I've studied philosophy for long enough to not glibly dismiss
> your arguments based on primate instinct alone (i.e. the instinct that
> abhors anti-social behavior or ideas). Also I think this discussion is truly
> SL4. Most on this list take for granted that there is such a thing as
> "benevolence" and that we should all be working hard to relieve "human
> suffering". I'm not saying this is wrong but it would be nice to hear some
> more sophisticated arguments for exactly why anyone should care about
> anything.
Definitely. What worries me is that AGI will be built based upon these
assumptions, which will likely become obsolete in a posthuman future - leading
to sub-optimal posthumanity.
> If we can't offer a philosophically rigorous refutation of moral
> nihilism then it will be quite difficult to program a machine AGI that can
> refute that position also. Just a thought.
Quite possibly. Although, maybe to say so is anthropomorphising the AI?