From: Metaqualia (metaqualia@mynichi.com)
Date: Sat Jan 10 2004 - 21:45:53 MST
> the belief in an objective morality to which you are privy, a belief
> infrequent at the sl4-level.
If space and time can be relative, then morality can be absolute, no? :)
You have not explained why an objective morality is impossible to
formulate. You have merely asserted that entertaining such thoughts is
'unworthy', just as heliocentrism was once 'unworthy of the light of day'.
> Add to it the condescending deconstruction of
> my (putative) attitudes, which totally misses the point, and the
> discussion becomes even less engaging.
Then perhaps we should concentrate on the issues, which are quite engaging.
It's fine to choose creating a morally-reasoning AI over creating an AI
with a preset morality. But at this point we do not even have an arrow to
point it in. Go East, and they will tell you that _not_ killing your wife
when she commits adultery is immoral. They will tell you that killing cows
is more immoral than killing monkeys. What is an AI going to make of all
this? Average out all moral/immoral statements? Would that make Barbie
dolls slightly more immoral than killing your political enemy? You can't
make a saint while maintaining that right and wrong are relative!
There is confusion; as a species, we know nothing. How can we expect to
create an AI that reasons about morality when we, its creators, cannot
agree on what that morality is?
I have already given an example of something we may have completely
missed because objective morality has not been discussed: "the AI may
not reach any kind of moral belief after all, unless it can experience
qualia". Without experiencing qualia, a rigorously logical system will
arrive at hard determinism and interpret your screams as a physical
event: neuron activation and airflow from your mouth. A morally neutral
physical process. Do we know better? Then let's find some _valid_
justifications for it.
mq