RE: The Fundamental Theorem of Morality

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Mar 01 2004 - 12:30:30 MST


Hi,

My reaction is a combination of 1-2 and j-k-l.

-- ben g

> So my particular response to conditions of agony is
>
> 1. simulation of the other's first person experience
> 2. evaluation of the experience as immediately evil or good
>
> You can argue that (1) is a result of imperfectly deceptive neural
> architecture, but I can still find logical ways of justifying the need
> to do this (as I did in my previous message) which would be accepted in
> a purely logical framework. That is not what I am trying to prove here,
> though. I can also show that if we assume a very intelligent AI will
> have qualia, then it should arrive at the same kind of morality and
> empathic evaluation. I am not trying to prove that here either, because
> there are some fundamental points to be agreed on first.
>
> If we can all agree that 1 and 2 are what bring us to say "labor camps
> are not moral", then we are getting somewhere. But if you tell me that
> your response is
>
> x. looks like people in the camp are going to rebel any time now
> y. free people wouldn't stay in labor camps
> z. therefore labor camps are evil
>
> or
>
> j. people in labor camps do not become scientists
> k. therefore seed AIs are not built
> l. so the production of information patterns is limited
>
> then I don't think we can discuss "morality", since we are using the
> same word to mean different things. We can still discuss these things
> individually (let's call them 1-2, x-y-z, and j-k-l), but calling them
> all morality only makes the problem harder, since we are arguing about
> different things with common elements and roughly the same effects.
>
>
> mq
