Re: Moral standards (was Guide AI theory)

From: m.l.vere@durham.ac.uk
Date: Tue May 16 2006 - 12:46:11 MDT


Quoting Philip Goetz <philgoetz@gmail.com>:

> On 5/14/06, m.l.vere@durham.ac.uk <m.l.vere@durham.ac.uk> wrote:
> > 2. As morality is artificial, there is no one (or finite number of)
> > 'correct' moralit(y)/(ies). Thus it would be better for each individual
> > posthuman to be able to develop his/her/its own (or remain a nihilist),
> > than have one posthuman morality developed by a sysop.
>
> Even if you completely disbelieve in morality, objective ethics,
> good and evil, right and wrong -
>
> We call morality a "standard". This is true in the same way that
> Windows is a standard. It is better FOR YOU to have a small number of
> standards - even if they aren't the ones you would have developed -
> than to have everyone operating according to a different standard.
> The transaction costs are too high when there are no standards. This
> is true of moral systems as well as of operating systems.

Yes, you are right (though these standards do not constitute an objective
morality). I suppose my guide AI post is really what I would propose as the
standard. What would you put as the standard?
However, the opportunity cost of these standards would be impossible to work
out in advance, and would likely differ between posthumans.


