From: Marc Geddes (marc_geddes@yahoo.co.nz)
Date: Sat Jun 26 2004 - 01:33:53 MDT
--- Samantha Atkins <sjatkins@gmail.com> wrote:

> How in the world would you apply "reasonable" to "intuition"? Either
> "morality" can possibly be defined for some group (like humans) with
> enough objectivity to make its consideration useful, or it is useless
> to worry over a word we attach no real meaning to. I believe that
> morality is objectively definable within the context of a particular
> group of sentients, and perhaps all sentients. But that is a minority
> position here. The problem does not get better by appeal to a poll of
> humanity or an extrapolation of human/>human intent/volition. I can
> agree that CV may give better guidance over some types of
> planning/decisions than morality, or what passes for morality for
> most folks.
O.K., I can agree with you here. CV may indeed give better guidance
(if it can be calculated at all, which I still have grave doubts
about), but as you point out, it leaves fundamental moral questions
unresolved.
>
> Conflating morality and CV is a mistake that I don't see Eliezer
> making.
Well, if CV is the final theory of 'Friendliness', then it would seem
that Eli is indeed conflating morality with CV. My point is that CV
seems to be a pragmatic operational definition of 'Friendliness',
where morality is defined as the results of the CV process. I can
agree that this might give "Friendly" practical results, but I doubt
that CV can be calculated at all, for the reasons Jef and others have
mentioned (too much data, intractability, combinatorial explosions,
etc.).

So CV might not be wrong as far as it goes, but it could still be
totally useless as a definition of 'Friendliness'.
>
> BTW, your notion of SM people and their desires is very out of
> whack.
I was just trying to point out the difficulties of qualia-based
conceptions of morality, since people on this list have been
suggesting that whatever leads to good qualia is good. My point was
that SM people might really enjoy themselves, yet this is not
necessarily good. For similar reasons we shouldn't simply equate what
is good with what individual people want.
>
> On Thu, 24 Jun 2004 20:30:22 +1200 (NZST), Marc Geddes
> <marc_geddes@yahoo.co.nz> wrote:
> >
> > I wouldn't rule out the possibility of some sort of objective
> > morality yet. Sure, you need to look at humans for 'calibration'
> > of any reasonable morality that would speak to the wants and needs
> > of humans, but that doesn't mean that there isn't some sort of
> > objective standard for determining the morality of various human
> > wants and needs.
> >
>
> Wants and needs are not things that have "morality", so speaking of
> the morality of wants and needs is meaningless.
What I said should have read: '...objective standard
for JUDGING the morality of various human wants and
needs'. But also read what I say at the end.
>
>
> > What Eli seems to be worried about is the possibility of A.I.
> > programmers 'taking over the world'. But does the world really
> > need anyone to 'run' it? Not according to the anarcho-capitalists
> > and various other political systems that have been floated. Not
> > that I'm advocating anarchy; I'm just pointing out that the whole
> > idea of a singleton centralized agent might be misguided. In any
> > event, the way the world seems to work in the modern free-market
> > democracies is that people are assigned status roughly according
> > to their talent and latent cognitive abilities. For instance,
> > children have fewer rights than adults; brilliant adults who
> > create good products end up with more economic power, etc. Since
> > an FAI would have cognitive abilities far beyond an ordinary
> > human's, it's not clear why it would be wrong for the FAI to be
> > given the most rights.
> >
>
> I don't believe that rights necessarily increase based on a
> quantitative increase of some aspect of an entity whose rights are
> being derived. Rights, like morality, can only be tied to reality
> through considering the nature of the entities we are talking about.
> Rights of the "unalienable" kind are those things required for the
> well-functioning of the type of entity. It is not a matter of "more"
> rights but of different rights for different types of entities. The
> rights of entities will intersect on those rights deriving from more
> or less intersecting aspects of their nature. It is possible that a
> vastly greater intelligence would require by its nature more rights
> than we do, but since rights are in the interaction of entities I do
> not see that it is necessarily so.
Yup, I agree. But I argue that a vastly greater intelligence would by
its nature require far more rights, because far more possibilities are
open to it (it can rationally calculate outcomes further into the
future and consider a greater range of actions). For instance, a small
child does not have the right to drive a car, because the child cannot
process the sensory data and calculate outcomes while driving to the
same degree as an adult.
>
> The realm of morality is also the realm of inter-entity activity.
> This is a smaller and more delimited sphere than that potentially
> covered by CV.
>
> -s
>
In the most general sense of the term, morality is the process of
consciously determining goals. But that process must itself be guided
by criteria for choosing among goals, which makes it a goal system in
its own right. So a morality is a goal system.
=====
"Live Free or Die, Death is not the Worst of Evils."
- Gen. John Stark
"The Universe...or nothing!"
-H.G.Wells
Please visit my web-sites.
Science-Fiction and Fantasy: http://www.prometheuscrack.com
Science, A.I, Maths : http://www.riemannai.org