Re: ESSAY: Forward Moral Nihilism

From: m.l.vere@durham.ac.uk
Date: Mon May 15 2006 - 05:29:52 MDT


Quoting John K Clark <jonkc@att.net>:

> <m.l.vere@durham.ac.uk> Wrote:
>
> > morality is built (artificially), and people follow it
> > because of evolution-induced emotions.
>
> Well, what the hell is "artificial" about that? Humans didn't invent
> evolution, and for that matter, who the hell cares if it is artificial? You
> keep using the word as if it were a horrible insult, but I like artificial
> stuff.

My point is that the current artificial creations of humans will not be
advantageous under every circumstance, and circumstances will change an awful
lot after the singularity.

> > I believe it is very unlikely that there will be multiple
> > transhumans of anywhere near the same level of power/intelligence (at
> > least at the top level) - the most powerful would never let anyone else
> > catch up. Morality for cooperation will be unnecessary.
>
> The universe is a big place, perhaps the biggest, so I think it's likely
> there is room for more than one Jupiter brain in it, and even if there are
> only two, morality will be essential.

Here lies your contradiction: Posthuman number 1 will only be constrained by
morality/fear of aggression if he allows another to ascend to his level of
intelligence/power. Thus the universe is not big enough.

> And if your scenario is true and
> transhumans try to suppress the advancement of other transhumans, then they
> will be doomed to eternal war.

Er, no. The power differentials between posthumans will be enormous, as they
will have (greatly) varying intelligence. When one gets on top, he/she/it
enhances his/her/its intelligence and prevents others from doing the same -
thus becoming untouchable and gaining absolute power. This is why people are
so pro-FAI: they see it as very unlikely that they would become posthuman
number 1, so their second choice would be a FAI sysop.

> So I think morality will come in very handy, but it won't be the naïve
> morality espoused by most on this list with their friendly AI meme; the idea
> that we could engineer a Jupiter brain in such a way that it considered our
> well-being more important than its own is ridiculous; such a situation would
> be about as stable as balancing a pencil on its tip.

I do believe you are anthropomorphising the AI. We are only concerned with our
own wellbeing because that is how evolution programmed us. We would program a
FAI to concern itself with whatever we want.

> And in a way it's not
> even moral. I find the idea of a slave who is superior to me in every way
> imaginable but is nevertheless subservient to me repulsive. But that's just
> me; your mileage may vary.

Yep, it does. Again, isn't this anthropomorphising? It's not a slave in the
traditional sense, as being subservient is what it most wants. And again, a
FAI will essentially be an (unimaginably powerful) optimisation process, and
will lack many of the things that make us human. As such, I don't think we can
say it will be superior in *every* way.


