Imposing ideas, eg: morality

From: Olie Lamb (neomorphy@gmail.com)
Date: Tue May 16 2006 - 01:27:05 MDT


The old bugbear about letting people do what they like without
imposing on others has reared its ugly head.

In this case, rather than a set of rules for the people, it's an
"operational methodology" for an expected superhuman intelligence,
that might become a sysop.

Some ethicists have said that

(**Stipulative characterisation**)

Morals = preferences that you want to apply to everyone.

F'rinstance, if you don't like bullfighting because sport bores you,
that's a matter of individual preference. If you don't like anyone
liking bullfighting because it hurts the bull, by the above
definition, it's a "moral statement".

(Nb: this is NOT my definition. I'm just using it for one post.)

With this characterisation, it's very hard to imagine an
anthropomorphic Sysop not effectively enforcing its "morality" on
others. Its operational methodology for weighing the requirements
of conflicting expressed wills would, in effect, be the Sysop's
"morality".

Just say that a Sysop adopted m.l.vere@durham.ac.uk's two axioms:
> 1. Prohibiting any action which affects another member of the group,
> unless that member has wilfully expressed for that action to be
> allowed (a form of domain protection).

(Nb: can you say "Golden Rule"?)

> 2. Giving all group members equal resource entitlement

Would you expect such a Sysop not only to enforce the axioms directly,
but also to require others to adhere to them when they are operating
outside the Sysop's influence? As in, would you expect a Sysop to
allow Robin to voluntarily accompany Leslie into the woods*, when
Leslie has admitted to having a secret plan to "affect another member
of the group" with an action that has not been allowed by that member
of the group (eg: maim, torture, kill etc Robin)?

* Yes, I know a Sysop would normally have influence over temperate
forested areas too; shut up.

Of course the Sysop is going to influence others to adhere to its
moral axioms. Leslie's and Robin's future actions might take place away
from the Sysop's field of influence, but the Sysop will always be
taking actions that affect the future, because you can't take actions
that affect the present! (Insert Tangent™ here)

Brief ad hominem interlude...

If you expect others to respect your domain, what's that but a form of
morality? Hell, you even suggest giving resources out equally.
Communist! I happen to own large tracts of land with more than
1/6-billionth of the planet's solar collection potential, plus fossil
energy reserves buried beneath.* You ain't stealing my land/energy
resources!

* This is a lie. My point is that one 2006 human's share of the
earth's crust is roughly 8 ha (about 510 million km² of surface divided
among ~6.5 billion people), less than what some people own.

Back to sysops...

If the Sysop is vastly more powerful than other entities, it may be
able to act in a genie-like way, granting wishes that don't
interfere with other humans' "domains". Why should humans/posthumans
be forced not to interfere with each other's domains?

For a Sysop, because "might makes right" ;P

Otherwise, because there might be some (objective?) reason not to.

Furthermore, why should the Sysop not adversely affect humans?
Because the Sysop's progenitors decided to make it that way.

As long as an AI is (1) taking actions that affect others, (2) weighing
the conflicting interests of other parties, and (3) weighing its own
interests against those of other parties, it would need some sort of
methodology to evaluate potential courses of action. The courses it
chooses could be called its "preferences".
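
To make that concrete, here's a minimal Python sketch (mine, purely
illustrative, not anyone's actual proposal) of what such an evaluation
methodology might look like if it hard-coded the two quoted axioms,
reading axiom 2 loosely as a per-member budget cap. Every name in it
is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Action:
    description: str
    actor: str
    affected: set = field(default_factory=set)   # group members the action affects
    consented: set = field(default_factory=set)  # members who wilfully allowed it
    resource_cost: float = 0.0                   # share of common resources consumed

def evaluate(actions, entitlement_per_member, preference):
    """Filter candidate actions by the two quoted axioms, then rank the
    survivors by the Sysop's own weighting function."""
    permitted = []
    for a in actions:
        others = a.affected - {a.actor}
        if not others <= a.consented:                 # axiom 1: no unconsented interference
            continue
        if a.resource_cost > entitlement_per_member:  # axiom 2 (loosely): equal budget cap
            continue
        permitted.append(a)
    return sorted(permitted, key=preference, reverse=True)

# e.g. the Leslie/Robin case from above:
candidates = [
    Action("walk into the woods", actor="Robin"),
    Action("maim Robin", actor="Leslie", affected={"Robin"}),
]
for act in evaluate(candidates, entitlement_per_member=1.0,
                    preference=lambda a: -a.resource_cost):
    print(act.description)   # only "walk into the woods" survives the filter

The interesting bit is the preference argument: whatever the builder
plugs in there gets applied to everyone's candidate actions, which is
exactly the stipulative definition of a "moral statement" above.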

If the AI-builder thinks that the AI should be selfish (!!don't try this
at home, kids!!), the Builder is projecting their preferences onto
others. The AI doesn't even need to be conscious for the AI-builder's
preferences to match the stipulative definition of "moral statements"
above.

-- Olie

On 5/15/06, m.l.vere@durham.ac.uk <m.l.vere@durham.ac.uk> wrote:
> So, where would I take my 'moral nihilism'? The reasons I advocated it are the
> following:
>
> All morality is artificial/manmade. This is not an intrinsic negative, however
> it is negative in this case, as:
> 1. Morality made by mere humans would very likely not be suitable/a net
> positive for posthumans. Therefore we need to go into the singularity without
> imposing morality on our/other posthumans (ie as moral nihilists).
> 2. As morality is artificial, there is no one (or finite number of) 'correct'
> moralit(y)/(ies). Thus it would be better for each individual posthuman to be
> able to develop his/her/its own (or remain a nihilist), than have one posthuman
> morality developed by a sysop.
>
> At the moment, what I would advocate is that
> universal egoists (or moralists who don't want to constrain others with their
> morals) build
> a sysop which grants them all complete self-determination in becoming
> posthuman. My ideas so far (written previously):
>
> "The best possible singularity instigator I can imagine would be a
> genie style seed AI, its supergoal being to execute my expressed
> individual will. From here I could do anything that the person/group
> instigating the singularity could do (including asking for any other
> set of goals). In addition I would have the ability to ask for
> advice from a post singularity entity. This is better than having me
> as the instigator, as the AI can function as my guide to
> posthumanity.
>
> If anyone can think of better, please tell.
>
> The chances of such a singularity instigator being built are very
> slim. As such I recommend that a group of people have their expressed
> individual wills executed, thus all being motivated to build such
> an AI.
>
> The problem of conflicting expressed wills can be dealt with by
> 1. Prohibiting any action which affects another member of the group,
> unless that member has wilfully expressed for that action to be
> allowed (a form of domain protection).
> 2. Giving all group members equal resource entitlement
>
> The first condition would only be a problem for moralists and
> megalomaniacs (and not entirely for the latter, as there could exist
> solipsism-style simulations for them to control).
> The second seems an inevitable price of striking the best balance
> between the quality of posthumanity and the probability of it
> occurring.
>
> I tentatively recommend that the group in question be all humanity.
> This is to prevent infighting within the group about who is
> included, gain the support of libertarian moralists and weaken the
> strength of opposition - all making it more likely to happen.
>
> This is a theory in progress. Ideally, we would have an organisation
> similar to SIAI working on its development/actualisation. As it is,
> I've brought it here. Note, I hope to develop this further (preferably from
> the standpoint of moral nihilism).
>
> Whilst the AI interpreting commands may be an issue, I don't see it
> as an unsolvable problem.
>
> Note: I see this as a far better solution to singularity regret than
> SIAI's CV."
>


