Re: Volitional Morality and Action Judgement

From: Randall Randall (randall@randallsquared.com)
Date: Mon May 17 2004 - 14:07:49 MDT


On May 16, 2004, at 10:18 AM, Keith Henson wrote:
> At 03:39 AM 14/05/04 -0400, Randall Randall wrote:
>> Either you are using the term "best interest" for something I would
>> not use that term for, or you are making the mistake of assuming
>> that a single objective "best interest" exists which can be determined
>> by an outside observer.
>
> There are multiple viewpoints for "best interest" that are sometimes
> in conflict. So the problem may not be solvable at all.

Often in conflict. That's why I put it in quotes. It seems to me
that the problem is likely not solvable for entities which are roughly
comparable in intelligence, but which have different goals.

>> Unless you are intelligent enough to closely simulate that person,
>> however (and no human currently is), you are unlikely to be able
>> to make such a determination, so you must accept the person's own
>> decisions as the closest approximation to their "best interest"
>> that you can find.
>
> Ten years ago I would have agreed with you. There is a strong
> libertarian outlook in me that was shaped by decades of Heinlein's
> influence.
>
> But over the past ten years I have come to see "the person" as
> something less than a unified whole, burdened by evolved psychological
> traits that may be way out of step with reality. Gambling, drug
> addiction and cult involvement, i.e., infection with a parasitic meme,
> are pathological states where intervention may be justified--not
> simply because of the damage to the person, but the dangers to the
> larger community.

I think that using terms and phrases like "infection", "parasitic", and
"pathological" presupposes your conclusion. In particular, the idea of
a parasitic meme is an example of its own class (if the idea is valid at
all), because it short-circuits the process of deciding whether a
behavior is useful for your supergoal. Therefore, making a determination
about whether someone else is "infected" seems inconsistent.

> We are *social* primates. Rational behavior (sanity if you will) is
> to a considerable extent maintained by interactions with those "in
> your tribe." That being the case, we have self-interest in keeping
> our neighbors sane, and they in keeping us sane--especially since we
> have mechanisms that switch on irrational behavior in response to
> environmental conditions (though "irrational behavior" may be rational
> from a gene's viewpoint in a hunter gatherer environment--key phrase
> in Google "xenophobic memes").

There's an interesting book by John McCrone, _The Myth of Irrationality_,
which addresses some of this. Entertainingly, the third Google result
(I couldn't remember who wrote it) was virus.lucifer.com/books/myth.html,
suggesting that I'm not the only transhumanist who believes this is
relevant.

While I feel that this is relevant to the topic of AI design, I'm
willing to accept anyone else's determination that it isn't. :)

--
Randall Randall <randall@randallsquared.com>
'I say we put up a huge sign next to the Sun that says
"You must be at least this big (insert huge red line) to ride this 
ride".' -- tghdrdeath@hotmail.com
