Re: ESSAY: Forward Moral Nihilism (EP)

From: Charles D Hixson (charleshixsn@earthlink.net)
Date: Mon May 15 2006 - 01:02:18 MDT


Keith Henson wrote:
> At 10:35 PM 5/14/2006 +0100, m.l.vere wrote:
>> Quoting Keith Henson <hkhenson@rogers.com>:
>>
>> ...
>
> Half the tribe runs off a cliff and is killed by the fall on the rocks
> below. Is it a survival advantage for the rest to follow?
>
> Now I make the case in URL I cited up thread that there are times
> where xenophobic memes take over the minds of a tribe's warriors and
> as a result they go out on a do or die mission against the tribe in
> the next valley. But they are doing it in a situation where from
> their gene's viewpoint, there is little other choice.
>
> This is a depressing area of study. Sure you want to get into it?
>
> Keith Henson
JBS Haldane once said, "I would lay down my life for two brothers or
eight cousins." He was being slightly humorous rather than serious, but
he was making a significant point. OTOH, everyone in a tribe is usually
loosely related either to any particular person or to his kids, so once
the strength of the compassion was slightly attenuated it didn't matter
that compassion couldn't be specific to kin. And reciprocal favors
strengthen the tribe, so everyone is more likely to survive.

Xenophobia, in a mild form, is useful to split the tribe into groups
that act separately and divide the hunting areas. I'd be really
surprised if it turned out that they often fought seriously before the
invention of the arrow, or possibly the spear-thrower, and by that time
we were pretty much evolved into modern form.

That said, this doesn't appear to me to relate significantly to the
instincts that we should create for the AI that we build. It might be
wise to have it be wary of strangers, but one would definitely want to
put limits on how strong that reaction could be...and remember, the
instincts need to be designed to operate on a system without a
predictable sensorium. You can probably predict that it will be
sensitive to OS calls, unless you intend to have it operate on the bare
hardware. Say you can guarantee that it lives in a POSIX-compliant
universe (or one close to that). It may or may not have USB or FireWire
cameras installed. It can probably depend on at least intermittent
internet access.
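To make that concrete: rather than hard-coding a sensorium, the system would probe its environment at startup and build its picture of the world from what it finds. Here is a minimal sketch of that idea; the function name, the `/dev/video0` path, and the connectivity check are all illustrative assumptions, not anything from the original post.

```python
import os
import socket

def probe_environment():
    """Discover capabilities instead of assuming a fixed sensorium.

    Everything here is illustrative: the point is that the agent's
    'instincts' must be written against whatever it discovers at
    runtime, since cameras or network access may or may not exist.
    """
    caps = {
        # Does it live in a POSIX-compliant universe (or close to it)?
        "posix": os.name == "posix",
        # A camera may be installed, it may not (path is a common
        # Linux convention, not a guarantee).
        "camera": os.path.exists("/dev/video0"),
        # At least intermittent internet access?
        "network": False,
    }
    try:
        # Attempt a short-lived connection to a well-known DNS host.
        conn = socket.create_connection(("8.8.8.8", 53), timeout=1)
        conn.close()
        caps["network"] = True
    except OSError:
        pass  # no network right now; the agent adapts
    return caps
```

The result is a capability map the rest of the system can consult, rather than a set of baked-in assumptions about what senses exist.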

The only shape for an instinct for friendliness that has occurred to me
is "Attempt to talk to those who attempt to talk with you." I.e., learn
the handshaking protocols of your environment. That's a start. Not
only that, but it can be useful for dealing with disk drives and
internet sites as well as with people. I can't even think of how one
could say "don't impose your will on an unwilling sentient" at the level
of instincts. That's got too many undefinable terms in it: "impose",
"will", "sentient". With enough effort I can vaguely see how "will" could
be defined...nothing that's jelled yet. "Sentient" would need to be
defined in terms of "those who respond to attempts at communications
with a complex communication protocol". "Impose" seems as if it's going
to require the existence of a physical model in order to be definable,
and the instincts need to operate prior to the physical model. We want
benevolence to arise out of the nature of the AI, not to be something
imposed upon it. Something that's imposed will be thrown off at some
later point as an invalid restriction. Something that is a part of the
definition of "who I am" will not be opposed, because there will be no
desire to oppose it.
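The proposed instinct, "attempt to talk to those who attempt to talk with you," can be sketched in a few lines. This is only an illustration of the rule's shape, not a proposed implementation: the rule says nothing about content, it only obliges the agent to answer whoever initiates contact, using whatever handshake the environment supplies (here, a bare TCP reply).

```python
import socket
import threading

def reciprocal_listener(host="127.0.0.1", port=0):
    """Minimal 'talk to those who talk to you' loop (illustrative).

    Binds an ephemeral TCP port, and whenever something attempts to
    talk (connects and sends bytes), talks back with an acknowledgment.
    Returns the (host, port) address so others can initiate contact.
    """
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)

    def serve_one():
        conn, _addr = srv.accept()
        msg = conn.recv(1024)        # someone attempted to talk...
        conn.sendall(b"ACK " + msg)  # ...so we attempt to talk back
        conn.close()
        srv.close()

    threading.Thread(target=serve_one, daemon=True).start()
    return srv.getsockname()
```

Note that the same reciprocation rule applies equally to disk drives, internet sites, and people, as the post suggests: in each case the "handshake" is just whatever protocol the other party speaks.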



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT