From: Jef Allbright (jef@jefallbright.net)
Date: Tue Dec 13 2005 - 23:13:45 MST
On 12/13/05, Tennessee Leeuwenburg <tennessee@tennessee.id.au> wrote:
> Jef Allbright wrote:
> >Would you like to comment on the questions I posed to Michael?
> >
> >
> I thought that I had done so. I will be specific.
>
> "Do you mean that to the extent an agent is rational, it will naturally
> use all of its instrumental knowledge to promote its own goals and from
> its point of view there would be no question that such action is good?"
>
> The Categorical Imperative (CI) is not a source of morality, but a
> description of how to make (originally, moral) rules. Any number of
> moral positions are possible under this system, and as such the CI is no
> guarantee of Friendliness.
It seems to me that Michael pointed out that the CI is irrelevant to
AI decision-making. I would agree with him on that, and had intended
to explore the implications of our (possibly) shared line of thought.
Frankly, it's difficult for me to see a connection between your
response here and the intended question. Note that I carefully said
"from its point of view" so as not to suggest that I'm assuming some
sort of objective good. I am curious as to how you (or Michael) might
answer this question. Wouldn't you agree that any agent would say
that whatever promotes its goals is good from its point of view,
independent of any moral theory?
>
> "If this is true, then would it also see increasing its objective
> knowledge in support of its goals as rational and inherently good (from
> its point of view?)"
>
> Not necessarily. It may consider knowledge to be inherently morally
> neutral, although in consequential terms accumulated knowledge may be
> morally valuable. An AGI acting under CI would desire to accumulate
> objective knowledge as it related to its goals, but not necessarily see
> it as good in itself.
I wasn't making any claim about something being good in itself. I was
careful each time to frame "good" as the subjective evaluation by an
agent with regard to whether some action promoted its goals.
>
> "If I'm still understanding the implications of what you said, would
> this also mean that cooperation with other like-minded agents, to the
> extent that this increased the promotion of its own goals, would be
> rational and good (from its point of view?)"
>
> Obviously, in the simple case.
Interesting that you seemed to disagree with the previous assertions,
but agreed with this one, which I thought was posed within the
same framework. It seems as if you were not reading carefully, but
responding instead to what you assumed might have been said.
>
> I can't work out who made the top-level comment in this email, but the
> suggestion was that CI might be relevant to an AI,
Yes, another poster did make that assertion.
So, if you are willing to respond to those questions, understanding
that I am certainly not arguing for the CI, I would be interested in
your further comments.
- Jef