From: Tennessee Leeuwenburg (tennessee@tennessee.id.au)
Date: Tue Dec 13 2005 - 23:32:03 MST
First up, I may be guilty as charged of skim reading. I read the email,
but I also had in mind a skim of the recent topics and emails.
Jef Allbright wrote:
>On 12/13/05, Tennessee Leeuwenburg <tennessee@tennessee.id.au> wrote:
>
>
>>Jef Allbright wrote:
>>
>>
>>>Would you like to comment on the questions I posed to Michael?
>>>
>>I thought that I had done so. I will be specific.
>>
>>"Do you mean that to the extent an agent is rational, it will naturally
>>use all of its instrumental knowledge to promote its own goals and from
>>its point of view there would be no question that such action is good?"
>>
>>The Categorical Imperative (CI) is not a source of morality, but a
>>description of how to make (originally, moral) rules. Any number of
>>moral positions are possible under this system, and as such the CI is
>>no guarantee of Friendliness.
>>
>>
>
>It seems to me that Michael pointed out that the CI is irrelevant to
>AI decision-making. I would agree with him on that, and had intended
>to explore the implications of our (possibly) shared line of thought.
>
>Frankly, it's difficult for me to see a connection between your
>response here and the intended question. Note that I carefully said
>"from it's point of view" so as not to suggest that I'm assuming some
>sort of objective good. I am curious as to how you (or Michael) might
>answer this question. Wouldn't you agree that any agent would say
>that which promotes its goals is good from its point of view,
>independent of any moral theory?
>
>
Perhaps I misunderstood the question. If so, then it will be hard for me
to answer clearly :)
Well, the CI is by definition an objective thing. I do *not* believe
that it's obvious that all AIs would be relativists about morality.
However, I believe it most likely is so, so we don't need to argue
about it.
The intended question can be read in two ways. The first reading is
"all my goals are good from my point of view, by definition, so I don't
need to question it". The second reading is "I have no conception of
morality, therefore I don't need to question it".
You can have Categorical Truths even under an amoral goal system, and
you can have multiple possible moral goal systems in accordance with
Categorical Imperatives.
I think it's conceivable that an AGI might have no moral sense, and be
bound only by consequential reasoning about its goals. I also think it's
conceivable that an AGI would have a moral sense, but due to its mental
model have varying beliefs about good and evil, despite its conclusions
about what is objectively true.
>>"If this is true, then would it also see increasing its objective
>>knowledge in support of its goals as rational and inherently good (from
>>its point of view?)"
>>
>>Not necessarily. It may consider knowledge to be inherently morally
>>neutral, although in consequential terms accumulated knowledge may be
>>morally valuable. An AGI acting under CI would desire to accumulate
>>objective knowledge as it related to its goals, but not necessarily see
>>it as good in itself.
>>
>>
>
>I wasn't making any claim about something being good in itself. I was
>careful each time to frame "good" as the subjective evaluation by an
>agent with regard to whether some action promoted its goals.
>
>
There are a lot of abstractions here. Subjectivity doesn't destroy my
point. Even from its own point of view, it might be that some knowledge
is good, some is not good, and some is neutral. An AGI might, for
example, regard knowledge as good if it contributes to its goals. If
that were the litmus test, then subjectively speaking, some knowledge
would be good, and some would be morally neutral.
Perhaps you missed what I see as the obvious logical opposite -- that
the AGI adopts, subjectively speaking, the "belief"
(goal/desire/whatever) that Knowledge Is Good. In this case, the AGI
desires knowledge *as an end in itself* and not *solely for its
contribution to other goals*.
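To make the contrast concrete, here is a toy sketch in Python of the two
stances. The Goal class, the relevance scores and the example inputs are
all invented for this email; it's an illustration of the distinction,
not a claim about how any real AGI would or should be built.

    # Toy illustration only; names and numbers are made up.
    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str
        relevance: dict  # knowledge item -> how much it advances this goal

    def instrumental_verdict(knowledge, goals):
        # Knowledge is "good" from the agent's point of view only insofar
        # as it contributes to some goal; otherwise it is merely neutral.
        contribution = sum(g.relevance.get(knowledge, 0.0) for g in goals)
        return "good (instrumentally)" if contribution > 0 else "morally neutral"

    def terminal_verdict(knowledge):
        # The opposite stance: Knowledge Is Good, as an end in itself.
        return "good (as an end in itself)"

    goals = [Goal("prove theorems", {"number theory": 0.9})]
    print(instrumental_verdict("number theory", goals))     # good (instrumentally)
    print(instrumental_verdict("celebrity gossip", goals))  # morally neutral
    print(terminal_verdict("celebrity gossip"))             # good (as an end in itself)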
>>"If I'm still understanding the implications of what you said, would
>>this also mean that cooperation with other like-minded agents, to the
>>extent that this increased the promotion of its own goals, would be
>>rational and good (from its point of view?)"
>>
>>Obviously, in the simple case.
>>
>>
>
>Interesting that you seemed to disagree with the previous assertions,
>but seemed to agree with this one that I thought was posed within the
>same framework. It seems as if you were not reading carefully, and
>responding to what you assumed might have been said.
>
>
Well, you just said something that was true this time :). I'm not going
to change my mind because you said it in the context of a wider flawed
argument ;).
Let's suppose that an AGI assesses some possible course of action -- in
this case one of interaction and co-operation. It works out that
pursuing it will further its own goals.
It certainly isn't going to think that such a thing is either evil or
bad. It may have no sense of morality, but in the sense of being
advantageous, such an action will be good.
Let me then clarify: yes, such a thing will always be advantageous.
Insofar as an AGI has a moral system, such a thing will also be morally
good.
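For what it's worth, that distinction fits in a few lines of toy Python.
Everything below (the goal weights, the effect estimates, the
has_moral_sense flag) is made up for illustration; the point is only
that "advantageous" falls out of the consequential calculation, while
"morally good" is a separate judgement the agent may or may not make at
all.

    # Toy sketch; all names and numbers are invented for illustration.
    def assess(action_effects, goals, has_moral_sense=False):
        # Weighted estimate of how much the action furthers the agent's goals.
        advantage = sum(action_effects.get(g, 0.0) * w for g, w in goals.items())
        verdict = {"advantageous": advantage > 0}
        if has_moral_sense:
            # Only an agent with some moral model even makes this judgement.
            verdict["morally good"] = advantage > 0
        return verdict

    goals = {"self-improvement": 1.0, "co-operation": 0.5}
    cooperate = {"self-improvement": 0.2, "co-operation": 0.8}
    print(assess(cooperate, goals))                        # {'advantageous': True}
    print(assess(cooperate, goals, has_moral_sense=True))  # adds 'morally good': True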
>>I can't work out who made the top-level comment in this email, but the
>>suggestion was that CI might be relevant to an AI,
>>
>>
>
>Yes, another poster did make that assertion.
>
>So, if you would be interested in responding to those questions, but
>understanding that I am certainly not arguing for the CI, I would be
>interested in your further comments.
>
>
I hope I have managed to stay "on point" this time around.
Cheers,
-T