From: micah glasser (micahglasser@gmail.com)
Date: Wed Dec 14 2005 - 10:29:38 MST
Jef,
I agree with you that, in one sense, 'good' is nothing more than whatever
furthers one's goal, regardless of what that goal is. On the other hand, might it not
be possible that there is an objective best trajectory of advancement for
all of mankind, and would this not be 'the good' in an objective sense? If
so, then we would want to ensure that the goal of achieving this objective
good for mankind was also a goal system of an AGI. Also, I would like to
point out that my thinking is certainly informed by modern cognitive
psychology and evolutionary psychology. I just think that far too often
people's philosophical thinking gets mushy because they start trying to
solve philosophical questions in the same way one would solve a scientific
problem. It is often good to go back to the great philosophers of yesteryear
for a refreshing new perspective on present problems of mind.
On 12/14/05, Jef Allbright <jef@jefallbright.net> wrote:
>
> Jef (to Michael)
> Do you mean that to the extent an agent is rational, it will naturally
> use all of its
> instrumental knowledge to promote its own goals, and that from its
> point of view there would be no question that such action is good?
> Tennessee
> Well, the CI is by definition an objective thing. I do *not* believe
> that it's obvious that all AIs would be relativists about morality.
> However, I believe it's most likely so we don't need to argue about
> it.
>
> Jef
> I certainly don't see any reason to argue about that particular topic
> at this time.
>
> Tennessee
> The intended question has a double meaning. The first meaning is that
> "all my goals are good from my point of view, by definition, so I
> don't need to question it". The second meaning is "I have no
> conception of morality, therefore I don't need to question it".
>
> Jef
> My intent was closer to your first interpretation, but perhaps not
> exact. When I say "good" I don't mean it in a moral sense or as in
> "good" vs. "evil". I am trying (and not doing a very good [oops, that
> word again] job of it) to show that any agent (human, AGI, or other
> form) must necessarily evaluate actions with respect to achieving its
> goals, and that actions which promote its goals must be considered
> good to some extent while actions that detract from its goals must be
> considered bad. I am arguing that what is called "good" is what
> works, and that while such evaluations are necessarily made from a
> subjective viewpoint, we can all agree (objectively) that for each of
> us, what works is considered good.
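>
> To make that evaluation rule concrete, here is a minimal sketch in
> Python (purely illustrative; the names Goal, evaluate, and
> expected_contribution are invented for this sketch, not anything
> previously defined). The rule itself is the same for every agent, yet
> the verdict it returns depends entirely on that agent's own goals:
>
>     # An action is "good" for an agent exactly when the agent expects it
>     # to promote the agent's goals, and "bad" when it expects it to detract.
>     class Goal:
>         def __init__(self, name, expected_contribution):
>             self.name = name
>             # expected_contribution: action -> estimated effect in [-1.0, 1.0]
>             self.expected_contribution = expected_contribution
>
>     def evaluate(action, goals):
>         """Classify an action solely by its expected effect on the agent's goals."""
>         score = sum(g.expected_contribution(action) for g in goals)
>         return "good" if score > 0 else "bad" if score < 0 else "neutral"
>
>     # The same action is "good" for one agent and "bad" for another, even
>     # though both apply the identical evaluation rule.
>     action = "keep the research cluster running overnight"
>     agent_a = [Goal("finish simulations", lambda a: +0.8)]
>     agent_b = [Goal("cut power costs", lambda a: -0.5)]
>     print(evaluate(action, agent_a))  # -> good
>     print(evaluate(action, agent_b))  # -> bad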
>
> <snip>
>
> Tennessee
> I think it's conceivable that an AGI might have no moral sense, and be
> bound only by consequential reasoning about its goals. I also think
> it's conceivable that an AGI would have a moral sense, but due to its
> mental model have varying beliefs about good and evil, despite its
> conclusions about what is objectively true.
>
> Jef
> I would argue, later, that any moral "sense" is ultimately due to
> consequential effects at some level, but I don't want to jump ahead
> yet. I would also argue, later, that any moral sense based on
> "varying beliefs about good and evil, despite its conclusions about
> what is objectively true" (as you put it) would be an accurate
> description of part of current human morality, but one limited in its
> capability to promote good, because it restricts the instrumental
> knowledge that must be employed in promoting its values.
>
> Jef's question #2 to Michael
> If this [question #1] is true, then would it also see increasing its
> objective knowledge in support of its goals as rational and inherently
> good (from its point of view)?
>
> Tennessee
> Not necessarily. It may consider knowledge to be inherently morally
> neutral, although in consequential terms accumulated knowledge may be
> morally valuable. An AGI acting under CI would desire to accumulate
> objective knowledge as it related to its goals, but not necessarily
> see it as good in itself.
>
>
> Jef
> I wasn't making any claim about something being good in itself. I was
> careful each time to frame "good" as the subjective evaluation by an
> agent with regard to whether some action promoted its goals.
>
> Tennessee
> There are a lot of abstractions here. Subjectivity doesn't destroy my
> point. Even from its own point of view, it might be that some
> knowledge is good, and some is not good, and some is neutral. An AGI
> might regard knowledge as good, if it contributes to its goals, for
> example. If that were the litmus test, then subjectively speaking,
> some knowledge would be good, and some would be morally neutral.
>
> Perhaps you missed what I see as the obvious logical opposite -- that
> the AGI adopts, subjectively speaking, the "belief"
> (goal/desire/whatever) that Knowledge Is Good. In this case, the AGI
> desires knowledge *as an end in itself* and not *solely for its
> contribution to other goals*.
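>
> A tiny sketch of that distinction (again purely illustrative; the
> function and parameter names here are invented): an agent that values
> knowledge only instrumentally assigns zero value to knowledge irrelevant
> to its other goals, while an agent holding "Knowledge Is Good" as an end
> in itself still assigns that same knowledge positive value.
>
>     def value_of_knowledge(relevance_to_other_goals, terminal_weight=0.0):
>         # relevance_to_other_goals: expected contribution to the agent's other goals
>         # terminal_weight: > 0 only if the agent values knowledge as an end in itself
>         return relevance_to_other_goals + terminal_weight
>
>     # Purely instrumental agent: irrelevant knowledge is worth nothing to it.
>     print(value_of_knowledge(relevance_to_other_goals=0.0))                      # -> 0.0
>     # "Knowledge Is Good" agent: the same irrelevant item still has positive value.
>     print(value_of_knowledge(relevance_to_other_goals=0.0, terminal_weight=0.3))  # -> 0.3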
>
>
> Jef's question #3 to Michael
> If I'm still understanding the implications of what you said, would
> this also mean that cooperation with other like-minded agents, to the
> extent that this increased the promotion of its own goals, would be
> rational and good (from its point of view)?
>
> Tennessee
> Obviously, in the simple case.
>
> Jef
> Interesting that you seemed to disagree with the previous assertions,
> but seemed to agree with this one that I thought was posed within the
> same framework. It seems as if you were not reading carefully, and
> responding to what you assumed might have been said.
>
> Tennessee
> Well, you just said something that was true this time :). I'm not
> going to change my mind because you said it in the context of a wider
> flawed argument ;).
>
> Let's suppose that an AGI assesses some possible course of action --
> in this case one of interaction and co-operation. It works out that
> pursuing it will contribute to the furtherance of its own goals. It
> certainly isn't going to think that such a thing is either evil or
> bad. It may have no sense of morality, but in the sense of
> advantageous, such an action will be good.
>
> Let me then clarify: yes, such a thing will always be advantageous.
> Insofar as an AGI has a moral system, such a thing will also be
> morally good.
>
> Jef
> Yes, I consistently mean "good" in the sense of advantageous. It
> seems that making this clear from the beginning is key to more
> effective communication on this topic. My difficulty with this is that
> I see "good" as *always* meaning advantageous, but with varying context.
>
> I think we are in agreement that for any agent, those actions which
> promote its values will necessarily be seen by that agent as good.
>
> Thanks for taking the time to work through this, and thanks to the
> list for tolerating what (initially, at least) appeared to have a very
> low signal-to-noise ratio.
>
> - Jef
>
--
I swear upon the altar of God, eternal hostility to every form of tyranny over the mind of man. - Thomas Jefferson