From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Aug 17 2005 - 22:24:44 MDT
Mitchell Porter wrote:
> Richard Loosemore:
> 
>> Nobody can posit things like general intelligence in a paperclip 
>> monster (because it really needs that if it is to be effective and 
>> dangerous), and then at the same time pretend that for some reason it 
>> never gets around to thinking about the motivational issues that I 
>> have been raising recently.
> 
> 
> This discussion is obscured by the use of concepts from human psychology 
> such as 'obsession', 'motivation', and 'pleasure'. It's functionalism in 
> reverse: instead of presuming that belief, desire, etc. are completely 
> characterized by a quasi-cybernetic description, here one is 
> anthropomorphizing a complex homeostatic mechanism and then attempting 
> to reason about its imputed psychology. Most of your critics accept the 
> validity of shuttling back and forth between psychological and 
> cybernetic descriptions, and have asked you to think about various 
> inhuman psychologies. I would ask you to abandon psychologizing entirely
> for a moment [...]
The discussion is not obscured by this issue.  I am using such terms 
advisedly, because they pertain to a type of mechanism (note: mechanism, 
not metaphysical or nonphysical entity) that I am trying to bring into 
the discussion.
I, like most cognitive scientists, can use such terms as shorthand for 
those mechanisms because we have a pluralistic grasp of distinctions 
between mechanisms and philosophy, and do not get confused between the 
two.  If you are confused, please feel free to translate into some 
neutral language of your choice, BUT please do not mistranslate into 
mechanisms different from the ones I am referring to.  There has been 
enough mistranslation of my arguments already (see separate post this 
evening).
This is not anthropomorphizing, or psychologizing, or functionalism in 
reverse, and none of my arguments depend on vague appeals to psychology, 
except where "psychology" refers to mechanisms.  You may get the feeling 
that they do, but this is not so:  look more closely and extract an 
example, please.
> I would ask you to abandon psychologizing entirely
> for a moment, and think about this entity as a *machine* - a homeostatic
> system with advanced capabilities for calculation, adaptation, and
> preemptive self-modification. It does nothing out of feeling, pleasure,
> or desire. It does not have motivations or goals. It is as empty of
> consciousness as a mirror, and the creation of a new level of feedback
> will not automatically render it benign, any more than will the
> introduction of a second mirror make a reflected figure smile in
> self-recognition.
You make assertions about radical materialism that are highly debatable, 
easily challenged, crucially dependent on your own choice of the 
meanings of such terms as "feeling", and irrelevant to the discussion.
Then, you imply that I was making the rather foolish, Artificial 
Intelligence 101 kind of claim that "the creation of a new level of 
feedback will [] render it benign" and you impressionistically use your 
assertion to conclude that I am wrong.
Not going to wash.
Later, much later, we could have the C-word discussion.  Maybe.
Richard Loosemore
P.S. Sorry if I appear grumpy, Mitchell.  See other post, and also the 
time stamp on this message.  I am battle-weary.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT