From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Aug 17 2005 - 17:33:21 MDT
Brian,
Sorry to come back to you on this small point, but I gotta clear it up...
Brian Atkins wrote:
> Richard Loosemore wrote:
>
>> Brian,
>>
>> I am going to address your larger issue in a more general post, but I
>> have to point out one thing, for clarification:
>>
>> Brian Atkins wrote:
>>
>>> Richard Loosemore wrote:
>>>
>>>> If you assume that it only has the not-very-introspective
>>>> human-level understanding of its motivation, then this is
>>>> anthropomorphism, surely? (It's a bit of a turnabout, for sure,
>>>> since anthropomorphism usually means accidentally assuming too much
>>>> intelligence in an inanimate object, whereas here we got caught
>>>> assuming too little in a superintelligence!)
>>>
>>> Here you are incorrect because virtually everyone on this list
>>> assumes as a given that a superintelligence will indeed have full
>>> access to, and likely full understanding of, its own "mind code".
>>
>> Misunderstanding: My argument was that Peter implicitly assumed the
>> AI would not understand itself.
>>
>> I wasn't, of course, making that claim myself.
>
> I realize that you don't claim that; it was your assumption put in
> Peter's mouth that is what my comment was directed at. You
> misinterpreted Peter's intent in my opinion, although Peter can pipe up
> if I'm wrong.
Still a complete misunderstanding! Look again at what I just said: "My
argument was that Peter implicitly assumed the AI would not understand
itself." I was pointing out an accidental, *unintentional* implication
of his words, not misinterpreting his intent.
I was pointing out a subtle trap that I believe he (and others) fell into.
It goes without saying that we all know perfectly well (you, me, Peter)
that a superintelligence will have full access to its own "mind code."
What I am trying to say is that the issue of motivational systems
contains some subtle traps where we can get lost in our reasoning and
accidentally assume that the AI does something rather stupid: to wit,
that it can find itself *subject* to pushes coming up from its
motivational system and yet NOT jump up a level and perceive those
feelings (inclinations, compulsions, pleasures) as the consequence of
one of its own mind mechanisms.
I wouldn't have been so daft as to suggest that anyone on SL4 believes
that a GAI would be unaware of its own mind mechanism. I know I'm a
newcomer to the list, but on most of the issues I'm fairly up to date :-)
Richard