From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Aug 17 2005 - 00:32:48 MDT
Brian,
I am going to address your larger issue in a more general post, but I have to point out one thing for clarification:
Brian Atkins wrote:
> Richard Loosemore wrote:
>> If you assume that it only has the not-very-introspective human-level
>> understanding of its motivation, then this is anthropomorphism,
>> surely? (It's a bit of a turnabout, for sure, since anthropomorphism
>> usually means accidentally assuming too much intelligence in an
>> inanimate object, whereas here we got caught assuming too little in a
>> superintelligence!)
>
> Here you are incorrect because virtually everyone on this list assumes
> as a given that a superintelligence will indeed have full access to, and
> likely full understanding of, its own "mind code".
That is a misunderstanding: my argument was that Peter implicitly assumed the AI would not understand itself. I wasn't, of course, making that claim myself.
Richard