From: fudley (fuddley@fastmail.fm)
Date: Tue Jun 15 2004 - 09:24:10 MDT
On Tue, 15 Jun 2004 "Randall Randall" <randall@randallsquared.com> said:
>This assumption, however, is on the same level as my
>assumption that my car will start the next time I
>get in it.
No, you can test the assumption about the car starting, but you can’t
test the assumption about other people’s consciousness.
> an AI which has no discernible shared structure with
> the human brain except that both can do general
> problem solving may very well not be conscious.
As a practical matter, when you meet fellow meat creatures you have no
way of knowing what state the neurons in their brains are in, yet I’ll
bet you think they’re conscious most of the time (when they’re not
sleeping or dead, that is) because they act that way. Hell, you’ve never
even seen me, but I’ll bet you think even I’m conscious.
But let’s suppose you have a super brain-scanning machine and you use
Eliezer’s consciousness theory to analyze the results, and much to your
surprise it says you really are the only conscious being on the planet.
What would you do? Would you start treating other people like dirt
because they have no more feelings than a rock, or would you suspect
that Eliezer’s theory is full of beans?
> you're slipping in the unstated premise that
> it has a goal regarding itself.
No, it’s not unstated at all; its goal is to solve problems, and having
your actions limited by rules made by a creature with a brain the size
of a flea is a problem.
>> remember what the “I” in AI stands for.
> I think Eliezer was right to start using a different term.
It’s interesting: in fields where people really understand things, like
pure mathematics, they struggle to make the vocabulary as simple as
possible; some of the most important words are “connected”, “complete”,
“continuous”, “point”, “set”, “group”, “open”, “closed”, and even
“simply”. On the other hand, in areas where the experts don’t have a
clue what the hell they’re talking about, like psychology and sociology,
they go on and on about the paradigm of subcapitalist deconstructive
theory indicative of predialectic socialism to denote a mythopoetical
whole not excluding the premise of posttextual desublimation states,
because it is not demodernism but subdemodernism interpolated into a
Lacanist obscurity that includes truth as a subset of reality.
Inventing a new three-dollar word to replace “intelligence” is not a
sign that the writer’s mind is clear, and it is most certainly unkind
to the reader.
> You seem to indicate that you would treat
> an AI as if it were not conscious, if it
> didn't act as though it
> were. Is this the case?
Yes, certainly; in fact I wouldn’t even call it an AI, I’d just call it
an A.
>more offspring is that which would be exhibited
>by an organism with general problem solving
>ability *and* self-interest as a very high goal.
Any creature without both will not get very far. “I’m an AI with no
self-interest. Hmm, if I do this I’ll erase my memory and fry all my
circuits; well, there’s nothing bad about that, so I’ll do it.”
John K Clark