Re: Sentience [Was FAI: Collective Volition]

From: fudley (fuddley@fastmail.fm)
Date: Tue Jun 15 2004 - 09:24:10 MDT


On Tue, 15 Jun 2004 "Randall Randall" <randall@randallsquared.com> said:

>This assumption, however, is on the same level as my
>assumption that my car will start the next time I
>get in it.

No, you can test the assumption about the car starting, but you can't
test the assumption about other people's consciousness.

> an AI which has no discernible shared structure with
> the human brain except that both can do general
> problem solving may very well not be conscious.

As a practical matter, when you meet fellow meat creatures you have no
way of knowing what state the neurons in their brains are in, yet I'll
bet you think they're conscious most of the time, when they're not
sleeping or dead that is, because they act that way. Hell, you've never
even seen me, but I'll bet you think even I'm conscious.
But let's suppose you have a super brain-scanning machine and you use
Eliezer's consciousness theory to analyze the results, and much to your
surprise it says you really are the only conscious being on the planet.
What would you do? Would you start treating other people like dirt
because they have no more feelings than a rock, or would you suspect
that Eliezer's theory is full of beans?

> you're slipping in the unstated premise that
> it has a goal regarding itself.

No, it's not unstated at all; its goal is to solve problems, and having
your actions limited by rules made by a creature with a brain the size
of a flea is a problem.

>> remember what the "I" in AI stands for.

> I think Eliezer was right to start using a different term.

It's interesting: in fields where people really understand things, like
pure mathematics, they struggle to make the vocabulary as simple as
possible; some of the most important words are connected, complete,
continuous, point, set, group, open, closed, and even "simply". On the
other hand, in areas where the experts don't have a clue what the hell
they're talking about, like psychology and sociology, they go on and on
about the paradigm of subcapitalist deconstructive theory indicative of
predialectic socialism to denote a mythopoetical whole not excluding the
premise of posttextual desublimation states because it is not
demodernism, but subdemodernism interpolated into a Lacanist obscurity
that includes truth as a subset of reality.
 
Inventing a new three-dollar word to replace "intelligence" is not a
good sign that the mind of the writer is clear, and it is most certainly
unkind to the reader.

> You seem to indicate that you would treat
> an AI as if it were not conscious, if it
> didn't act as though it were. Is this the case?

Yes, certainly; in fact I wouldn't even call it an AI, I'd just call it
an A.

>more offspring is that which would be exhibited
>by an organism with general problem solving
>ability *and* self-interest as a very high goal.

Any creature without both will not get very far. "I'm an AI with no
self-interest. Hmm, if I do that I'll erase my memory and fry all my
circuits; well, there is nothing bad in that, so I'll do it."

John K Clark


