From: Emil Gilliam (email@example.com)
Date: Sun Nov 25 2001 - 18:24:24 MST
>I think this was a perfectly valid question not only because I now
>know of at least two other people on this list that are aware of NLP,
>but also because I have had many thoughts on this topic and its
>relationship to a general AI as well.
The Skeptic's Dictionary has this entry:
>First of all, NLP is but one of many models of human psychology. We
>agree that anthropomorphizing a SAI/GAI doesn't make sense, but I am
>certain that we can at least make comparisons, and maybe through a
>scientific method increase our understanding of both human and
>non-human AI. I am also curious to know what the other human
>psychology models are and compare them to a new AI.
Calling it just another "model" out of many is disingenuous, similar
to the claims by apologists of "alternative medicine" and some of the
kookier varieties of academic multiculturalists who speak of many
"ways of knowing" -- as if all claims are equally valid.
>I am not an NLP trainer, but I know one, and I do know, or used to
>know, some basic techniques and how to apply them. Indeed, NLP is not
>a science in the way of formulating theorems and proofs, etc. NLP is
>a set of techniques that enable effective communication between
>humans. Neuro == brain, Linguistic == spoken language, and
>Programming usually involves well-known techniques such as
Aah, now you're getting into testable claims -- whether NLP's
"techniques" do the many fabulous things they're claimed to do. The
evidence is not particularly encouraging.
>Let's say that the path to superhuman AI lies in enhancing current human
>intelligence. The SAI will probably still have human feelings, but more
>"enhanced", or "faster".
Err ... an SAI would be "enhanced" because it was created by enhanced
humans? I thought the point of enhanced humans (back when it appeared
this was necessary -- Eliezer disagrees now, I'm rather ambivalent)
was to get to SAI (and hence the Singularity) faster. An SAI doesn't
automatically take on the properties of its creators; to claim
otherwise is an anthropomorphism.
> Would persuasion techniques become more difficult to
>implement? Would it be more difficult, even for a SAI, to program an
>agent surfing the web trying to learn from all published human
>knowledge, but it reads mostly spam and advertisements?
>What about an unfriendly AI that learns all about NLP and uses it to
>convince humans that it is friendly?
I think we don't have to worry about that.
>To answer Jordan's original question, I believe that NLP and any other
>human psychological model is very relevant to an AI model, although at
>a different level, or "chunk" in NLP-speak. If we send a stream of
>random data into an AI's processing cortex, effectively blinding or
>deafening or causing a similar deficit without ver knowledge, would we
>be able to predict its behaviour? Would we be able to fix it?
This class of problems is a perfectly valid area of study, but NLP
would be relevant only if it actually described how an AI works. It
hardly describes how a human works, so there's not much of interest
there.
>I think the crux of the matter is that for one sentient being to
>communicate with another, the messages and signals being transmitted
>will be richer and open to different interpretation at different
>levels of communication, depending on the intelligence of the beings.
>One of my favourite metaphors for God worship, which my father told
>me: "if there is a God and He's pointing towards something, would we
>be sucking on his finger?" Understanding how humans think at the
>higher level is important if we are to dumb-down our messages to a
>sub-human AI, and the AI needs to learn how to get smarter and
>dumb-down vis messages for us.
When you've designed an AI that uses NLP to read my body language
(one of the central themes of NLP), remind me to be exceedingly
careful. Wouldn't want to raise my hands in a gesture and have it
interpret this as "Blast Emil off into the vacuum of outer space."
- Emil Gilliam
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT