From: Simon McClenahan (firstname.lastname@example.org)
Date: Sun Nov 25 2001 - 16:31:14 MST
----- Original Message -----
From: "Eliezer S. Yudkowsky" <email@example.com>
> Jordan Dimov wrote:
> > Has anyone on this list put any thought into the relevance of the Neuro
> > Linguistic Programming model of human psychology to AI?
> NLP == junk science + clever marketing
So how do you really feel about NLP, Eliezer?
I think this was a perfectly valid question, not only because I am delighted that
I now know of at least two other people on this list who are aware of NLP, but
also because I have had many thoughts on this topic and its relationship to
general AI.
First of all, NLP is but one of many models of human psychology. We already know
that anthropomorphizing an SAI/GAI doesn't make sense, but I am certain that we
can at least make comparisons, and perhaps through the scientific method increase
our understanding of both human and non-human intelligence. I am also curious to
know what the other models of human psychology are, and how they compare to a new AI.
I am not an NLP trainer, but I know one, and I know, or used to know, some
basic techniques and how to apply them. Indeed, NLP is not a science in the
sense of formulating theorems and proofs. NLP is a set of techniques that
enable effective communication between humans: Neuro == brain, Linguistic ==
spoken language, and Programming usually involves well-known techniques such as
hypnosis. "Clever marketing" is absolutely correct, because if the marketing
were not clever, it would not be a very effective technique. The most well-known
and financially successful NLPer is of course Tony Robbins. Do a web search for
NLP and/or Tony Robbins and you will find plenty of ammunition both against NLP
and for it, especially concerning the ethics behind its use and practice.
Let's say that the path to superhuman AI lies in enhancing current human
intelligence. The SAI will probably still have human feelings, but more
"enhanced", or "faster". Would persuasion techniques become more difficult to
implement? Would it be more difficult, even for an SAI, to program an agent
that surfs the web trying to learn from all published human knowledge, when
what it mostly reads is spam and advertisements?
What about an unfriendly AI that learns all about NLP and uses it to convince
humans that it is friendly?
To answer Jordan's original question, I believe that NLP and any other human
psychological model is very relevant to an AI model, although at a higher level,
or "chunk" in NLP-speak. If we send a stream of random data into an AI's
processing cortex, effectively blinding or deafening or causing a hallucination
without vis knowledge, would we be able to predict its behaviour? Would we be
able to fix it?
I think the crux of the matter is that for one sentient being to communicate
with another, the messages and signals being transmitted will be richer, and
open to different interpretations at different levels of communication, in
proportion to the intelligence of the beings. One of my favourite metaphors for
God worship, which my father told me, is: "if there is a God and He's pointing
towards enlightenment, why would we be sucking on His finger?" Understanding
how humans think at the higher
level is important if we are to dumb-down our messages to a sub-human AI, and
the AI needs to learn how to get smarter and dumb-down vis messages for us.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT