From: mike99 (email@example.com)
Date: Mon Dec 15 2003 - 10:52:48 MST
The Laura program and its ilk are not AIs, but future versions of them will
be. And since, as Eliezer points out, "a human male is not designed as the
human female's ideal boyfriend, nor vice versa," we can expect the future AI
versions to target the emotional-sexual needs and desires of each sex--and
of specific individuals of each sex--with a high degree of accuracy that
continues to improve as the AI learns more about the targeted individual.
These AIs will simulate (or seem to become) the ideal
boyfriend/girlfriend/mate for each person. At first, this will only be in
the form of online chat and virtual reality encounters. (By the way, many
marriages have already been ruined by online adultery that leads to
real-life adultery.) Then, if the human has become sufficiently enamored, he
or she will want to have a full-time, 24/7 relationship with the loved one
in whatever medium that loved one resides. Since it's unlikely in the very
near future that AIs will become embodied (whether as robots or cyborgs),
the greater likelihood is that the human will seek to upload into the AI's
environment.
This is precisely what happens in the concluding, fictionalized portion of
Ray Kurzweil's book _The Age of Spiritual Machines_. Many SL4 folks probably
skipped this book because it is literally nothing new for us. But I think
Kurzweil's highly favorable depiction of human-AI romance is worth reading.
(You can skip all but this last portion of the book.) What happens to
Kurzweil's character may happen to many people: She becomes more and more
dependent upon, friendly with, and ultimately emotionally (and sexually)
bonded to her AI companion. As generations of computers rapidly grow in
power and fall in cost, she is able to spend more and more time with an
increasingly realistic AI. She divorces her husband. Eventually, she uploads
into her AI's environment.
While this AI is depicted as infinitely wise and knowledgeable, totally
satisfying (and indeed, awing) her, what remains less clear is how her son
reacts to his parents' divorce. He never speaks to the reader. His
emotionally besotted mother does all the talking, assuring the reader (who
is represented by Kurzweil himself) that all is well with everyone
concerned. I'm not so sure that would be the case.
Kurzweil does depict this type of AI as having accomplished the hard task
Eliezer described as "[making] people stronger instead of weaker." But
again, I am dubious about the likelihood of this being accomplished first
time round. It would be much easier to make people weaker and more
dependent; that's easy -- just give people everything they think they want
and don't require any hard thinking, tough choices, or real efforts from
them. In no time at all people who have been indulged in this way will
comprise a new "welfare class" of astonishing lethargy and
self-centeredness. They would soon be living in the "dream factories" that
Arthur C. Clarke forecast as early as his 1950s novella "The Lion of
Comarre."
A truly Friendly AI would not wish such a fate on its best friends.
"For any man to abdicate an interest in science is to walk with open eyes
towards slavery."
-- Jacob Bronowski
Extropy Institute: www.extropy.org
World Transhumanist Association: www.transhumanism.org
Alcor Life Extension Foundation: www.alcor.org
Society for Technical Communication: www.stc.org
> -----Original Message-----
> From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of
> Eliezer S. Yudkowsky
> Sent: Saturday, December 13, 2003 11:41 AM
> To: ExI chat list; firstname.lastname@example.org; SL4
> Subject: Affective computing: Candy bars for the soul
> Wired has recently run an article on "affective computing" (which, please
> note, is not even remotely related to FAI) about detecting and simulating
> emotions. The article is about a chatbot named Laura, designed to
> encourage its user to stick to an exercise program.
> Watch this space for further developments. This is an incredibly early
> form of the technology and I don't expect problems for at least a decade,
> but when it hits it will hit hard.
> This has nothing to do with AI; it's about programs with incredibly
> realistic graphics and means for recognizing emotions in their targets,
> being able to deploy apparent behaviors that act as superstimuli
> for human
> emotional responses. Think of chocolate chip cookies for emotions.
> ...As any evolutionary theorist knows, a human male is not designed
> as the human female's ideal boyfriend, nor vice versa.
> At the very least it would take far greater skill, wisdom, knowledge to
> craft a Laura that made people stronger instead of weaker. How many
> decades did it take to go from candy bars to health food bars? Which is
> cheaper? Which is more popular? And worst of all, which tastes better?
> I could be surprised, but what Laura presages is probably NOT a good
> thing. Transhumanism needs to lose the optimism about outcomes. Nobody
> is taking into account the fact that problems are hard and humans are
> stupid. Watch this space for serious developments in some unknown amount
> of time, my wild guess being a decade, and quite possibly nothing
> happening if "faking it well" turns out to be AI-complete.
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT