From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Thu Jan 01 2004 - 20:43:58 MST
Wei Dai wrote:
> On Thu, Jan 01, 2004 at 06:51:14PM -0500, Eliezer S. Yudkowsky wrote:
>> What does "better" mean when you say type A memes will do "better"?
> Better means having more copies of the meme in existence.
>> And why would they do better? Memetic fitness and evolutionary
>> fitness are not the same thing. Carrying meme A might contribute to
>> reproductive fitness, but it's not obvious to me why meme A would do
>> better memetically.
> You're being unexpectedly dense here. If meme A contributes to genetic
> fitness, and memes are mostly transmitted between relatives, then the
> increased genetic fitness implies more relatives for carriers of A and
> therefore also helps increase A's frequency in the meme pool.
I am not sure that this is a significant memetic fitness boost. In your
language, I expect that memes are mostly transmitted between non-relatives.
> On the other hand, if memes are mostly transmitted between
> non-relatives, then the increased genetic fitness does not help A's
> memetic fitness much. Instead, type B memes do better because their
> carriers spend fewer resources on childcare and more on propagating the
> memes in other ways.
This is not an either-or situation. Type A memes might have some tiny
amount of fitness from boosting the reproductive fitness of their holders,
while type B memes have some huge boost in fitness from converting their
holders into evangelists, thus causing type B to easily outcompete type A.
History suggests that this is what has happened. Barring the existence
of contraceptives, people do not need to be told to have lots of children,
nor to care for them - evolution is already screaming that into every
neuron; what can a mere philosophy do to improve on that? Realistically,
how much can a philosophy *boost* evolutionary fitness in an age before
contraception? And even an enormous gain in evolutionary fitness might not
amount to much as memes go. But if a mere
philosophy should divert lots of effort away from reproduction and towards
spreading the philosophy, why, that philosophy might spread to hundreds or
thousands of carriers. I think it implausible that memes should beat
evolution at its own game of increasing reproductive fitness.
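The growth-rate asymmetry above can be put in toy numbers. The following
sketch is mine, not from the discussion, and every parameter in it is an
invented illustrative assumption: suppose a type A meme spreads only through
carriers' extra children, while a type B meme spreads by converting
non-relatives through evangelism.

```python
# Toy model (all numbers are illustrative assumptions, not data):
# compare copies of a child-bearing meme (type A) against an
# evangelism meme (type B) after several generations.

def meme_copies(initial, growth_per_generation, generations):
    """Copies of a meme after compounding growth over some generations."""
    copies = initial
    for _ in range(generations):
        copies *= growth_per_generation
    return copies

# Type A: suppose carriers average 10% more surviving children who
# inherit the meme, so each ~20-year generation multiplies copies by 1.1.
type_a = meme_copies(100, 1.1, 5)   # five generations, roughly a century

# Type B: suppose each carrier converts one non-relative per generation
# in addition to keeping the meme, doubling copies each generation.
type_b = meme_copies(100, 2.0, 5)

print(round(type_a))  # 161
print(round(type_b))  # 3200
```

Under these made-up rates, a century of maximal reproductive advantage is
swamped by even modest evangelism, which is the shape of the argument above.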
And indeed the memes around us tend to be those that owe their origin to
spread by dedicated evangelists; there's no
obvious evidence that memes are pouring all their strength into convincing
people to have more children, but plenty of obvious evidence that memes
are being selected to convince people to evangelize the meme.
Your idea seems to me to be as follows: People are, in general, more
vulnerable to memes because in the ancestral environment there were lots
of memes around offering philosophical reasons to have children, and such
memes created a differential in reproductive fitness large enough to
register on the scale of memetic fitness, assuming that such memes were
faithfully transmitted to children. Is this an accurate
representation? It strikes me as really unlikely. It is so much more
difficult to raise a baby than to talk to someone for ten minutes.
>> Also, why hypothesize a gene that discriminates childcare promoting
>> *memes* as such and promotes greater susceptibility to them, rather
>> than a gene that makes people like children, and hence (as a side
>> effect) memes that tap into people's liking for children?
> There's no reason why these genes can't both exist, is there? I see
> plenty of memes that try to promote child rearing as an obligation and
> also memes that try to promote it as an enjoyment, which I count as
> evidence that both of these kinds of genes exist.
> If you think it's not plausible that a gene can discriminate between a
> type A meme and a type B meme and promote the former, consider the
> hypothesis that the earlier in life that you encounter a meme, the more
> likely it is to be a type A meme. If this is true (and I think it
> probably was in our environment of evolutionary adaptation, because in
> childhood your parents tend to keep you away from type B meme
> carriers), a gene can promote type A memes simply by making the brain
> credulous in childhood and then increasingly skeptical as one grows up.
This is an extremely exotic, multistep explanation (with each additional
step decreasing the conjunctive probability) for a well-known phenomenon
with many simpler explanations, such as a child being around entities that
generally care for its welfare, and an adult having to take part in tribal
politics and discriminate among new ideas. In fact, I believe I recall
reading about something of an arms race between parental information and a
child's credulity; initially parents have a motive to provide the child with
useful information, so the children evolve to be credulous, but then the
standard parent-child conflict of interest kicks in and parents start
telling their children to share evenly with their siblings, giving
children a reason to be skeptical, and so on... This is much more obvious
than the proposition that "children are likely to be around type A memes".
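The penalty on multistep explanations can be made concrete with toy
arithmetic (these probabilities are invented purely for illustration): a
hypothesis requiring several independent steps to all hold is penalized
multiplicatively, so even generous per-step odds leave a small conjunction.

```python
from math import prod

# Invented numbers for illustration: four conjunctive steps,
# each granted a generous 50% probability of holding.
step_probabilities = [0.5, 0.5, 0.5, 0.5]
conjunction = prod(step_probabilities)  # 0.5 ** 4 == 0.0625

# A rival single-step explanation at the same per-step odds.
simple_explanation = 0.5

print(conjunction)                       # 0.0625
print(conjunction < simple_explanation)  # True
```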
But most of my own skepticism here derives from the proposition that (a) a
philosophical urge to bear children increases reproductive fitness by such
a large amount (in the ancestral environment) that the increase in
reproductive fitness amounts to a competitive increase in memetic fitness
(when memes can also be transmitted by talking for ten minutes, instead of
raising a child), and (b) that this persisted reliably over ancestral
durations, giving rise to genes urging credulity to memes in general. Humans
sometimes do appear to be credulous but in contexts that suggest a quite
different cause, i.e., social sanctions for people who fail to believe in
the locally popular religion. There is no sign that "Believe in X,
because X implies you should have more children!" switches off people's
brains the way "God commands it!" does.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:43 MDT