From: Durant Schoon (durant@ilm.com)
Date: Tue Jun 12 2001 - 17:15:01 MDT
> 1)Some coercive memes and exponential cults (as the terms seem to be used in
> this thread) are necessary to rational interaction and the development of
> transhuman intelligence. Are English grammar and vocabulary a coercive set
> of memes? The Académie Française certainly thinks so! Is algebra? It is
> remarkably successful at self-replicating, and many people who learn it at
> first would rather not, but come to enjoy it after being changed. The
> exponential cults started by Euclid, Gutenberg, Newton, Einstein, and Turing
> can be rather frightening to the uninitiated.
True.
> 2)If coercion and cult are redefined to keep a pejoratively evaluative
> connotation, it's much less clear they will flourish in a transhuman
> environment. Assume the first approximation of "coercion" as applied to
> information is that information is coercive when it alters an AI's behavior
> without being mediated by the AI's intelligent reflection. (Paradigm
> example: direct involuntary change in AI's own source code.) Intelligence
> just is life's defense to such coercion. (Between the stimulus and the
> response falls the shadow.... of thought). The more intelligence develops,
> the more defense the individual has. Compare an AI to your desktop PC,
> vulnerable to viruses it cannot intelligently examine. Coercion, from the
> environment or other intelligences, can never be eliminated, but a sphere of
> potential volitional activity expands as intelligence expands.
I see the argument that increased intelligence can protect against coercion.
However, if our competitors/peers/companions all outpace us intellectually,
are we not, relatively speaking, still just as vulnerable?
Will transhuman law develop in time to protect everyone? (Maybe during
Eli's wisdom tournaments, transhuman issues can be tackled *before* we
are ever faced with them.)
Perhaps you can tell us or recommend a good book on how law evolves today.
> 3) Most of the examples discussed involve coercion by trick, not direct,
> forceful coercion. But the kinds of tricks proposed become much more
> difficult in a transhumanly intelligent environment.
Yes. I am specifically interested in "coercion" where Friendliness reigns
supreme. I guess it's "coercion with consent". Admittedly, I do what my
credit card companies and the DMV want me to, without too much detriment
to my well-being. Our goals are contractually, and I suppose mutually,
aligned in some sense.
These relationships are not always mutually beneficial. I was listening
to the radio this morning about the debate over pay-day cash advances and
how people can get locked in a cycle of debt. Desperation can wreak
havoc on the life of a mostly-rational mind.
> First, I suggest the
> value structures of a person will in general become more stable as
> intelligence increases, so conversions become more difficult, especially
> conversions unrelated to the progress of scientific theories. (The string
> theory cult may prosper, but Astrology will have more and more difficulty in
> propagating as anything other than a set of literary symbols.)
I agree that our knowledge of the universe will probably keep increasing and
become (or remain) sufficiently stable, revolutionary paradigm shifts aside,
that self-corrective meme systems like science will thrive.
Value structures (which I interpret as a means of assigning value to various
aspects of our experience: books, memories, friends, billboards, manure
sandwiches) might or might not be stable. I don't want to calcify into stone.
I don't want to randomize into noise. I want to keep evolving into more life
with new and "better" vstructs.
If I had locked myself off before considering Friendliness, I think I'd be
missing out right now.
> Second, Bill
> may be able to use tainted information to induce Charlie to change vis value
> structure (isn't this what was meant by "volitional structure?") in ways
> unforeseen and unwanted by Charlie, but Dan and Ellen, seeing the results,
> will be harder to fool.
Yes, as Eli said, some will get eaten by memes. And there is hope yet if
Dan and Ellen can still influence Charlie and "rescue him"...but he might not
want to be "rescued"...
> (And again, as intelligence increases, so will
> general knowledge of the social space, so that all the Dans and Ellens are
> more likely to become aware of each transhuman equivalent of scams, chain
> letters and cults, that arise.)
The slower the transition to higher intelligence, the more time Dan
and Ellen have to adapt and learn. The faster the ascent, the less.
> 4) In general, meme talk, with its Darwinian overtones and selfish gene
> metaphors, becomes progressively attenuated as intelligence develops. The
> point of meme talk is to abstract from the mechanism by which ideas are
> evaluated and adopted. The point of AI is to understand, model and improve
> the mechanisms by which ideas are evaluated and adopted.
Ah, but if one can rewire the very criteria by which memes are selected and
transported (i.e. I can modify how I'll accept ideas), and if memes
themselves can lead to this rewiring, intelligence must quickly learn to
guard itself against epidemic loss. Whether it can do so quickly enough is
an important question. This has sort of been covered in the "When Subgoals
Attack" thread, only now it's not an attack, but runaway replication
IN THE DIRECTION of collective volition (e.g. Exponential Cults).
(Maybe these should be called Exponential Groups to avoid pejorative
connotations.)
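That runaway replication has a simple quantitative shape. Here's a toy sketch (my own illustration, not anything proposed in this thread; the population size and persuasiveness rate are invented numbers) of what an Exponential Group's adoption curve looks like:

```python
# Toy sketch: logistic adoption of a meme in a fixed population.
# "rate" is the meme's persuasiveness per step; pop and rate are
# made-up illustrative numbers, not data from anywhere.
def adopters_over_time(pop=1_000_000, seed=10, rate=1.5, steps=30):
    """Discrete logistic growth: each step, adopters recruit new
    hosts in proportion to contacts between adopters and the
    not-yet-adopted remainder of the population."""
    x = seed
    history = [x]
    for _ in range(steps):
        x = x + rate * x * (1 - x / pop)
        history.append(x)
    return history

curve = adopters_over_time()
```

Early on, each step multiplies the adopter count by roughly (1 + rate), so a meme with even a modest persuasive edge saturates the population within a few dozen steps, well before any slow-acting defense notices it.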
I disagree that meme talk will attenuate as AI develops. As AI "understands,
models and improves the mechanisms by which ideas are evaluated and adopted",
meta-memes about that understanding, modeling and improvement of these
ideas by observation will spread. These meta-memes will be generated
*in addition* to the ideas themselves (which, when transmitted to other
entities, qualify as memes also). As long as concepts are created,
transmitted and disappear, we can talk about memes. But maybe you're
saying something like: natural evolution (of, say, animals) is different
from rational design (genetic engineering of animals) or combinatorial
biology (changing each "bit" of a mouse gene to see what it produces).
Maybe you are saying that natural evolution of memes is significantly
different from rationally designed memes. That's fine by me. I'm willing
to update my overtones and connotations when talking memes with regard
to AI. The meme-meme is dead. Long live the meme-meme!
> 5) This is not to say that incidents of coercive behavior of all types will
> not persist as intelligence develops, just that the fear that coercive memes
> and cults will necessarily propagate like kudzu is overstated, and that
> intelligence generally provides defenses to coercion slightly more quickly
> than it provides new means to coerce.
Hmm, where do our memetic defenses come from? If they come partly from
the long time it takes ideas to propagate and be adopted, that will change.
If they come from evolutionary or acquired strategic defenses, then superior
attacks might render these useless. Can we entirely get rid of kudzu and
other weeds even now? (OK, maybe an SI could, but an SI could also
develop a much worse strain if ve so wanted.)
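Whether defenses arrive "slightly more quickly" than new means to coerce is exactly the threshold condition in epidemic models. A minimal sketch (my analogy, not something from the thread; all the rates are invented) of when a meme fizzles versus spreads like kudzu:

```python
def meme_outbreak(beta, gamma, pop=10_000, infected=1, steps=200):
    """Discrete SIR-style model. beta = infections per contact per
    step (how catchy the meme is); gamma = per-step rate at which
    infected hosts develop defenses and become immune.
    Returns the peak number of simultaneously infected hosts."""
    s, i, r = pop - infected, infected, 0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i / pop   # susceptible -> infected
        new_rec = gamma * i            # infected -> defended
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# Defenses outpace transmission (beta < gamma): the meme fizzles.
fizzle = meme_outbreak(beta=0.2, gamma=0.4)
# Transmission outpaces defenses (beta > gamma): a memetic plague.
plague = meme_outbreak(beta=0.4, gamma=0.2)
```

The ratio beta/gamma plays the role of R0: below 1 the meme dies out no matter how it is seeded, while above 1 it sweeps a large fraction of the population, which is the "memetic plague" worry in a nutshell.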
Maybe we won't all be infected by cult memes, or we will, but we'll call
them communities. Or perhaps I am concerned because the possibility of
memetic plagues is not that outlandish. With winner-take-all hypergrowth,
it doesn't seem like such a bad idea to consider the worst case (or implore
an SI to do so). I'll bet there's a good sci-fi book I just need to read :)
I must admit, though, that since I accept that an SI will likely outpace
the forces of corrosion and corruption, sentient individuals have a
good chance as well. Of course, once we are uploaded, will individualism
really be that highly regarded? Are we in for a spate of mergers and
acquisitions? Can we / should we resist? It will be a strange, strange
world...
With modifiable goal systems, I just want some assurance that we don't
all just end up in endless pleasure stimulus-response loops. It seems
like we'll have to be very careful in the way we phrase what we want,
because we might just get it. Maybe, like Odysseus, I'll be able to
tether myself to a timer and sail past the sirens of hedonism without
getting stuck...or maybe not <gulp>.
-- Durant Schoon
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT