Friday, January 29, 2010

The Write Stuff

By Reid Buckley
The American Conservative

Higher education has destroyed young Americans’ ability to express themselves on the page—or in their own minds.

One student shouted indignantly, “I thought this was a course in public speaking!” There were murmurs of assent. I explained that, all things being equal, one’s thoughts were best written out before they were spoken. But the 30 or so members of the class remained upset. They wished to “wing it.”

That is the essence of the contemporary zeitgeist, which preaches spontaneous efflorescence born of inspiration issuing from a well of authenticity and soaring on the exuberant wings of conceit. It is the philosophy of ejaculation and orgasm and no Catholic guilt. These young people had not been taught to edit. They had not been taught self-criticism. They had been reared in an environment of self-esteem, even when this went unexamined and was unearned. And when they returned a week later with the fruits of their labors, I was appalled. I took the papers home and spent two afternoons and two evenings past midnight editing them.

I had to contend with an illiterate heaping of multisyllabic social-studies mush whose meaning was either obscured or contradicted by other heapings of academic mush, as indecipherable as they were ungrammatical. Illicit inferences lurked under false premises like salamanders under rocks. Logical connections did not exist. Non sequiturs were thick as chiggers. Do not mention grace or style. Of the 28 papers I labored through, only in two did I detect talent buried in the rubble. I had never seen anything so hopeless.

When I handed my University of South Carolina students their edited work, several shot up their hands and demanded to see me after class, to which I readily agreed. I sat down with each of them in chambers behind the lecture hall and went over the papers sentence by sentence, paragraph by paragraph. This took a lot of time. I had scrawled in the margins, squeezing my comments between the typed lines of the text. I had tried to be charitable, but because of the limitations of space, I had to be blunt.

One fellow had nothing to say about the shoddiness of his work, except to ask me belligerently, “How much does the final speech weigh?” “Fifty percent,” I said, reminding him, “You are aware of that, it’s on the syllabus.” “Well, it’s unfair,” he protested hotly. “This could ruin my 4.0 average! You do that, and I’ll complain to the dean!” He stomped out, leaving me to marvel that anyone so deprived of the ability to express himself could fly such academic banners. 4.0!

When I proceeded to go over the essay of another young man, his voice caught in his throat and he broke down. I was taken aback. We hadn’t proceeded beyond the first page. His wasn’t the worst effort, either. But he wasn’t protesting my criticisms. To the contrary. “You’re right,” he kept repeating, tears flowing, “It’s awful. I can’t write my thoughts down. They come out a mess, I know!” And then he related a scandal. Not in four years of high school and three years of college had a single teacher expressed concern about his writing or offered to edit it. When he said this, other students spoke out to confirm cognate experiences. “What can I do now?” this young man asked me despairingly. “I graduate in two months!”

The dimensions of his doom and that of these other young people hit me with full force. Not once in their educational lives had they been taught to impose order on chaos, that being contrary to the central dogma of liberal-arts education in our country today. There is no such thing as choosing, as distinguishing between the false and the real, discriminating between good and bad. The cost of this heresy to our nation is beyond calculating: for two generations our businesses, professions, universities, and politics have been populated by moral illiterates who reject reason.

The art of writing is the soul of reason, from which all civilization has spun. If one cannot give expression to one’s thoughts, one is reduced to grunts. These young men and women were to be graduated in two months’ time. Yet they were functionally illiterate, as the saying goes—a hideous euphemism for being thrust into the adult world intellectually crippled. Several other students who crowded around me now claimed that never had they had their written work reviewed. I was incredulous. “Never?” “Not once!” came their reply. Two or three then claimed that in nearly four years of college they had never been required to write an essay. Examinations were multiple choice.

I had no answer for them. The laziness of the faculty disgusted me. Some of these students were studying to be teachers. My anger burned. It was not their fault that they were unable to think or write their way out of a paper bag. A whole generation was being defrauded. The final day of the course I advised my students that their parents should join in a class-action suit against the state’s Commission of Higher Education, and at the end of the second term, I resigned.

In the past 70 years, the American Dream has been reduced to owning one’s own home and other materialist satisfactions. No other dimension of human existence is allowed. That, of course, was never the American Dream. The American Dream was to be free. But one does not say these things in the Age of Obama, when government is no longer perceived as the handmaiden of tyranny. Paper money replaces gold, vice virtue. Sociology replaces merit, earmarks candor. Euphemistic language replaces plain speech with sentimentalized softening. Public figures do not lie; they misspeak. They do not cheat or transgress the law or do moral wrong; they make mistakes.

Communication suffers in this culture of moral and intellectual relativism, where standards, like the coin of the realm, are debased. One can be illiterate and graduate 4.0. Reality becomes virtual. Hard true thought—the primal condition of writing—which can be offensive, difficult, and unpopular, is rendered by academe in language of such bureaucratic opacity that, it is hoped, no one will be able to penetrate it, to discover that it is hollow, that Nero is wearing no clothes. Reality is euphemized, extenuated, attenuated, temporized, dishonored. One is not born to this; one is obliged to acquire the vice of fungible truth in our decadent society and our deeply corrupted educational system.

I do not exaggerate. Eugene Genovese, the grand onetime Marxist historian, has written a tender memoir of his recently deceased wife, Betsey Fox-Genovese, whom it was my privilege to know. In the course of his reminiscence, Professor Genovese remarks that it required graduate school to ruin his wife’s prose. She was 11 years younger than he and a budding Marxist scholar when he was already an established figure on the red-hot Left. He had been impressed by her college papers from Bryn Mawr, but when she went on to Harvard for advanced studies, her papers lost all charm, directness, and style. Academic bloviations took the place of the hard-hitting analytical energy that she had given evidence of as a younger woman and for which she would later become renowned as a polemicist. He ruminated:

I reminded myself that most graduate schools seemed dedicated to the transformation of the English language into gibberish. In place of clear, straightforward prose, budding geniuses in graduate seminars have to impress their professors with the profundity that only bad writing and vacuous ‘theorizing’ can communicate.

With her husband’s help, Betsey Fox-Genovese soon got out from under the baneful influence of academe. American scholars and professors of the liberal arts—along with sociologists, economists, and theorists of any discipline—may be the only class of intellectuals whose ordinary social chitchat is superior to their polished prose. They are capable of saying, “Will someone shut the damn door?” or “Who let the cat in?” But when they write for publication—that is, for the admiration of their peers—our intellectuals seem to strap on impenetrable dullness like chain mail.

A certain defensive posture explains the vice. It is difficult for us laymen to understand the degree to which academics are twerps, nerds, doofuses, and dweebs, not to mention moral cowards. Academics who are not protected by tenure are terrified of exposing themselves as the second-rate minds that most of them are, as sloppy, lazy, superficial, and mean-spirited pseudo-scholars to whom the discredited concept of truth is of less concern than what is politically de rigueur. So they rig their prose out in dense, nearly impenetrable syntax. Relative clauses become cherished long-lost cousins. Hairsplitting becomes more important than getting anywhere. Our academics become unable to shut the damn door or put out the cat or parse a sentence or respect the sequence of a syllogism.

They are afraid of putting on plain display their biases, the ordinariness of their minds and spirits, so they take cover in jargon. True, originality of mind is given to not one person in ten million. An Albert Einstein or a Stephen Hawking does not come along every other day. Not one person in several hundred thousand is even given a first-rate intellect. We must accept the humbling edict of fate and console ourselves: we are all genetically unique, and our experiences are almost always singular. It is virtually impossible for us to sieve any subject through our consciousness without endowing it with a special, even an original, slant. We should take confidence in this biological singularity and never betray it by worrying over whether the stockholders will like what we say, or fearing that our analysis will not please faculty lounges at Harvard or Chicago or Stanford, or fretting that our opinion will fail to find favor with the establishment, whatever it may be. We must be true to ourselves if we want to write.

Do you wish to wrest order out of chaos? I pray you have not attended college or taken classes at some writing school. Instead, go to work, travel, starve, meditate, fall hopelessly in love and have your heart broken. The deadening hand of academia, of corporate culture, of Beltway correctness destroys not only one’s native ability to discriminate but also one’s powers of expression.

Writing gives thinking shape. It suffers fools badly. It discerns design where none is apparent: the writer’s founding assumption is that order, right order, exists. To write is to develop a nose for posturing and an aversion to the false. It is to be in awe, to apprehend the structure of the universe in the loneliness of the human heart. Writing is a gift, which does not believe for a moment it is unearned, unless no merit can be ascribed to the submission.

Top Ten Darwin and Design Science News Stories for 2009

Access Research Network (ARN)

1. Intelligent Input Required for Life. In a significant peer-reviewed article in the September 2009 issue of IEEE Transactions on Systems, Man and Cybernetics, authors William A. Dembski and Robert J. Marks II use computer simulations and information theory to challenge the ability of Darwinian processes to create new functional genetic information. This paper is in many ways a validation of Dembski's core ideas in his 2001 book, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence, which argued that some intelligent input is required to produce novel complex and specified information.
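
The accounting at the heart of the paper can be sketched in a few lines: the endogenous information measures the difficulty of a blind search, the exogenous information measures what difficulty remains once assistance is added, and the active information is the difference, the amount the assistance itself must supply. A minimal sketch with illustrative probabilities only (the paper's own examples differ):

    import math

    def information_accounting(p_blind, p_assisted):
        """Dembski-Marks search measures (probabilities here are illustrative)."""
        endogenous = -math.log2(p_blind)      # difficulty of the unassisted search
        exogenous = -math.log2(p_assisted)    # difficulty remaining with assistance
        active = endogenous - exogenous       # information the assistance supplied
        return endogenous, exogenous, active

    # Toy numbers: a 1-in-2^40 target, and an assisted search that
    # succeeds half the time.
    i_omega, i_s, i_plus = information_accounting(2**-40, 0.5)
    print(f"endogenous {i_omega:.0f} bits, exogenous {i_s:.0f} bit, active {i_plus:.0f} bits")
    # -> endogenous 40 bits, exogenous 1 bit, active 39 bits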

2. Signature in the Cell. Stephen Meyer forcefully outlined the positive case for design and refuted arguments that ID isn’t science in his seminal book, Signature in the Cell, published by HarperCollins in June of this year. Dan Peterson, in a review of the book in the September 2009 issue of The American Spectator, says “Signature in the Cell is a defining work in the discussion of life's origins and the question of whether life is a product of unthinking matter or of an intelligent mind” and “this book is an engaging, eye-opening, and often eye-popping read.” In a series of university lectures and debates in the second half of the year, Meyer defended his thesis that the information content in DNA and the biological machinery that processes that information is positive evidence for intelligent design. A companion three-minute animated video, Journey Inside the Cell, was released, providing a stunning visual illustration of Meyer’s points.

3. The Collectivist Revolution in Biology. An essay by Mark Buchanan in the August issue of Nature Physics announced a break with "many of the presuppositions of traditional evolutionary thinking." He highlighted its message with these words: "A coming revolution may go so far as to unseat Darwinian evolution as the key explanatory process in biology." The essay is a contribution to cross-disciplinary thinking that starts with an awareness of collective phenomena in modern physics. Thinking has moved away from reductionism and is adopting a holistic interactionism. Buchanan sees a parallel between physics and biology. The tools of physics and engineering are already being used to understand interacting networks within biological systems. Why does this take us beyond Darwinism? Because the mechanisms of Darwinian evolution are inherently reductionistic, with individual life forms struggling for survival in competition with other individuals. Within Darwinian theory the environment acts as a filter, allowing the fit to live on. The emerging understanding of biological functions, such as horizontal gene transfer (HGT), moves away from individuals and towards breeding populations, and the environment becomes a driver of genetic change rather than a passive filter. The tree of life now looks like an unstructured bush. Darwinism can devise ways of framing HGT to fit its own mental models, but what it cannot easily do is adopt the holistic perspectives that are emerging everywhere. This is why an increasing number of scientists find a framework of design to be compelling. Design provides a coherent context for systems biology, for biomimetics, and for many other contemporary areas of research.

4. The Modern Synthesis Is Gone. In February of this year Eugene Koonin published a masterly analysis of the impact of genomics on evolutionary thinking (“Darwinian evolution in the light of genomics,” Nucleic Acids Research, 2009, 37(4), 1011-1034). Koonin notes that the 1959 Origin of Species centennial was “marked by the consolidation of the modern synthesis,” but subsequent years have witnessed great changes that have undermined its credibility. The modern synthesis was formulated in the 1930s and 1940s to draw together seemingly conflicting evidence from natural selection, population genetics, cytology, systematics, botany, morphology, ecology, and paleontology into one modern theory of neo-Darwinian evolution. Three distinct revolutions have occurred over the past half-century to bring down the modern synthesis: the molecular, the microbiological, and the genomic. Koonin tentatively identifies two candidates to fill the vacuum left by the discarded modern synthesis: the first emphasizes the role of chance; the second, the role of law. While many in the scientific community will continue to cling to the modern synthesis for years to come, it is significant that articles are now appearing in the peer-reviewed scientific literature declaring that the theory must be abandoned because it no longer fits the molecular, microbiological, and genomic data.

5. The Edge of Evolution Confirmed. Nature published an interesting paper in the September 24, 2009 issue that discusses severe limits on Darwinian evolution. The manuscript, from the laboratory of Joseph Thornton at the University of Oregon, is titled “An epistatic ratchet constrains the direction of glucocorticoid receptor evolution.” Although the work is interpreted by its authors within a standard Darwinian framework, it also confirms the primary thesis of Michael Behe’s recent book, The Edge of Evolution: The Search for the Limits of Darwinism, demonstrating the looming brick wall that confronts unguided evolution in at least one system. It points strongly to the conclusion that such walls are common throughout biology. In a series of blog exchanges, Behe successfully defended his position against Thornton.

6. Cambrian Explosion Continues to Challenge Materialistic Theories. A paper in the July 2009 issue of BioEssays admits the lack of a "materialistic basis" -- that is, a plausible materialistic explanation -- for the Cambrian explosion. As the article states: “Thus, elucidating the materialistic basis of the Cambrian explosion has become more elusive, not less, the more we know about the event itself, and cannot be explained away by coupling extinction of intermediates with long stretches of geologic time, despite the contrary claims of some modern neo-Darwinists.” The rest of the article focuses on explaining the overall loss of phyla and body plans since the Cambrian, rather than the gradual emergence of new body plans predicted by Darwin’s theory. The impact of this story was amplified by the June 2009 release of the Illustra Media documentary Darwin’s Dilemma: The Mystery of the Cambrian Fossil Record. The documentary used stunning computer-generated animations to bring the Cambrian creatures to life and clearly illustrated, for scientist and non-scientist alike, why the Cambrian explosion of life defies neo-Darwinian theory.

7. The Ida Hype and Bust. The launch of the Ida fossil lemur, alias Darwinius masillae, in May this year was unprecedented. It raised eyebrows even among media-savvy observers. While scientists have learned how to capture the interest of the media and promote their work, this particular indulgence was a shock, and it was quickly recognized as hype. No doubt the Ida scientists were hoping to capitalize on the Darwin Bicentennial with a sensational “missing link” media fanfare, but instead they created another black eye for Darwin, as evidenced by the article “Much Hype and Many Errors” by Richard Kay in the August issue of Science. Researchers may yet find that this superbly preserved fossil is within the range of variation observed in living and fossil lemurs and is better suited to provide evidence of stasis within the lemur basic type.

8. The Ardi Hype and Bust. The first fossils of the species Ardipithecus ramidus (“Ardi”) were unearthed in 1994 and first described in a series of papers in the journal Science in October 2009. The very poor condition of the ancient bones is one reason it took researchers 15 years to excavate and analyze them. The Science editors declared Ardi to be the "central character in the story of human evolution" and named the fossil the science breakthrough of 2009. Evidently the Science editors have not been reading any of the other published opinions on Ardi. Those articles reveal that Ardi is an “Irish stew” fossil that has undergone extensive reconstruction in order to become part of a PR campaign making bold claims of ancestral status to the human line, even though at base its qualities are very similar to those of previously known fossils.

9. Peppered Moth Oscillates Back to Gray. The “peppered moth” became famous after textbooks started using it as an iconic example of evolution. While the peppered moth is still employed in some current textbooks, debates have raged over whether moths really do rest on tree trunks where birds prey on them, whether birds are the main cause of changes in the relative proportion of dark and light moths in populations, and how much the colors really changed. All that aside, an article in the June 19 edition of the London Daily Telegraph reported a new chapter in this story, as moth populations are now reverting from black back to gray-white: "We have seen these moths making a big swing back to their original colour," said Richard Fox, project manager of Moths Count. "It has been happening for decades as air pollution is cleaned up and with the demise of heavy industry in the big cities.” So what does this mean for the origins debate? For an evolutionist, this becomes at best a case of oscillating selection, much like the oscillating beak sizes of the Galapagos finches, which grow slightly larger during a drought but revert to their original size when the drought is over.

10. Reverse Engineering Biological Designs. As technology continues to advance, an increasing number of scientists and engineers are realizing that the living world is a treasure trove packed full of engineering marvels. The agenda of biomimetics is to actively research the potential of applications inspired by biological designs found in humans, animals, and plants. The human body supplied one of these design ideas this year with the inner ear, or cochlea. The June 2009 issue of IEEE Journal of Solid-State Circuits reports that scientists at MIT intentionally modeled their radio-frequency (RF) device after the design of the cochlea. "The human ear is a very good spectrum analyzer," said Rahul Sarpeshkar, a professor at MIT who co-authored the paper. "We copied some of the tricks the ear does, and mapped those onto electronics." In order to detect electromagnetic waves instead of pressure waves, the MIT scientists used circuits in place of cilia. On the outside edge of the 1.5 mm by 3 mm chip are tiny squares, each corresponding to a different wavelength of radio wave. As they spiral into the center, the squares become larger and larger. The outer spiral detects the most energetic, highest-frequency (shortest-wavelength) waves, while the center circuits detect less energetic, lower-frequency waves. “The cochlea quickly gets the big picture of what’s going on in the sound spectrum,” said Sarpeshkar. “The more I started to look at the ear, the more I realized it’s like a super radio with 3,500 parallel channels.” Their “RF cochlea” may soon be part of the next generation of cell phones and other wireless devices. Other reverse-engineering marvels documented this year include a biomimetic tactile sensor whose dimensions match those of the human fingertip, as discussed in the January 30 and March 13 editions of Science, and the discovery that butterfly wings have scales that act as tiny solar collectors, which has led scientists in China and Japan to design a more efficient solar cell that could one day power homes and businesses (Science Daily).
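
The parallel-channel analysis the article describes, with many resonators each tuned to a progressively lower frequency toward the center of the spiral, is essentially a constant-Q band-pass filter bank. Here is a minimal sketch of that idea; the sample rate, channel count, and filter design are illustrative stand-ins, not the MIT group's circuit:

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 100_000.0                  # sample rate (Hz); an audio-range stand-in for RF
    n_channels = 16                 # a real design cascades many more stages

    # Exponentially spaced center frequencies, high to low, like the
    # spiral running from the chip's edge to its center.
    centers = 40_000.0 * (0.75 ** np.arange(n_channels))

    def bandpass(signal, f_center, q=4.0):
        """Second-order Butterworth band-pass around f_center, quality factor q."""
        bw = f_center / q
        low = (f_center - bw / 2) / (fs / 2)
        high = (f_center + bw / 2) / (fs / 2)
        b, a = butter(2, [low, high], btype="band")
        return lfilter(b, a, signal)

    # Two test tones; the channels nearest 30 kHz and 5 kHz should light up.
    t = np.arange(0, 0.02, 1 / fs)
    x = np.sin(2 * np.pi * 30_000 * t) + 0.5 * np.sin(2 * np.pi * 5_000 * t)

    for fc in centers:
        energy = np.sum(bandpass(x, fc) ** 2)
        print(f"{fc / 1000:7.2f} kHz channel energy: {energy:10.2f}")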

Honorable Mention
11. Early Large Galaxies Stun Cosmologists. Cosmology has a kind of Cambrian Explosion of its own to grapple with. Contrary to expectations, some of the earliest galaxies appear as large as current ones, if not larger. Astronomers using the Subaru telescope in Hawaii examined five galaxy clusters with ages estimated at 5 billion years after the Big Bang. Statements in a report on this study in the April 1, 2009 issue of Nature News make it sound revolutionary:

1) The findings could overturn existing models for the formation and development of galaxies that predict their slow and steady growth through mergers.

2) They calculated the mass of the biggest galaxy in each of the clusters and found, to their surprise, that the ancient galaxies were roughly as big as the biggest galaxies in equivalent clusters in today’s Universe.

3) The ancient galaxies should have been much smaller, at only a fifth of today’s mass, based on galaxy-formation models that predict slow, protracted growth.

4) “That was the reason for the surprise – that it disagrees so radically with what the predictions told us we should be seeing,” says Chris Collins of Liverpool John Moores University in Birkenhead, UK.

5) “We have a whole different story now about how galaxies form,” says Avishai Dekel of the Hebrew University in Israel, first author of the earlier paper.

6) For years, astronomers have relied on a hierarchical model of galaxy formation... the models predicted that, to reach the massive galaxy sizes seen today, galaxies would have to steal their stars through mergers – a slow process – rather than growing their own.

7) It’s not yet certain how much of a readjustment the hierarchical model will need if the observations hold up... But Collins says the underlying models of dark-matter mergers could have problems. “I think the problem could be more general than just needing a tweaking.”

12. Failed Assault on Irreducible Complexity. In August 2009 a paper entitled "The reducible complexity of a mitochondrial molecular machine" appeared online in the Proceedings of the National Academy of Sciences. The Darwinian guardians appear anxious to debunk irreducible complexity, one of the key scientific concepts for intelligent design. This was evidenced by the editor’s refusal to print a letter to the editor from Michael Behe exposing the basic problems with the article. Casey Luskin later posted a detailed response documenting how the claims made in the paper far surpassed the data, and how distinctions between such basic ideas as “reducible” versus “irreducible” and “Darwinian” versus “non-Darwinian” were essentially ignored.

13. Walking White Blood Cells. How do white blood cells – immune system ‘soldiers’ – get to the site of infection or injury? To do so, they must crawl swiftly along the lining of the blood vessel – gripping it tightly to avoid being swept away in the blood flow – all the while searching for temporary ‘road signs’ made of special adhesion molecules that let them know where to cross the blood vessel barrier so they can get to the damaged tissue. This amazing story was reported in the May 4, 2009 edition of Science Daily, based on a press release from the Weizmann Institute of Science in Rehovot, Israel (see Weizmann Wonder Wander). An animation can be found at Harvard BioVisions, under the media file labeled “Extravasation.” The press release went on to say that these legs don’t just walk; they act as probes as they press into the endothelial tissue lining the vessels. The force of the blood actually causes them to embed their little legs into the tissue as a way to sense the location of the damaged tissue and make their way to it. “The scientists believe that the tiny legs are trifunctional,” the article said, “used for gripping, moving and sensing distress signals from the damaged tissue.”

14. Cell Motors Work in Concert. If one molecular machine by itself is a design wonder, what about groups of them operating in concert? Recent papers and news articles claim that is what happens in living cells: molecular motors coordinate their efforts. The February 25, 2009 issue of Science Daily led off a story on this by saying, “Even within cells, the left hand knows what the right hand is doing.” Researchers at the University of Virginia said they “found that molecular motors operate in an amazingly coordinated manner” when “simple” algae named Chlamydomonas need to move using their flagella. This contradicts earlier models that pictured the motors competing with each other, as in a tug-of-war. “The new U.Va. study provides strong evidence that the motors are indeed working in coordination, all pulling in one direction, as if under command, or in the opposite direction – again, as if under strict instruction.”

15. Cells Use Cloud Computing. Cloud computing is the up-and-coming trend in information technology. It allows processes to run in parallel on multiple networked processors with more robustness, because other processors can pick up the slack if a major server fails. Scientists are finding that cells have been using this strategy all along. Science Daily (June 17, 2009) reported on work by biologists in Spain and Israel working with Carnegie Mellon University. “Gene regulatory networks in cell nuclei are similar to cloud computing networks, such as Google or Yahoo!, researchers report today in the online journal Molecular Systems Biology,” the article began. “The similarity is that each system keeps working despite the failure of individual components, whether they are master genes or computer processors.” Cells have master control genes that turn on other genes. Researchers had been puzzled by experiments in which de-activating these genes one at a time did not disrupt the cell. It turns out that parallel copies, called paralogs, are able to step in. Paralogs have more or less sequence similarity to the master genes; the more similar they are, the better they can fill in for the master gene. As the article explained, “if one of these genes is lost, other ‘parallel’ master genes with similar sequences, called paralogs, often can replace it by turning on the same set of genes.” Scientists estimate that 5 to 10 percent of genes are in this master-gene category.
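
The backup behavior being described, a surviving paralog standing in for a knocked-out master gene with effectiveness tracking its sequence similarity, amounts to a simple failover rule. A toy sketch follows; every gene name and similarity score is invented for illustration:

    # Toy failover model of paralog backup. All names and numbers are invented.
    paralogs = {"masterA": [("paralogA1", 0.85), ("paralogA2", 0.40)]}

    def activate(gene, knocked_out=frozenset()):
        """Return (active gene, effectiveness): the master if alive, else the best paralog."""
        if gene not in knocked_out:
            return gene, 1.0                    # master gene works at full effect
        for backup, similarity in sorted(paralogs.get(gene, []),
                                         key=lambda p: -p[1]):
            if backup not in knocked_out:
                return backup, similarity       # most similar survivor fills in
        return None, 0.0                        # no redundancy left

    print(activate("masterA"))                              # ('masterA', 1.0)
    print(activate("masterA", knocked_out={"masterA"}))     # ('paralogA1', 0.85)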

The HBR List: Breakthrough Ideas for 2010

Harvard Business Review

When the business community supports an idea, change can happen fast. HBR’s annual ideas collection, compiled in cooperation with the World Economic Forum, offers 10 fresh solutions we believe would make the world better. Ranging from productivity boosting to nation building, from health care to hacking, any of the ideas presented in the following pages could go far with broad-based buy-in. Which ones will you get behind?

1: What Really Motivates Workers

by Teresa M. Amabile and Steven J. Kramer

Understanding the power of progress.

The Problem.

Ask leaders what they think makes employees enthusiastic about work, and they’ll tell you in no uncertain terms. In a recent survey we invited more than 600 managers from dozens of companies to rank the impact on employee motivation and emotions of five workplace factors commonly considered significant: recognition, incentives, interpersonal support, support for making progress, and clear goals. “Recognition for good work (either public or private)” came out number one.

Unfortunately, those managers are wrong.

Having just completed a multiyear study tracking the day-to-day activities, emotions, and motivation levels of hundreds of knowledge workers in a wide variety of settings, we now know what the top motivator of performance is—and, amazingly, it’s the factor those survey participants ranked dead last. It’s progress. On days when workers have the sense they’re making headway in their jobs, or when they receive support that helps them overcome obstacles, their emotions are most positive and their drive to succeed is at its peak. On days when they feel they are spinning their wheels or encountering roadblocks to meaningful accomplishment, their moods and motivation are lowest.

This was apparent in vivid detail in the diaries we asked these knowledge workers to e-mail us every day. In one end-of-day entry, an information systems professional rejoiced that she’d finally figured out why something hadn’t been working correctly. “I felt relieved and happy because this was a minor milestone for me,” she wrote, adding that her efforts to enhance a specific version of software were now “90% complete.” A close analysis of nearly 12,000 diary entries, together with the writers’ daily ratings of their motivation and emotions, shows that making progress in one’s work—even incremental progress—is more frequently associated with positive emotions and high motivation than any other workday event. For example, it was noted on 76% of people’s best days, when their reported moods were most buoyant, and on only 25% of their worst. (The exhibit “What Happens on a Great Workday?” shows how progress compared with the other four most frequently reported positive events.)

The Breakthrough Idea.

As a manager of people, you should regard this as very good news: The key to motivation turns out to be largely within your control. What’s more, it doesn’t depend on elaborate incentive systems. (In fact, the people in our study rarely mentioned incentives in their diaries.) Managers have powerful influence over events that facilitate or undermine progress. They can provide meaningful goals, resources, and encouragement, and they can protect their people from irrelevant demands. Or they can fail to do so.

This brings us to perhaps the strongest advice we offer from this study: Scrupulously avoid impeding progress by changing goals autocratically, being indecisive, or holding up resources. Negative events generally have a greater effect on people’s emotions, perceptions, and motivation than positive ones, and nothing is more demotivating than a setback—the most prominent type of event on knowledge workers’ worst days.

The Promise.

You can proactively create both the perception and the reality of progress. If you are a high-ranking manager, take great care to clarify overall goals, ensure that people’s efforts are properly supported, and refrain from exerting time pressure so intense that minor glitches are perceived as crises rather than learning opportunities. Cultivate a culture of helpfulness. While you’re at it, you can facilitate progress in a more direct way: Roll up your sleeves and pitch in. Of course, all these efforts will not only keep people working with gusto but also get the job done faster.

As for recognition, the diaries revealed that it does indeed motivate workers and lift their moods. So managers should celebrate progress, even the incremental sort. But there will be nothing to recognize if people aren’t genuinely moving forward—and as a practical matter, recognition can’t happen every day. You can, however, see that progress happens every day.

Teresa M. Amabile is the Edsel Bryant Ford Professor of Business Administration at Harvard Business School. Steven J. Kramer is an independent researcher and writer based in Wayland, Massachusetts.

2: The Technology That Can Revolutionize Health Care

by Ronald Dixon

Hint: It’s not high cost or high tech.

The Problem.

When people talk about technological breakthroughs in medicine, they’re usually referring to high-tech (and high-cost) innovations such as next-generation MRI machines or surgical robots. Oddly, they rarely talk about technologies that could improve the most critical factor in the quality of health care: the patient’s relationship with the provider.

Having graduated from medical school in 1998, I was well accustomed to the advantages of e-mail. Once I became an attending physician, I thought it would be a valuable way to communicate with patients. I sent messages to people who were suffering from hypertension, for example, asking that they give me readings from their home blood-pressure monitoring, and then adjusted their prescriptions accordingly. Quickly, however, a hospital administrator intervened: Because such contact wasn’t a form of care that we could charge for, he directed me to stop.

What’s more, no information gleaned through such communication could enter a patient’s medical record. So, for example, someone reviewing the formal record of a patient of mine named Don, to whom I provided palliative care after chemotherapy failed to keep his cancer from metastasizing, would conclude that Don and I had had no contact in the last four months of his life—although we had closely connected through Skype consultations every two weeks.

The rules about payment and paperwork aren’t without justification; they reflect the belief that in-person visits are essential to care. Certainly it’s true that physical examinations can’t be conducted by videoconference. But consider that a large proportion of patients’ office visits are for follow-up on lab test results and ongoing management of chronic diseases. For those purposes virtual interaction can support more solicitous care, not less—and lead to better outcomes.

Consider Agnes, a 75-year-old woman who suffers from congestive heart failure. In one period of just four months she was hospitalized six times—two of them within four days of having been discharged. After she left the hospital the sixth time, I tried something different: A few days later I picked up the phone and checked in with her, asking about warning signs and even having her step on the scale. Either a nurse or I called Agnes every four days thereafter. Six months later: still no trip to the hospital. What was the revolutionary technology? The telephone.

The Breakthrough Idea.

Now imagine that instead of simply having a phone conversation, we could remotely monitor patients using a kiosk like the one some colleagues and I are currently alpha testing. (We envision such kiosks in assisted-living facilities and other multiuser locations.) If reliable data on blood pressure, pulse rate, and so forth could be captured and beamed to the physician, some fragile individuals would be saved the necessity of making trips to the doctor’s office. And physicians would have many more readings, meaning more chances to discern patterns and detect anomalies in time to act.
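
To make the idea concrete, here is a minimal sketch of the kind of reading such a kiosk might capture and forward, with a trivial out-of-range check standing in for the physician's pattern analysis. The fields and thresholds are invented for illustration, not part of the author's system:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class VitalsReading:
        """One kiosk measurement forwarded to the physician's system (illustrative)."""
        patient_id: str
        systolic_bp: int      # mm Hg
        diastolic_bp: int     # mm Hg
        pulse_bpm: int
        taken_at: datetime

    def needs_review(r: VitalsReading) -> bool:
        """Trivial stand-in for anomaly detection: flag out-of-range vitals."""
        return (r.systolic_bp >= 180 or r.diastolic_bp >= 110
                or not 40 <= r.pulse_bpm <= 120)

    reading = VitalsReading("agnes-001", 192, 104, 88,
                            datetime.now(timezone.utc))
    if needs_review(reading):
        print(f"flag {reading.patient_id} for physician review")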

To determine whether a more virtual practice would be feasible and acceptable to both patients and physicians, we recently conducted two studies, one focused on videoconferencing and the other on templated e-mail “visits” that would enter medical records. Both studies subjected real patients and their accustomed practitioners to these encounters. Both produced high levels of satisfaction, proving that people are ready for innovation—even eager for it. We just need to change the rules and overcome the lingering cultural barriers that currently prevent it.

The Promise.

When the day comes that physicians and patients readily engage in all three types of virtual interaction—asynchronous (such as e-mail), synchronous but remote (videoconferencing), and device-intermediated (kiosk collection of vital signs)—up to three-fifths of today’s office visits can be eliminated. Much of the time saved can be used to provide quality health care for additional patients—a crucial efficiency in light of the current shortage of primary-care physicians.

There is no question that we must improve how we deliver health care in the United States. Technology will never substitute for the relationship between physicians and patients, but its thoughtful and practical use will make that relationship richer and more collaborative. If we build increasingly interactive platforms to enrich feedback at each stage of diagnosis and care, patients and physicians can make better health decisions together.

Ronald Dixon, MD, MA, is the director of the Virtual Practice Project at Massachusetts General Hospital in Boston and an instructor of medicine at Harvard Medical School.

3: What the Financial Sector Should Borrow

by Lawrence M. Candell

A military approach to keeping the economy safe.

The Problem.

What can you do in a democracy when you rely on the private sector for solutions in an area of critical national importance?

In the case of U.S. national defense, the government funds nonprofit research centers like the one I work for—MIT Lincoln Laboratory, in Lexington, Massachusetts. These centers are stocked with technology experts and innovation capabilities as good as those to be found in industry, but they answer only to the public, not to profit-seeking shareholders. Seated across the table from large defense contractors, federally funded R&D centers function as independent honest brokers: They allow innovative defense contractors to thrive, as the system needs them to do, but they also check the contractors’ occasional temptation to do what is most profitable rather than what is really needed. These centers clearly perform a valuable function, and the cost of funding their expertise is less than 1% of the Defense Department’s budget.

As the financial crisis erupted in late 2008, some of my colleagues and I couldn’t help wondering why there was no similar setup in the financial industry. Here was another area of extreme national importance. Why didn’t the wizards who were devising new financial instruments in the private sector have a mirror image operating on behalf of the public?

Of course, there are many public-spirited experts developing and studying next-generation financial solutions at our universities—just as there are many professors working on military technology. But that academic contingent can’t be expected to protect the public interest. With an annual budget of $650 billion, the Department of Defense engages with industry on a scale far beyond what a collection of scholars could ever address. To decide how best to invest its resources, the department needs understanding and guidance at the systems level. The same would seem to be true for a financial industry that annually moves trillions of dollars.

The Breakthrough Idea.

Let’s federally fund an R&D center that, borrowing the best practices of defense research centers, could design, analyze, prototype, and troubleshoot financial innovations, making sure they promoted our economic security and prosperity. That would mean hiring top talent from a wide range of disciplines in both academia and industry in order to provide a high-level systems perspective on problems. It would mean having the resources to build rapid prototypes and surrogate test beds—not anywhere near industrial production capacity but sufficient to inform a full-scale approach. And it would mean transferring lessons learned to the industrial base, in part by establishing requirements for novel financial instruments and providing guidance to financial regulators.

The sheer complexity of the financial system may lead some to believe that such a center could never attain what our national-security-focused labs strive for: a deep understanding of the threats we face, and the know-how to design systems that respond to them. Unlike defense procurement, the skeptics will say, this problem area has too many degrees of freedom and too few immutable laws and principles to guide us to useful solutions.

The Promise.

There is little risk in trying. The center would have the greatest possible impact if its efforts were focused at the right level. They should complement rather than duplicate what researchers in academia are already doing, and should therefore target large-scale problems and aim to provide insights that would shape national policy.

If the center provided any answers that helped prevent another financial crisis of the magnitude of 2008’s, the expense would be more than justified.

Lawrence M. Candell is the assistant head of the aerospace division at MIT Lincoln Laboratory in Lexington, Massachusetts.

4: Getting the Drugs We Need

by Eric Bonabeau, Alpheus Bingham, and Aaron Schacht

Simple standards would spur innovation.

The Problem.

In an era of imploding business models, big pharma’s may be the next to go. But society’s need for new drug therapies, and companies’ interest in seeing a return on their investments, can still be served if we rally support for a simple change.

That we need a new model of some kind is beyond doubt. Major pharmaceutical and biotechnology companies know that with each passing year it takes more money and time to develop a new drug, and the number of costly Phase III and postlaunch failures is increasing. Meanwhile, generics are coming to market earlier and more aggressively. The blockbuster model, the salvation of big pharma in years past, is further threatened by continuing advances in genomics and the discovery of drug response markers. This is leading to a rise in tailored drugs aimed at smaller markets. The combined effect of these forces will make the economics of being a large, fully integrated pharmaceutical company increasingly untenable. Already, innovative drugs are routinely coming from smaller players.

How, then, should a traditional drug company change its business model? In the short term, it can focus on becoming the scaling partner of choice—using its infrastructure to support smaller innovators in conducting clinical trials, manufacturing efficiently, and marketing and selling globally. That will remain a viable model as long as one-size-fits-all blockbuster drugs continue to offer value, however limited, to payers and patients. In time, however, even that oasis will dry up, and a reinvention will be necessary. The survivors will be companies that have moved beyond their fully integrated past and established orchestrated drug-development networks.

Such networks have already been seeded; large companies have been acquiring or licensing pipeline innovations for years. But their imperatives to grow through innovation require them to form many more external partnerships, and faster. The problem is that despite advances in information and communications technologies, the coordination and transaction costs of operating a network model remain high.

The Breakthrough Idea.

One change would make a substantial difference: the creation of agreed-upon standards for digitally representing drug assets. The challenge is that every company has its own idiosyncratic (and therefore redundant) means of collecting, storing, and exploiting information from development trials, making it difficult to share the hundreds of gigabytes of documents and images among partners. Not only does this throw up barriers to collaboration, but it makes market transactions highly inefficient. It is not uncommon for the seller of an asset to have to set up as many data rooms as there are potential acquirers or licensees, since each one requires its own format. If a common standard for drug asset representation existed, it would speed up transactions, reduce coordination costs, and promote better decision making across networks by providing what the military calls a “common operating picture.”
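
At its simplest, such a standard is just a shared, machine-readable envelope for an asset's core facts, so that any partner's tools can read any other partner's data room. The sketch below is a hypothetical illustration of the idea; the field names are invented, not a proposed industry schema:

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class DrugAsset:
        """Hypothetical common format for describing a drug asset to partners."""
        asset_id: str
        indication: str
        development_phase: str                 # e.g. "preclinical", "phase_2"
        mechanism_of_action: str
        trial_documents: list = field(default_factory=list)  # URIs to shared files

    asset = DrugAsset(
        asset_id="XYZ-001",
        indication="type 2 diabetes",
        development_phase="phase_2",
        mechanism_of_action="GLP-1 receptor agonist",
        trial_documents=["dataroom://xyz-001/phase2-summary.pdf"],
    )

    # One serialization that every partner's tools can parse -- the
    # "common operating picture" the authors describe.
    print(json.dumps(asdict(asset), indent=2))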

The Promise.

Let’s play out the probable consequences of taking this relatively simple step. If we created a standard that was accepted across the industry, the drop in transaction costs would enable the largest players to share risk and monetize their undeveloped assets. Foundations or even patient groups could have drugs developed that targeted markets too small for the big players. Venture capital firms would be able to develop assets that were languishing at failed portfolio companies. Smaller pharmaceutical companies, government labs, and academic institutions would be able to broadcast the availability of their assets to a wide audience and find eager developers and partners. In short, tremendous innovation in drug development would be unleashed.

The emergence of fluid drug development networks would change how the value of intellectual property was captured. As information flowed more freely among partners, we would need clear mechanisms for assigning credit. The regulatory framework would have to adapt to innovative uses of information. It’s easy to imagine the emergence of a new type of business dedicated to orchestrating development activities.

The end of the fully integrated pharmaceutical company needn’t be the death of today’s big pharma and biotech companies. They can continue to play a central role by embracing a network model and, with their depth of expertise, acting as its orchestrators. But whether they choose to or not, their world will move on.

Eric Bonabeau is the founder and chairman of Icosystem and a cofounder of Hive Pharma. Alpheus Bingham is a cofounder of InnoCentive and Hive Pharma. Aaron Schacht is the COO of Global External R&D at Eli Lilly.

5: A Market Solution for Achieving “Green”

by Jack D. Hidary

Financing that encourages building retrofits.

The Problem.

It’s easy to get excited about the promise of clean technology—especially new high-efficiency and solar devices that can significantly reduce the energy use of existing homes and commercial buildings. But the retrofitting challenge we face is immense, and if we hope to see major progress, we must help home and building owners overcome the barrier of up-front costs.

Few of today’s owners have the necessary capital on hand, or can tie it up until the break-even point is reached and payback begins. In theory they could tap into lines of credit and home equity to pay for clean tech, but in practice they are reluctant or unable to do so. Institutional investors, meanwhile, have the capital and the appetite for the sure and steady returns of clean-tech installations; but they are set up to write large checks, not to finance disaggregated, small-scale projects. And, as smart investors, they are leery of opportunities where borrowers can default but improvements can’t be undone and funds recouped.

Already we are at the point—thanks to falling prices from large-scale production in China and other manufacturing hubs, and thanks to government rebates—where some clean-tech retrofits achieve cash payback in less than three years. But unless we can provide the necessary assurance to investors and tap into private capital markets, the improved economics of clean technology won’t make enough difference.

The Breakthrough Idea.

Enter PACE (Property Assessed Clean Energy) bonds, which are just being introduced in 15 states across the country. PACE bonds are debt instruments issued by a municipality and backed by property-tax liens on buildings whose owners take PACE loans from the bond pool. Here’s an example: Suppose a commercial building in Annapolis, Maryland, has utility costs of $20,000 a month, which include electricity and natural gas. The building owner, Annapolis Management, has done an energy audit and concluded that a $300,000 investment in energy efficiency (retrofitting windows, lighting, and HVAC) would bring monthly utilities down to $13,000.

Annapolis Management takes a $300,000 loan from the city’s PACE program and retrofits the building. The owner repays the loan over 20 years through an increase in the building’s annual property taxes equal to one-twentieth of the loan amount plus interest on the outstanding balance. In this example, assuming an 8% interest rate, the additional taxes start at about $3,250 a month and decline to roughly $1,350 a month in the loan’s final year. Because even the largest payment is markedly less than the utility cost savings of $7,000 a month, the owner is cash-flow positive from day one after the retrofit.
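
A few lines of arithmetic make the schedule concrete. This is a sketch of the straight-line repayment described above (equal principal installments plus interest on the declining balance), not the precise terms of any actual PACE program:

    # Straight-line PACE repayment: 1/20 of the principal each year, plus 8%
    # interest on the balance still outstanding at the start of that year.
    principal, years, rate = 300_000, 20, 0.08

    for year in (1, 10, 20):
        balance = principal * (1 - (year - 1) / years)   # owed at start of the year
        annual_payment = principal / years + balance * rate
        print(f"year {year:2d}: ${annual_payment / 12:,.0f} per month")
    # year  1: $3,250 per month
    # year 10: $2,350 per month
    # year 20: $1,350 per month

Every payment stays well under the $7,000 in monthly utility savings, which is what keeps the owner cash-flow positive throughout the life of the loan.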

The Promise.

Let’s examine PACE bonds from the perspective of the city. The municipality issues the bonds, which are bought by institutional investors. Investors are drawn to bonds backed by property taxes, because they have very low default rates. The obligation to pay them survives foreclosure, so even if a property owner defaults on a mortgage, the new owner who buys the building at a bank fire sale must immediately bring the tax payments up to date.

PACE bonds are also very attractive to political leaders. As opt-in solutions, they raise taxes only for the property owners who choose to take loans; other constituents’ pocketbooks are unaffected. Furthermore, retrofitting projects financed by PACE bonds create employment for construction and installation workers, potentially amounting to hundreds of thousands if not millions of jobs as this idea spreads across the country. What politician would not want to lay claim to a program that increased property values, lowered monthly utility costs, and created jobs?

Jack D. Hidary, who is based in New York, is the chairman of PrimaryInsight.com and serves on the national steering committee of PACENow.org.

6: A Faster Path from Lab to Market

by Robert E. Litan and Lesa Mitchell

Removing the technology licensing obstacle.

The Problem.

University-based innovators routinely produce breakthrough technologies that, if commercialized by industry, have the power to sustain economic growth. Because their research is largely funded by the U.S. government (much of whose $150-billion-plus R&D budget is channeled through universities), it is all the more imperative that these innovations find their way to the marketplace and generate benefits for society. But our system today is suboptimal: Many university-developed innovations could reach the marketplace much faster than they do now. The problem, ironically, centers on the very entities designed to facilitate commercialization.

Nearly 30 years ago Congress provided a huge incentive for universities to pursue more commercialization of federally funded innovations. Through the Bayh-Dole Act, it granted them the rights to the intellectual property. That carrot got immediate results: Virtually every U.S. research university created a technology licensing office (TLO) to organize its commercialization activities and increase revenues from them. These centralized offices require that faculty members disclose their inventions to the TLO and pursue licensing opportunities through it.

Yet like the student who could earn A’s but consistently takes home B’s, TLOs are underperforming. For example, although funding from the National Institutes of Health has mounted over the years (and is now some $30 billion), the output in terms of new FDA-approved drugs has been falling. As the Department of Energy prepares to spend tens of billions of dollars on R&D to replace dirty fossil fuels with alternative sources of energy, it is critical that the disappointing pattern in drug commercialization not be repeated in clean tech.

Perhaps it was not a bad idea at first for universities to centralize their commercialization capabilities and give TLOs control of the process; they gained immediate organizational benefits and economies of scale. But this monopolistic model has since evolved into a major impediment. Inventive faculty members are hostage to their TLO, regardless of its efficiency or contacts. Moreover, because many TLOs are short-staffed, professors must queue up to get proper attention for their inventions.

The Breakthrough Idea.

So why not free up the market in technology licensing? Let’s allow any inventor-professor to choose his or her licensing agent—university-affiliated or not—just as anyone in business can now choose his or her own lawyer. This would be as simple as having the Commerce Department amend the rules of Bayh-Dole. (Maybe the Small Business Administration would have to revise its rules as well.) Specifically, federal research dollars should come with a condition attached: University recipients must allow faculty members to choose their licensing agents.

The Promise.

A free and competitive market in technology licensing would disturb neither the legal status of the invention nor the way royalties or license fees are divided between faculty member and university (a subject governed by the standard employment contract). But like other free markets, it would dramatically speed up the commercialization of new technologies, and ultimate consumers—in the U.S. and around the world—would thereby benefit from them much more rapidly. A free market would also most likely lead university TLOs to specialize or turn to outside agents with the appropriate expertise. A university might drop its TLO altogether but continue to earn licensing revenues—less the fees charged by outside TLOs or agents.

Let’s stop penalizing professors who come up with new ideas and the universities they work for. Most important, let’s not keep the world waiting for new products and services—some of them lifesaving—while valuable ideas languish on university shelves.

Robert E. Litan is the vice president for research and policy, and Lesa Mitchell is the vice president for advancing innovation, at the Kauffman Foundation in Kansas City, Missouri.

7: Hacking Work

by Bill Jensen and Josh Klein

Learn to love the rule breakers.

The Problem.

When a 12-year-old can gather information faster, process it more efficiently, reference more diverse professionals, and get volunteer guidance from better sources than you can at work, how can you pretend to be competitive? When the personal tools in your mobile phone are more empowering than what your company provides or approves for your projects, how can you be saved from devastating market forces? You can’t.

The tools we use in life have leapfrogged over the ones we use at work. Business’s lingering love of bureaucracy, process, and legacy technology has fallen completely out of sync with what people need to do their best.

The Breakthrough Idea.

So what can you do? Hack work, and embrace the others in your midst who care enough to do so. Hackers work around the prescribed ways of doing things to achieve their goals. The benevolent among them do this rule bending for the good of all. And once frontline performers and middle managers try hacking work—and discover they’ve increased their output by a factor of 20—they never go back.

Richard Saunders (not his real name) is a benevolent hacker. He works for one of those banks that did its job so well in 2008 that we landed in the worst financial hole we’ve seen since the Great Depression. As the crisis unfolded, the bank’s senior executives cried out, “Reports! Our kingdom for more reports!” The problem was that what they really wanted—useful, insightful analysis—couldn’t easily be produced with the software provided by corporate IT.

Poor Richard. What to do? Work 29 hours a day, 10 days a week, to manually create those reports and the much-needed analysis? No way. He hacked the system. He softened up a vendor, got a password, tapped into the database, and began creating never-before-possible reports for the C-suite.

Would the bank’s auditors and IT security guys freak out if they knew that Richard had hacked their system? Uh, yes. But since then, Richard has become incredibly productive and is now a go-to guy companywide. He’s a hero to all those senior execs who wanted more than data dumps. If only they knew the full story. Says Richard, “As a result of this hack, I keep senior management off our backs, so we’re able to continue doing more for our clients with less.”

He’s not alone in believing that he has to take matters into his own hands in order to get the job done and achieve better results for the organization. Many in the workforce are coming to the same conclusion. The illusion of corporate control is being shattered in the name of increased personal productivity.

The Promise.

This kind of work-around isn’t new—your company has been hacked from the inside for ages. What is new is that the cheat codes are becoming public, and there’s nothing you can do about that. Bloggers are telling your employees how to bypass procedures. Forums give tutorials on how to hack your software security. Entrepreneurs are building apps to help your employees run their own tools and processes instead of yours.

There’s only one successful strategy for a hacked world: If you can’t beat ’em, join ’em. Change the debate within your company to leverage what your hacker employees know. We’re seeing managers in enormous corporations such as Google, Nokia, and Best Buy embrace things that benevolent hackers would pursue with or without them: greater worker control over tools and procedures, increased transparency, and meritocracy. As even senior management begins to feel the pain of outdated tools and structures that refuse to budge, what was once shunned as bad is now the new good.

Bill Jensen is the president and CEO of the Jensen Group, a change-consulting firm in Morristown, New Jersey. Josh Klein is a New York–based hacker and a consultant on security and workplace effectiveness. The two are collaborating on a forthcoming book, Hacking Work (Portfolio).

8: Spotting Bubbles on the Rise

by Sendhil Mullainathan

We have the tools to sound the alarm early.

The Problem.

Will Rogers had sage advice on investing: “Buy some good stock and hold it till it goes up, then sell it. If it don’t go up, don’t buy it.” The guidance we get today regarding economic bubbles is just about as helpful: If it bursts, it was a bubble. That kind of postmortem analysis is useful to historians, but it does nothing to limit the collateral damage caused by, for example, a sudden collapse in housing prices.

An early warning system would be more valuable. For one thing, it would change the way that regulators go about securing the safety and soundness of financial institutions. To ensure that a financial institution is sound, regulators must discount the value of its assets for their riskiness. Under the current Basel regulatory framework, the discount is determined by looking at market pricing of risk. This has disastrous consequences during a bubble, when almost by definition, the market is underpricing significant downside risk. A financial institution holding $50 million worth of mortgage-backed securities in its trading book in January 2007 was facing far more risk—and was less sound—than the market price suggested. If we had a reliable metric for pronouncing an asset class to be in a bubble, regulators could dampen the risk. They could more aggressively discount asset values and analyze an entire balance sheet’s exposure to the threatening burst.
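The arithmetic of the discounting itself is simple; the hard problem is the flag, not the haircut. As a toy sketch in Python, with discount levels invented purely for illustration (they are not Basel parameters):

def sound_value(book_value, in_bubble):
    # Discount an asset's book value for risk. The 2% market-implied
    # discount and the 15% bubble discount are hypothetical numbers
    # chosen for this example, not regulatory figures.
    discount = 0.15 if in_bubble else 0.02
    return book_value * (1.0 - discount)

# The $50 million trading book from the example above:
print(sound_value(50_000_000, in_bubble=False))  # 49,000,000.0 as the market prices it
print(sound_value(50_000_000, in_bubble=True))   # 42,500,000.0 once a bubble is flagged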

The Breakthrough Idea.

At ideas42—a behavioral economics R&D lab that I codirect—we have taken on the challenge of creating an early warning system. We are asking, “Could a bubbles committee—like the committee that does recession dating for the National Bureau of Economic Research—use the research in behavioral finance to identify bubbles as they form?” The answer appears to be a guarded yes.

Understand that our goal is not to be able to predict when a bubble will burst. That might never be possible. Luckily, in terms of the public interest it isn’t necessary. To regulate risks it would be helpful merely to recognize when we are in one—a far simpler task. That is why a public effort must create such a committee. (The market itself is far more interested in the timing of bubbles. Any smart arbitrageur would rather ride a bubble for some time than lean against it; a fortune can be made by riding the bubble up and selling right before the burst.)

How would the committee make the call on a rising bubble? Behavioral finance gives us the perspective to spot telltale signs. We know that when markets work well, it’s because they are incorporating disparate views of asset value and distilling them into a single price. When markets fail, as they do during bubbles, that is no longer true. After prices have risen for a prolonged period, the bears have sold all their shares, so their downward influence on price is lessened. If they believe that shares are overpriced and due for a fall, they must bet against them in more expensive (and hence less potent) ways, such as short selling.

This suggests an approach to finding warning signs. Looking at short interest, demand for put options, and trading on a variety of derivatives, a bubbles committee could construct technical measures of those opinions that are underrepresented. In taking these factors into consideration, the committee wouldn’t strictly be going against consensus opinion; it would be discovering times when narrow asset prices alone did not measure the consensus.
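To make that concrete, here is a minimal sketch in Python of what one such technical measure might look like. Everything in it (the input series, the smoothing window, the thresholds, the function names) is invented for illustration; the committee's actual methodology is precisely what remains to be worked out.

import numpy as np

def zscore(x):
    # Standardize a series against its own history.
    return (x - x.mean()) / x.std()

def bubble_warning(prices, short_interest, put_demand, threshold=1.5):
    # Flag periods where a prolonged price run-up coincides with
    # unusually stretched bearish positioning (short interest plus
    # put-option demand), i.e., times when contrarian views are being
    # expressed in costly side bets rather than in the price itself.
    returns = np.diff(np.log(prices))
    run_up = zscore(np.convolve(returns, np.ones(12) / 12, mode="valid"))
    n = len(run_up)
    bears = zscore(short_interest[-n:]) + zscore(put_demand[-n:])
    return (run_up > threshold) & (bears > threshold)

A real committee would weigh many more series and validate its thresholds against historical episodes; the point of the sketch is only that the raw ingredients, prices and the cost of betting against them, are all observable.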

A bubbles committee need not be passive. If it suspected a bubble in an asset market, it could selectively recommend introducing derivatives that explicitly target bubble risk. Consider a long-horizon put option designed to pay out only in the case of a significant drop in prices. The market price of that security would help regulators decide how to view that asset class. Of course, the committee’s activities might serve to burst a bubble early, but that need not be its primary goal; we should be satisfied if the committee simply minimized the social costs of the bubble’s eventual burst.
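The pricing of such an instrument need not be exotic. As a hedged illustration (the article prescribes no model), a plain Black-Scholes valuation of a long-dated, deep out-of-the-money put shows how a quoted market price could be read back as the market's estimate of crash risk:

from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def put_price(spot, strike, rate, vol, years):
    # European put under Black-Scholes.
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return strike * exp(-rate * years) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# A three-year put struck 40% below today's price. If the quoted
# market price sits far above a model value like this one, investors
# are paying up for crash protection, a signal regulators could watch.
print(put_price(spot=100, strike=60, rate=0.03, vol=0.25, years=3.0))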

The Promise.

Translating these raw insights into a concrete methodology will take some work. Careful research is required. Diverse technical measures must be gathered to quantify contrarian investors’ bets. These must be integrated with traditional indicators of fundamental value, such as P/E ratios. New consumer-sentiment measures, based on insights from consumer psychology, will also need to be explored. All of this must be tested against historical data. In this we are lucky: There is no shortage of data. Numerous asset classes around the world have gone through what in hindsight were obviously bubbles. The steps outlined above are technically challenging but very manageable if we make a concerted effort.

We can’t prevent earthquakes or hurricanes, but construction engineers have learned ways to minimize their damage. Similarly, financial bubbles will surely continue to rise and burst around the world, but with one big R&D push we can put tools that contain their effects in the hands of a public-minded committee.

Sendhil Mullainathan is a professor of economics at Harvard University and a cofounder of ideas42.

9: Creating More Hong Kongs

by Paul Romer

How charter cities can change the rules for struggling economies.

The Problem.

Knowing how hard it is to transform a change-averse organization, managers sometimes create a skunkworks, an autonomous corporate division where pioneers can build something new. A leader who starts a successful skunkworks changes the firm by showing rather than telling. Target is a good example: It began as a discount-retailing skunkworks at Dayton-Hudson and eventually remade the entire firm.

Transforming a nation is even harder, but the dramatic reforms in China show that it can be done. When China’s leaders started the reform process in the late 1970s, they could leverage a special asset: Accidents of history had made Hong Kong the skunkworks for Chinese political and institutional reform. The British government had administered rules that made the city livable and allowed a market-based economy to flourish. After World War II, it was a place where millions of Chinese could seek work—sewing shirts, for example, or making toys—and begin accumulating wealth, marketable skills, and the habits and values that sustain the quality of life in a well-run city. Hong Kong’s success showed Deng Xiaoping and other Chinese leaders how to bring urbanization, market incentives, and foreign direct investment to the mainland.

Wisely, China’s leaders did not compel every citizen to switch to the rules of the market. They started with special economic zones that Chinese workers and foreign firms were free to enter. Encouraged by the dramatic success of these zones (showcased by Deng’s famous southern tour in 1992), the Chinese government accelerated the pace of urbanization and economic reform. As a result, the quality of life has dramatically improved for an unprecedented number of people. Hong Kong was the nearby model that demonstrated the power of the market and the potential of special zones. By establishing it, Britain may inadvertently have done more to reduce world poverty than all the intentional aid programs of the past century.

The Breakthrough Idea.

Today many countries are stuck with rules that slow down inflows of technology, prevent successful urbanization, and stifle personal ambition. They need new rules that will let their citizens take full advantage of mutually beneficial exchange with millions of fellow citizens and with people and firms from around the world. Those rules could be introduced by chartering new cities like Hong Kong.

Creating this kind of city is not unlike launching an autonomous corporate division. It starts with a piece of uninhabited land and a charter listing the rules that will prevail in the city to come. With full knowledge of that charter, people choose whether to live and work there, to invest in its infrastructure, and to build and manage its apartments, factories, call centers, and shops.

A number of countries could benefit from chartering such cities. What if Raúl Castro wanted to follow this path and do for Cuba what Deng Xiaoping did for China? Even if he established attractive rules, no one could be sure that his successors would abide by them. The political risk would be too large for Cuba to attract meaningful levels of immigration and investment.

To make his commitment to new rules credible, Castro could enter into a joint venture with another nation. Canada could be party to a new treaty in which the United States handed over its rights to Guantánamo Bay; Canada could then take over local administration for a defined period of time and establish a charter city there. The Canadian government would reduce political risk and attract foreign investors and citizens, just as the British government did in Hong Kong.

People would come because they knew that even if Cuba suffered from periods of political instability, the new city could use its port to trade with the rest of the world—just as Hong Kong did when China was going through the Cultural Revolution. Cubans who were eager to adopt the market model could move to the new city, while their more cautious fellow citizens could wait to see how things played out. The flow of goods and people between the charter city and the rest of Cuba would increase, and wages would begin to catch up with those in developed nations. The charter negotiated with Canada could structure the venture as an enormous build-operate-transfer project. As the final step in the nation’s political and economic transformation, people on both sides could eventually vote to integrate the city into the Cuban political system.

The Promise.

Many nations need to change their rules. North Korea’s, for example, are too strong and harmful; Somalia’s are too weak, lacking even a basic legal system that provides personal security. Many countries at intermediate levels of development still need rules to prevent cronyism, preserve competition, limit congestion and pollution, support modern utilities and infrastructure, and provide real educational opportunity for all.

Groups of people always find it hard to change the rules, even when other rules would clearly be better. Charter cities—dozens of them, perhaps even hundreds—could be the skunkworks that bring systemic change to entire nations. Ultimately, they could give the billions of people who will soon move to cities the chance to experiment with, and opt into, rules that let them achieve their full potential.

Paul Romer is a senior fellow at the Stanford Institute for Economic Policy Research and the president of Charter Cities, a nonprofit research organization.

10: Independent Diplomacy

by Carne Ross

Why pretend that only nation-states shape international affairs?

The Problem.

As globalization puts all of us at the whim of forces without borders, the power of states is in decline, and that of other actors is rising. The UN Security Council was constituted in 1945 to deal with conflict between states. Today more than three-quarters of its agenda involves so-called nonstate actors—guerrilla groups, separatists, the remnants of decaying states, and the kernels of new ones.

Yet established nation-states, for the most part, still have a virtual monopoly on the practice of international diplomacy. That hardly seems sensible in a world that’s growing more complex and is increasingly shaped by a diverse cast of players, be they new or emerging states, global corporations, criminal networks, or armed groups. The diplomatic system evolves slowly. Those cut out of the current system—small states, nonstate entities—need help getting their legitimate needs addressed. But we all face the challenge of how to engage.

The Breakthrough Idea.

To many, “independent diplomacy” will sound oxymoronic. It is, if you believe that diplomats by definition represent states and thus are anything but independent. When I was a traditional diplomat for Britain, that was my belief. Now I see it differently.

My views shifted because I saw up close the limits of the traditional model. In 2004 I gave evidence to an inquiry examining the intelligence on Iraq’s reported development of weapons of mass destruction. As the UK’s Iraq “expert” at the UN Security Council, I had detailed knowledge of that intelligence. I testified that the British government had manipulated the case for war and had ignored available alternatives. I had no choice but to resign, though I had no plan for what to do next.

It was Kosovo, where I was working at the time, that gave me inspiration. Kosovo’s future was then the subject of intense and secretive diplomatic negotiations. But, perversely, its democratic government was prohibited from having diplomats, and the country had no representation in those discussions. This exclusion was not only unfair; it invited instability. Diplomacy was a trade I knew, so I created a nonprofit organization to give advice on diplomacy. Kosovo became Independent Diplomat’s first client.

The Promise.

The organization Independent Diplomat was born out of a particular crisis, but the idea of independent diplomacy responds more generally to the seismic global changes that are rendering the traditional model obsolete. Particularly when smaller players are at risk of being marginalized in international negotiations, an independent diplomat can help them ensure that their interests are represented. My group is working now, for example, with small island states on the highly complex climate change talks. We’re also helping the Burmese opposition advance a transition to democracy, and advising representatives of the dispossessed inhabitants of Moroccan-occupied Western Sahara.

When I share the idea behind Independent Diplomat, I never claim that it is a perfect solution for our elaborate international system; it is only a necessary part. Power is shifting, and marginalized players need help. Excluding them increases the risk of conflict. Most important to me is helping (and urging) others to move beyond a naive reliance on governments to control the forces and events that affect our lives. My experience has taught me this: Outcomes are determined by those who join in, who act. When all is connected and every action has international consequences, everyone can be an independent diplomat. Indeed, everyone may need to be one.

Carne Ross is the founder and executive director of Independent Diplomat, a nonprofit advisory group based in New York.

Playing Christ

by Melissa Schubert
Biola Magazine

In what I take to be the kindness of God, I recently enjoyed with my students a rich discovery in an unplanned connection between our Scripture reading at the front end of class and our discussion of As You Like It. In our weeks together studying Shakespeare’s comedies we are reading Colossians from the Geneva Bible, the first mass-published English translation of the Scriptures and the one that Shakespeare likely had in his home.

Our reading from Colossians 2:9-15 reminded us that we who are in Christ Jesus have been filled in him, circumcised in his circumcision, buried with him, raised with him and made alive together with him. Our reading detailed what Paul just a bit later says more succinctly: Your life is hid with Christ in God.

Then we proceeded to our academic task, to discuss the masks that characters take on in Shakespeare’s plays and the relationship between playing a role and being one’s self. As Shakespeare’s Jaques famously holds forth:

All the world’s a stage
And all the men and women are merely players;
They have their exits and their entrances
And one man in his time plays many parts.

It’s not original to Shakespeare, this idea that all the world’s a stage. The concept of the theatrum mundi — the theater of the world in which the human drama plays out before a divine audience — is as antique as the Ancients. But through Shakespeare the notion makes its way into English literature. He keeps reminding his audience that the play world is an analogy for the real world, one in which people are always wearing masks. To be a person is to be an actor.

My class eventually found this notion unsettling. Mere acting so quickly becomes hypocrisy, so readily facilitates an empty self. Are we always putting on masks before others, effectively role-playing at the cost of stable selves? How can we know who we are at core if we simply move from room to room, relationship to relationship, job to job, playing our part? If we are what we do, who are we apart from what we do?

And the answer came straight from Colossians. Our identity is not simply the sum of every part we play. Our very life takes place in the life of Christ. This disrupts the need to know who I am at core and relocates my attention to Christ in me.

A 16th-century catechism, the Heidelberg, identifies a radically relocated identity as the first and sole comfort for the Christian, asking its first question: Christian, what is your only comfort in life and in death? And the answer, borrowing Paul’s words to the Ephesian church: That I am not my own, but belong — body and soul, in life and in death — to my faithful Savior Jesus Christ.

The comfort of the gospel is that though I would be lost in myself I find my self in Christ. We pray to God as Father only because we have been included in the life of his Son. We are no longer aliens and strangers to our Creator because we have been brought near in the fellowship of his Spirit. We are no longer dead in our sins — in the many and various ways we act as and become selves opposed to our created nature — because Christ in his life, death and resurrection has triumphed over sin and death.

I am not my own, but Christ’s. So there is more good comfort: By including us in his life, Christ plays out his life in ours. Gerard Manley Hopkins captures this richness in his poem “As Kingfishers Catch Fire.” After a first stanza on the richness of being that creatures possess by doing the particular thing they do, he considers the unique situation of the human creature:

I say more: the just man justices;
Keeps grace: that keeps all his goings graces;
Acts in God’s eye what in God’s eye he is—
Christ—for Christ plays in ten thousand places,
Lovely in limbs, and lovely in eyes not his
To the Father through the features of men’s faces.

God, both true author and true spectator of this world’s stage, sees Christ in us. Christ has given us a new role to play — his very life in the world. May we gladly and humbly enter into this great drama.

Wednesday, January 20, 2010

How ‘Nature’s fury’ replaced God’s fury

By Brendan O’Neill
spiked

Evangelist Pat Robertson’s real mistake was to describe the calamity in Haiti as God’s work rather than Gaia’s work.

Pat Robertson, the US Christian evangelist who seeks headlines the way missiles seek heat, has understandably caused outrage with his craven comments on the earthquake in Haiti. That calamity is payback from God, he says, for Haitians who made ‘a pact with the devil’ by allegedly embracing voodoo over Jesus Christ.

Yet the real reason Robertson’s comments are shocking is not because he has misanthropically moralised a natural disaster as punishment for people’s sinful behaviour, but because he has done so in the name of God rather than Gaia. These days it is not acceptable to present terrible acts of nature as manifestations of God’s divine fury, but it is de rigueur to depict them as some kind of climatic payback for our greed and addiction to consumerism.

In keeping with his Good Book – in which ‘The Lord saw that the wickedness of man was great in the earth’ and so decided to send floods to punish us – Robertson says that Haiti has been ‘cursed’ for its rejection of Christian values, with poverty, political instability and now a calamitous earthquake (1). This follows his even wackier comments on Hurricane Katrina in New Orleans in 2005, which he said was heavenly punishment for legal abortion in the US.

Many are slating his stupidity and backwardness. Yet his real mistake, it seems, was to deploy religious language, rather than pseudo-scientific language, to make his poisonous point. Because today, moralising natural disasters, personifying them, imbuing them with sentience and purpose and vengeance, is a popular pursuit amongst secularists, commentators and climate-change alarmists, for whom everything from flooding to almighty gusts of wind reveals the ‘connections between our unsustainable lifestyles and climate change’ (2).

In the environmentalist outlook, floods, fire and natural destruction have all been discussed as punishment for our eco-hubris. During flooding in England in June 2007, a leading British green declared: ‘The drumbeat of disaster that heralds global warming quickened its tempo this week.’ He said the floods were payback for our failure to instigate a ‘managed mass withdrawal from fossil fuels’ and our insistence on living unsustainably. He even evoked God, Robertson-style, arguing that ‘behind the gathering clouds the hand of God is busy’ (3). Others have claimed that floods offer us a ‘glimpse of a possible winter world that we’ll inhabit if we don’t sort ourselves out’ (4). In short, flooding is brought about by our stubborn desire to live comfortable lives rather than to eke out meek, eco-respectful existences.

Following those floods, a Guardian columnist declared: ‘The turbulent weather we’ve seen is a warning of what lies ahead for us.’ She said we need to be ‘cajoled, led, provoked into changing [our] ways’ and welcomed the ‘drumming of rain on the skylight’ as a kind of warning from on high (5). Mark Lynas, author of the eco-Bible Six Degrees, which makes the story of Noah’s Ark look like an episode of Balamory, has even evoked the God of the Sea, predicting that ‘Poseidon [will be] angered by arrogant affronts from mere mortals like us. We have woken him from a thousand-year slumber, and this time his wrath will know no bounds.’ (6) There is barely a cigarette paper’s difference between this mad idea that the sea will punish us for living it up and Robertson’s idea that Haitians are being punished for their fascination with voodoo.

Fire is another favourite form of vengeance for both the old Bible brigade and the new climate-alarmist lobby. The Australian bush fires of 2009, which killed 173 people and destroyed 2,000 homes and which were actually a product of both very hot weather and arson, were described by one green as ‘global warming made manifest in the daily lives of ordinary people’ (7). Jonathon Porritt, a green who has advised both the UK government and the royal family, linked the bush fires to Australia’s pursuit of ‘unbridled affluence, California-style’ (8). So Australians burned for their sins, for daring to try to generate wealth.

If anything, the idea of Gaia punishing us is even more backward than the idea of God punishing us. At least Robertson only leaps upon disasters once they have happened in order to spread his codswallop – leading greens, by contrast, call upon Mother Nature to punish us more and more in the future in order to wake us from our consumerism-induced stupor. Porritt says there will have to be more climatic ‘shocks to the system’, and ‘from the perspective of our long-term prospects, they need to come as rapidly as possible. And to be as traumatic as possible. Otherwise, politicians and their electorates will rapidly revert to the current mix of non-specific anxiety and inertia.’ (8)

The problem with the Australian bushfires, he says, is that they clearly weren’t ‘bad enough’, because Aussies straight away went back to pursuing their ‘dreams of unbridled affluence’, which ‘gives us some sense of just how bad future climate shocks are going to have to be to drive any serious transformation’ (9). This amounts to a backward, vindictive rain-dancing for further natural calamity, for more of Gaia’s fire and fury, as a way of shocking the masses from their eco-inertia. It expresses both a medieval-style moralisation of weather events and an utter lack of faith in debate and democracy, so that Porritt hopes flames and floodwaters will change the way we plebs think and live.

The usurping of disaster-embracing religious cranks by disaster-demanding climate change alarmists became clear during Hurricane Katrina in 2005. Robertson and other minority Christians were attacked for describing that disaster as payback for abortion or for New Orleans’ sexy, sinful ways – yet greens everywhere interpreted it, not as a consequence of a freak weather event and insufficient flood defences, but as a symbol of what will happen if mankind doesn’t overcome his ‘addiction to fossil fuels’ (10). Robertson crankily says the Haiti earthquake was caused by Haitians’ ‘pact with the devil’ – is it really any more sensible to describe other natural disasters as springing from mankind’s ‘pact with consumerism’?

Throughout human history mankind has had trouble accepting that there is such a thing as natural disaster, a sometimes unpredictable, sometimes unavoidable event, which causes hardship and horror. In earlier eras we described them as ‘acts of God’; later we believed they were brought about by demonic forces; now we say they are payback for our lust for wealth and affluence. The language changes, but the backward idea – that powerful, faceless forces are trying to correct us – remains strikingly similar. And the consequence, then as now, is that we spend more time pointing the finger of blame at greedy mankind than we do offering solidarity to the victims of natural disasters and devising ways to develop and industrialise societies everywhere so that they are better able to withstand nature’s alleged fury. And for that, these societies will need unbridled affluence.

Brendan O’Neill is editor of spiked. His satire on the green movement – Can I Recycle My Granny and 39 Other Eco-Dilemmas – is published by Hodder & Stoughton.

Thursday, January 14, 2010

The Audacity of the State

By Douglas Farrow
Touchstone Magazine

It’s Bent on Bringing Down the House on the Family & the Church

Jeremiah Wright’s 1990 sermon, “The Audacity to Hope,” which lent Barack Obama the title of his electioneering book, has the story of Hannah as its text, and a painting by G. F. Watts as its foil. Whether the lecture at which Wright first heard of the painting, or his own subsequent reading, included a consultation of G. K. Chesterton’s 1904 treatment of Watts, I can’t say. But Chesterton writes of Watts as follows:

Those who know the man himself, the quaint and courtly old man who is still living down at Limnerlease, know that if he has one trait more arresting than another, it is his almost absurd humility. He even disparages his own talent that he may insist rather upon his aims. His speech and gesture are simple, his manner polite to the point of being deprecating, his soul to all appearances of almost confounding clarity and innocence. But although these appearances accurately represent the truth about him, though he is in reality modest and even fantastically modest, there is another element in him, an element which was in almost all the great men of his time, and it is something which many in these days would call a kind of splendid and inspired impudence. It is that wonderful if simple power of preaching, of claiming to be heard, of believing in an internal message and destiny: it is the audacious faculty of mounting a pulpit. [G. F. Watts, 1904]

The Reverend Wright and President Obama certainly have the courage to mount a pulpit and preach. They, too, show a certain “inspired impudence,” albeit not of a Victorian variety. Obama’s rhetoric in particular strikes me as lacking any underlying modesty or humility. But then our present political condition is one of immodesty, not least where the state is in view, which is why I have somewhat impudently turned the title of Obama’s book back upon itself.

The Savior State

When I speak of the audacity of the state, the kind of state I have in mind is what we may call the savior state. The main characteristic of the savior state is that it presents itself as the people’s guardian, as the guarantor of the citizen’s well-being. The savior state is the paternal state, which not only sees to the security of its territory and the enforcement of its laws but also promises to feed, clothe, house, educate, monitor, medicate, and in general to care for its people. Some prefer to call it the nanny state, but that label fails to reckon with its inherently religious character. The savior state does have a religious character, precisely in its paternalism, and may even be comfortable with religious rhetoric.

We are familiar with such rhetoric from ancient times. Was Caesar not soter? Did his coinage not mark him out as divi filius and pontifex maximus? “This, this is he,” says Anchises in Virgil’s Aeneid, the one you’ve been waiting for—“the man you have heard promised to you so often, Augustus Caesar, son of a god, who will once again establish the Golden Age in Latium, in the region once ruled by Saturn.”

We are familiar with it from modern times too. The savior state is the kind of state that Hobbes envisioned, or that Louis Du Moulin had in mind when he said that “the Commonwealth is a visible church.” It is the kind of state that emerges when it is assumed, as Herbert Thorndike pointed out in objection to both “Hobbism and Independency,” that “a man may be heir to Christ’s kingdom and endowed with Christ’s Spirit without being, or before he be, a member of God’s church.” It is the kind of state that Obama had in mind when, during the presidential campaign, he invited a Christian audience in South Carolina to see him as “an instrument of God” and to help him “create a Kingdom right here on Earth.” Presumably it is the kind of state Dorothy Cotton had in mind when she penned her new gospel ditty:

Tell me, tell me, why was Obama born?
Tell me, tell me, why was Obama born?
Somebody had to inspire our youth
Somebody had to hear every voice
Somebody had to tell the truth
That’s why Obama was born, my lordy,
That’s why Obama was born.

A single verse—there are five others—suffices, I think, to make the point.

Eschatological Reserve

It is customary these days to associate the religiously audacious state with theocratic Islamic countries such as Iran, or with Christendom, and to see them as belonging to a “medieval” mindset. The savior state should not be associated with Christendom, however, but with the demise of Christendom. It is a great achievement of the Enlightenment to have taken credit for the doctrine of the separation of church and state, when in fact it effectively abolished that doctrine.

Separation of church and state was predicated on the eschatological reserve on which Christianity insisted, a reserve that required a doctrine of “the Two” and refused to combine the kingly with the priestly in a single office or person. To combine these offices (with their respective “swords”) belonged to Christ alone, and any other claimant to both was ipso facto a kind of Antichrist.

This same eschatological reserve, while supporting all manner of advances in civilizing social and political life, repudiated all utopianism, whether progressive or regressive. It sought no return to a Golden Age, nor did it trumpet “Change you can believe in.” It knew of two loves and two cities made by those loves, and it sought only peace as far as possible between them and within them.

The disruption of that peace in the so-called Wars of Religion was what inspired a revolution under the banner of political liberalism, but the Wars of Religion were not merely Wars of Religion, and political liberalism has morphed into a comprehensive liberalism that is itself religious in character. Comprehensive liberalism will not hear of Christian eschatology as a matter of public and political relevance. Indeed, it has as one of its fundamental premises that Western society has done away with Christian theology (I do not say, all theology) as a matter of public and political relevance. And so it has. But that has opened the field to would-be saviors and utopians of every stripe. It has made possible the return of the savior state—the audacious state that aims at building a kingdom of God right here on earth.

Re-Sacralized State

We can hardly be surprised at this. The Erastianism which (to speak anachronistically) had long been trying to get the upper hand in Christendom managed to do so in the wake of the Lutheran Reformation, though it was in England that it first succeeded. The year 1534 brought the Act of Succession, and a mandatory oath of allegiance that included assent to everything declared by parliament about marriage in general and about Henry’s in particular. Later that year, the Act of Supremacy also established the king’s ecclesiastical jurisdiction, making no mention of the proviso formerly attached to it by the bishops: “as far as the law of Christ allows.”

Christendom, of course, had already seen many princes who were determined to make the church do their bidding. But Henry, by writing his supremacy into the laws of the realm, inaugurated a new era. In that era, the ongoing process of subordinating religion to the demands of the state would outrun the monarchy as such, and the Church of England too. Not merely some, but all of the church’s authority over things public would gradually be expropriated, binding even the conscience—as the Act of Succession already did—to the authority of the state.

Today we live in a society that shrinks in horror from the very idea of established religion, something the American Constitution in any case forbids. Yet we live, even if we live in America, in states increasingly ready to withdraw conscience clauses not only from public servants but also from doctors and druggists and so forth, requiring them to violate the teachings of their religion and the dictates of their consciences in order to demonstrate their allegiance to the state.

In Britain, and increasingly in North America, even churches and charitable organizations are not exempted from laws that demand conformity to state-endorsed ideologies loaded with religious implications. Penalties for violation include heavy fines or even imprisonment. Thus have we come round to accepting Erastus’s invitation to the state to punish the sins of Christians, supplanting the church’s sacramental discipline. We have come round, that is, to the de-sacralization of the church and the re-sacralization of the state, which is once again taking a tyrannical turn.

A Modern-Day Samson

Tyranny can nowhere succeed without pulling down the two most prominent pillars of political freedom, the pillars that have always provided for a roof or shield over the individual and his conscience. One pillar is the natural family unit; the other is the religious community. Of course, these pillars are not everywhere equally strong or upright. They may themselves be transformed into instruments of tyranny by this or that form of idolatry. But they are pillars for the simple reason that they do not concede to the audacious and immodest state the total authority it craves.

The natural family unit confronts the state as an entity that claims rights not granted by the state but brought to it—rights the lawful state is obliged to recognize and respect. The religious community likewise claims rights and liberties that derive from a source other than the state, a source that transcends and relativizes the state.

These two pillars are beginning to crack, however, in the grip of a modern-day Samson. I mean precisely that muscular but (if he only knew it!) blind and captive creature called “the individual.” Not the individual of whom Kierkegaard spoke when he asserted, in view of the peculiar dignity bestowed on human beings by the incarnation of God, that “one is worth more than a thousand.” But rather the individual fancied by the likes of Bentham, whose dignity consists merely in the freedom to pursue his own interests in his own way, whose interests must therefore be balanced against those of his neighbor under the formula, “Each to count for one and no more than one.”

Even this individual comes to the state with rights of his own, rights that do not derive from the state; but the state is always the arbiter of his rights. Moreover, this individual is not natural (as the “state of nature” philosophers claim) but unnatural, just because he is naked and alone, brought into the world by no one, lacking kin or allegiance, unclothed by tradition—or at all events resentful of it. With such an individual the state that has tyrannical aspirations can happily do business, for he is the individual who has been taught to see himself as chained between the two great pillars of family and church, constrained and belittled by their conventions; who in his shame and fury is willing to call, not on God, but on the power of the state as if on God, to bring them down to the dust.

More than one modern philosopher has dallied with this Samson, betraying him into the hands of the state. John Stuart Mill comes to mind as one of the most successful. Mill’s seductive side is his libertarian individualism. The sting is in the statism that ultimately overpowers it.

Liberty Prior to Truth

It is not as though no one has noticed the contradiction. William Gairdner, for one, makes much of it in The Trouble with Democracy. He observes how the first three chapters of On Liberty lay the foundations of libertarianism, the cornerstone of which is the thesis that “mankind are greater gainers by suffering each other to live as seems good to themselves, than by compelling each to live as seems good to the rest.” That thesis has as its corollary the so-called harm principle, which Mill puts as follows: “The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.”

To read these first three chapters is to be led out of the cave of civil coercion and social intolerance into the broad places of spontaneous experiments in individual freedom. “The only freedom which deserves the name, is that of pursuing our own good in our own way, so long as we do not attempt to deprive others of theirs, or impede their efforts to obtain it.” “No society in which these liberties are not, on the whole, respected, is free,” says Mill, “whatever may be its form of government; and none is completely free in which they do not exist absolute and unqualified.”

Gairdner has put his finger on the most seductive element in these chapters of On Liberty, which are pitched at just about the right level, intellectually and rhetorically, for crusading law clerks and higher-court justices. It lies in the notion that “liberty is prior to truth,” and indeed truth’s “efficient and final cause.” Gairdner is exactly right to put it this way. The power of On Liberty to overturn social and moral and religious conventions arises from Mill’s exciting and flattering suggestion that freedom will lead you into the truth. That iconoclastic gospel from the Romantic period still competes very successfully, tractable as it is to post-modern cynicism, with the older idol-smashing gospel of Jesus, that “the truth will set you free.”

Mill’s gospel takes no account of the creator/creature distinction, or of the fallenness of man. It takes no account of a freedom higher than freedom of choice, and gives no thought to how the truth of our own good will be recognized, or how that good will prove commensurate with the good of others. It is incurably romantic and naively optimistic. Most significantly, it fails to reckon with the fact that, in the absence of an overarching common good, based on a prior truth to which both the individual and the state are subject, the state must become the arbiter of all the competing goods of “free” individuals. It is not the individual who triumphs, then, in the appeal to a freedom that is prior to truth, but the state.

The Harm Principle

Behind Mill stands Rousseau, of course, whose rather more obvious statism Mill hoped to avoid. The basic premise of On Liberty is drawn from the Declaration of the Rights of Man and of the Citizen, that “liberty consists in being able to do anything that does not injure another.” And that dictum is in turn drawn from Rousseau, who got it from the Marquis d’Argenson, to whom we actually owe the harm principle: “In the Republic each man is perfectly free in all things that do no harm to others.” Rousseau’s intention in popularizing it was to downplay the obligations imposed by civil society, which he regarded as a corrupting more than a civilizing influence, especially in the form of family and church.

One’s primary obligations would hereafter be understood as obligations chiefly to oneself, on the one hand, and to the state on the other. That is what the harm principle is really all about—the elimination of the oppressive middle term between the individual and the state. This begs the question, however, as to what does or does not harm another, and who will decide that. Both Mill and Rousseau have ideas about that, and one gets glimpses of Mill’s ideas in the final chapters of On Liberty. Only glimpses, mind you, because Mill’s ideas aren’t really very libertarian after all.

Linda Raeder has also noted the contradiction in Mill, and her take on it is more cynical. In John Stuart Mill and the Religion of Humanity she develops Joseph Hamburger’s argument that Mill’s purpose in On Liberty was to implement the first stage of his Comtean agenda; that is, to assist in the people’s liberation from Christian religion and morals via individualist ideals, with a view to their eventual re-indoctrination in the atheistic Religion of Humanity, the religion of duty to the greater good of man. In other words, first you must make of the Christian an individual; then you must free him from his “miserable individuality” (Mill’s expression in Utilitarianism) into the new community that is the society whose ideal object is not God but itself.

Here again, Mill reminds one of Rousseau. They have in common an idea of obligation to the res publica, religiously conceived, that is a deliberate alternative to the idea of obligation to God and to the neighbor. Like Gairdner, Raeder points to the influence of Mill’s wife, the Romantic poet Harriet Taylor, on the first three chapters of On Liberty. But liberty was really for great souls like Harriet, not for the masses. The masses must “be indoctrinated from infancy in the values and ethics created by the best and wisest specimens of humanity.” They must be domesticated, in other words, by the savior state.

State Control of Education

However one reads Mill, it is safe to say that the further we have pressed his libertarian principles, the deeper we have submerged ourselves in the statist element of his thought and compromised our freedom. The sphere of education, and the related sphere of family law, have become particularly important battlegrounds.

For Mill, as for Bentham and others of their persuasion, the lack of a properly domesticated society is due (as Jacob Talmon puts it in The Rise of Totalitarian Democracy) “not to man but to the failure of governments to form man with the help of education and proper laws.” Working on the premise that “in an improving state of the human mind, the influences are constantly on the increase which tend to generate in each individual a feeling of unity with all the rest,” Mill invites us to imagine a situation in which

this feeling of unity [would] be taught as a religion, and the whole force of education, of institutions, and of opinion, directed, as it once was, in the case of religion, to make every person grow up from infancy surrounded on all sides both by the profession and by the practice of it.

Today that isn’t at all difficult to imagine.

Take Quebec, for example, where public education in North America was begun back in the 1660s by the diligent labors of the Vicar Apostolic, François de Laval, and a small band of hard-working colonists. The remarkable system they built has been almost completely colonized by the state, which has now instituted a mandatory curriculum in religion and ethics designed to instill just such a feeling in every young citizen (even those in private or religious schools, though that has been challenged in the courts).

More proximate than Mill, in the inspiration of the program, is a 1996 report by the International Commission on Education for the Twenty-first Century, titled Learning: The Treasure Within, which has globalized Mill’s basic idea. “We must be guided,” says Jacques Delors in that report, “by the Utopian aim of steering the world toward greater mutual understanding, a greater sense of responsibility and greater solidarity, through acceptance of our spiritual and cultural differences.”

One should not be deceived by the reference to acceptance of differences. What this actually means in practice is increased state intervention in areas constitutionally reserved for civil society, so as to suppress or neutralize the visions children inherit from their familial and religious communities, in favor of the vision of “the best and wisest specimens of humanity” that such commissions can muster.

State Control of Children

The ascendancy of the state over civil society, which it ought rather to serve, is virtually guaranteed where the state exercises full control over education—particularly if the goal of education, as one professor boldly asserted in a recent McGill forum, is to release children from the control of their parents. In America, one notes, there have long been advocates of the still more radical idea that children should be regarded as the state’s property, to be educated on a compulsory basis according to state needs and requirements. That is a thesis likely to be advanced with renewed urgency as the implications of our declining birthrate begin to be grasped.

Nor is it altogether lacking support from the law. In 1840 Justice Paige of New York opined in Mercein v. People that “the moment a child is born it owes allegiance to the government of the country of its birth, and is entitled to the protection of the government.” He further explained that “with the coming of civil society the father’s sovereign power passed to the chief or government of the nation.” While the state, for its own convenience, passes part of this power back to the parents, it maintains sovereignty over the question of what is in the best interests of the child. The Colorado Supreme Court partly endorsed that view in a 1910 child custody case:

Though nature gives to parents the right to the custody of their own children, and such right is scarcely less sacred than the right to life and liberty, and is manifested in all animal life, yet among mankind the necessity for government has forced the recognition of the rule that the perpetuity of the state is the first consideration, and parental authority itself is subordinate to this supreme power.

The pattern we have already observed is very much in evidence here. Radical critics such as John Taylor Gatto are not mistaken in pointing out the use of (basically Christian) doctrines of individual destiny, and of subjective rights, to separate children from their natural communities and attach them to artifacts of the state. In Canada we have performed costly exercises of public penance over such strategies in connection with the native residential schools, yet we are now doing something very similar with everyone but natives. The ever more vigorous expansion of public welfare programs, which we have witnessed on both sides of the border, works on exactly the same principle, of course: Citizens are separated from both their natural family units and their religious communities by a cultivated reliance on the state.

What is more, the normalization of divorce—one of the most significant features of our contraceptive culture—has ever more deeply insinuated the state into the child-rearing process and so into the sphere of the family. The “great and pernicious error” against which Pope Leo XIII warned in Rerum Novarum has thus gradually become the norm; namely, “that the civil government should at its option intrude into and exercise intimate control over the family and the household.”

Naked Before the State

To make matters very much worse, the parens patriae power has recently received an enormous boost from another feature of the contraceptive society: same-sex “marriage.” Though most people have not yet realized it, the advent of same-sex marriage has transformed marriage from a pre-political institution conferring “divine and human rights,” as the Roman jurist Modestinus put it, into a mere legal construct at the gift and disposal of the state. The legal terrain has thus changed dramatically, along with the cultural—something I have tried to show in a little book called Nation of Bastards. The family is ceasing to be what the Universal Declaration of Human Rights confesses it to be, viz., “the natural and fundamental group unit of society.”

Replaced by a kaleidoscope of transient sexual and psychological configurations, which serve chiefly to make children of adults and adults of children, the declining family is ceding enormous tracts of social and legal territory to the state. At law, parent-child relationships are losing their a priori status and privilege. Crafty fools ask foolish fools, “What harm does same-sex marriage do to your marriage, or to your family?” The truthful answer is: Same-sex marriage makes us all chattels of the state, because the state, in presuming to define the substance rather than the accidents of marriage, has made marriage itself a state artifact.

Those who have trouble connecting the dots here—which lamentably includes many defenders of the traditional institution—should take time to consider the fact that the new “inclusive” definition, in striking procreation from the purview of marriage, has left both parents and children without a lawful institution that respects and guarantees their natural rights to each other.

Opening up marriage in principle to non-generative unions really means closing it in principle to the inter-generational interests on which it has always been based. From now on, the handling of those interests will be entirely dependent, legally speaking, upon the good graces of the state. Every citizen will stand naked before the state, unclothed by his most fundamental community, unbuffered by any mediating institution with its own inherent rights. Nor should it be overlooked that, what the state has the power to define, it has the power to define again and again, and even to dispense with.

Admittedly, even the state has not yet fully connected the dots, but that is happening with remarkable rapidity, as concurrent moves in education demonstrate. States and international agencies are increasingly prone to argue that children have the right to a state-directed education and that this right must be protected by the state against the interference of parents. The logic is not difficult to follow: If marriage is procreative, it is also educative; but if it is not procreative, it is not educative either—educative rights and responsibilities are up for grabs, and it is the state that will do the grabbing. The pillar that is the family appears to have cracked nearly through.

Samson’s Revenge, or the Ichabod Effect

In a 2008 speech at the Catholic University of America, James Cardinal Stafford—who apparently does not see in Obama the humility that Chesterton saw in Watts—deployed his meditations on St. John’s Apocalypse “to strengthen the Catholic faithful . . . against the ever increasing pretensions of the state [to make] itself absolute.” Cyril of Jerusalem would have thought him wise: “If thou hast a child according to the flesh,” said Cyril, “admonish him now; if thou hast begotten one through catechizing, put him also on his guard, lest he receive the false one as the true. For ‘the mystery of iniquity doth already work.’”

Had I space, I would now try to show you the way in which that mystery has also been at work in religion, producing cracks in the other pillar that holds up the roof which shields the individual from the state. That will have to await another opportunity, however, except insofar as I can hint at it by adverting again in conclusion to the story of Hannah.

I trust you know the story. Hannah meets Eli, who, sitting by the temple gate, mistakes her fervent prayer for drunkenness and rebukes her for being a base woman, only to discover that she is quite sober after all, and that she has been praying out of “great anxiety and vexation.” He blesses her. When Samuel is born as the answer to her prayers, Hannah delivers Samuel into Eli’s care in fulfillment of her vow to God.

That must have been a rather frightening prospect, given what everybody knew about the goings-on at the temple in those days. Eli’s sons, whom the text describes as “worthless men [who] had no regard for the Lord,” were the ones who had taken charge of it, and Eli apparently hadn’t the backbone necessary to bring them into line. (“They would not listen to the voice of their father,” we are told, “for it was the will of the Lord to slay them.”)

But Samuel is not corrupted by the sons of Eli, a fate from which God preserves him. Instead, he is given the unhappy burden of disclosing to Eli that his family line is about to be brought to a bitter end. That is just what happens during the rout of Israel by the Philistines. Blind old Eli, learning that both his sons and the ark of God have fallen to the Philistines, falls backwards from his seat at the gate, breaks his neck, and dies. His daughter-in-law, Phineas’s wife—who dies in childbirth on the same day—names her son Ichabod, “the Glory is not,” for the Glory, she says, “has departed from Israel.”

The Reverend Wright didn’t mention any of that in his sermon. I thought I would mention it, however, for the audacity of the savior state is rather like the audacity of Hophni and Phineas, who apparently believed that the house of God, and the people of God, were there merely for plunder. The savior state has its own aims and motivations, of course, motivations that may seem (despite the ever more frequent corruption scandals) much nobler than those of Eli’s sons. But the process is much the same, and the final outcome will also be the same, if its advance is not checked. That the state must collapse, when the pillars of family and church are gone, is the secret that the state itself does not know. This secret I call Samson’s revenge, but we may also call it the Ichabod effect.

The Hopeful Christian

Christianity is not about revenge, however, and it is certainly not about despair. It is, as Jeremiah Wright said, about hope (the title of Watts’s painting). Christians deny that the state is savior because they believe that God is savior. Their hope is not, like Mill’s, in the state; nor, like libertarians’, in themselves. Their hope, like Hannah’s, is in God.

So how shall they express their hope in God? What shall they do in the face of the audacious state, which threatens to bring them and their society to ruin?

Since I live in Québec, I will begin with Québec, where there is something Eli-like about many of our older clergy, and something Hophni-and-Phineas-like about certain of our civil servants, who follow a line of priests that defected from the Catholic Church during the Quiet Revolution. Which is to say, the former often content themselves with a rueful, defeated glance at the latter, while the latter proceed pretty much as if they had assumed the place of the former.

The hopeful Christian—hope being something quite different from mere optimism—will not be taken in by any of this. The Ministry of Education, though it issues documents these days that read like upside-down versions of Gravissimum educationis, is not the new regime in the house of the Lord. Moreover, Christians have undertaken no vow to deliver their children over to it, to be shaped by the state for its own ends. Rather, they have taken vows to God to see that their children are raised according to the faith. They must therefore demand of the state respect for their right to have their children eat from their own educational table. And they must be willing, like Bishop Laval and his “handful of colonists and scanty resources,” to invest what resources they have in providing such a table.

Something similar, I suspect, can be said in the land of Obama and Wright, though its history and habits are different. To be sure, there is a much stronger tradition there of resistance to the overweening state, but the forces of the state are also far greater. In America, Christians will require the courage of Dorothy Cotton’s hero, Martin Luther King, Jr., if they are to repair the pillars of freedom that have sustained such damage, and to roll back the impressive gains that have lately been made by the savior state. In America, too, the churches will need to renew their pedagogical mission and to fight for freedom of education. The natural family will need somehow to reclaim, if it can, the rights it is losing.

And in both countries, men and women will need to rediscover Hannah’s hunger for progeny, for the contraceptive mentality and the practice of abortion contribute very directly to the Ichabod effect.

Inspired Impudence

The hopeful Christian will not give up on any of this. But there is something else to be said. The cracks that have appeared in the pillar that is the family appeared first in the pillar that is religion. For, in Christianity, religion is still more fundamental than the family, as Jesus made clear. “He who loves father or mother more than me is not worthy of me.” The Christian religion is decidedly not an individualistic religion, however. Rather, it is an ecclesial religion that sacramentally embraces and transcends the family. That, alas, was obscured by the sectarianism into which it degenerated in the wake of the Protestant Reformation—the very same sectarianism that revived the beast that is the audacious state.

The hopeful Christian, then, if his hope is not misplaced, will eschew both individualism and sectarianism, seeking the union and communion of the Church. Like Hannah, who gave up her only son to serve the people of God in the temple of God, he will seek always the good of the Church, the precariousness of the times notwithstanding. If he acts with “inspired impudence,” it will be an inspired impudence that begins in prayer at the temple gate. That is where Hannah began, and it is where we, too, should begin. And if we, who are sober, are thought drunk and disorderly by the state and our fellow-citizens, or even by some of our priests, so be it.

Douglas Farrow is Professor of Christian Thought at McGill University in Montreal, Quebec. This article is based on a lecture Dr. Farrow gave at the Stead Centre for Ethics and Values in Evanston, Illinois, in March 2009.