Wednesday, July 30, 2008

Identity is That Which is Given

By Kenan Malik
butterfliesandwheels.com

The anthropologist Margaret Mead once observed that in the 1930s, when she was busy remaking the idea of culture, the notion of cultural diversity was to be found only in the ‘vocabulary of a small and technical group of professional anthropologists’. Today, everyone and everything seems to have its own culture. From anorexia to zydeco, the American philosopher Kwame Anthony Appiah has observed, there is little that we don’t talk about as the product of some group’s culture. In this age of globalisation many people fret about Western culture taking over the world. But the greatest Western export is not Disney or McDonald’s or Tom Cruise. It is the very idea of culture. Every island in the Pacific, every tribe in the Amazon, has its own culture that it wants to defend against the depredations of Western cultural imperialism. You do not even have to be human to possess a culture. Primatologists tell us that different groups of chimpanzees each have their own culture. No doubt some chimp will soon complain that its traditions are disappearing under the steamroller of human cultural imperialism.

We’re All Multiculturalists Now, observed the American academic and former critic of pluralism Nathan Glazer in the title of a book. And indeed we are. The celebration of difference, respect for pluralism, avowal of identity politics - these have come to be regarded as the hallmarks of a progressive, antiracist outlook and as the foundation of modern liberal democracies. Ironically, culture has captured the popular imagination just as anthropologists themselves have started worrying about the very concept. After all, what exactly is a culture? What marks its boundaries? In what way is a 16-year-old British-born boy of Pakistani origin living in Bradford of the same culture as a 50-year-old man living in Lahore? Does a 16-year-old white boy from Bradford have more in common culturally with his 50-year-old father than with that 16-year-old ‘Asian’? Such questions have led most anthropologists today to reject the idea of cultures as fixed, bounded entities. Some reject the very idea of culture as meaningless. ‘Religious beliefs, rituals, knowledge, moral values, the arts, rhetorical genres, and so on’, the British anthropologist Adam Kuper suggests, ‘should be separated out from each other rather than bound together into a single bundle labelled culture’. ‘To understand culture’, he concludes, ‘we must first deconstruct it.’

Whatever the doubts of anthropologists, politicians and political philosophers press on regardless. The idea of culture, and especially of multiculturalism, has proved politically too seductive. Over the past two decades, nations such as Australia, Canada and South Africa have created legal frameworks to institutionalise their existence as multicultural societies. Other countries such as Britain have no formal recognition of their multicultural status but have nevertheless pursued pluralist policies in a pragmatic fashion. Even France, whose Republican tradition might seem to be the nemesis of multiculturalism, has flirted with pluralist policies. In 1986 the College de France presented the President with a report entitled ‘Proposals for the Education of the Future’. The first of ten principles to which modern schools should subscribe was ‘The unity of science and the plurality of cultures’: ‘A carefully fabricated system of education must be able to integrate the universalism inherent in scientific thought with the relativism of the social sciences, that is with disciplines attentive to the significance of cultural differences among people and to the ways people live, think and feel.’

‘There is a certain way of being human that is my way’, wrote the Canadian philosopher Charles Taylor in his much discussed essay on ‘The Politics of Recognition’. ‘I am called upon to live my life in this way… Being true to myself means being true to my own originality’. This sense of being ‘true to myself’ Taylor calls ‘the ideal of “authenticity”’. The ideal of the authentic self finds its origins in the Romantic notion of the inner voice that expressed a person’s true nature. The concept was developed in the 1950s by psychologists such as Erik Erikson and sociologists like Alvin Gouldner into the modern notion of identity. Identity, they pointed out, is not just a private matter but emerges in dialogue with others.

Increasingly identity came to be seen not as something the self creates but as something through which the self is created. Identity is, in sociologist Stuart Hall’s words, ‘formed and transformed continuously in relation to the ways in which we are represented or addressed in the cultural systems which surround us.’ The inner self, in other words, finds its home in the outer world by participating in a collective. But not just any collective. The world is comprised of countless groups – philosophers, truck drivers, football supporters, drinkers, train spotters, conservatives, communists and so on. According to the modern idea of identity, however, each person’s sense of who they truly are is intimately linked to only a few special categories – collectives defined by people’s gender, sexuality, religion, race and, in particular, culture. A Unesco-organised ‘World Conference on Cultural Policies’ concluded that ‘cultural identity… was at the core of individual and collective personality, the vital principle that underlay the most authentic decisions, behaviour and actions’.

The collectives that appear significant to the contemporary sense of identity comprise, of course, very different kinds of groups and the members of each are bound together by very different characteristics. Nevertheless, what collectives such as gender, sexuality, religion, race and culture all have in common is that each is defined by a set of attributes that, whether rooted in biology, faith or history, is fixed in a certain sense and compels people to act in particular ways. Identity is that which is given, whether by nature, God or one’s ancestors. ‘I am called upon to live my life in this way’. Who or what does the calling? Apparently the culture itself. Unlike politically defined collectives, these collectives are, in philosopher John Gray’s words, ‘ascriptive, not elective… a matter of fate, not choice.’ The collectives that are important to the contemporary notion of identity are, in other words, the modern equivalents of what Herder defined as volks. For individual identity to be authentic, so too must collective identity. ‘Just like individuals’, Charles Taylor writes, ‘a Volk should be true to itself, that is its own culture.’ To be true to itself, a culture must faithfully pursue the traditions that mark out that culture as unique and rebuff the advances of modernity, pragmatism and other cultures.

This view of culture and identity has transformed the way that many people understand the relationship between equality and difference. For the Enlightenment philosophes, equality required that the state should treat all citizens in the same fashion without regard to their race, religion or culture. This was at the heart of their arguments against the ancien regime and has been an important strand of liberal and radical thought ever since. For contemporary multiculturalists, on the other hand, people should be treated not equally despite their differences, but differently because of them. ‘Justice between groups’, as the political philosopher Will Kymlicka has put it, ‘requires that members of different groups are accorded different rights’.

An individual’s cultural background frames their identity and helps define who they are. If we want to treat individuals with dignity and respect, many multiculturalists argue, we must also treat with dignity and respect the groups that furnish them with their sense of personal being. ‘The liberal is in theory committed to equal respect for persons’, the philosopher Bhikhu Parekh argues. ‘Since human beings are culturally embedded, respect for them entails respect for their cultures and ways of life.’ The British sociologist Tariq Modood takes this line of argument to make a distinction between what he calls the ‘equality of individualism’ and ‘equality encompassing public ethnicity: equality as not having to hide or apologise for one’s origins, family or community, but requiring others to show respect for them, and adapt public attitudes and arrangements so that the heritage they represent is encouraged rather than contemptuously expect them to wither away.’ We cannot, in other words, treat individuals equally unless groups are also treated equally. And since, in the words of the American scholar Iris Young, ‘groups cannot be socially equal unless their specific experience, culture and social contributions are publicly affirmed and recognised’, so society must protect and nurture cultures, ensure their flourishing and indeed their survival.

One expression of such equal treatment is the growing tendency in some Western nations for religious law – such as the Jewish halakha and the Islamic sharia – to take precedence over national secular law in civil, and occasionally criminal, cases. Another expression can be found in Australia, where the courts increasingly accept that Aborigines should have the right to be treated according to their own customs rather than be judged by ‘whitefella law’. According to Colin McDonald, a Darwin barrister and expert in customary law, ‘Human rights are essentially a creation of the last hundred years. These people have been carrying out their law for thousands of years.’ Some multiculturalists go further, requiring the state to ensure the survival of cultures not just in the present but in perpetuity. Charles Taylor, for instance, suggests that the Canadian and Quebec governments should take steps to ensure the survival of the French language in Quebec ‘through indefinite future generations’.

The demand that because a cultural practice has existed for a long time, so it should be preserved - or, in Charles Taylor’s version, the demand that because I am doing X so my descendants, through ‘indefinite future generations’, must also do X - is a modern version of the naturalistic fallacy, the belief that ought derives from is. For nineteenth century social Darwinists, morality - how we ought to behave - derived from the facts of nature - how humans are. This became an argument to justify capitalist exploitation, colonial oppression, racial savagery and even genocide. Today, virtually everyone recognises the falsity of this argument. Yet, when talking of culture rather than of nature, many multiculturalists continue to insist that is defines ought.

In any case, there is something deeply inauthentic about the contemporary demand for authenticity. The kind of cultures that the Enlightenment philosophes wanted to consign to history were, in an important sense, different from the cultures that today’s multiculturalists wish to preserve. In the premodern world there was no sense of cultural integrity or authenticity. There were no alternatives to the ways of life that people followed. Cultures were traditional but in an unselfconscious fashion. Those who lived in such cultures were not aware of their difference, let alone that they should value it or claim it as a right. A French peasant attended Church, an American Indian warrior painted his face not because they thought ‘This is my culture, I must preserve it’ but for pragmatic reasons. As the political philosopher Brian Barry suggests, in the absence of some compelling reason for doing things differently, people went on doing them in the same way as they had in the past. Cultural inertia, in other words, preserved traditional ways because it was the easiest way to organise collective life.

Multiculturalists, on the other hand, exhibit a self-conscious desire to preserve cultures. Such ‘self-conscious traditionalism’, as Brian Barry calls it, is a peculiarly modern, post-Enlightenment phenomenon. In the modern view, traditions are to be preserved not for pragmatic reasons but because such preservation is a social, political and moral good. Maintaining the integrity of a culture binds societies together, lessens social dislocation and allows the individuals who belong to that culture to flourish. Such individuals can thrive only if they stay true to their culture - in other words, only if both the individual and the culture remain authentic.

Modern multiculturalism seeks self-consciously to yoke people to their identity for their own good, the good of that culture and the good of society. A clear example is the attempt by the Quebecois authorities to protect French culture. The Quebec government has passed laws which forbid French speakers and immigrants to send their children to English-language schools; compel businesses with more than fifty employees to be run in French; and ban English commercial signs. So, if your ancestors were French you, too, must by government fiat speak French whatever your personal wishes may be. Charles Taylor regards this as acceptable because the flourishing and survival of French culture is a good. ‘It is not just a matter of having the French language available for those who might choose it’, he argues. Quebec is ‘making sure that there is a community of people here in the future that will want to avail itself of the opportunity to use the French language.’ Its policies ‘actively seek to create members of the community… assuring that future generations continue to identify as French-speakers.’

An identity has become a bit like a private club. Once you join up, you have to abide by the rules. But unlike the Groucho or the Garrick it’s a private club you must join. Being black or gay, the philosopher Kwame Anthony Appiah suggests, requires one to follow certain ‘life-scripts’ because ‘Demanding respect for people as blacks and gays can go along with notably rigid strictures as to how one is to be an African American or a person with same-sex desires.’ There will be ‘proper modes of being black and gay: there will be demands that are made; expectations to be met; battle lines to be drawn.’ It is at this point, Appiah suggests, that ‘someone who takes autonomy seriously may worry whether we have replaced one kind of tyranny with another.’ An identity is supposed to be an expression of an individual’s authentic self. But it can too often seem like the denial of individual agency in the name of cultural authenticity.

‘It is in the interest of every person to be fully integrated in a cultural group’, Joseph Raz has written. But what is to be fully integrated? If a Muslim woman rejects sharia law, is she demonstrating her lack of integration? What about a Jew who doesn’t believe in the legitimacy of the Jewish State? Or a French Quebecois who speaks only English? Would Galileo have challenged the authority of the Church if he had been ‘fully integrated’ into his culture? Or Thomas Paine have supported the French Revolution? Or Salman Rushdie written The Satanic Verses? Cultures only change, societies only move forwards because many people, in Kwame Appiah’s words, ‘actively resist being fully integrated into a group’. To them ‘integration can sound like regulation, even restraint’. Far from giving voice to the voiceless, in other words, the politics of difference appears to undermine individual autonomy, reduce liberty and enforce conformity. You will speak French, you will act gay, don’t rock the cultural boat. The alternatives, the French philosopher Alain Finkielkraut suggests, are simple: ‘Either people have rights or they have uniforms; either they can legitimately free themselves from oppression… or else their culture has the last word.’

Part of the problem is a constant slippage in multiculturalism talk between the idea of humans as culture-bearing creatures and the idea that humans have to bear a particular culture. Clearly no human can live outside of culture. But then no human does. ‘It’s not easy to imagine a person, or people, bereft of culture’, observes Kwame Appiah. ‘The problem with grand claims for the necessity of culture’, he adds, ‘is that we can’t readily imagine an alternative. It’s like form: you can’t not have it.’ Culture, in other words, is like oxygen: no living human can do without it, but no living human does.

To say that no human can live outside of culture is not to say they have to live inside a particular one. Nor is it to say that particular cultures must be fixed or eternal. To view humans as culture-bearing is to view them as social beings, and hence as transformative beings. It suggests that humans have the capacity for change, for progress, and for the creation of universal moral and political forms through reason and dialogue. To view humans as having to bear specific cultures is, on the contrary, to deny such a capacity for transformation. It suggests that every human being is so shaped by a particular culture that to change or undermine that culture would be to undermine the very dignity of that individual. It suggests that the biological fact of, say, Jewish or Bangladeshi ancestry somehow makes a human being incapable of living well except as a participant of Jewish or Bangladeshi culture. This would only make sense if Jews or Bangladeshis were biologically distinct – in other words if cultural identity were really about racial difference.

The relationship between cultural identity and racial difference becomes even clearer if we look at the argument that cultures must be protected and preserved. If a ‘culture is decaying’, the philosophers Avishai Margalit and Joseph Raz argue, then ‘the options and opportunities open to its members will shrink, become less attractive, and their pursuit less likely to be successful.’ So society must step in to prevent such decay. Will Kymlicka similarly argues that since cultures are essential to people’s lives, so where ‘the survival of a culture is not guaranteed, and, where it is threatened with debasement or decay, we must act to protect it.’ For Charles Taylor, once ‘we’re concerned with identity’, nothing ‘is more legitimate than one’s aspiration that it is never lost’. Hence a culture needs to be protected not just in the here and now but through ‘indefinite future generations’.

A century ago intellectuals worried about the degeneration of the race. Today we fear cultural decay. Is the notion of cultural decay any more coherent than that of racial degeneration? Cultures certainly change and develop. But what does it mean for a culture to decay? Or for an identity to be lost? Will Kymlicka draws a distinction between the ‘existence of a culture’ and ‘its “character” at any given moment’. The character of culture can change but such changes are only acceptable if the existence of that culture is not threatened. But how can a culture exist if that existence is not embodied in its character? By ‘character’ Kymlicka seems to mean the actuality of a culture: what people do, how they live their lives, the rules and regulations and institutions that frame their existence. So, in making the distinction between character and existence, Kymlicka seems to be suggesting that Jewish, Navajo or French culture is not defined by what Jewish, Navajo or French people are actually doing. For if Jewish culture is simply that which Jewish people do or French culture is simply that which French people do, then cultures could never decay or perish – they would always exist in the activities of people.

So, if a culture is not defined by what its members are doing, what does define it? The only answer can be that it is defined by what its members should be doing. The African American writer Richard Wright described one of his finest creations Bigger Thomas, the hero of Native Son, as a man ‘bereft of a culture’. The Negro, Wright suggested, ‘possessed a rich and complex culture when he was brought to these alien shores’. But that culture was ‘taken from him’. Bigger Thomas’ ancestors had been enslaved. In the process of enslavement they had been torn from their ancestral homes, and forcibly deprived of the practices and institutions that they understood as their culture. Hence Bigger Thomas, and every black American, behaved very differently from his ancestors. Slavery was an abomination and clearly had a catastrophic impact on black Americans. But however inhuman the treatment of slaves and however deep its impact on black American life, why should this amount to a descendant of slaves being ‘bereft of a culture’ or having a culture ‘taken from him’? This can only be if we believe that Bigger Thomas should be behaving in certain ways that he isn’t, the ways that his ancestors used to behave. In other words, if we believe that what defines what you should be doing is the fact that your ancestors were doing it. Culture here has become defined by biological descent. And biological descent is a polite way of saying ‘race’. As the cultural critic Walter Benn Michaels puts it, ‘In order for a culture to be lost… it must be separable from one’s actual behaviour, and in order for it to be separable from one’s actual behaviour it must be anchorable in race.’

The logic of the preservationist argument is that every culture has a pristine form, its original state. It decays when it is no longer in that form. Like racial scientists with their idea of racial type, some modern multiculturalists appear to hold a belief in cultural type. For racial scientists, a ‘type’ was a group of human beings linked by a set of fundamental characteristics which were unique to it. Each type was separated from others by a sharp discontinuity; there was rarely any doubt as to which type an individual belonged to. Each type remained constant through time. There were severe limits to how much any member of a type could drift away from the fundamental ground plan by which the type was constituted. These, of course, are the very characteristics that constitute a culture in much of today’s multiculturalism talk. Many multiculturalists, like racial scientists, have come to think of human types as fixed, unchanging entities, each defined by its special essence.

Tuesday, July 29, 2008

Why, If We Share the Same Bible, Do Presbyterians Differ So Widely on the Issue of Gay Ordination? Section 1

By Mark D. Roberts
markdroberts.com

As I pick up my blog series on the PCUSA, I want to consider the question of why we Presbyterians, given that we share the same Bible, differ so widely on the issue of gay ordination. I realize that some of my readers want me to stop analyzing the issue and start proposing solutions (or dissolutions!). I will get to the “What are we going to do about this?” question soon enough. But I believe that it’s essential for us to understand not only what Presbyterians believe but also why we believe as we do. Clarity about these matters will help us make wise choices when it comes to tangible actions. It will also help us speak truly and respectfully of those with whom we disagree. Too often in this debate folks on both sides have misrepresented the other side.

A word of caution: I will be painting with a broad brush here as I try to capture major differences among Presbyterians. The reality is more complex than my analysis. But I think I’m getting the main brush strokes in the right place.

The fact that Presbyterians disagree widely on gay ordination is beyond question. In my recent posts I have tried to show what’s underneath this disagreement. Supporters of gay ordination see their cause as a matter of biblical justice. Opponents of gay ordination see their cause as a matter of biblical righteousness. This means something is rotten in the state of Presbyterianism, because God’s justice would never actually be in conflict with God’s righteousness! Somewhere along the line somebody has missed God’s will in the matter.

A Question of Biblical Authority and Interpretation

Opponents of gay ordination often explain why proponents believe as they do by saying something like: “We follow what the Bible teaches. They do not. We uphold the authority of the Bible. They do not. This whole debate isn’t really about homosexuality. It’s about the authority of the Bible.” Supporters of gay ordination sometimes object to this explanation: “That’s not true. We also uphold the authority of the Bible. We just interpret it differently. This isn’t a matter of biblical authority. It’s about the interpretation of the Bible.”

In my opinion, both sides are partly right. That means both sides are partly wrong as well. In fact, what leads Presbyterians to such different conclusions with respect to homosexuality is a matter both of biblical authority and of biblical interpretation. In the end, these are interlocking issues that can’t be completely distinguished.

Almost all Presbyterians agree that the Bible is authoritative in some sense. Almost all Presbyterians agree that biblical truth comes to us embedded in culture (or cultures, to be more precise). And almost all Presbyterians agree that the Bible is both divine and human. We differ, however, in our estimation of just how much of Scripture is divine, and therefore just how much of it is authoritative.

In general, opponents of gay ordination believe that all of the Bible is divinely-inspired and therefore authoritative. The timeless truth of God, because it comes in a cultural package, needs to be carefully discerned, so as to clarify that which is authoritative for us. So, for example, those who believe that the whole Bible is inspired do not argue, on the basis of 1 Corinthians 11, that women in today’s church should be veiled. But they don’t dismiss 1 Corinthians 11 as something that was relevant for first-century Corinth at best, or simply wrongheaded at worst. They believe that Paul’s discussion of veiling contains and reflects timeless truth that is authoritative for us today, and that needs to be unpacked so we can implement it. This truth would include such things as the authority of women to pray and prophesy in church, the essential male/female character of creation and church, and the need for doing in church that which is edifying.

In general, proponents of gay ordination believe that the Bible contains divinely-inspired portions, but also portions that are merely human, and therefore not authoritative for us today. Paul’s claim that women should be veiled, therefore, is seen as culturally-bound, or even as simply wrong. One must look elsewhere for the timeless truth of Scripture, which is found, for example, in Jesus’s instruction to love, or in the consistent call of the Bible to seek justice for the oppressed. The interpreter of Scripture has the responsibility of sifting out the timeless from the time-bound, so that God’s Word might be properly understood and applied today.

When we come to the issue of gay ordination, therefore, opponents of gay ordination believe that the Bible clearly reveals the sinfulness of homosexual activity because such teaching is found in several biblical passages. Proponents of gay ordination deny this. Some argue that the Bible never addresses the case of loving, mature, committed homosexual lovers. But proponents tend to believe that even if the Bible condemned all homosexual activity, this would not reflect God’s inspiration, but rather human enculturation and limitation. As they interpret the Bible, they believe they have the freedom and the responsibility to sort out what is inspired and authoritative and what is neither inspired nor authoritative. The Bible’s consistently negative teaching on homosexuality falls in the neither inspired nor authoritative category.

In my next post in this series I’ll continue this conversation.

Monday, July 28, 2008

The Prudent Person: Toward a Christian General Education

By John Mark Reynolds
Scriptorium Daily

The twentieth century loved to speak of “revolutions” . . . even in the area of thought. The outcome was not promising. In the history of ideas, I think it better to think of development as “biological” instead of as revolutionary. Ideas build on each other. In fact, if anything evolves, human thought evolves. The story of intellectual development is less the story of Kuhn-like paradigm shifts than it is the tale of slow and gradual growth in understanding, with many setbacks as well.

If the Greeks were mistaken to believe that there was a Golden Age of the intellect to which all humanity could only look back, then the modern notion of intellectual progress is just as deceptive. While humanity has gone forward in some areas, such as sensitivity to ecological concerns or issues regarding race, it has decayed in others. This is true in every area. Just as the advent of processed foods has not been an unmixed blessing, so the easy availability of entertainment may not be good for us.

As a result, it is foolish prejudice to believe the ancients can teach us nothing. As Lewis argues in his Discarded Image and in a preface to When God Became Man, we are unlikely to imitate the mistakes of the past, but also likely to discover long lost virtues.

For example, the poetry of Dante demonstrates a sublime integration of every area of knowledge. His vision is a holistic one that has a place for every idea that he believed. His thoughts are presented in matchless poetry with precision and grace. His sublime art reminds us that not all change has been positive since his day.

This suggests that conservatism in thought is not a bad thing. Most intellectual changes are faddish and long forgotten. To quote a mentor, Sheldon Vanauken, “the up to date is forever dated.” Except in the cases of the staggering geniuses of history, such as Socrates, Shakespeare, Newton, or the Christ, progress is incremental. Such inspired genius is very rare.

Even the inspired genius owes something to his intellectual mother and father. Shakespeare is matchless, but he is not a theatrical Adam, parentless other than God. A Christian is, therefore, entitled to moderation in his view of intellectual progress. He is right to distrust announcements that, at long last, he must abandon the faith of his fathers or that all opposition has been refuted for all time!

He is a prudent, or moderate, man. The prudent man avoids intellectual defect and excess.

Moderation has become almost a dirty word amongst academics and Christians, because too many people confuse moderation with a lack of zeal or commitment. Many people who claim to be moderate are simply confused intellectually or being disingenuous about their views in order to keep the peace.

Aristotle would urge sanity in our pursuit of our goals. For the Philosopher, moderation meant avoiding excess or defect in areas not inherently base. (Nicomachean Ethics). He would remind us not to confuse moderation with a call to reactionary repetition of what has worked in the past.

...

The great success of one science may tempt us to adopt its standards of intellectual achievement in all areas. The literature professor is tempted to find “new things” in Shakespeare, instead of passing on the beauty recognized of old. The arts can teach us much about what it is to be human, but the methods and means of the arts are not the same as those of science.

Some believe that they should merely adopt the view of reality that makes the most sense of the matter and energy in it. They act as if there is some virtue in believing an uncomfortable truth, just because it is uncomfortable. Instead, I would suggest that we are well within our rights to root intellectually for the good and the beautiful as long as we reasonably can. Any other view does not take all of our humanity into account.

Christian educators, scientists, and theologians must never forget the importance of beauty and of the types of reason that persuade the human heart. Joy is a good reason to adopt a view, after all. A view of the world that leaves us cold hearted and full of toxic doubts and fears is no more desirable than one that ignores the mind.

I believe that God often speaks to our hearts through beauty and confirms His Word through intense religious experience. Ideally, this conforms to our best theories about the data that our senses deliver to us about the external world in one authentic and consistent whole. One thing confirms another in a bi-conditional relationship.

Sadly, in a fallen world things are never so simple.

It should be obvious to all Christians that Pascal was right. We do have to take the heart into account. I would suggest that just as elegance is valued in a proof of logic, so we can and should value beauty and elegance in a general view of reality.

Science itself is not the problem, of course. For all its problems, it has been more a blessing than a curse to mankind. The misuse of science or of the speculations of science is a problem, but one must not go too far in condemnation.

The chief problem with science for the Christian is being confronted by those who care less about what science actually discovers or does and more about establishing an official hegemony of philosophical naturalism over the sciences. The sociological pressures that put religious people constantly on the defensive have the intended effect of stamping out faith in many and weakening it in others.

All of us who are the products of the system must acknowledge this danger to our souls.

Many otherwise thoughtful people, some of them Christians, have forgotten that there is knowledge to be gained in fields outside of science. No amount of discovery by science of what is will ever give them the ability to declare what ought to be. On this David Hume was right, but theists seem to be the only ones who have learned the Scottish skeptic’s lesson.

A second, related problem comes from those who are afraid of science and so reject the Western tradition of reason and investigation altogether to avoid the excesses of scientism. Some love the lessons of the heart so well that they ignore the realities described by the mind. This way lies the post-modern madness that infects so many humanities departments. Many Christian college faculties behave as if they must choose between the modern temptation to merely ape the sciences and the post-modern snare of ignoring them.

...

An earlier and better film, Metropolis by Fritz Lang, gave a different and more Christian answer. This silent film argues that humanity cannot function without heeding the folk wisdom of the heart and the scientific knowledge of the head. Lang pounds home the point using apocalyptic and Biblical imagery about the destruction of a city that ignores either.

It is an absence of beauty that most harms the common residents of the Metropolis. Most of them have been reduced to serving the state as cogs in its machine. While capturing perfectly the ugliness of the totalitarian German state to come, Lang missed the banal ugliness of hedonistic consumerism that dominates much of Western popular culture. If something provides short-term pleasure, then many no longer care that it is ugly.

Such folk not only heed their hearts, they learn to worship them. Perversely this only strengthens the hand of those who wish to ignore the wisdom of the poet and artist. Why listen to the artist when what he produces is kitsch or the artistic chaos that dominates too many modern art galleries?

In reaction, many engineers have created roads, buildings, and even entire cities that lack beauty. It is no accident, I think, that the extreme secularists of our own time, like P.Z. Myers, are so often insensitive to human things . . . to the symbols that give our lives meaning.

One problem feeds the other. The ugliness of the so-called “scientific man,” who lacks poetry, repulses the artist, and the consumerism of the artist produces no real beauty to attract the scientific man.

...

The popular culture is addicted to subjectivity. As much as possible, people are less interested in what is true, good, or beautiful than in what they wish was true, good, and beautiful. Partly this is a result of a consumer driven culture. Companies spend a fortune telling people what they wish was true about themselves and not what is true.

Christians easily recognize the harm of reducing morality and truth to mere subjectivity. We have not been as good at recognizing the importance of beauty. In his Abolition of Man, C.S. Lewis points out that this change has not come about so much as a result of argument, but because of a kind of intellectual propaganda in our schools. When I was in school, I received a handout where I was to identify all statements about beauty as matters of mere opinion.

The nearly universal American belief that “beauty is in the eye of the beholder” has had a profound impact on our education. Fine arts classes are too often relegated to being the “frosting on the cake,” the first thing to be cut. Since musical and artistic preferences are merely in the mind of the teacher, there is no strong reason to pay someone to pass their eccentric preferences on to the next generation. Artists and musicians are reduced to entertainers.

Much is lost as a result.

I have elsewhere argued for the reality of beauty as an idea in the mind of God and for its importance. If we assume, with most Christians at most places and times, that God has opinions about beauty to which we should conform, then our educational systems will change.

We will not teach our students in ugly rooms. We will value beauty, as the traditional Christian university did, as a good sign. Beauty cannot be divorced from truth and goodness without stunting the whole. Beauty has something to teach our hearts and through our hearts our selves.

In my own life, and in the life of any man or woman who has been “born again,” one intense and beautiful thing for which his view of reality must account is religious experience. Too often religious experience, and the internal witness of the Holy Spirit, has been ignored in our thinking about science, religion, and ethics.

William James in The Varieties of Religious Experience carefully describes the nature, limitations, and importance of religious experience. At the very least, James argues that religious experience suggests that the materialists are wrong. There is more than matter and energy in the cosmos.

Religious experience is of limited use in an apologetic, but of great use in living one’s life. Nobody can crawl inside my mind and know the quality and intensity of my religious experience. No human can know how clear and compelling certain experiences are. There are simply experiences so real, so intense, that the burden of proof is always going to be, for the person having the experience, on the skeptic.

Since the skeptic cannot share the experience, he or she cannot be persuaded by it. Of course, one’s interpretation of the experience may be wrong. One encounters the Divine and then tells the most likely story one can about what has happened. My Christian experience, with all its intensity, does not prove that Christianity is true, but it suggests that something is up!

Even more compelling is the personal relationship with Jesus Christ that most of us in the room have. In the words of the grand old hymn, we “walk with him and we talk with him.” This might be wrong, but I am going to need a great deal of persuasion to accept that it is all in my head.

This experience is so similar at its best in so many ways to my discourse with other persons, human persons, that arguments against it run the risk of proving too much and inducing a belief in solipsism! If some make too much of their personal relationship with Jesus, thinking that their mere testimony should persuade everyone, intellectuals in the church may risk thinking it does too little.

Materialists often claim that the problem with human spiritual experience is that they can “explain it away” on materialist grounds. Even if this dubious claim is true, it is not surprising. It is very hard to refute a simple idea such as materialism as an examination of its opposite, idealism, shows.

Our primary experience is, after all, mental. What if we were to demand that the materialist prove the necessity of matter and energy as an explanation?

Oddly idealism of the sort advocated by Berkeley is widely rejected in our culture while materialism is widely accepted. I find it odd, because if one wishes to reduce everything to one type of thing, what the Pre-Socratic philosophers called an “arche,” ideas seem a better candidate. Descartes after all was at least arguably right when he claimed that it was impossible to deny one’s own existence.

Both idealism and materialism face serious philosophical challenges. However, there is a bias toward taking materialism seriously, as if we already know it is true, that suggests something important about the intellectual culture. There may indeed be a prejudice against any answer that seems to open the door to the possibility of Divinity and to Divine agency.

Reality is, in all probability, more complex than can be captured by simple theories such as idealism or materialism. However, for the Christian it is surely reasonable to give at least some priority to his intense religious experiences. They are frequent, comforting, and work toward giving him a worthwhile and happy (in the Aristotelian sense) life. They also account in a satisfying manner for the existence of the seemingly miraculous events that so many believers experience.

For a Christian our religious experience is a seminal point for our thinking about the world.

Belief in miracles is ubiquitous in the West, but many Christian academics are shamefaced about their commitment. If God has acted out of His personal will, then it means that not all of the past will be explainable without the knowledge of that Divine will. It might even present an insuperable barrier to understanding exactly how past events occurred. If God used divine power to open the Red Sea, then no scientific account that relies only on natural forces will ever be able to account for that fact of history. There will be a temptation to deny the fact, if only to preserve the theoretical completeness of our knowledge.

My colleague J.P. Moreland in his Kingdom Triangle has argued against this “thin view” of reality that is closed to the supernatural. He defends the possibility of a reasonable person using the miraculous and the prophetic in his daily life. The reaction of part of the Christian community was illustrative of the point.

Evidently, many fear that allowing for the possibility of miracles in the present world will automatically lead to attributing everything (or at least far too many things) to Divine agency. However, this is surely an overreaction. Materialistic explanations are good, so far as they go, and I am not about to abandon them because some have turned them into a basis for materialism. In the same way, Divine agency, which is after all merely a form of the personal agency that forms the basis of many causal explanations, can be abused, but need not be abandoned.

In historic events such as the Flood, Christians may have to be content with showing that there is evidence outside of the pages of Sacred Scripture that it took place, but never being able to give a fully naturalistic account of how it took place. What is needed is a good means of distinguishing acts caused by an agent from those not caused by an agent. How could we, at least in principle, tell if an event had been done, in the distant past, by an agent (even one other than God) as opposed to natural causes?

Must a theist always prefer any naturalistic answer (however improbable) to any appeal to divine agency (however likely given his other beliefs)? I think not, unless we believe that theism, and Christianity with it, are so much on the intellectual defensive that we cannot take any of our religious beliefs as true on the basis of other religious beliefs for which we have stronger external evidence.

This will weaken our apologetic about the truth of Biblical events such as the Flood to nonbelievers only if we have accepted the dubious notion that we can convince a man to be a theist only by showing him that everything God did can be explained without recourse to His power!

However the Christian organizes his ideas, his framework must provide ample opportunity for the Divine to provide information and insight. In a culture that is rightly impressed with a certain kind of information (the factual or scientific), it is easy to underestimate the importance of another kind of aid to rational thinking: the motivating framework. We are in need of an epistemology that does not always put us on the defensive.

This is especially true when it comes to thinking about science and Christianity. Too often the Bible is treated as a story, but science is treated as the truth. It is not post-modern to suggest that all human attempts to deal with reality are a form of story telling; indeed it is precisely and gloriously pre-modern! This does not mean, however, that just any old story will do.

Though not right about everything, Plato expressed a proper understanding of what science does when he said:

“If we can furnish accounts no less likely than any other we must be content, remembering that I who speak and you my judges are only human, and consequently it is fitting that we should, in these matters, accept the likely story and look for nothing further.”

We are too often captives of an epistemology of skepticism that can never fit well with our faith.

This year I am determined to listen more, be more prudent, and try to learn from science, literature, and theology the truth, goodness, and beauty that can set me free to see the face of God.

Friday, July 25, 2008

Hollywood’s Hero Deficit

By James Bowman
The American Magazine

The movie industry no longer aspires to portray genuine heroism—even though that’s precisely what audiences want to see.

A spate of movies about the wars in Iraq and Afghanistan and the war on terror came out last year, all of them hostile to U.S. involvement and all of them box-office flops. At the time there was a certain amount of soul-searching in the media as to why, when most Americans told pollsters they thought the Iraq war, at least, had been a mistake, they didn’t seem to want to go and see movies that sought to show them just how great a mistake it had been. The New York Times critic A.O. Scott cited what he called “the economically convenient idea that people go to the movies to escape the problems of the world rather than to confront them,” but acknowledged the possibility that America’s opposition to the war “finds its truest expression in the wish that the whole thing would just go away, rather than in an appetite for critical films.”

Without denying that insight, I would like to propose another explanation: American movies have forgotten how to portray heroism, while a large part of their disappearing audience still wants to see celluloid heroes. I mean real heroes, unqualified heroes, not those who have dominated American cinema over the past 30 years and who can be classified as one of three types: the whistle-blower hero, the victim hero, and the cartoon or superhero. The heroes of most of last year’s flopperoos belonged to one of the first two types, although, according to Scott, the only one that made any money, “The Kingdom,” starred “a team of superheroes” on the loose in Saudi Arabia. What kind of box office might have been done by a movie that offered up a real hero?

There’s no way of telling, because there haven’t been any real movie heroes for a generation. This fact has been disguised from us partly because of the popularity of the superhero but also because Hollywood has continued to make war movies and Westerns, the biggest generators of movie heroism, that are superficially similar to those of the past but different in ways that are undetectable to their mostly young audiences, who have no memory of anything else. In an otherwise excellent article in Vanity Fair about “chick-flicks,” James Wolcott recently wrote that, like the chick-flick, “the Western is also a genre that’s often pronounced dead and buried only to be dug up again and propped against the barn door—witness 2007’s ‘3:10 to Yuma,’ ‘The Assassination of Jesse James by the Coward Robert Ford,’ and ‘No Country for Old Men.’”

Wolcott is far from being the first to express such an opinion, but neither he nor anyone else appears to have noticed the principal way in which the movies he mentions differ from those of 50 years ago. None of them has anything like a real hero, though all three have charismatic villains, played by Russell Crowe, Brad Pitt, and Javier Bardem, respectively. The title tells us what to think of the would-be hero of “The Assassination of Jesse James,” played by Casey Affleck. He’s a creep, a stalker, and a traitor, as well as a coward. “No Country” has one really sympathetic character, the aging sheriff played by Tommy Lee Jones, who is as helpless against the bad guy as everyone else is. Next to the sexy and invincible serial killer, a kind of inverted superhero played by Bardem, he is reduced to being just another victim hero, maundering on about what a nasty old world it is.

But it is “3:10 to Yuma” that offers the most interesting contrast between the old-fashioned sort of Western and the new breed. It was a remake of a movie first made in 1957, directed by Delmer Daves and starring Glenn Ford and Van Heflin. Like so many other Westerns of the period, it was a parable of the heroism of the ordinary people who brought civilization, peace, and prosperity to the Wild West. Heflin’s character, Dan Evans, is a simple farmer in danger of losing his farm to drought who, for the $200 it would take to pay the mortgage, accepts the task of escorting Ford’s Ben Wade, a dangerous killer, to catch the eponymous train to trial. At a moment when it looks as if he is sure to die in the attempt, Evans explains to his wife that he is no longer escorting the prisoner for the money but as a civic duty. “The town drunk gave his life because he thought people should be able to live in peace and decency together,” he says. “Can I do less?”

Needless to say, there is no comparable line in the remake. The Dan Evans of 2007, played by Christian Bale, is an almost helpless victim, a Civil War veteran who lost his leg in a friendly-fire incident and whose motivation would remain merely mercenary but for the fact that, like us, he is meant to become rather fond of Crowe’s fascinating Wade—and vice versa. James Mangold, the director of the remake, has turned it into a meet-cute buddy picture. In the original, Evans stands four-square for due process and saves Wade from a vigilante. Ford’s Wade, having the rough sense of frontier honor of old-fashioned Western villains, repays the favor, even at the cost of having to make the train. He doesn’t like owing anything to anyone, he says. The remake ends with a general shootout in which it is unclear why anyone, especially Wade, does what he does. Poor Evans remains only a victim.

Both films are typical of their times. The 1957 version shows moral earnestness, an optimistic belief in civilized standards, and an unabashed portrayal of heroism. These things are lacking in its 2007 counterpart. In this it is like the other two new Westerns, or the HBO series “Deadwood.” Its moral landscape is the war of each against all that we see on the lawless and violent streets of “American Gangster” or other films with a contemporary setting. The Wild West has been resurrected not as a story of taming the wilderness, both external and internal, on behalf of decency and civilization, but as a convenient synecdoche for that dark, amoral, and timelessly violent world that all art worthy of the name today must presuppose. Where there is no hope of a better world, there can be little to distinguish heroes from villains.

That’s why the American movie hero—who once so impressed the world that he personified heroism for people far beyond our borders—has been missing in action for decades. From the days of Tom Mix and other silent-screen cowboys up until the 1970s, America’s heroes were the world’s heroes. During and after World War II, real-life heroes themselves often looked to the likes of John Wayne or Gary Cooper to see what a hero was supposed to look and act like. Such men hardly exist anymore, except in old movies. In the early 1970s, there were many paranoiac films influenced by the popular take on Vietnam and Watergate. In Alan J. Pakula’s “The Parallax View” (1974), Sydney Pollack’s “Three Days of the Condor” (1975), or Peter Hyams’s “Capricorn One” (1978), not to mention Pakula’s “All the President’s Men” (1976), government, corporate, and civic leaders are bad guys, while heroism, now the province of lawyers or journalists rather than soldiers or cowboys, can only hope to unmask them.

Here was the origin of the whistle-blower hero who, however noble in other ways, can’t help being a rat, a betrayer of friends and colleagues, and self-righteous in proportion, which would seem to limit his appeal. Yet down the years from “Norma Rae” (1979) through “The Insider” (1999), “Erin Brockovich” (2000), the “Bourne” trilogy, and last year’s “Michael Clayton,” behind every whistle-blower hero has been the assumption that the public realm is inescapably corrupt. Once populated by heroes whose job it was to tangle with and triumph over the villains, the institutions that support the community have now been abandoned to the villains. The hero stands alone against corruption so massive that he cannot hope to do anything more than expose it, not end it. This makes him, also, a victim hero. He may also, like Jason Bourne, morph into a superhero and so hit the post-heroic heroism trifecta.

The vogue of the superhero dates to the late ’70s and early ’80s when, in the “Star Wars” and “Indiana Jones” movies—the latest of which, starring a geriatric Harrison Ford, came out this spring—the movie hero paid the price of his continued existence in Hollywood by living out his cinematic existence in a galaxy far, far away. Like Superman, whose first feature-film incarnation was in 1978, these heroes were unashamed of their cartoon origins and, therefore, their detachment from reality. Often muscle-men, like Arnold Schwarzenegger or Sylvester Stallone, they even looked unreal. Wayne and Cooper had, of course, been imposing physical presences on and off screen, but no one would have mistaken either of them for bulgy, oiled-up Mr. Americas. Nowadays, even so traditional a heroic story as that of Thermopylae finds its translation (in “300”) into contemporary terms as beefcake.

The doomed Spartans were also examples of the victim hero who was the staple of the Vietnam War films, beginning with “Coming Home” and “The Deer Hunter” and continuing through “Apocalypse Now,” “Platoon,” “Full Metal Jacket,” “Hamburger Hill,” “Born on the Fourth of July,” and others. Like the superhero, the victim hero did not invite emulation—though hints of some nameless hidden trauma, sometimes self-inflicted like drug or alcohol addiction, were among the hallmarks of “cool” masculinity. Thus he might also overlap with the whistle-blower hero who, like Warren Beatty’s character in “The Parallax View,” was caught and destroyed by the forces of evil, or with the cartoon hero who suffered childhood trauma, as in “Batman Begins,” or undergoes torture, as in “Braveheart.”

The point of all three of the kinds of hero in which Hollywood has specialized over the last 35 years has been to make sure that heroism can continue to exist only on a plane far removed from the daily lives of the audience. It is hard not to speculate that this is because of a quasi-political aversion on the part of filmmakers to suggesting to the audience that real-life heroism was something to which it, too, could aspire. The subtext of films featuring the whistle-blower hero, the cartoon hero, and the victim hero is that heroism—heroism of the, say, Gary Cooper type—belongs to the public and communal sphere, now universally supposed to be cruel and corrupt, and therefore is really no longer possible or even, perhaps, desirable.

That seems to have been the point of the great John Ford film of 1962, “The Man Who Shot Liberty Valance.” In it, John Wayne plays rancher Tom Doniphon in the Wild West town of Shinbone, which is still part of a territory not admitted to statehood and has nothing resembling a duly constituted authority beyond the comically feckless marshal played by Andy Devine. Shinbone is terrorized by an outlaw named Liberty Valance, played by the great Lee Marvin. An idealistic lawyer named Ransom Stoddard (James Stewart) comes to town to practice his profession only to find that there is no law there. In fact, he himself is robbed by Liberty on his way into town, yet he can find no one in town who thinks the robbery is any business of theirs, or that it is even possible for this outlaw to be brought to justice. The law is helpless where there is no law enforcement. As Doniphon advises the newcomer, “Out here men take care of their own problems.”

Where there is no hope of a better world, there can be little to distinguish heroes from villains.

Doniphon is the only man in town capable of standing up to Liberty, but as he himself hasn’t been robbed he doesn’t quite see why the robbery of anyone else, let alone of this geeky stranger, should be any business of his. Eventually, the idea of a larger civic responsibility begins to sink in—and, with it, a sense that it has become incumbent on him to do what no one else can do. Yet it can only be done outside the law, which remains powerless. This puts Doniphon and Liberty (the name is of course significant) on the same side. Both are outlaws whose would-be heroic struggle has no place in a civilized community. When Doniphon triumphs, a way must be found for the townspeople to pretend that it is the law which has rid them of the depredations of Liberty and his gang, and a way duly is found. Stoddard is hailed as a hero and Doniphon, the real hero, is forgotten.

Ford’s film was a parable less of the coming of civilization to the West than of the cultural transformation that was taking place in the postwar period in America and elsewhere—a transformation which was an early but unmistakable foreshadowing of the death of the hero in the 1970s. The heroes who had won the Second World War commonly didn’t want to be heroes. They wanted to believe that they had been fighting for “a better world” (as it was so often formulated), by which they meant, among other things, a world that would have no need of heroes. The idea went back to Woodrow Wilson’s characterization of World War I as “a war to end all wars,” and this became the enduring dream behind the League of Nations and, after the setback of World War II, its successor body, the United Nations. War had become a shameful thing simply as such and irrespective of the justice of the cause in which it was waged or the net humanitarian good it might accomplish.

‘3:10 to Yuma’ offers the most interesting contrast between the old-fashioned sort of Western and the new breed.

As a result of this increasingly influential cultural attitude, the movie hero was already beginning to become a more and more ambiguous figure in the immediate postwar period. The kind of clean-living, pious hero portrayed by Cooper in the pre-war “Sergeant York” (1941)—which celebrated an American hero of the First World War—gave way to the isolated and magnificent but dubious postwar figure of Cooper’s Marshal Will Kane in “High Noon.” The heroics of Sergeant York were seen as having been performed on behalf of a community and a nation—two-thirds of the film is spent introducing us to his hometown of Pall Mall, Tennessee—which are as properly grateful to him as he is devoted to them. Kane’s deeds are performed in spite of and in opposition to the will of the community he serves and more to satisfy a personal standard of honor than a sense of duty to such a pack of ingrates. The film ends with his dropping his badge in the dust and leaving town for good.

Similarly, John Wayne’s Ringo Kid in Ford’s “Stagecoach” of 1939 may be a convict, but he wins our hearts not only by being handy with a gun but also by his willingness to form an ad hoc community with his fellow passengers when they are attacked by Indians and by his broad-mindedness and chivalry toward a “fallen” woman. But in such postwar roles as Tom Dunson in “Red River” (1948), Sergeant John Stryker in “Sands of Iwo Jima” (1949), or Ethan Edwards in “The Searchers” (1956), Wayne was portrayed as a lonely and isolated figure, living by a personal code, like Kane, but also like him in being more or less mistrusted and excluded from the community of those on whose behalf his heroic deeds are performed. In “Sands of Iwo Jima,” Wayne’s attachment to a pre-war idea of what it meant to be a U.S. Marine even suggested that, in spite of the film’s admiration for his heroism and leadership, it finally saw him as a throwback who could have no place in the postwar world.

The greatest of the postwar contributions to the eventual decline and fall of the American movie hero came from what the French called the films noirs of the 1940s and 1950s. The noir hero was a prototype for all three of the heroes who have dominated American movies since the 1970s. Alone and without roots in any community, he lived in an urban twilight where few if any people could be trusted. Often a criminal himself, he had the real job of exposing a larger corruption and criminality than his own, and of suffering from it. In his most perfect incarnation, a private eye such as Sam Spade or Philip Marlowe—both played by Humphrey Bogart in “The Maltese Falcon” (1941) and “The Big Sleep” (1946)—he even had something like super-heroic powers. The noir film didn’t survive its period, however, and to my eye the many attempts to revive it since “Chinatown” (1974) have all failed.

The reason, I think, was that in the noir pictures there was always a sense—enforced to some extent by the Hays Code that aimed to uphold high moral standards and was still in force at the time—that, however hated and resented the moral order enforced by the social and political powers-that-be might be, it was still a genuine moral order and not just the greed, viciousness, and violence of those who happened to hold power. Though the antihero whose flowering we have seen in our own time was already there in embryo, the noir picture still left open the possibility of goodness and decency, not just on the part of individuals but of a community. That’s what it took for Dan Evans in the 1957 version of “3:10 to Yuma” to be a hero: the idea that his courage was for the sake of a belief that “people should be able to live in peace and decency together.” Without this belief in a community where power is not antithetical to the good and the decent but the means of its advancement, neither the war films nor the Westerns of our own time will ever be able to give us any but a debased sort of heroism.

Thursday, July 24, 2008

An Absentee God?

By Dinesh D'Souza
Townhall.com

In my debate with atheist Christopher Hitchens in New York last October he raised a point that I did not know how to answer. So I employed an old debating strategy: I ignored it and answered other issues. But Hitchens' argument bothered me.

Here's what Hitchens said. Homo sapiens has been on the planet for a long time, let's say 100,000 years. Apparently for 95,000 years God sat idly by, watching and perhaps enjoying man's horrible condition. After all, cave-man's plight was a miserable one: infant mortality, brutal massacres, horrible toothaches, and an early death. Evidently God didn't really care.

Then, a few thousand years ago, God said, "It's time to get involved." Even so God did not intervene in one of the civilized parts of the world. He didn't bother with China or Egypt or India. Rather, he decided to get his message to a group of nomadic people in the middle of nowhere. It took another thousand years or more for this message to get to places like India and China.

Here is the thrust of Hitchens' point: God seems to have been napping for 98 percent of human history, finally getting his act together only for the most recent 2 percent? What kind of a bizarre God acts like this?

I'm going to answer this argument in two ways. First, I'm going to show that Hitchens has his math precisely inverted. Second, I'll reveal how Hitchens' argument backfires completely on atheism. For my first argument I'm indebted to Erik Kreps of the Survey Research Center of the University of Michigan's Institute for Social Research.

An adept numbers guy, Kreps notes that it is not the number of years but the levels of human population that are the issue here. The Population Reference Bureau estimates that the number of people who have ever been born is approximately 105 billion. Of this number, about 2 percent were born in the 100,000 years before Christ came to earth.

"So in a sense," Kreps notes, "God's timing couldn't have been more perfect. If He'd come earlier in human history, how reliable would the records of his relationship with man be? But He showed up just before the exponential explosion in the world's population, so even though 98 percent of humanity's timeline had passed, only 2 percent of humanity had previously been born, so 98 percent of us have walked the earth since the Redemption."

I have to agree with Kreps's conclusion: "Sorry Hitchens." But actually Hitchens’ plight is worse than this. As I pointed out in a recent three-way debate with Hitchens and radio host Dennis Prager, Hitchens’ argument poses a far bigger problem for atheism than it does for theism.

To see why this is so, let’s apply an entirely secular analysis and go with Hitchens' premise that there is no God and man is an evolved primate. Well, man's basic frame and brain size haven't changed throughout his terrestrial existence. So here is the problem. Homo sapiens has been on the planet for 100,000 years, but apparently for 95,000 of those years he accomplished virtually nothing. No real art, no writing, no inventions, no culture, no civilization.

How is this possible? Were our ancestors, otherwise physically and mentally indistinguishable from us, such blithering idiots that they couldn't figure out anything other than the arts of primitive warfare?

Then, a few thousand years ago, everything changes. Suddenly savage man gives way to historical man. Suddenly the naked ape gets his act together. We see civilizations sprouting in Egypt, Mesopotamia, India, China, and elsewhere. Suddenly there are wheels and agriculture and art and culture. Soon we have dramatic plays and philosophy and an explosion of inventions and novel forms of government and social organization.

So how did Homo sapiens, heretofore such a slacker, suddenly get so smart? Scholars have made strenuous efforts to explain this, but no one has offered a persuasive account. If we compare man's trajectory on earth to an airplane, we see a long, long stretch of the airplane faltering on the ground, and then suddenly, a few thousand years ago, takeoff!

Well, there is one obvious way to account for this historical miracle. It seems as if some transcendent being or force reached down and breathed some kind of a spirit or soul into man, because after accomplishing virtually nothing for 98 percent of our existence, we have in the past 2 percent of human history produced everything from the pyramids to Proust, from Socrates to computer software.

So, paradoxically, Hitchens' argument becomes a boomerang. Hitchens has raised a problem that atheism cannot easily explain and one that seems better accounted for by the biblical account of creation.

Let the Navy Pray

By John Mark Reynolds
Scriptorium Daily

Nothing makes an ideologue madder than actual people.

People have the obstinate desire to live their own lives, refusing to fit into the neat little patterns of the ideologue. The rest of us know, as Aristotle taught, that all human institutions have to be left a bit messy at the edges or they become unbearable. One size does not fit all and for any society to work, especially a diverse society, there has to be room for different practices and perspectives.

Practical people understand that different institutions in our society need different rules to work. Ideologues never understand this. They have a fever for conformity and long to banish ancient and gentle customs so that all of society can be fitted into their dogmas.

The Founders were not this way. The genius of our federal system is that it allows different public institutions to be different. Utah is not forced to have all the same laws as California. Houston need not have the same civil practices as Santa Monica. Americans have always preferred messy compromise to the inhuman perfection of ideological absolutism.

This is especially true regarding religion in public life and always has been. The vast majority of Americans know that religion is important, that it gives valuable knowledge about how to live a good life, and that it has been part of most of the great moments in American history.

In a crisis most Americans want to pray, like it when their leaders pray with them, and don't like the government telling them they cannot. Americans are so thankful to God for His goodness that we established a national holiday to give thanks. The public celebration of Thanksgiving each November has not, so far as I can tell, led to a theocracy.

Americans reject theocracy for good reasons. Theocratic ideologues wish to simplify the complex problem of religion in a pluralistic society by making everything about religion and requiring everyone to practice the ideologues' own religious beliefs.

Americans also reject the simplistic idea that a minority of adult citizens who do not pray are deeply harmed when the majority engage in public prayer. Public prayer is, after all, not new to our nation. Congress has been doing it for some time without imperiling the rights of secularists.

While the rights of a minority group must be respected (in terms of allowing them their own space), there is no reason for the majority to bow to the tyranny of the minority at every moment. In terms of religion, this means that people should never be forced to practice a religion, but the majority should also be allowed to bring their faith and the practice of their faith into the public square.

This has long been the tradition of our nation. Presidents have placed their hands on the Bible as they are sworn into office. The Constitution was signed “in the year of our Lord.” Washington D.C. is full of Biblical and other religious references on official buildings. One cannot read the majestic Second Inaugural Address of Abraham Lincoln, carved in stone on his government memorial, without being awash in Biblical and Christian imagery and argument.

Americans have never wanted an official religion, but they want and allow their officials to reflect the general religious beliefs of the majority. Coercion is out, but so is a public square stripped of every religious reference. From prayer at the inauguration of the president to the chaplain in Congress, such public religious practice is not unusual or weird. Every piece of American money, after all, says "In God We Trust" without bringing on the dystopia of The Handmaid's Tale.

Most Americans are happy when a minister from a religious tradition different from their own offers thanks at a meal or opens a government or political ceremony. There will be such prayers said at both party conventions this summer.

It is a profound reflection of the character of the American people and part of the messy (but workable!) compromise that has allowed an overwhelmingly Christian people to make space for religious minorities and the nonreligious without forcing anyone to pretend to be something they are not.

Now the ideologues of the ACLU have decided to tamper with tradition and with the culture of the fighting men and women of our nation. It is evidently of grave concern to them, in a nation presently at war, that official prayers are said at some meals. That this has been done since 1845 without destroying our liberty is of no concern to the ideologues of the ACLU.

It does not fit their Utopian ideology of how everything must be, so it must go. Consistency to public secularism is the hobgoblin of their little minds. Too much "official religion" is dangerous, but the ACLU shows its ideological extremism in seeing danger in old and workable public accommodations to the overwhelmingly religious desires of Americans.

Like all ideologues, they care nothing for history or tradition and have no sense of proportion. Every public act must fit their cherished scheme. They are theocrats in reverse, and, just as with the theocrats, the pursuit of their ideas of perfection threatens to unravel the careful compromises that make our culture work.

The ACLU would apply to a service academy the same rules it applies to an elementary school. The military, an institution that deals with imminent peril and death daily, is not just like any other institution in our society.

Our Armed Forces have chaplains, because fighters from a very religious nation like America need and want them. The Armed Forces have always prayed, because we are a praying nation and men who fight are uniquely interested in speaking to the Deity. Secularists don’t agree that this matters, but then there are not enough secularists in this nation to defend it.

Unlike school children, the men and women of the Armed Forces are mature. If their leaders choose to lead them in a general prayer of thanksgiving, as they have always done, then these future leaders are not likely to be swayed from deeply held unbelief, as courts have worried children might be by school prayer.

If you think these secularizing ideologues are reasonable, ask them this: Should we stop singing the Battle Hymn of the Republic at state events? This song claims that we see the "glory of the Lord" in the camps of our fighting men. Prayer at lunch is nothing compared to the frequent repetition of "Hallelujah!" This dangerous ditty also asserts that as Christ died to make men holy, we should die to make men free. When the Battle Hymn is performed at the next inauguration, at taxpayer expense, as it almost surely will be whether McCain or Obama wins, is a theocracy imminent?

Only ideologues are worried. Theocrats worry that there is never enough religion in government; radical secularists, that there is always too much.

Practical people know that we can accommodate a religious people without setting up a theocracy. Practical people are content to leave good things alone. Here's hoping the courts recognize the historic right of different areas of civic life to have different rules regarding the appropriate amount of public religious display.

Wednesday, July 23, 2008

Wedding Sermon: What God has Joined Together

By Fred Sanders
Scriptorium Daily

For Mark Makin and Carri Javier, July 18, 2008

Part I: A Thing

Dearly beloved, we are gathered together here today because we want to witness the creation of a new thing. This thing is a new family, this new household consisting of Mark and Carri, this Makin family, this couple, this one new reality, this thing which was not previously on the universal catalog of “things that exist,” but which now is. There used to be no such thing as this household, and now there is such a thing. We are here to welcome it into our common world of being.

Like everything that has existence, this new thing has its existence from God, who is the maker of all things, visible and invisible. God has not made it out of nothing, ex nihilo, as he made creation in the beginning. God has made this new Makin household out of already existing things, out of two people we already know and will continue to know, from two already existing families of origin which we know and will continue to know.

But God has made this new thing by bringing about unity, by making two into one. Our God is in the business of unity. All the big mysteries in Christianity are mysteries of unity. The mystery of the Trinity is how these three, Father, Son, and Spirit, are one God. The mystery of the incarnation is how the one Jesus Christ personally unifies the divine nature with human nature. The mystery of the atonement is how a holy God and sinful humanity can be reunited through the cross, and the mystery of sanctification is how God can stand to live with us so patiently in order to make us more like him.

And among the mysteries of creation is how God puts things together with a deep and abiding unity that constitutes them as real things rather than jumbles or aggregates of constituent parts. These two are now one. We can no longer know either of them without taking the other into account. Mark and Carri: No more standing alone on the B-I-B-L-E: from now on this family stands together alone on the B-I-B-L-E.

Mark and Carri, your oneness is not the oneness of a club or a team or an alumni association or a contractual obligation of partners; you have not become one by going shopping for a spouse at the mall of mating and adding one another to your rich consumer lifestyles; your unity is not temporary, partial, or open to later renegotiation. God is better at making things than that. Your unity is covenant unity: total, exclusive, permanent.

This family is a real thing, and God treats it like a thing: we know that God causes all things to work together for good for those who love him and are called to his purpose. Like all things, this one is something God can and will use to bring good into the world. It will be a channel of God’s grace, and everyone around this new family can experience the grace of God through it. Though it’s not a sacrament, this new thing is as sacramental as any means of grace that God uses in drawing us closer to him. Mark and Carri, you will come to know God through this oneness just as you have known him in Christ individually and personally.

Part II: Take Your Place

Like every other thing, this new thing can be abused. You can make it into an idol. You could set it up in a place of religious devotion and commit your life to it as if it were God. You may be tempted to find the very meaning of your lives in this relationship alone, to consider your lives and your souls as having meaning only for the sake of this human love. In doing so, you could each transform the other into a false god and worship each other jealously.

Don’t do that.

Worshiping each other is only one step removed from worshiping yourself. Worship of the beloved is a little less shabby and small-minded than self-worship, but in fact it’s just a sneaky way of being more completely self-centered than you could manage without help.

Your love is a thing but not the main thing. Marriage is a big deal, and I could spin out a systematic theology of marriage from the garden of Eden, when Adam met Eve, to the descent of the New Jerusalem made ready as a bride for her husband. We have just heard echoes of that story in the Scripture readings from Isaiah, Ephesians, and Revelation. Mark and Carri, you already know that story, and you know that your own love story, a thing worth celebrating, is tiny in comparison to God’s story.

God’s story is from everlasting to everlasting, and reaches from the heaven of heavens to the deepest part of the earth. From the sacred stillness of the life of the Trinity, in which God has no need of anything, and enjoys perfect, living fellowship of blessedness with his dear Son and the Holy Spirit, God has revealed his salvation, he has made his righteousness known in the sight of the nations, and has remembered his lovingkindness and faithfulness to the house of Israel. Through Jesus Christ his Son he has humbled himself in order to exalt and renew his ruined creatures. He became poor to make us rich. He suffered and died, and in so doing gave us freedom and life. All his ways are directed to dwelling among us and being our God, drawing us to walk before him and be his people.

You know that on that big map of reality, your own life together has its proper place. Take your place. Occupy together the location that God has prepared for you, and as his workmanship, walk into those good works that he has prepared for you.

Devote yourself to this task: The task of taking your place, which cannot be taken by anybody else. Nearly all the work you will do throughout your life, no matter how well you do it, will be work in which you are expendable and replaceable. Somebody else could get that degree, somebody else could teach that class, somebody else could perform that service. But in this marriage, you each take up a post in which you are irreplaceable.

Mark, Carri only gets one husband. She is a woman who deserves a very good husband. You are taking up the only space that can be occupied by the only husband Carri gets. Don’t just fill up that space, fulfill it. Be the husband Carri deserves. This is one of the few things you alone can do, one of the very few offices in which you cannot be replaced. Only you can do this. If you are not the poet who sings the praises of her thousand perfections, there will be no such poet, and her virtues will go unsung. If you are not the leader who brings her to see her own life in proper perspective, she will have no such leader, and will find her own way in solitude. Do what only you can do.

Carri, Mark only gets one wife. He is a man who deserves a very good wife. Only you will see behind the scenes all the things he does quietly and without fanfare, and only you can encourage him in the thousand virtues which need to be encouraged and nurtured if they are to flourish. If you do not keep goodness, truth, and beauty before his eyes every day, he will come to think of them as abstractions. If you do not make his home a haven and a garden and a banquet and a stronghold, he will wander homeless. Do what only you can do.

Mark and Carri have been charged to enter this estate "reverently, advisedly, soberly, and in the fear of God." We have assembled a large company of witnesses here because these young people don't know what they are getting themselves into. Of course they've been advised, counseled, and shepherded, and they have studied the abundance of role models they are blessed with. Relatively speaking, within the range of human possibilities, they are prepared. But what they are prepared for is to get in over their heads, and they only partly know what that means. Some of you gathered here know much more fully what these vows imply. Some of you have paid a high price to keep your vows, or a high price in failing to do so. Share the wisdom you have gained. Some of you are not married, but have keen insight into the challenges and opportunities before this new family. Share your wisdom and lend your support. You are all witnesses of these vows, so you can all hold Mark and Carri accountable to them. Good friends, you will see your opportunities to provide the much-needed, much-desired outside pressure, to hold their feet to the fire lest they lose their sensitivity and grow forgetful of what they have promised in the sight of God and this entire congregation.

Part III: Back to School

Mark and Carri, you have only recently graduated from college, and I bring you the news that here in the middle of summer, you are officially being sent back to school. The class is life together, and I say to you: School is now in session. Martin Luther said that marriage is a school of love. Today you start all over again with a new, advanced course of study, to learn to love each other better. When God creates ex nihilo, it happens instantly. But when God makes two into one, as he has with you, it develops at its own rate, according to its own laws. Becoming a unified thing does not happen instantly. Unity is something that you will learn, practice, and improve at over the course of your marriage. Becoming one means loving each other, sharing all things in common, taking each other into consideration always. Becoming one means declaring war on every kind of selfishness big or small, because there is no room for it in marriage. If God blesses your marriage with children, then the borders of this new thing will be extended to include them in the same love. There will be more to learn in a larger school.

So study. Study theology and your spouse. Study the love of God in Christ, because you need to know what covenant faithfulness looks like, and you know where to look to see that faithfulness. God is in the business of unity, and of covenant fidelity. You need a truckload of that faithful love dumped on your front lawn every morning, so you can bring little pails of it inside for each other. Carri, study Christ’s submission, and study Mark. Mark, study Christ’s love of the church, and study Carri.

Be friends. Be lovers. Learn Christ and learn each other. Receive grace from God and give grace to each other. You are disciples of Jesus Christ, so walk as he walked: have in yourselves the mind that was also in him. Your lives have been hidden with Christ in God; learn now a new thing: to hide your one life together in that same place.

Religion and the Age: George Weigel gives Christian answers to the West’s most pressing questions

By Bruce S. Thornton
City Journal

Against the Grain: Christianity and Democracy, War and Peace, by George Weigel (Crossroad, 352 pp., $24.95)

Few commentators these days recognize that the war against radical Islam is the latest battle in a 14-century-long spiritual conflict between two very different schools of belief about man’s relationship to God. Catholic theologian and nationally syndicated columnist George Weigel is a happy exception, and this alone makes him an invaluable source for anyone wanting to understand what this war is about. Unlike those who try to comprehend jihadist violence solely in materialist terms, Weigel focuses on the spiritual dimensions of the conflict: on the one side, an Islamic revival fired with certainty about the rectitude of its beliefs and their sanction by Allah; on the other, a West that, having driven God from the public square, is riddled with self-doubt and uncertainty about its own beliefs and political ideals, which the jihadists despise.

In his previous books, such as The Cube and the Cathedral and Faith, Reason, and the War Against Jihadism, Weigel has examined the spiritual crisis of the West. Against the Grain collects 12 essays from the last two decades that approach this problem from the standpoint of Catholic theology and social doctrine. The essays, Weigel writes, are “attempts to show how Catholic understandings of the human person and human society, human origins and human destiny—all of which derive from the basic Christian confession of faith—can shed light on controverted and urgent questions of public life.” Contrary to the dominant secularist myth that the “American project” banished Christianity from the discussion of political questions, Weigel shows in these elegantly written, meticulously argued pieces that the American political order is incomprehensible—and its problems unsolvable—without that Christian framework.

The first eight essays explore the limitations of democracy when understood in “functionalist or proceduralist terms.” Such a view, Weigel writes, sees freedom as a “matter of willfulness or choice” and relegates “questions of personal and public goods” to private life—a reductive and impoverished perspective contrary to that of the American Founders. In contrast, Weigel recognizes that democracy depends on addressing “questions of public moral culture and civil society” and on tending to “the institutions of civil society and their capacity to form genuine democrats.” Catholic social doctrine provides a robust tradition for addressing these questions, but our current fundamentalist secularism seeks to banish religion from what John Courtney Murray called “the public argument.” Weigel reminds us that we face two urgent challenges, one domestic, the other foreign: a “pragmatic utilitarianism” that banishes such questions to the private sphere, thus leaving them hostage to bureaucratic technicians and the vagaries of political interests; and “political Islamism,” which answers the same questions in ways inimical to the fundamental goal of the American political order.

Weigel examines the implications that restoring Christian—and more specifically Catholic—philosophical and theological perspectives to our political discourse would have in a host of areas, including foreign policy, globalization, the problems of the Third World, the role of faith in politics, abortion, bioethics, the promotion of human rights and democracy abroad, and many others. Weigel’s analysis of political freedom is particularly valuable, for the starting point of all other political disputes is our understanding of liberty.

Weigel’s essay “Two Ideas of Freedom” begins by critically examining Isaiah Berlin’s influential notion of “positive” and “negative” freedom: the former is the freedom “to,” which allows us to pursue some perceived greater good; the latter is freedom “from,” particularly from governmental intrusion into private life and interference in the individual’s pursuit of happiness. But Berlin fails to address “the crucial question,” Weigel writes, which is “the truth about man—the truth about the human person—on which any defense of human freedom with real traction must ultimately rest.” Thus Berlin’s notion of freedom reduces it “to a matter of one human faculty—the will—alone.”

Pointing out that Berlin’s analysis is rooted in Enlightenment philosophy and ignores earlier thinkers, Weigel revisits pre-Enlightenment thinking in his discussion of William of Ockham and Saint Thomas Aquinas. For Aquinas, freedom “is a means to human excellence, to human happiness, to the fulfillment of human destiny,” Weigel writes. Freedom helps us to “choose wisely and to act well as a matter of habit.” Only then can we pursue happiness suitable for a rational, moral creature and “build free and virtuous societies in which the rights of all are acknowledged, respected, and protected in law.”

In contrast to Aquinas, Berlin’s intellectual ancestor Ockham reduces freedom to “a neutral faculty of choice, and choice is everything—for choice is a matter of self-assertion, of power,” Weigel writes. Thus freedom has nothing to do with goodness, truth, or virtue. The moral life is now severed from human nature, and humans are severed from one another, “for there can be no ‘common good’ if there are only the particular goods of particular men and women who are each acting out their own particular willfulness.” Moreover, by putting reason into conflict with freedom, Ockham “created a situation in which there are only two options: determinisms of a biological, racial, or ideological sort, or the radical relativism” that eventually leads to nihilism. “In either case,” Weigel believes, “freedom self-destructs.”

Weigel traces the consequences of an Ockhamite understanding of freedom shorn from virtue and moral truth, or the “freedom of indifference” that dominates “much of Western high culture.” Advances in genetics and biotechnology entice us with the promise of human engineering for perfection and immortality, while cloning and stem-cell research destroy human embryos in the service of various ends. By ignoring Aquinas’s notion of “freedom for excellence” we are unlikely “to deploy our new genetic knowledge in ways that lead to human flourishing rather than to the soulless dystopia of the brave new world.” More immediately dangerous is moral relativism, which has been on display throughout the culture in response to the challenge of Islamic jihad; it is an outgrowth of the separation of freedom from moral truth. Meeting the Islamist challenge, Weigel writes, requires not the flabby tolerance or guilty self-loathing engendered by such moral relativism, but rather a patriotism that is the “expression of a nobler concept of freedom than mere willfulness.” For ultimately, “Homo Voluntatis cannot give an account of a freedom worth sacrificing, even dying, for.” Absent such patriotism, we will end up in the state of appeasement that Weigel documents in his essay “Is Europe Dying?,” a brilliant survey of a culture that can no longer reproduce itself or act against Islam’s “aggressive anti-humanism fueled by a distorted theism.”

Three Weigel essays explore how best to conduct the war against jihadism in the context of the “just war” tradition in Christian theology, which he describes as “a sustained and disciplined intellectual attempt to relate the morally legitimate use of proportionate and discriminate military force to morally worthy political ends.” Contrary to what we have heard from many Christian leaders, the just war tradition does not begin with a “presumption against war.” Instead, the tradition begins “by defining the moral responsibilities of governments, continues with the definition of morally appropriate political ends, and only then takes up the question of means.” In other words, war can be a moral instrument, one amenable to rational discussion and “subject to moral scrutiny.” And that scrutiny reveals that one should start with the ius ad bellum—the reasons for going to war—and then proceed to the ius in bello, which addresses issues of proportionality and discrimination. To reverse the order of questions—as do many pacifists—is to build a priori obstacles to just war, which can in turn have dangerous consequences for “the legitimate sovereign’s moral obligation to defend and promote right order.”

In two essays, Weigel applies this proper understanding of the just war tradition to the war in Iraq. He concludes that, on the terms of this tradition, the war is indeed just, no matter what errors in strategy and tactics have been committed. Along the way, he gives a concise and informative justification for the war, answering the cavils of those who continue to argue that it was unjust or unnecessary. The three just war requirements—“competent authority, just cause, and last resort”—were met, he argues, in the decision to invade. Weigel dismantles the notion that the United Nations, rather than the United States, was the only “competent authority” for deciding on the rectitude of the war by documenting that institution’s sorry record of corruption and incompetence. He also points out that the UN is not a sovereign body but a collection of sovereign states—a body that has no monopoly on legitimate force, and in fact possesses little credible force at all. Weigel’s catalog of Saddam Hussein’s crimes, lies, and duplicity in the 12 years after the Gulf War should leave no doubt that the U.S. had “just cause.” Finally, the public-relations nightmare of 12 years of sanctions that enriched Hussein as they increased the suffering of the Iraqi people; the relentless efforts of China, France, and Russia to dismantle the sanctions altogether; and the failure of UN weapons inspectors to determine definitively whether or not Hussein possessed WMDs, all left war as the “last resort.” Whatever one’s opinion about the conduct of the war, Weigel makes clear that it is justifiable by the criteria of the just war tradition.

Weigel’s learned, clearly written, and tightly argued essays stand as the best evidence for his claim that the Christian tradition is indispensable for any serious discussion of the challenges facing our country. In contrast to the materialist determinism or secularist scientism dominating our public discourse, Weigel himself exemplifies what he describes as the “Christian realist sensibility—an understanding of the inevitable irony, pathos, and tragedy of history; alertness to unintended consequences; a robust skepticism about schemes of human perfection (especially when politics is the instrument of salvation); [and] cherishing democracy without worshipping it.” These habits of mind will be sorely needed in the coming years.

Tuesday, July 22, 2008

Quid Pro Quo: "Here's Your Money," Say Today's College Students, "Now Give Us Our Degrees."

By Marcia Segelstein
Salvo Magazine

Perhaps the defining event in the history of modern higher education was the passage of the Servicemen’s Readjustment Act of 1944, commonly known as the GI Bill of Rights. Enacted in large part as a reaction to what was widely viewed as the poor treatment of World War I vets and their subsequent suffering during the Great Depression, the GI Bill offered veterans a variety of assistance programs. Among them was the payment of tuition up to $500 per school year, along with the right to receive a monthly living allowance while in school.

In the end, it didn’t matter that the bill almost didn’t pass. One tie-breaking vote opened the door of higher education to hundreds of thousands of returning veterans. Many men who would otherwise have flooded the job market eagerly seized what was for them the opportunity of a lifetime: a college education.

Before World War II, college was a largely unattainable goal for average Americans. Only ten percent attended college before the war. It was, in fact, primarily a luxury of the upper-middle classes, along with those few who could obtain scholarships. Two years before the war, approximately 160,000 Americans were enrolled in college. By 1950, the number was almost 500,000.

Such numbers reflect an enthusiasm for higher education on the part of those former soldiers, who would ordinarily have never seen the inside of a college classroom. By 1952, veterans accounted for 49 percent of all college students. Altogether, it is estimated that 2.2 million vets took advantage of the opportunity presented by the GI Bill and enrolled in college. It was a revolution of sorts, a transformation of higher education, to say nothing of the transformation of the participants’ lives. With college degrees in hand and, more importantly, the education that those degrees represented, countless veterans went from being poor blue-collar workers to middle-class white-collar workers.

Tom Brokaw writes about this phenomenon in his book The Greatest Generation:

Campus classrooms . . . were overflowing with young men in their mid-twenties, many of whom had never expected to get a college education. They left those campuses with degrees and a determination to make up for lost time. They were a new kind of army now, moving onto the landscapes of industry, science, art, public policy, all the fields of American life, bringing to them the same passion and discipline that had served them so well during the war.

GI Joes were transformed into Average Joes who were anything but average. Having helped save the world from a terrifying and tenacious enemy, they were now well-educated homeowners, breadwinners, fathers, and stalwarts of a newly growing middle class. And while this new middle class was busy working and raising what would become the Baby-Boom generation, there was another revolution of sorts taking place on college campuses—the fall of the Protestant establishment in America and the reverberations that followed.

David Brooks, in his book BOBOS in Paradise (“BOBOS” being “bourgeois bohemians”), paints a vivid picture of this establishment and how it controlled higher education:

In the 1920s, sensing a threat to the “character” of their institutions, Ivy League administrators tightened their official or unofficial Jewish quotas . . . Columbia reduced the proportion of Jews . . . from 40 to 20 percent in two years. At Harvard, President A. Lawrence Lowell diagnosed a “Jewish Problem” and also enforced quotas to help solve it.

But by the late 1950s and early 1960s, such discrimination was seen for what it was—unjust discrimination—and class hierarchies began to topple. Brooks continues:

The campus gates were thus thrown open on the basis of brains rather than blood, and within a few short years the university landscape was transformed. Harvard . . . was changed from a school for the well-connected to a school for brainy strivers. The remaining top schools eliminated their Jewish quotas and eventually dropped their restrictions on women.

Those newly opened gates helped lead to an explosion in the number of Americans getting a higher education. Brooks cites the following statistic to make the case: “By 1960 there were about 2,000 institutions of higher learning. By 1980 there were 3,200. In 1960 there were 235,000 professors in the United States. By 1980 there were 685,000.”

Brooks’s theory, nicely expounded in his very readable book, is that, partly due to this rapid expansion of the educated class, America became a true meritocracy, Horatio Alger stories aside. Family connections mattered less. Brains and ability mattered more, as did the name of the college or graduate school on your résumé.

Like the veterans who so enthusiastically and overwhelmingly took advantage of the opportunity to get a college education, Baby Boomers of varying stripes in their turn did the same. With a zeal to transform themselves and the world, they flooded college campuses across America.

If “passionate and eager” can be used to describe the attitudes of the Greatest Generation and their children towards higher education, what can be said of the grandchildren? In the past twenty years, the number of college degrees handed out annually has more than doubled. So certainly they are attending college in droves. But with what attitude and to what end?

Peter Sacks took up the question in his 1996 book, Generation X Goes to College. What Sacks found when he left journalism to become a college professor was an overarching lack of interest on the part of the students—what he identifies as a kind of disengagement. He describes a typical classroom scene:

Scattered mostly in the back and far side rows were young males with professional sports baseball caps, often worn backwards. Completing the uniform . . . was usually a pair of baggy shorts, a team T-shirt, and an ample attitude. Slumped in their chairs, they stared at me with looks of disdain and boredom, as if to say, “Who in hell cares? Say something to amuse me.”

According to Sacks, today’s college students have been “conditioned by an overly nurturing, hand-holding educational system not to take responsibility for their own actions.” He blames a system that has become “customer-driven.” Administrators want students to be happy so that enrollment remains high. And fearing lack of support from administrators, teachers have become reluctant to hold students to high standards. As Sacks writes, “Excellence wasn’t really the point . . . [T]he real point was whether you kept students sufficiently amused and entertained.”

One measure of the quality of a college education these days might be drawn from a list of some of the courses offered. Here are but a few of the more egregious examples in recent years: “Canine Cultural Studies,” “History of Electronic Dance Music,” “Cultural History of Rap,” “Music of the Grateful Dead,” “Taking Marx Seriously,” “Sex Change City: Theorizing History in Genderqueer San Francisco,” and “The Phallus.”

With courses such as these, it’s no wonder that today’s college seniors score, on average, little or no higher than high-school graduates did fifty years ago, according to a survey commissioned by the National Association of Scholars. There are also these depressing results from a 1993 Department of Education survey of college graduates: 56 percent couldn’t calculate a correct tip, and over 90 percent couldn’t figure out the cost of carpeting a room—and they were allowed to use calculators!

In a piece written for The Boston Globe a couple of years ago, college professor Michael Kryzanek decried both the poor quality of education many of today's college students receive and their lack of interest in actual learning. He cites a study by the National Center for Education Statistics that found that only 31 percent of college graduates could read a "complex book and extrapolate from it." The same study also found that many students graduate from college lacking "the skills needed to comprehend routine data, such as reading a table about the relationship between blood pressure and physical activity."

Kryzanek, who has taught at the college level for more than thirty years, is not surprised by the study’s findings. Based on his own experience and frequent discussions with colleagues, he concludes that

students today have little interest in what past generations of college students accepted as an essential education. Reading the literature of “dead white guys,” studying the relevancy of a 400-year-old historical event, and thinking about the meaning of life’s mysteries are not of great interest to a growing number of college students. Now it’s all about focusing on a career path, studying narrowly about the skills required of that career path, and then crossing the stage on graduation day.

When syndicated columnist Walter Williams wrote about the sorry state of higher education, many readers responded with telling tales of their own. An English professor shared this anecdote, which Williams included in a follow-up column:

One of the items that I assigned was a two-page essay that described a favorite vacation or holiday. One student turned in two pictures drawn with crayon depicting the beach. When I gave her a failing grade, she was indignant and said that she put a great deal of work into the pictures. When I told her that she did not do the assignment and that she was supposed to write an essay, she said, “But I don’t know what an essay is!”

Students who are ill-prepared for higher education have become the subject of serious concern. According to a US Department of Education study, nearly half of college students have to take remedial courses in reading and math. And according to the National Center for Education Statistics, nearly 80 percent of colleges provided some type of remedial services in 2000.

In an address to North Carolina State University last year, US Secretary of Education Margaret Spellings spoke about some of the findings of the Commission on the Future of Higher Education. “Less than half of all [high-school] graduates are prepared for college-level math and science,” she said. “As a result, college students and taxpayers spend over a billion dollars a year on remedial education just to teach students the basic skills they should have learned in high school.”

And speaking of billions of dollars, there is also the issue of college costs, addressed by Secretary Spellings in the same speech:

Over the last 25 years, college tuition increases have outpaced inflation, family income, even health care. In the past five years alone, tuition at four-year colleges has skyrocketed by 35 percent. . . . The reality is that as costs skyrocket, it becomes increasingly difficult for middle-class families to afford college. And for low-income, mostly minority students, college is becoming virtually unattainable.

According to statistics cited by The New York Times in 2006, annual private-college costs 30 years ago equaled 21 weeks of average pay for an American worker. Now that figure is 53 weeks. In other words, on average, it takes more than a year of work to pay for a year at a private college. The business of going to college has become just that—big business. Colleges need students, and as Peter Sacks writes, they have become “customer-driven” as a result. Absurd course offerings and lower standards attest to that.
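
Taking the Times figures at face value and assuming a 52-week working year, the arithmetic behind "more than a year of work" is straightforward:

\[
\frac{53\ \text{weeks}}{52\ \text{weeks/year}} \approx 1.02\ \text{years of average pay today}, \qquad \frac{21\ \text{weeks}}{52\ \text{weeks/year}} \approx 0.40\ \text{years of pay 30 years ago},
\]
\[
\text{a roughly } \tfrac{53}{21} \approx 2.5\text{-fold increase in cost relative to average wages.}
\]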

And then there is the parent factor. Baby-Boom parents have been famously accused of over-scheduling their kids from toddlerhood on up, in effect building their résumés starting with preschool. That parents want their kids to get into good colleges and will do almost anything to make that happen is evidenced by what one educator calls the "multi-billion-dollar industry of SAT prep courses, tutors and college-visiting weekends." And as one high-school guidance counselor working in an upscale suburban community put it, "For many parents, it's all about the right college sticker on the back of the Lexus."

The sheer cost of college brings parents into play in other ways, too. The same high-school guidance counselor explains:

Many parents aren’t willing to shell out $45,000 a year so their kids can explore Greek literature or major in medieval music. In previous generations, college was more about the free flow of ideas, as opposed to being purely a path to a career.

At current prices, parents can hardly be blamed for wanting to see some bang for their buck, so to speak.

One educator with more than 30 years' experience, split between teaching at the graduate-school level and serving as a high-school guidance counselor, thinks that there's a lot to be said for kids coming up with their own college plans. Many, he believes, would be better off working part-time and completing college in six years. He also thinks that there is a "significant under-utilization of community colleges." In the Midwest, there isn't the stigma attached to going to community colleges that there is in the Northeast and elsewhere. "And frankly, I think it makes for healthier kids in some ways," he says.

It’s difficult to quantify how many students attend college just because it has become the thing to do, a kind of entitlement, a rite of passage, something parents push for and orchestrate. There can be no question that there are far too many college students who are ill-prepared and who have little desire to do more than make it to graduation day. One wonders whether they would even comprehend—much less be able to write an essay on—Thomas Jefferson’s vision of higher education as expressed in his plan for the University of Virginia. The purpose of college, according to Jefferson, is

to develop the reasoning faculties of our youth, enlarge their minds, and instill in them the precepts of virtue and order; to enlighten them with mathematical and physical sciences, which advance the arts and administer to the health, the subsistence and the comforts of human life.

Unfortunately, the chief "precept of virtue" that the current college generation seems never to have learned—or been taught—is to take responsibility for themselves and their actions. Perhaps this is why, lacking such discipline, so many students these days experience college as a business transaction or a means to an end—a mere quid pro quo—rather than as the heady privilege that both Jefferson and their own forebears held so dear.