Reason Online
A behavioral economist explores the interaction of moral sentiments and self-interest
Remember how you reacted to your micromanaging boss in a past job? He was forever looking over your shoulder, constantly kibitzing and threatening you. In return, you worked as little as you could get away with. On the other hand, perhaps you've had a boss who inspired you, one for whom you pulled all-nighters to finish a project because you didn't want to disappoint her. You kept the first job only because you couldn't get another and because you needed the money; you stayed with the second even though you might have earned more somewhere else.
In the June 20 issue of Science, Samuel Bowles, director of the Behavioral Sciences Program at the Santa Fe Institute, looks at how market interactions can fail to optimize the rewards of participants—e.g., the micromanager who gets less than he wants from his employees. For Bowles, the key is that policies designed for self-interested citizens may undermine "the moral sentiments." The phrase is an obvious reference to Adam Smith's The Theory of Moral Sentiments (1759), in which Smith argued that people have an innate moral sense. This natural feeling of conscience and sympathy enables human beings to live and work together in mutually beneficial ways.
To explore the interaction of moral sentiments and self-interest, Bowles begins with a case in which six day care centers in Haifa, Israel, imposed a fine on parents who picked their kids up late. The fine was meant to encourage parents to be more prompt. Instead, parents reacted to the fine by coming even later. Why? According to Bowles: "The fine seems to have undermined the parents' sense of ethical obligation to avoid inconveniencing the teachers and led them to think of lateness as just another commodity they could purchase."
Bowles argues that conventional economics assumes that "policies that appeal to economic self-interest do not affect the salience of ethical, altruistic, and other social preferences." Consequently, material interests and ethics generally pull in the same direction, reinforcing one another. If that is the case, then how can one explain the experience of the day care centers and the micromanager?
Bowles reviews 41 behavioral economics experiments to see when and how material and moral incentives diverge. For example, researchers set up an experiment involving rural Colombians who depend on commonly held forest resources. In the first stage, the Colombians were asked to decide how much to withdraw anonymously from a beneficial common pool analogous to the forest. After eight rounds of play, the Colombians withdrew an amount halfway between the individually self-interested and group-beneficial levels. Then the experimenters allowed them to talk, which boosted cooperation. Finally, the experimenters set up a condition analogous to "government regulation," one in which players were fined for self-interestedly overexploiting the common resource. The result? The players treated the fine as a cost and pursued their short-term interests at the expense of maximizing long-term gains. In this case, players apparently believed that they had satisfied their moral obligations by paying the fine.
While this experiment illuminates how bad institutional designs can yield bad social results, I am puzzled about why Bowles thinks this experiment is so telling. What would have happened if the Colombians in the experiment were allocated exclusive rights to a portion of the common pool resources—e.g., private property? Oddly, Bowles himself recognizes this solution when he discusses how the incentives of sharecropping produced suboptimal results. He recommends either giving the sharecropper ownership or setting a fixed rent.
In fact, Bowles recognizes that markets do not leave us selfish calculators. He cites the results of a 2002 study that looked at how members of 15 small-scale societies played various experimental economics games. In one game, the first player proposed how to split a day's pay with a second player. If the second player didn't like the amount offered, he could reject it, in which case both players got nothing.
The findings would warm the hearts of market proponents. As Bowles notes, "[I]ndividuals from the more market-oriented societies were also more fair-minded in that they made more generous offers to their experimental partners and more often chose to receive nothing rather than accept an unfair offer. A plausible explanation is that this kind of fair-mindedness is essential to the exchange process and that in market-oriented societies individuals engaging in mutually beneficial exchanges with strangers represent models of successful behavior who are then copied by others." In other words, as people gain more experience with markets, morals and material incentives pull together.
Interestingly, neuroeconomics is also beginning to delve deeper into how we respond to various institutions. In one experiment, University of Oregon researchers used MRI to scan the brains of students as they chose to give—or were required to give—some portion of $100 to a food bank. The first case was a charitable act; the second was analogous to a tax. In both cases, the students' reward centers "lit up," but much less so under the tax condition. As Oregon economist William Harbaugh told the New York Times, "We're showing that paying taxes does produce a neural reward. But we're showing that the neural reward is even higher when you have voluntary giving."
Bowles, with some evident regret, observes, "Before the advent of economics in the 18th century, it was more common to appeal to civic virtues." Bowles does recognize that such appeals "are hardly adequate to avoid market failures." How to resolve these market failures was the subject of Smith's second great book, The Wealth of Nations (1776), where he explained: "By pursuing his own interest [the individual] frequently promotes that of society more effectually than when he really intends to promote it."