Tuesday, December 7, 2010

What if liberals and conservatives were the same people?

I can think of few situations in which I feel more at odds with someone than amidst a contentious political or theological debate. By 'at odds', I don't mean merely with regard to the specific policy or question at hand, but fundamental opposition spiritually, intellectually, culturally, and morally. I should qualify this: I don't always feel this way in the face of disagreement, but the closer the argument hits to my core beliefs and attitudes, the more apt I am to do so. Indeed, there are a few topics on which I hold my opinions so tightly that I cannot believe someone holding a contrary opinion could even be the same species as I. Wouldn't it be ironic if it turned out my adversaries and I were really two of a kind, and by 'kind' I mean phenotype?

In a recent paper, behavioral social scientists Smirnov, Dawes, Fowler, Johnson, and McElreath present evidence that this may be the case. The conventional view equates political partisanship with political party identification, which is presumed to emerge from a “general consistency” in one's attitudes, ideals, and expectations about the world (Smirnov et al., 2010). The more consistent an individual's views are with those typical of his party's platform, the greater a partisan he is. Using computational simulation and laboratory experimentation, Smirnov and his co-authors challenge this view, showing evidence that one's penchant for partisanship, independent of one's attitudes, constitutes a unique dimension on which people vary. Specifically, they suggest that partisanship is a manifestation of an underlying disposition toward strong reciprocity. Reciprocity in general refers to one's tendency to 'reciprocate' gestures of good will, e.g., “I'll scratch your back if you scratch mine”, or the way you might expect a neighbor to lend you a hedge clipper after you lent him a mower. Strong reciprocity, more specifically, refers to one's tendency to engage in these cooperative behaviors even when there is no expectation of reciprocation; it is more altruistic in the sense that one is “taking one for the team”.

Numerous studies find that altruism can evolve by natural selection in the presence of intergroup competition (Boyd, Gintis, Bowles, and Richerson, 2003; Sober and Wilson, 1998). Amidst fierce intergroup competition, collective adherence to standards of cooperation can make the difference between survival and death for the group. In practice, ideals and beliefs act like glue to hold communities together, homogenizing a population with respect to norms, and thereby delineating the boundaries of the group against the world of others (Boyd et al., 2003; also see Kurzban and Sidanius, 2003). In other words, an individual's ideals function to inform him with whom he should cooperate, as well as who stands as a threat to his group, whether by holding contrary beliefs or by holding the right beliefs without adequate enthusiasm.

Individuals are ever tempted to defect from norms when they deem it fit to ensure their private interests. For any group, maintaining its members' commitment to norms in the face of individual self-interest is a basic challenge for survival. The capacity to enforce norms, therefore, can itself be considered a public good. In the behavioral economics literature, experiments have repeatedly found that participants are willing to incur substantial costs to themselves in order to punish defectors (Fehr and Gächter, 2002). This behavior is observed not only in the laboratory, but in the field. Citing several studies, Smirnov and his co-authors point out that “various forms of costly self-enforcement of cooperative behavior are customary in communities around the world and it is common to punish those who free ride on others' personally costly efforts to use natural resources like fisheries, water, grazing lands, forests, and wildlife” (also see Ostrom, 1990; Henrich et al., 2006; Smirnov, 2007). This punishment, in turn, promotes cooperation, ultimately yielding benefits for everyone in the group. Individuals who act selflessly to maintain public goods thus engage in a form of strong reciprocity.
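The laboratory setup behind these findings, a public goods game with costly punishment, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual experimental protocol; the endowment, multiplier, fine, and contribution threshold below are assumed values chosen only to make the incentives visible.

```python
def public_goods_round(endowment, contributions, multiplier=1.6):
    """One round: contributions are pooled, multiplied, and shared equally."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    # Each player's payoff = what they kept back + an equal share of the pot.
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, contributions, punishers, cost=1, fine=3, threshold=5):
    """Strong reciprocators pay `cost` to impose `fine` on each low contributor."""
    payoffs = payoffs[:]
    for i in punishers:
        for j, c in enumerate(contributions):
            if j != i and c < threshold:
                payoffs[i] -= cost   # punishing is personally costly
                payoffs[j] -= fine   # free riders are sanctioned
    return payoffs

# Three cooperators and one free rider, each with an endowment of 10.
base = public_goods_round(10, [10, 10, 0, 10])
after = apply_punishment(base, [10, 10, 0, 10], punishers={0})
```

Note that the punisher ends the round with less than if he had stayed passive, while the free rider still out-earns him: punishment here is a sacrifice for the group's benefit, not a profitable strategy for the individual, which is exactly what makes it a form of strong reciprocity.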

When we decry the intransigence of blind “partisans” in a political debate, aren't we then rebelling against their honest, selfless attempt to look out for group cohesion? In Smirnov et al.'s study, the individuals most likely to punish non-cooperators in a public goods game were also the most likely to be strong political partisans. Interestingly, this behavior did not significantly predict party identification, and by extension cannot predict the content of beliefs. Thus, for these strongly reciprocating punishers, the content of political thoughts, beliefs, and attitudes is arbitrary and irrelevant: so long as beliefs are held strongly and uniformly across the community, they are doing their job, which is to be the mortar that cements together a united front in the struggle for intergroup dominance.

So what if it turned out that my interlocutor in a heated debate, this person who so repulses me that I can scarcely allow myself to believe we are of the same species, is actually my evolutionary brother, whom fate, for its own inexplicable reasons, placed on the opposite side? Our beliefs are arbitrary, but we are alike driven by an innate intolerance for challenges to accepted norms, and a willingness to sacrifice our own interests for what we believe to be good for society. If we can for a moment subdue the blind righteousness with which we hold our respective views, perhaps we can see a way to respect each other's tenacity, determination, and selflessness. Much like two opposing samurai on a battlefield, we might look past fate's arbitrary choice that we should find ourselves in the service of warring lords, and admire each other for the honor, loyalty, and courage with which we serve.


________________

Smirnov, O., Dawes, C. T., Fowler, J. H., Johnson, T., & McElreath, R. (2010). The behavioral logic of collective action: Partisans cooperate and punish more than non-partisans. Political Psychology, 31(4).

Bowles, S., & Gintis, H. (2002). Social capital and community governance. Economic Journal, 112.

Boyd, R., Gintis, H., Bowles, S., & Richerson, P. J. (2003). The evolution of altruistic punishment. PNAS.

Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature.

Henrich, J., McElreath, R., Barr, A., Ensminger, J., Barrett, C., Bolyanatz, A., Cardenas, J. C., Gurven, M., Gwako, E., Henrich, N., Lesorogol, C., Marlowe, F., Tracer, D., & Ziker, J. (2006). Costly punishment across human societies. Science.

