In what Charles Taylor calls our “age of authenticity,” the greatest virtue is sincerity. And the greatest vice? Paternalism.
I used to worry about this. After all, who wants to be identified as a paternalist? It’s the new bigotry.
But I’m getting over it. Quite simply, I don’t think you can sign up for pursuing “the common good” and hope to avoid at least some implicit commitment to paternalism—some sense that one knows what is good for others.
The very language of “the common good” hearkens back to Aristotle and Aquinas—to the language of virtue and the common pursuit of the Good. If “the common good” is going to have any force, it has to be embedded in this teleological tradition that specifies “the good” we’re after. In my Reformed tradition, we often talk about this in terms of shalom—the biblical vision of the good that God desires for all of creation. That “good”—or telos, as Aristotle would put it—is also a norm. The Good is also “the way things ought to be.”
So to pursue “the common good” is to have some substantive sense of what that good is. The common good is not just a lowest-common-denominator consensus about what we can all agree on. “The common good” is not just “common ground.” The common good is a shared orientation of our communities—both civil and political—toward a substantive vision of the good life.
This is not an inherently sectarian claim, nor is it necessarily an authoritarian stance. There can be a great degree of overlapping consensus about the shape of that substantive telos across traditions and faith communities. But if it is a shared pursuit of a common good, then some normative claim is being made. To say you know what is good for people is pretty much the very definition of paternalism. So why not be honest about that and sign up for it?
Granted, to sign up for paternalism is to renounce liberalism—at least as defined in John Stuart Mill’s classic, On Liberty. As Cass Sunstein notes in an excellent article about paternalism in The New York Review of Books, Mill believed government could legitimately restrict an individual’s choices only to prevent harm to others. This is rooted in Mill’s assumption that the individual “is the person most interested in his own well-being.” This claim, Sunstein observes, “has a great deal of intuitive appeal. But is it right?”
Indeed, that is the question. And it is just this assumption that deserves to be called into question. Sunstein has his own reasons for skepticism, particularly the growing body of evidence—summarized in books like David Brooks’ The Social Animal and Daniel Kahneman’s Thinking, Fast and Slow—that points to just the opposite conclusion: we often don’t act in our own best interests because much of our action is not the outcome of rational, self-interested deliberation. Quite simply, people make a lot of mistakes because they don’t deliberate about their own well-being. So people actually benefit from “nudges” by others—when the features of our “choice architecture” are designed (by others) to direct us toward decisions that are for our own good. (Prime Minister David Cameron has created “the Nudge Unit” to incorporate this insight into policy initiatives.)
So Mill’s root assumption is put into question by overwhelming evidence that we don’t always act for our own well-being. Such suspicions are amplified in Thomist skeptics like Alasdair MacIntyre who point out that not only do we fail to act in the interest of our own well-being, “we” as a culture or society have simply lost any substantive vision of what that well-being is. Having settled for a liberal compromise that lets each individual specify her own vision of the good, we live After Virtue, as MacIntyre famously put it. Virtue-talk requires a commitment to formation that is, if we’re honest, paternalistic.
These questions returned when I read Rod Dreher’s thoughtful reflection on the limits of politics in The American Conservative. He rightly notes the limits of politics for effecting cultural change. Recognizing that, as a culture, “we have come to value choice over what is chosen,” Dreher nonetheless seems to default to something like Mill’s misguided assumption:
[T]he rot is not primarily a political problem. You can’t pass laws to change the character of individuals or communities. Given the realities of our postmodern, post-Christian culture, the best we can hope is to create a legal and political framework in which people are free to make good choices.
Is it? Granted, Dreher goes on to describe the formation of a community bent on “specifying the good,” you might say (a “hobbits-at-prayer venture,” he calls it). But is there really any ground for his assumption that creating political space for (negative) “freedom” will make room for “good choices”? I’m not so sure.
Responses like Dreher’s—akin to responses from Wendell Berry or Stanley Hauerwas—seem to settle for a kind of macro-laissez-faire approach in order to make room for micro-communities of virtue. I’m all for the multiplication of hobbits-at-prayer ventures. I just wonder if we have more responsibility to our neighbours—a responsibility to pursue policy that nudges them toward the Good they may not know.