Governance: An international journal of policy, administration and institutions

Comment: In defense of big questions

This draft comment has been prepared for a panel on public management research and the state to be held at the research conference of the Public Management Research Association at the University of Aarhus in June 2016. Comments and responses are welcome.

By Donald F. Kettl. Bubbling in the background of public management research is a huge puzzle: are researchers spending far too little time on the really big questions in the field, because of a growing instinct to drill ever-deeper into ever-smaller questions?

In a 2015 Public Administration Review article, Bradley E. Wright applauds the progress that the field has made in resolving some of its fundamental questions. The reason, he argues, is that the field “has become much more scientific” in the last two generations, because of “increasing rigor in both the qualitative and quantitative research conducted in the field.” Drilling deeper with better tools, Wright contends, has helped the field shake loose of the questions from Robert A. Dahl in 1947 about whether the field ever could be truly a science. But in his analysis of a symposium at the University of Michigan this same year, Andrew J. Hoffman points to “a crisis of relevance,” especially in ensuring that scholars engage the fundamental questions that matter and write about them in ways that will have an impact.

It’s impossible to quibble with Wright’s conclusion. The explosion of research in the field’s journals—and the increase in the number of highly regarded journals—represents advances that Dahl and other critics of the 1940s and 1950s would scarcely have imagined.  But it’s also impossible to ignore the complaints of practitioners and theorists outside the field that public management is missing big trends and the potential for big impacts on big questions. Francis Fukuyama, for example, has written a devastating critique arguing that many governments—most of all in the United States—are plagued by “political decay” that hinders their capacity to do what they promise. No matter what advances the field has made in becoming more scientific, that progress will matter little if the fundamental capacity of the state to deliver on its policies is in doubt, and if the field misses the chance to engage this debate.

Other big questions litter the landscape. New governance strategies—what some call “the fourth sector”—are emerging as government collaborates with the private and nonprofit sectors to create new strategies for producing social benefit. As Dahl himself might ask, who governs these collaborations? What kind of mechanisms will they create to deliver on their promises? How are they held accountable? What standards of transparency should they have?

And, for that matter, what does “transparency” mean, since anything labeled “not transparent” is viewed as inherently wrong? Are there limits to what should be transparent, and to what degree? Should the costs of transparency matter? Then there’s the argument for more “civic engagement” and “public participation.” Are these unalloyed goods? Or do demands to open up the process create privileged opportunities for some interests to increase regulatory capture?

When Dahl criticized the field for its lack of scientific grounding, there was a counterbalance: the connection between practitioners and researchers was strong, and there was a strong sense at the time that the research bore directly on practical problems. One cost of the growing emphasis on science has been a literature increasingly impermeable to those outside the research community. That’s happening at precisely the moment that the appetite for insights into delivering public services among policymakers, journalists, and those who care about governance has never been greater.

It’s sometimes argued that the field has to build its scientific base before it can speak truth to power. There’s no doubt that knowing what one’s talking about is essential before trying to talk. But there’s great value in listening carefully to policymakers about the questions to which they most need answers, and in trying to provide insights on the struggles that are most important. The launch of the Affordable Care Act, for example, struggled because top policymakers paid little attention to the administrative details. Public management research contributed little to the debate, before or after that disastrous launch.

It’s also argued that it’s hard to attack these big questions because many of them lack the datasets required for careful research. There are, however, invaluable datasets that have barely been tapped. For example, the Federal Employee Viewpoint Survey, which the Office of Personnel Management conducts annually to gauge the views of government workers, offers vast potential for understanding how different strategies of leadership and different kinds of policy problems produce different impacts on employees—and vice versa. A step back to ask which questions most need answers could help the field develop the new tools and data it needs to answer them, instead of focusing on questions defined by the tools at hand.

Perhaps most important, it’s often argued that delving into these big questions is simply too risky for junior scholars, who need to publish to get tenure and who have the best opportunities to publish if they do workmanlike studies on existing questions using existing datasets. There’s a powerful logic there, but it risks making the field ever-narrower. Once trained in their harnesses, horses tend to work most easily in the same traces. If “big questions” are those left for the future, the future might well never come.

It’s worth remembering that big careers have been made by young scholars asking big questions. A very promising 30-year-old scholar, James G. March, teamed in 1958 with future Nobel laureate Herbert Simon to write one of the great books of the field, Organizations. Dahl was 32 when his challenge to the science of administration appeared. Big questions have often led to big careers.

For scholars interested in exploring big questions, moreover, there are lots of relatively unexplored datasets, like FEVS. At a time when government is starved for insight and committed to open data, opportunities for looking at big questions through the lens of careful data analysis are growing.

So what would it take to ask the big questions? First, public management scholars need to spend more time looking outside the field to understand the big trends shaping the world of governance. There’s a fundamental paradox that one of the biggest questions facing public management—whether the practice of public management itself is in decay—is being asked and debated almost completely outside the field. The field needs to lean forward in defining its research agenda, even as it looks back to define holes in existing theory that need to be filled.

Second, those in the field need to think much more creatively about how to develop new research tools and how to develop new datasets on which to use them. Many government agencies have big puzzles, rich data, and not enough staff to analyze them. Partnerships, including memoranda of understanding, can open doors to manageable questions that need better research, which is far more likely to have an impact.

Finally, those in the field need to think about how to mentor younger scholars to take these steps. Most of them came into public management with a keen interest in making government work better and with a taste for the important questions. It’s not necessary to wring that out of them in the pursuit of a more scientifically grounded discipline. More-established scholars in the field can frame discussions on the big questions, help younger scholars figure out how best to attack them, and then support these scholars when it comes time to write references in the tenure process.

There’s never been a more exciting time in the history of governance and public management. Fundamental models, like authority and hierarchy, that governed the practice of government for hundreds of years are under assault. New and untried models are rising to challenge them. We need more deep dives into existing theory to cement the propositions on which we build. But we surely need to encourage scholars, including younger ones, to ask the big questions—indeed, to make it safe for them to do so.

Those questions won’t go away. They are reshaping governance. And they’ll reshape it without the insights of the field if public management does not engage them.

Donald F. Kettl is a Professor in the School of Public Policy at the University of Maryland, a nonresident Senior Fellow at the Brookings Institution, and a nonresident Senior Fellow in the Volcker Alliance. His next book, Beyond Jurassic Government: How to Recover America’s Lost Commitment to Competence, will be published by the Brookings Institution Press in 2016.

Written by Governance

November 13, 2015 at 3:42 pm

2 Responses


  1. In his insightful essay, Don Kettl is right to call attention to Francis Fukuyama’s critique of contemporary US governance. But Fukuyama has also provided a strong defense of the discipline of public administration in his book, State Building. Fukuyama’s two most recent books, The Origins of Political Order and Political Order and Political Decay, which address the most important “big questions,” are destined to become classics of political theory and governance.

    James Pfiffner

    November 20, 2015 at 11:12 am

  2. Don Kettl provides an excellent assessment of the divide between academic research and practitioner concerns in US public administration. He is correct that, to some extent, we have lost focus on the “big questions” in the field. Indeed, Robert Durant responds to Kettl’s essay with even more striking criticisms of the institutional incentives in academia that can lead junior scholars down this road of ever more incremental and irrelevant probing (see his response in this discussion). The diminished impact of public management scholarship on the Zeitgeist of public management practice and consultation is also a product of the lack of investment, and the subsequent lack of rigor, in human capital research within the US federal government itself.

    This is an issue that stakeholder groups (e.g., academics, federal practitioners, good government groups) must mutually engage in tackling. For example, Kettl points to the FEVS as a relatively underused data set. Kettl is correct that the FEVS “offers vast potential for understanding how different strategies of leadership and different kinds of policy problems produce different impacts on employees.” However, he is incorrect that this data source has been relatively “untapped.” I think the more accurate aspect of his insight is pointing to the “potential” of such data sources.

    In recent work that I and colleagues published in Public Administration Review, we find that there has been quite a lot of academic work produced using the FEVS (footnote #1). The issue is not the relative use of this data. In our analysis, we find public management researchers had used FEVS data to produce dozens of peer-reviewed publications on a range of topics of interest to policy makers, practitioners, and academics. Despite the proliferation of these empirical studies, it is the value of the instrument itself that might be in question.

    There is little question that the implementation of FEVS and the subsequent representativeness of its sample are unequaled. Moreover, the breadth of concepts that the survey covers is ambitious and abundant. This is reflected in the size of the survey itself (up to almost 100 items) and the general topics it attempts to cover (e.g., “Work Experience,” “Leadership,” “Satisfaction,” and “Work/Life”). Finally, as it has been implemented 10 times since 2002, with many of these items appearing in each wave, it offers us opportunities to identify trends and even impacts across time.

    With all this said, Kettl is most correct in his allusion to such data sources as FEVS that the public management community needs to “step back to ask which questions most need answers… [and] help the field develop the new tools and data it needs to answer them, instead of focusing on questions defined by the tools at hand.” In our opinion, the on-hand tool in question (the FEVS) is in dire need of redevelopment. While there are several items that are excellent examples of validated, concise measures of important organizational and work phenomena, we also find several weaknesses and limitations in the questions that the instrument currently uses.

    For example, we argue that “OPM does not appear to have capitalized on existing research by using measures of concepts that have been validated across settings and samples, even though management researchers have often gone to great lengths to demonstrate the reliability and validity of measures.” We recommend a thorough accounting of item selection for the survey that avoids some of the common missteps (e.g., double-barreled questions, using unsubstantiated ad hoc measures, using one method of response measurement across the survey). At the same time, our recommendations are intended to retain the value of existing trend items that will minimize any potential interruptions.

    I am hopeful for the partnerships that Kettl recommends. Currently, the Government Accountability Office and NASPAA have partnered with OPM and interested academics to form the G3 Data Users Group, which seeks to foster collaborations between academics and stakeholder agencies. The Human Capital Network being formed through NASPAA is a new pilot that is attempting to harness those capacities. Emerging organizations, like the Volcker Alliance, could further advance these efforts.

    But this is an enterprise that practitioners and those good government groups must also proactively embrace in real ways if academics are to direct their efforts in complement. Stakeholder responses to our article’s suggestion to redesign the survey, or that any changes were necessary, were somewhat tepid, however (footnote #2). There is a path dependency involved with these types of efforts that is difficult to break. When individuals and organizations adopt a common technology, “the cost of adopting once-possible alternatives” increases, thereby “providing individuals with a strong incentive to identify and stick with a single option” (footnote #3). I agree with Kettl that advances in knowledge derived from using data sources like the FEVS, as well as those that would ensue from further improvements to those instruments, are critical. His commentary is a welcome call to action.

    - William G. Resh, University of Southern California


    1. Fernandez, S., Resh, W. G., Moldogaziev, T., & Oberfield, Z. W. (2015). Assessing the Past and Promise of the Federal Employee Viewpoint Survey for Public Management Research: A Research Synthesis. Public Administration Review, 75(3), 382-394. doi: 10.1111/puar.12368

    2. See the entire Virtual Issue on the topic.

    3. Pierson, Paul. 2000. The Limits of Design: Explaining Institutional Origins and Change. Governance 13(4): 475–99. (p. 492)

    William Resh

    November 23, 2015 at 1:42 pm


