Case study methods have undergone a renaissance over the last decade, encompassing three key developments. First, methodologists have clarified the philosophy-of-science foundations of case study methods and their comparative advantages vis-à-vis statistical methods. Second, scholars have improved their practical advice on how best to conduct case studies. Third, researchers have institutionalized the teaching and advancement of qualitative methods through a new American Political Science Association section on qualitative methods and an interuniversity Consortium on Qualitative Research Methods.
While qualitative methods encompass many approaches—including ethnography, participant observation, focus groups, and other techniques—new ways of using case studies to develop and test concepts and theories about complex political phenomena include typological theorizing, fuzzy set analysis, and two-level theories. Recent practical advice has also emerged on how to carry out case studies, including how to select cases for study, identify negative cases, execute within-case analysis using process tracing, conduct counterfactual thought experiments, and carry out multimethod research. While all of these developments are significant, the ongoing development of qualitative research methods will continue the welcome trend toward methodological pluralism in the social sciences.
Philosophical And Theoretical Issues
In 2006, James Mahoney and Gary Goertz identified ten key differences between qualitative and statistical methods. The most fundamental of these have to do with the often implicit assumptions that the two approaches have about explanation and causation. Mahoney and Goertz argue that a central goal in qualitative research is the historical explanation of individual cases, such as the causes of major wars, financial crises, or transitions to or from democracy. This preoccupation with the causes of effects leads to questions such as, “Through what processes and mechanisms did this outcome arise in this case?” In contrast, statistical researchers are more interested in the general effects of causes on specified populations. These researchers raise questions such as, “How much, on average, would a one-unit change in this variable affect the outcomes for this population of cases?”
This difference relates to a deeper philosophy-of-science debate on causal explanation. Gary King, Robert Keohane, and Sidney Verba draw upon probabilistic notions of causality and the metaphor of controlled experiments to define causation in terms of causal effects. In contrast, case study researchers, such as Alexander George and Andrew Bennett, draw upon scientific realism to argue that causal explanation involves reference to hypothesized causal mechanisms. In this view, causal mechanisms are entities in the world, independent of one’s mind, which, if they operate as one theorizes, would generate and account for the processes and outcomes one observes.
Either approach faces thorny philosophical and practical questions. Probabilistic ideas of causation and the notion of causal effects have difficulty providing satisfying explanations for individual cases. They also raise perplexing questions about whether the kinds of inherently probabilistic relations found in quantum mechanics are relevant to political life or can be considered causes or explanations. Moreover, earlier attempts to explicate causation in terms of probabilistic relations have had difficulty distinguishing between correlation and causation. Readings of a barometer, for example, correlate with the weather but do not cause it, and an account that relies on correlations alone cannot rule the barometer out as a cause of the weather.
The explanation of cases via reference to causal mechanisms raises difficult issues as well. Are causal mechanisms in some ultimate sense unobservable, and if so, how do they relate to the observations one makes in the world? How can one address the problem of a potentially infinite regress of explaining mechanisms within mechanisms at ever-finer levels of detail, lower levels of analysis, and smaller increments of time? Does explanation via reference to hypothesized causal mechanisms entail a commitment to methodological individualism or the study of politics at the level of the individual, or does it allow for development and testing of macrolevel theories? How do mechanism-based explanations generalize from one context to another?
George and Bennett, in their 2005 book Case Studies and Theory Development in the Social Sciences, address each of these issues regarding causal mechanisms. They argue that hypothesized mechanisms should be tested against their observable implications. Although what is observable changes with new instruments of observation, there is always some ultimately unobservable horizon, beyond currently observable processes, that one would like to study. In addition, researchers must make pragmatic and potentially flawed decisions on when to stop pursuing ever-finer levels of explanation. It is possible to err on the side of stopping too soon, when a better explanation is just around the corner, or too late, when the researcher has begun to tell curve-fitting stories. As for macrolevel testing of theories, George and Bennett argue this is possible but that a macrolevel theory is subject to challenge if it can be shown that individuals did not behave as the theory predicts, even if the theory accurately predicts aggregate outcomes. Process tracing is thus a key case study method for examining whether hypothesized mechanisms operated within a case as predicted.
Finally, George and Bennett maintain that contingent generalizations—or theoretical statements with specified and often narrow-scope conditions—are typically the most that students of politics can achieve. There are very few nontrivial political science theories that use just a few variables to make broad, detailed, and accurate predictions. These arguments, although still subject to debate, strengthen the philosophical underpinnings of case study methods.
A second set of developments concerns innovations in social science concepts and a turn toward more complex theorizing about politics. Gary Goertz, in Social Science Concepts: A User’s Guide (2006), distinguishes between necessary and sufficient concepts and family resemblance concepts. In the former, a single variable may be conceptualized as either necessary or sufficient for a theory to hold or an entity to qualify as an instance of a concept. For example, the democratic peace theory posits that the absence of democracy in one of two contending states is a necessary condition for these states to go to war against one another. A family resemblance concept suggests that some combination of a number of substitutable characteristics qualifies an entity to be classified as an instance of a concept. A family resemblance concept of democracy, for example, might define a state as democratic if it has competitive elections plus any three of the following four attributes: viable political parties, an independent judiciary, universal suffrage, or freedom of the press.
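The contrast between these concept structures is easy to express in code. The following Python sketch is a purely illustrative rendering of the hypothetical family resemblance rule just described, treating competitive elections as necessary and any three of the four substitutable attributes as jointly sufficient; the attribute names are placeholders, not a validated coding scheme.

```python
# Family resemblance concept of democracy, as sketched above: competitive
# elections are necessary, plus any three of four substitutable attributes.
# All attribute names are illustrative placeholders.

SUBSTITUTABLE = ["viable_parties", "independent_judiciary",
                 "universal_suffrage", "free_press"]

def is_democracy(case: dict) -> bool:
    """Classify a case under the family resemblance rule."""
    # Necessary condition: without competitive elections, never a democracy.
    if not case.get("competitive_elections", False):
        return False
    # Family resemblance: any three of the four substitutable attributes.
    return sum(case.get(attr, False) for attr in SUBSTITUTABLE) >= 3

example = {"competitive_elections": True, "viable_parties": True,
           "independent_judiciary": True, "free_press": True}
print(is_democracy(example))  # True: elections plus three of four attributes
```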
In the 2005 article, “Two-level Theories and Fuzzy Set Analysis,” Goertz and Mahoney note that many types of two-level theories are possible by combining necessary and sufficient concepts at one level of a theory with family resemblance and substitutable concepts at another. Their analysis of Theda Skocpol’s theory of social revolutions, for example, shows that in her two-level theory, both state crisis and agrarian revolt are necessary conditions at one level for a social revolution. At the same time, at a prior level, there are several substitutable conditions that can lead to agrarian revolt and other substitutable conditions that result in a state crisis.
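In Boolean terms, such a two-level theory combines logical OR among substitutable conditions at the lower level with logical AND among necessary factors at the upper level. The Python toy below renders that structure; the lower-level condition names are invented stand-ins for illustration, not Skocpol’s actual variables.

```python
# Two-level theory structure: substitutable (OR) conditions at the lower
# level feed two factors that are jointly necessary (AND) at the upper
# level. Condition names are illustrative placeholders only.

def agrarian_revolt(landlord_vulnerability=False, peasant_autonomy=False):
    # Lower level: any one substitutable condition suffices (logical OR).
    return landlord_vulnerability or peasant_autonomy

def state_crisis(military_defeat=False, fiscal_collapse=False):
    # Lower level: again, substitutable conditions combined by OR.
    return military_defeat or fiscal_collapse

def social_revolution(revolt: bool, crisis: bool) -> bool:
    # Upper level: both factors are necessary (logical AND).
    return revolt and crisis

print(social_revolution(agrarian_revolt(peasant_autonomy=True),
                        state_crisis(fiscal_collapse=True)))  # True
```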
These distinctions among kinds of concepts overlap with several other approaches to complexity. James Mahoney, Erin Kimball, and Kendra Koivu, in “The Logic of Historical Explanation in the Social Sciences” (2009), discuss five kinds of causal relationships invoked in historical explanations: (1) necessary but not sufficient, (2) sufficient but not necessary, (3) necessary and sufficient, (4) insufficient but necessary part of an unnecessary but sufficient condition (INUS), and (5) sufficient but unnecessary part of an insufficient but necessary condition (SUIN). The first three are widely familiar, but INUS and SUIN relationships require explanation. A variable A is an INUS condition with respect to Y if A in conjunction with B is sufficient to cause Y, neither A nor B can cause Y by itself, and other conjunctions such as DE can also cause Y. The authors illustrate SUIN variables with democratic peace theory. This theory, again, holds that lack of democracy in one of two contending countries is a necessary condition for war. If any one of several conditions can by itself constitute a lack of democracy—major electoral fraud, authoritarianism, and so on—but none of these conditions is by itself sufficient for war, then each is a SUIN variable with respect to war. Mahoney and colleagues discuss how the careful evaluation of sequences within historical cases can help assess the relative importance of these five kinds of causes in explaining those cases.
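The INUS definition can be checked mechanically. The sketch below encodes a toy outcome with two sufficient conjunctions, AB and DE, and verifies each clause of the definition for A; the letters stand in for unspecified conditions, and the model is illustrative only.

```python
# Toy INUS check: A is an INUS condition for Y when (A and B) is
# sufficient, neither A nor B alone suffices, and an alternative
# conjunction (D and E) can also produce Y. All conditions are abstract
# placeholders.

from itertools import product

def outcome(a, b, d, e):
    # Y occurs through either of two sufficient conjunctions: AB or DE.
    return bool(a and b) or bool(d and e)

# A alone does not guarantee Y (it fails when B is absent) ...
assert not all(outcome(1, b, 0, 0) for b in (0, 1))
# ... but A together with B suffices, whatever D and E are doing ...
assert all(outcome(1, 1, d, e) for d, e in product((0, 1), repeat=2))
# ... and DE is an alternative route to Y that bypasses A entirely.
assert outcome(0, 0, 1, 1)
print("A satisfies the INUS definition in this toy model")
```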
George and Bennett discuss the use of typological theorizing as another approach to complexity. Typological theories provide contingent generalizations on how different combinations of variables interact to produce outcomes. Because typological theories are seldom fully specified for all possible combinations of variables, and because history does not provide natural experiments of all possible combinations, George and Bennett’s approach involves careful iteration between deductive theorizing about combinations of variables, or types, and empirical knowledge about extant cases. Using the example of a study on burden sharing in the Gulf War (1990–1991), George and Bennett show how combinations of variables from theories such as balance of threat, collective action, and alliance dependence can explain why countries did or did not contribute to the U.S. coalition in the Gulf War. This example also illustrates how typological theorizing assists in choosing the most informative cases for study, and how subsequent burden-sharing episodes could be used to further develop contingent generalizations on burden sharing. One challenge of typological theories is that they become combinatorially more complex with each additional variable. George and Bennett, and Colin Elman in “Explanatory Typologies in Qualitative Studies of International Politics” (2005), discuss ways both to simplify typological theories and to focus on selected subtypes.
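The combinatorial problem is easy to see by enumerating a property space directly: k dichotomous variables yield 2^k possible types. The sketch below uses three invented variable names that loosely echo the burden-sharing example and shows one simplification strategy, restricting attention to a theoretically interesting subtype.

```python
# Enumerating a typological property space: k dichotomous variables
# generate 2**k candidate types. Variable names are invented for the
# example, not taken from the actual Gulf War study.

from itertools import product

variables = ["high_threat", "alliance_dependence", "collective_action_problem"]
property_space = list(product([0, 1], repeat=len(variables)))
print(len(property_space))  # 8 types from 3 variables; 4 variables give 16

# One simplification strategy: focus on a subtype, e.g., only the types
# in which states face a high external threat.
subtype = [t for t in property_space if t[0] == 1]
print(len(subtype))  # 4 remaining types
```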
Charles Ragin’s fuzzy set analysis, presented in his 2000 book Fuzzy Set Social Science, constitutes a third approach to complexity. In this method, the analyst assigns fuzzy set scores between 0 and 1.0 for how clearly a case fits as an instance of a concept. A full democracy, for example, might be coded 1.0, a full autocracy 0, and a country that allows elections and eschews electoral fraud but limits the opposition parties’ access to media might be coded .75. Such fuzzy set scores can be superior to traditional measures when variation above a certain threshold does not matter. For example, democratic peace theory holds that all fully democratic countries refrain from war with one another; under this theory, for cases above the threshold of established democracy, it does not matter whether one country is more democratic than another. Ragin elucidates methods for fuzzy set analysis of populations that typically range from about ten to sixty cases. This number of cases is generally too small for traditional statistical analysis and too large for detailed case studies of the full population, so within this range, fuzzy set analysis can have advantages over alternative research designs.
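A small numerical sketch may clarify how such scores are used. The code below assigns invented fuzzy membership scores and computes the consistency of a sufficiency claim as the sum of min(x, y) over the sum of x, a standard measure in the broader QCA literature for how nearly one fuzzy set is a subset of another; neither the scores nor the formula is drawn from the essay itself.

```python
# Illustrative fuzzy set scores and a set-theoretic consistency check.
# Scores are invented; the consistency formula is a common QCA convention.

democracy = {"A": 1.0, "B": 0.75, "C": 0.25, "D": 0.0}   # membership in "democracy"
peace     = {"A": 1.0, "B": 0.90, "C": 0.20, "D": 0.30}  # membership in "peaceful"

def consistency(x: dict, y: dict) -> float:
    """Degree to which x is a fuzzy subset of y (x sufficient for y)."""
    return sum(min(x[k], y[k]) for k in x) / sum(x.values())

print(round(consistency(democracy, peace), 2))  # ~0.97: near-perfect subset
```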
Path dependency is a final form of complexity that case study methodologists have addressed. In a path dependent process, patterns set in a period of contingency become locked in through increasing returns to scale, learning effects, positive or negative externalities, or other mechanisms that make the new outcome strongly self-reinforcing. As Bennett and Elman argue in “Complex Causal Relations and Case Study Methods: The Example of Path Dependence” (2006), case study methods are well-suited to unraveling the choices made in the contingent period, examining the mechanisms that sustain the new path thereafter, and exploring the instances in which the established equilibrium might have broken down or might yet break down. Many comparative historical analyses, which focus on explaining big and important outcomes over long periods of time, use path dependency models and case study methods of assessing them.
Practical Methods In Case Study Research
Qualitative methodologists have elaborated on practical means of carrying out case studies as well. John Gerring and Jason Seawright, in Case Study Research: Principles and Practices (2007), analyze nine different case selection criteria and their uses, advantages, and disadvantages. In their view, a researcher might select a typical case for study, or a case that is deemed to be representative by some criteria (e.g., having average values on the variables, or a small error term in a statistical study), and study this case in detail to see whether the mechanisms hypothesized to explain population outcomes in a statistical study are actually evident in a typical case. Selection of diverse cases might show how cases at either end of a distribution operate, while study of extreme value cases might show causal mechanisms in sharp relief, though such cases may not be representative of the population. Study of deviant outlier cases, or cases with a large error term in a prior statistical study, might help identify omitted variables, though further analysis is necessary to determine whether such variables are relevant only to the outlier case or to the population. Study of influential cases—cases whose removal from a statistical analysis would have the largest effect on the results—can help determine whether these cases are truly part of the hypothesized processes relevant to the full population, or are in some sense deviant and need to be either recoded or dropped from statistical analysis of the full population. Study of most likely cases that fail to have the expected outcome, and least likely cases whose outcomes fit a hypothesis even in unpropitious circumstances, can help identify the scope conditions of theories. Comparison of most similar cases—cases that are similar in all but one independent variable and differ on the dependent variable—and least similar cases—which differ on all but one independent variable and have the same value on the dependent variable—can provide strong research designs for assessing the roles of the independent variables isolated by each comparison. Finally, where alternative paths lead to the same outcome, a condition known as equifinality, pathway cases can each illustrate a different path.
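Several of these criteria presuppose a prior statistical model. As a minimal illustration of one of them, the sketch below fits a bivariate regression to invented data and flags the case with the largest residual as a candidate deviant case for within-case study.

```python
# Selecting a deviant case as the largest-residual observation from a
# prior statistical model. Data and model are invented for illustration.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical cause
y = np.array([1.1, 2.0, 2.9, 8.0, 5.1])   # hypothetical outcome
cases = ["A", "B", "C", "D", "E"]

slope, intercept = np.polyfit(x, y, 1)     # bivariate OLS fit
residuals = y - (slope * x + intercept)

deviant = cases[int(np.argmax(np.abs(residuals)))]
print(deviant)  # "D": a candidate for within-case study of omitted variables
```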
Mahoney and Goertz, in “The Possibility Principle: Choosing Negative Cases in Qualitative Research” (2004), note that negative cases, or cases that could have had the outcome of interest but did not, are often neglected in both statistical and case study analysis. Negative cases are often harder to identify, and potentially far more numerous, than cases that are positive on the outcome of interest. It is more difficult to identify situations that could have led to war and countries that might have gone to war, for example, than to identify actual wars. Researchers can err in either of two directions: excluding relevant cases that could have had the outcome of interest but did not, or including irrelevant cases in which the outcome of interest was not possible. Mahoney and Goertz devise a possibility principle for identifying cases that could have had the outcome of interest. This principle consists of a rule of inclusion, which would include cases in which at least one independent variable predicts the outcome of interest, and a rule of exclusion, which excludes cases in which at least one variable makes the outcome of interest impossible or nearly so. Researchers can adjust the tightness of their criteria for inclusion or exclusion depending on the theory building or policy consequences of mistakenly including an inappropriate case versus those of mistakenly excluding a relevant case.
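As a toy rendering of this logic, the filter below applies the rule of exclusion first and the rule of inclusion second; the variable names and cases are invented for the example.

```python
# Toy possibility principle filter: a case is a relevant negative case if
# no variable made the outcome impossible (rule of exclusion) and at least
# one independent variable predicted the outcome (rule of inclusion).
# Variable names are invented placeholders.

CAUSES_OF_WAR = ("rivalry", "border_dispute")

def is_relevant_negative_case(case: dict) -> bool:
    # Rule of exclusion takes precedence: the outcome was impossible here.
    if case.get("outcome_impossible", False):
        return False
    # Rule of inclusion: at least one cause of the outcome was present.
    return any(case.get(v, False) for v in CAUSES_OF_WAR)

cases = [
    {"name": "X", "rivalry": True},
    {"name": "Y"},                                        # no cause present
    {"name": "Z", "border_dispute": True, "outcome_impossible": True},
]
print([c["name"] for c in cases if is_relevant_negative_case(c)])  # ['X']
```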
Methodologists have also focused on how to conduct within-case analysis, particularly through the technique of process tracing. Process tracing involves looking within a single case for the observable implications of hypothesized causal mechanisms, or the processes they predict should have been evident in the events leading up to the outcome of the case. Analogous to detective work, process tracing examines the detailed sequences through which outcomes arose. It addresses questions of who knew what, did what, and when, in order to affirm or disconfirm the predictions made by alternative explanations. It proceeds both deductively, from hypothesized observable implications, and inductively, from details in the case that surprise the researcher and that need to be theorized and tested against additional observable implications within the case or in other cases.
In many respects, the logic of process tracing parallels that of Bayesian inference. Both approaches stress the importance of diverse evidence, of casting the net widely for alternative explanations, of never placing 100 percent confidence in an explanation, and of putting the greatest value on evidence that helps differentiate between competing explanations (i.e., evidence that affirms one explanation while at the same time disconfirming others). Both perspectives indicate that the degrees-of-freedom problem, which arises in frequentist statistical analysis when the researcher has more parameters to be estimated than cases to study, does not apply to within-case analysis, in which a single piece of evidence might disprove many possible explanations. Whether alternative explanations of a case can be distinguished depends on the nature of the evidence with respect to the rival explanations, not on the number of cases or pieces of evidence relative to the number of variables. Process tracing can thus enable causal inference even from a single case with many variables, and it can thereby compensate for the limitations of cross-case comparisons. Process tracing can help test, for example, whether the one independent variable that differs between most similar cases relates to the difference in these cases’ outcomes. Process tracing is not a panacea, however, because it is time-consuming, requires extensive information, and may be indeterminate if the right kind of evidence is not available.
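The parallel can be shown with a single application of Bayes’ rule. In the sketch below, the prior and likelihoods are invented numbers chosen to represent highly discriminating evidence, that is, evidence far more probable if one explanation is true than if its rivals are.

```python
# One Bayesian update, illustrating the logic process tracing shares with
# Bayesian inference. All probabilities are invented for the example.

prior = 0.5            # initial confidence in explanation H
p_e_given_h = 0.8      # chance of observing the evidence if H is true
p_e_given_not_h = 0.1  # chance of the evidence under rival explanations

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))
print(round(posterior, 2))  # 0.89: one discriminating clue shifts belief sharply
```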
Counterfactual analysis can supplement both within-case analysis and cross-case comparisons. Every causal explanatory statement—“Y happened in this way at this time because of X”—implies a counterfactual: “If not X, then not Y in the same way or at the same time.” Because researchers cannot run perfect experiments or rerun history, counterfactuals are ultimately untestable. Yet thinking through the counterfactual implications of causal arguments can help check for logical inconsistency in one’s own thinking. If researchers do not find a counterfactual claim as convincing as the logically equivalent causal claim they are asserting, then they need to fix the inconsistency in their thinking. In their 1996 work Counterfactual Thought Experiments in World Politics, Philip Tetlock and Aaron Belkin suggest criteria for good counterfactuals, including clarity, logical consistency, minimizing the rewriting of history necessary to sustain the counterfactual, and projectibility. Projectibility, or the ability to get back to testable implications, is particularly important: although counterfactuals are ultimately untestable, they may have some degree of testability. For example, one could assess whether actors made contingency plans in case events took a different path, or whether powerful actors considered or advocated options other than those they ultimately chose. Gary Goertz and Jack Levy, in Explaining War and Peace: Case Studies and Necessary Conditions Counterfactuals (2007), provide extended analysis of some of the counterfactual arguments that have been made concerning the outbreak of World War I (1914–1918) and the end of the cold war.
Finally, methodologists provide techniques for combining case study methods with statistical and formal analysis within a single research project. Much of the advice on case selection from Gerring and Seawright, for example, requires prior statistical work to identify outlier cases or influential cases before such cases can be selected for within-case analysis or paired case comparisons. Similarly, in the 2005 article “Nested Analysis as a Mixed-method Research Strategy for Comparative Research,” Evan Lieberman discusses how statistical analysis and case study analysis can be nested into a multimethod research design. The basic premise of multimethod analysis, and one increasingly recognized by methodologists of all kinds, is that every methodological approach has strengths and weaknesses, so combining methods can allow the strengths of one to address the limits of another.
Conclusion
Innovations in case study methods have put them on a more equal footing with statistical and other methods, which benefited from decades of earlier refinement. This has contributed to a welcome methodological pluralism and to growing interest in multimethod research. However, it has also raised the level of effort required to master best practices in case study methods, or even to become adept enough at these methods to critically read case study research. Still, these emerging trends provide a valuable foundation even for those for whom methodology is a secondary focus, or whose primary methods are not case studies.
Bibliography:
- Adcock, Robert, and David Collier. “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research.” American Political Science Review 95, no. 3 (2001): 529–546.
- Bennett, Andrew. “Process Tracing: A Bayesian Perspective.” In The Oxford Handbook of Political Methodology, edited by Janet Box-Steffensmeier, Henry Brady, and David Collier, 702–721. New York: Oxford University Press, 2008.
- Bennett, Andrew, and Colin Elman. “Complex Causal Relations and Case Study Methods: The Example of Path Dependence.” Political Analysis 14, no. 3 (2006): 250–267.
- Brady, Henry, and David Collier, eds. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, Md.: Rowman and Littlefield, 2004.
- Checkel, Jeffrey. “Tracing Causal Mechanisms.” International Studies Review 8, no. 2 (2006): 362–370.
- Collier, David, and Colin Elman. “Qualitative and Multimethod Research: Organizations, Publication, and Reflections on Integration.” In The Oxford Handbook of Political Methodology, edited by Janet Box-Steffensmeier, Henry Brady, and David Collier, 779–795. New York: Oxford University Press, 2008.
- Elman, Colin. “Explanatory Typologies in Qualitative Studies of International Politics.” International Organization 59, no. 2 (2005): 293–326.
- George, Alexander L., and Andrew Bennett. Case Studies and Theory Development in the Social Sciences. Cambridge, Mass.: MIT Press, 2005.
- Gerring, John, and Jason Seawright. “Techniques for Choosing Cases.” In Case Study Research: Principles and Practices. Cambridge: Cambridge University Press, 2007.
- Goertz, Gary. Social Science Concepts: A User’s Guide. Princeton: Princeton University Press, 2006.
- Goertz, Gary, and Jack Levy. Explaining War and Peace: Case Studies and Necessary Conditions Counterfactuals. New York: Routledge, 2007.
- Goertz, Gary, and James Mahoney. “Two-level Theories and Fuzzy Set Analysis.” Sociological Methods and Research 33, no. 4 (2005): 497–538.
- King, Gary, Robert Keohane, and Sidney Verba. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press, 1994.
- Lieberman, Evan. “Nested Analysis as a Mixed-method Research Strategy for Comparative Research.” American Political Science Review 99, no. 3 (2005): 435–452.
- Mahoney, James, and Gary Goertz. “The Possibility Principle: Choosing Negative Cases in Qualitative Research.” American Political Science Review 98, no. 4 (2004): 653–670.
- Mahoney, James, and Gary Goertz. “A Tale of Two Cultures: Contrasting Quantitative and Qualitative Research.” Political Analysis 14, no. 3 (2006): 227–249.
- Mahoney, James, Erin Kimball, and Kendra Koivu. “The Logic of Historical Explanation in the Social Sciences.” Comparative Political Studies 42, no. 1 (2009): 114–146.
- Mahoney, James, and Dietrich Rueschemeyer, eds. Comparative Historical Analysis in the Social Sciences. Cambridge: Cambridge University Press, 2003.
- Ragin, Charles. Fuzzy Set Social Science. Chicago: University of Chicago Press, 2000.
- Tetlock, Philip, and Aaron Belkin, eds. Counterfactual Thought Experiments in World Politics. Princeton: Princeton University Press, 1996.