Comparative Research Methods

Frank Esser, University of Zurich, Switzerland
Rens Vliegenthart, University of Amsterdam, The Netherlands

Introduction

In recent years, comparative research in communication science has gained considerable ground. On the one hand, this can be interpreted as a sign of communication science maturing as a research discipline. On the other hand, in terms of both quantity and quality, comparative communication research lags behind neighboring disciplines such as political science and sociology. In a review specifically of comparative political communication research, Pippa Norris (2009) identifies several reasons for this, including a shortage of comparative frameworks due to an overly strong focus on the United States, a lack of standardized measurement instruments, and the limited availability of large archival datasets. We expect that unfamiliarity with the possibilities and challenges of comparative research among communication scholars also contributes to the paucity of solid comparative studies, and we hope this entry fills part of this void. More specifically, we introduce those possibilities and challenges that are particular to comparative research by drawing on the available literature from communication science and borrowing from, for example, the political science literature. Focusing on the foundations and basic logic of comparative research and potential research goals, we pay ample attention to case selection, in both small-N and large-N studies, and to the fundamental choice between most similar and most different systems designs. A key issue in conducting comparative empirical research is ensuring equivalence, that is, the ability to validly collect data that are indeed comparable between different contexts and to avoid biases in measurement, instruments, and sampling.
We introduce a typology of different types of research questions that can be addressed with comparative research, as well as the most common statistical techniques associated with those research questions. Finally, we briefly outline potentially useful theoretical frameworks and discuss how trends such as globalization alter our understanding and practice of conducting comparative research. In many respects, the challenges of conducting solid comparative research are tremendous, as will become clear throughout this entry. However, the opportunities are equally tremendous, especially in a time when archival data have become increasingly accessible digitally and when comparative datasets that are of particular interest to communication scholars are collected and made publicly available. As comparative research offers the opportunity to address a particular set of questions that are of crucial importance for our understanding of a wide range of communicative processes, it deserves a central position in communication science.

From The International Encyclopedia of Communication Research Methods, Jörg Matthes (General Editor), Christine S. Davis and Robert F. Potter (Associate Editors). © 2017 John Wiley & Sons, Inc. DOI: 10.1002/9781118901731.iecrm0035

Foundations

Comparative research in communication and media studies is conventionally understood as the contrast among different macro-level units, such as world regions, countries, sub-national regions, social milieus, language areas, and cultural thickenings, at one or more points in time. A recent synthesis by Esser and Hanitzsch (2012a) concluded that comparative communication research involves comparisons between a minimum of two macro-level cases (systems, cultures, markets, or their sub-elements) in which at least one object of investigation is relevant to the field of communication.
Comparative research differs from non-comparative work in that it attempts to reach conclusions beyond single cases and explains differences and similarities between objects of analysis, and relations between objects, against the backdrop of their contextual conditions. Generally speaking, comparative analysis performs several important functions that are closely interlinked. More specifically, comparative analysis enhances the understanding of one's own society by placing its familiar structures and routines against those of other systems (understanding); comparison heightens our awareness of other systems, cultures, and patterns of thinking and acting, thereby casting a fresh light on our own political communication arrangements and enabling us to contrast them critically with those prevalent in other countries (awareness); comparison allows for testing theories across diverse settings and for evaluating the scope and significance of certain phenomena, thereby contributing to the development of universally applicable theory (generalization); comparison prevents scholars from over-generalizing based on their own, often idiosyncratic, experiences and counters ethnocentrism and naïve universalism (relativization); and comparison provides access to a wide range of alternative options and problem solutions that can facilitate or reveal a way out of similar dilemmas at home (alternatives). In addition to these general benefits, comparison also has specific scientific advantages. To fully exploit these benefits, it is essential that the objects of analysis are compared on the basis of a common theoretical framework and that this is performed by drawing on equivalent conceptualizations and methods.
It is further noted that spatial (cross-territorial) comparisons should be supplemented wherever possible by a longitudinal (cross-temporal) dimension to account for the fact that systems and cultures are not frozen in time; rather, they are constantly changing under the influence of transformation processes such as Americanization, Europeanization, globalization, modernization, or commercialization. Combining cross-sectional and longitudinal designs helps to understand these transformation processes and makes clear that different contexts affect the results in different ways. What distinguishes comparative research from simple border-transgressing kinds of (international/transnational) research is that comparativists carefully define the boundaries of their cases. This can be accomplished in a variety of ways based on structural, cultural, political, territorial, functional, or temporal qualities. Thus, it is not only territories that can be compared. That said, if territories are compared, the comparison can occur at many levels above and below the nation-state and can incorporate other relevant social, cultural, and functional factors. Macro-level cases, however defined, are assumed to provide characteristic contextual conditions for a certain object that is investigated across cases. Different contextual conditions (i.e., factors of influence) are used to explain different outcomes regarding the object under investigation (which is embedded in these contexts and hence affected by them), while similar contextual conditions are used to explain similar outcomes. The corresponding research logics of the Most Similar Systems Design and the Most Different Systems Design are introduced and explained herein. It is crucial to understand this basic logic of comparative research.
Comparative research guides our attention to the explanatory relevance of the contextual environment for communication outcomes and aims to understand how the systemic context shapes communication phenomena differently in different settings. The research is based on the assumption that different parameters of political and media systems differentially promote or constrain the communication roles and behaviors of organizations and actors within those systems. Thus, comparativists often use factors at the macro-societal level as explanatory variables for differences found in lower-level communication phenomena embedded within the societies. Additionally, macro-level factors are considered moderators that influence relationships between variables at the lower level. This recognition of the (causal) significance of contextual conditions is why comparative research is so exceptionally valuable. In the words of Mancini and Hallin, "theorizing the role of context is precisely what comparative analysis is about" (2012, p. 515). This explanatory logic can be distinguished from mere descriptive comparison, which is considered less mature; it also extends clearly beyond the general advantages of comparison outlined before and establishes the status of comparative analysis as a separate and original approach. Overall, several conditions should be fulfilled before labeling a comparison a mature comparative analysis. First, the purpose of comparison must be explicated early in the project, and it should be a defining component of the research design. Second, the macro-level units of comparison must be clearly delineated, irrespective of how the boundaries are defined. In the contextual environments, specific factors that are assumed to characteristically affect the objects of analysis (be they people, practices, communication products, or other structural or cultural elements) must be identified.
Third, the objects of analysis should be compared with respect to at least one common, functionally equivalent dimension. Methodologically, an emic (culture-specific) or etic (universal) approach may be applied. Fourth, the objects of analysis must be compared on the basis of a common theoretical framework and must draw on equivalent conceptualizations and methods rather than be analyzed separately. These elements will be further discussed in the sections that follow.

Research goals

Comparative communication research is a combination of substance (specific objects of investigation studied in different macro-level contexts) and method (identification of differences and similarities following established rules and using equivalent concepts). Thus, the question is: how does comparative communication research proceed in practice? To answer this question, we distinguish five practical steps, each connected to a specific research goal. On the most basic level, comparison involves the description of differences and similarities. Providing contextual descriptions of a set of foreign systems or cultures enhances our understanding and our ability to interpret diverse communication arrangements. Furthermore, rounded and detailed descriptions provide knowledge and initial hunches about interesting topics and about factors that may be important for explaining similarities and differences. Second, contextual descriptions provide the knowledge necessary for recognizing functional equivalents. A fundamental problem in comparative studies, as trivial as it may sound, is comparability. For example, having drawn a media sample in country A, what are the equivalents in countries B, C, and D? The same holds for specific objects and concepts of analysis. Only objects that serve the same function (or role) may be meaningfully compared with each other.
In a third step, which builds on the previous two, classifications and typologies must be established. Classifications seek to reduce the complexity of the world by grouping cases into distinct categories with identifiable and shared characteristics. Key characteristics that allow for a theoretically meaningful differentiation between systems or cultures serve as dimensions to construct a classification scheme. An example is Hallin and Mancini (2004), who first clarified the concepts of mass press, political parallelism, professionalization, and state intervention and then used these concepts as dimensions to classify media systems into three prototypical models: polarized pluralist, democratic corporatist, and liberal. Typologies can be considered the beginning of a theory on a subject matter, such as media systems, and can help to classify cases in terms of their similarities and differences. The fourth step is explanation. As Landman (2008) states, "once things have been described and classified, the comparativist can move on to search for those factors that may help explain what has been described and classified" (p. 6). Comparative research aims to understand how characteristic factors of the contextual environment shape communication processes differently in different settings. To understand the relationship between divergent contextual influences and the respective implications for the object of investigation, scholars identify and operationalize key explanatory and outcome variables, which can be arranged in various forms to pose different kinds of explanatory research questions (see later). Confirmed hypotheses are extremely valuable, as they offer the opportunity for prediction. Based on generalizations from the initial study, scholars can make claims about other countries not actually studied or about future outcomes.
The ability to make predictions provides a basis for drawing lessons across countries and contributes to finding solutions to problems prevalent in many countries.

Case selection and research designs

For all five research goals, the selection of the cases included in the comparison is crucial. Hantrais (1999, pp. 100–101) makes an important point by arguing that "any similarities or differences revealed by a cross-national study may be no more than an artifact of the choice of countries." The rationale for the case selection must be linked to a conceptual framework that justifies all design decisions made by the comparativist. In reality, however, investigators of comparative communication projects sometimes fail to present such a rationale, as their case selection is driven by the availability of data. Furthermore, they select only countries to which they have access, which predictably results in an overrepresentation of wealthier countries with better access to academic resources. While this is not problematic per se, it does limit the generalizability of the findings and thus the opportunities for prediction. Presenting a justification for case selection is particularly important for smaller samples. The smaller the sample, the more important it is that a convincing theoretical justification be provided that explicitly states the basis for including each case. As an inexpensive shortcut, scholars increasingly, albeit thoughtlessly, refer to existing typologies of media systems, such as the three models of media/politics relationships in Western Europe and North America by Hallin and Mancini (2004), without any deeper engagement and without demonstrating, in detail, that the variables of their own study are directly linked to Hallin and Mancini's dimensions.
Neither do they link their selection to the rationales of most similar or most different systems designs (see later), which suggest careful case selection based on the research question the researcher seeks to answer. Many scholars are not only unaware of the many alternative comparative frameworks to Hallin and Mancini (see later), but they are also unfamiliar with the many biases involved in uninformed case selection. Depending on the sample size, the following research strategies are available.

Comparative case study analysis

Mono-national case studies can contribute to comparative research if they are composed with a larger framework in mind and follow the Method of Structured, Focused Comparison (George & Bennett, 2005). For case studies to contribute to the cumulative development of knowledge and theory, they must all explore the same phenomenon, pursue the same research goal, adopt equivalent research strategies, ask the same set of standardized questions, and select the same theoretical focus and the same set of variables. Even an isolated single-country study can possess broader significance if it is conducted as an "implicit" comparison. Implicit comparisons use existing typologies as a yardstick to interpret and contextualize the single case at hand. Implicit comparisons need to fulfill several requirements. First, they must be embedded in a comparative context and their analytical tools must come from the comparative literature. Second, the case selection must be justified by arguing that the case is either "representative" (typical of a category), "prototypical" (expected to become typical), "exemplary" (creating a category), "deviant" (the exception to the rule), or "critical" (if it works here, it will work everywhere). Third, it must be shown that the findings are building blocks for revising or expanding an existing comparative typology or theory.
Case studies that meet these criteria and follow the method of structured, focused comparison can even accomplish the important step from "description" to "explanation." They do so by employing tools of causal inference from qualitative methodology, such as "analytic narratives" or "process tracing." Drawing on concepts like detailed narrative, sequencing, path dependence, and critical events, analytic narratives and process tracing provide an explanation based on causal chains rather than general laws or statistical relationships (for details, see George & Bennett, 2005).

Small-N comparative analysis

Today, the standard form of comparative analysis is usually equated with research methods based on John Stuart Mill's (1843) Methods of Agreement and Difference and Adam Przeworski and Henry Teune's (1970) Most Different and Most Similar Systems Designs. Both strategies have many parallels and can be pulled together under the rubrics of Most Similar Systems – Different Outcomes and Most Different Systems – Similar Outcomes. The number of systems compared here is usually somewhere between 3 and 10, and the selection of systems occurs with a specific purpose in mind. Most Similar Systems – Different Outcomes designs seek to identify those key features that differ among otherwise fairly similar systems and that account for the observed outcome in the object under study. Most Different Systems – Similar Outcomes designs, on the other hand, seek to identify those features that are the same among otherwise dramatically different communication systems in an effort to account for similarities in a particular outcome. With both strategies, the systems are selected with regard to the specific contextual conditions influencing the object under investigation (for details, see Landman, 2008).
According to this "quasi-experimental logic," comparativists select their systems in such a way that specific hypotheses about the relationship between structural features of a given media system (independent variables) and outcomes in media performance (dependent variables) can be tested. Let us assume that one is interested in the relationship between press subsidies (i.e., state aid available to newspapers in some media systems but not in others) and press diversity (measured by the number of newspapers in the market): examining whether press subsidies generally promote press diversity requires a comparative analysis. This logic is inherent in all most similar systems designs. Formally speaking, most similar systems designs "manipulate" the independent variable by purposefully selecting cases for the analysis that are in many ways very similar (e.g., Scandinavian media systems) but differ on the one critical variable (e.g., granting press subsidies or not). The challenge in establishing a causal link lies in how to deal with all the other known and unknown variables that also differentiate these media systems (for example, market size) and may have plausible effects on the outcome variable (that is, market pluralism). While carefully selecting cases using a most similar approach can hold many crucial variables "constant," units will never be identical on all but one variable, leaving room for alternative explanations. Such quasi-experimental research designs therefore often forbid a strongly causal attribution of explanatory factors for the observed variance in the dependent variable. However, "soft control" of the variance can be achieved by supplementing the design with qualitative tools of causal inference, such as process tracing or analytic narratives. Additionally, careful theoretical argumentation is crucial.
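The case selection step of a most similar systems design can be sketched in code. The sketch below uses entirely hypothetical country attributes (they are not real figures) and searches a set of macro-level cases for pairs that match on the control variables but differ on the treatment variable, here press subsidies:

```python
# Hypothetical macro-level cases; attribute values are invented for
# illustration, not real data about these countries.
systems = {
    "Norway":  {"market_size": "small", "public_broadcasting": "strong", "press_subsidies": True},
    "Sweden":  {"market_size": "small", "public_broadcasting": "strong", "press_subsidies": True},
    "Denmark": {"market_size": "small", "public_broadcasting": "strong", "press_subsidies": False},
    "USA":     {"market_size": "large", "public_broadcasting": "weak",  "press_subsidies": False},
}

def most_similar_pairs(cases, treatment, controls):
    """Return case pairs that match on all control variables but differ
    on the treatment variable (most similar systems logic)."""
    pairs = []
    names = sorted(cases)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            same_controls = all(cases[a][c] == cases[b][c] for c in controls)
            differs_on_treatment = cases[a][treatment] != cases[b][treatment]
            if same_controls and differs_on_treatment:
                pairs.append((a, b))
    return pairs

pairs = most_similar_pairs(systems, "press_subsidies",
                           ["market_size", "public_broadcasting"])
# Denmark pairs with Norway and with Sweden: the controls are held
# "constant" while the treatment differs, so these pairs qualify.
```

Note that the sketch only formalizes the selection logic; as discussed above, unmeasured variables may still differ between the selected cases.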
A sophisticated extension of the most different and most similar logic was developed by Charles Ragin (1987, 2008). His approach, Qualitative Comparative Analysis (QCA), is a configurational or holistic comparative method that considers each case (system, culture) as a complex entity, as a "whole," which needs to be studied in a case-sensitive way. It combines quantitative, variable-based logic and qualitative, case-based interpretation. It is important to understand that QCA uses a more complex understanding of causality than the most different and most similar logic. As Rihoux (2006, p. 682) points out, QCA assumes that (a) causality is often a combination of "conditions" (explanatory variables) that in interaction eventually produces a phenomenon, the "outcome" (the phenomenon to be explained); (b) several different combinations of conditions may produce the same outcome; and (c) depending on the context, a given condition may very well have a different impact on the outcome. Thus different causal paths, each relevant in a distinct way, may lead to the same outcome. We will return to this method later.

Large-N comparative analysis

Comparative analysis is about control (Sartori, 1994). The influence of potentially significant variables is either controlled for by employing a most similar or a most different systems design or, if we are dealing with a larger number of cases, by way of statistical control. In the latter case, descriptive comparative analysis employs statistical techniques such as factor analysis or cluster analysis, whereas explanatory comparative analysis employs statistical techniques such as regression analysis or analysis of variance. In large-N studies, scholars no longer use theoretically justified purposive samples but larger-sized samples.
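Ragin's configurational logic, described above, can be illustrated with a minimal crisp-set truth table. The cases and conditions below are invented for illustration; dedicated QCA software additionally performs Boolean minimization of such tables, which this sketch omits:

```python
from collections import defaultdict

# Hypothetical cases: each maps two invented conditions to True/False
# plus an observed outcome.
cases = [
    {"name": "A", "subsidies": True,  "strong_psb": True,  "outcome": True},
    {"name": "B", "subsidies": True,  "strong_psb": False, "outcome": True},
    {"name": "C", "subsidies": False, "strong_psb": True,  "outcome": True},
    {"name": "D", "subsidies": False, "strong_psb": False, "outcome": False},
]

def truth_table(cases, conditions):
    """Group cases by their configuration of conditions and record the
    outcomes observed for each configuration (the core step of
    crisp-set QCA)."""
    table = defaultdict(set)
    for case in cases:
        config = tuple(case[c] for c in conditions)
        table[config].add(case["outcome"])
    return dict(table)

table = truth_table(cases, ["subsidies", "strong_psb"])
# Three distinct configurations show a positive outcome: several
# different causal paths lead to the same result, as QCA assumes.
```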
Hence, comparative statistical analysis is less interested in the unique qualities of the cases under study (countries, systems, or cultures) and more interested in the abstract relationships between the variables. The contextual units of analysis (countries, cultures, etc.) are regarded as cases with theoretically relevant attributes. The goal is to determine the extent to which two or more variables co-vary. For instance, scholars may want to study a large number of countries to explore whether the level of negativity in the media about politics allows them to predict the level of mistrust and cynicism in the general public. The focus of a large-N analysis is on parsimonious explanatory designs in which the impact of a few key variables is tested on as many cases as possible, thereby identifying universal laws that can be widely generalized. Large-N studies work best in areas where data are available for secondary analysis from international data archives, something that is rarely the case in communication studies (appropriate strategies for analyzing small-N and large-N studies are presented in more detail later).

Securing equivalence in comparative surveys and content analyses

Holtz-Bacha and Kaid (2011) note that in comparative communication research the "study designs and methods are often compromised by the inability to develop consistent methodologies and data-gathering techniques across countries" (pp. 397–398). Consequently, they call for "harmonization of the research object and the research method" across studies to guarantee the best possible comparability and generalizability. But even within the same comparative study, achieving comparability across data gathered in various countries can be challenging. This question of comparability leads us to the problem of equivalence, as differences and similarities between cases can only be established if equivalence has been secured at various levels.
We distinguish equivalence at the level of constructs, measurements, samples, and administration.

Avoiding construct bias

The first question to be addressed is whether a relevant construct, such as the professionalization of journalists, consists of the same dimensions in two cultures, thereby allowing for identical measurement tools. If preliminary analyses based on literature research, expert testimony, pretests, and triangulation of alternative methods reveal that different indicators are necessary to capture the same underlying meaning of the concept, researchers can still undertake their study by following an emic approach. The idea behind an emic procedure is to measure professionalization nationally and construct a country-sensitive, culturally specific instrument. Reese (2001) argues, for instance, that the meaning of journalistic professionalism varies across countries depending on factors of influence at several layers of analysis. As another example, Rössler (2012) reminds us that a liberal political affiliation is measured differently in North America and Western Europe due to the different standards of the respective national political cultures. Therefore, it may be reasonable to ground a cross-cultural comparison on functional equivalence between the constructs rather than among the single items. However, if this goes as far as yielding different measurements from markedly different instruments, integrating the national results into one analysis becomes quite challenging, as it then requires additional external reference data to support the argument that the data are, in fact, comparable (i.e., equivalent at the construct level). This is one reason why some scholars prefer most similar systems designs to most different systems designs: construct equivalence can be assumed more readily in similar systems. Once construct equivalence is demonstrated convincingly, an etic approach is acceptable.
The instruments employed are then the same, because the construct under study can be assumed to operate similarly and can therefore be tapped with the same attitude and behavior measures in every culture. Note that the (survey) instruments need not be 100% identical but can still be adapted to account for relevant variations. An etic approach is preferable for studies in which scholars wish to apply rigorous tests against possible construct bias. To do so, scholars have two options. The first option is to determine the extent to which they have actually achieved construct equivalence after the fact, primarily by means of statistical analysis. The second option is to develop the key concepts collaboratively by incorporating the collective expertise of international researchers at the outset of a comparative study. An ideal study combines both options by first developing a conceptual framework based on multinational input and then identifying the extent to which conceptual equivalence can be assumed on the basis of the investigated empirical material. For testing conceptual equivalence post hoc, several statistical techniques can be used. For example, scholars can calculate and compare Cronbach's alphas to check whether a battery of questions forms a reliable scale in each separate system or culture. A similar logic applies when the researcher expects more dimensions to be present in the data: if exploratory factor analyses yield similar factors and factor loadings for the various items across countries, this is interpreted as a good sign (Vliegenthart, 2012). Alternatively, multidimensional scaling can be used to check the cross-cultural validity of a survey scale: if the value items yield similar patterns of correlations across all countries under study, external construct equivalence is assumed to be ensured (Wirth & Kolb, 2012).
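As a minimal sketch of such a post hoc check, the following computes Cronbach's alpha separately for two hypothetical countries on an invented three-item scale. Comparable, high alphas across countries would support construct equivalence; a sharply lower alpha in one country would signal construct bias:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: a list of per-item score lists (same respondents in each).
    Classic formula: alpha = k/(k-1) * (1 - sum of item variances /
    variance of the respondents' total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variances = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variances / pvariance(totals))

# Invented responses to a 3-item scale from four respondents per country.
country_a = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 3, 4]]  # items covary
country_b = [[4, 1, 3, 2], [1, 5, 2, 4], [5, 2, 5, 1]]  # items do not

alpha_a = cronbach_alpha(country_a)  # high: scale is reliable here
alpha_b = cronbach_alpha(country_b)  # much lower: possible construct bias
```

A real study would of course use far more respondents and complement this with the factor-analytic checks mentioned above.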
Those interested in more advanced techniques for testing and optimizing equivalence, such as congruence coefficient analysis, multigroup confirmatory factor analysis, or latent class analysis, may refer to the work of Wirth and Kolb (2012). However, as these authors also note, though the techniques just mentioned work well for multi-country surveys, they are less efficient for comparative content analysis. The first reason for this is that content analyses are usually based on categorical rather than metric data; the second is that in content analysis a single item often represents a construct. Both characteristics prevent higher-level statistics from being used effectively for testing construct equivalence in cross-national content analyses. As an alternative, Wirth and Kolb (2012) suggest that scholars offer qualitative discussions of functional equivalence based on explorations of the concept's dimensions, theoretical considerations, additional information, and expert advice. They also suggest working more often with multiple indicators (instead of just one) for concepts addressed in comparative content analyses.

Avoiding measurement bias

Comparisons using an etic approach may suffer from measurement bias if the wording of survey questions or the categories in the content analyses are not translated adequately for the various country versions. As a result, people from different cultures who hold the same position on a certain construct may score differently on a question item, either because it is worded inconsistently across cultures or because it triggers inappropriate connotations in one of the cultures. We will first discuss some precautionary measures a researcher can take at the outset of a comparative study to avoid measurement inconsistencies across cultures before coming to techniques that check for such inconsistencies once the data are collected.
Language equivalence and measurement reliability are of paramount importance. To ensure equal meaning of survey questions and coding instructions, a specific prior action is the translation/back-translation procedure, wherein a translated version of the questionnaire or codebook is first produced and then back-translated into the original language. The result of the back-translation is then compared with the original version to evaluate the quality of the translation. Ideally, this procedure is iterated until a reliable match between the two versions is achieved (Wirth & Kolb, 2004). An important motivation for such procedures is cultural decentering, meaning the removal of culture-specific words, phrases, and concepts that are difficult to translate from the original version of the instrument. An important tool may be the committee approach, in which an interdisciplinary, multicultural team of individuals with expert knowledge of the cultures, languages, and specific research fields jointly develops the research tools (Van de Vijver & Leung, 1997). The language issue has particular implications for calculating reliability in cross-national content analyses. A native language approach, where all coding instruments are translated into the various native languages, is less than ideal because it is essentially impossible to determine meaningful reliability coefficients among coder groups working in different languages. A workable alternative is the project language approach, where all researchers and coders agree upon one common lingua franca (usually English) for instruments, training, and reliability testing. Peter and Lauf (2002) calculated intercoder reliability for the native language and project language approaches and found that reliability scores are generally somewhat lower for coding in a project language, probably due to variations in individual linguistic proficiencies.
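Intercoder reliability comparisons of this kind rest on agreement coefficients. A minimal sketch, using invented topic codes from two hypothetical coders, computes simple percent agreement and Cohen's kappa (chance-corrected agreement); in practice, chance-corrected coefficients such as kappa or Krippendorff's alpha are preferred over raw agreement:

```python
from collections import Counter

def percent_agreement(codes1, codes2):
    """Share of units on which two coders assigned the same code."""
    return sum(a == b for a, b in zip(codes1, codes2)) / len(codes1)

def cohens_kappa(codes1, codes2):
    """Agreement between two coders corrected for chance agreement."""
    n = len(codes1)
    p_observed = percent_agreement(codes1, codes2)
    freq1, freq2 = Counter(codes1), Counter(codes2)
    p_expected = sum(freq1[c] * freq2[c]
                     for c in set(codes1) | set(codes2)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Invented codes (politics/economy/sport) for ten news items.
coder1 = ["pol", "eco", "pol", "spo", "pol", "eco", "spo", "pol", "eco", "pol"]
coder2 = ["pol", "eco", "pol", "spo", "eco", "eco", "spo", "pol", "pol", "pol"]

agreement = percent_agreement(coder1, coder2)  # 0.8
kappa = cohens_kappa(coder1, coder2)           # lower, chance-corrected
```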
Though this appears to support opting for the native language procedure, one cannot turn a blind eye to the fact that the native language option enhances the risk that differences found in the results are confounded with differences among coder groups in varying languages (Rössler, 2012). In order to further check whether all survey items and codebook categories were indeed measured the same way across all countries, additional statistical strategies have been developed to test and enhance measurement equivalence once the data are collected. Although still rarely done, measurement invariance should generally be tested in all comparative communication studies. Of the various strategies that are available for that purpose (see Davidov, Meuleman, Cieciuch, Schmidt, & Billiet, 2014; Wirth & Kolb, 2012), multigroup confirmatory factor analysis is probably the most important for cross-national survey analyses (Kühne, 2018). The communication field is likely to see more comparative survey research in the future, partly due to the growing availability of data on media use from projects such as the World Internet Project. But even in this project, which was designed with a comparative goal from the outset, meaningful conclusions can only be drawn after careful tests of measurement invariance (see Büchi, 2016). The same applies to comparative research based on content analyses; here similar tests of measurement equivalence can be conducted (Wirth & Kolb, 2012).

Avoiding instrument bias

Instrument bias concerns primarily the comparability of survey modes (mail, telephone, personal, online) and culture-specific habits related to those modes on the part of interviewers and interviewees.
In comparative survey research, a problem on the side of interviewees is response bias, which refers to the systematic tendency of individuals in some cultures to either select extreme or modest answers or to exhibit peculiar forms of social desirability. Such differences in communication styles may have interesting substantive reasons embedded in a certain culture, but they make it difficult to compare data cross-culturally without additional tests. Those tests include differential item functioning techniques and confirmatory factor analysis (Vliegenthart, 2012; Wirth & Kolb, 2012). With respect to content analysis, instrument bias refers to the coding instructions in the codebook and the fact that coders (a) may understand and interpret the instructions differently, (b) may possess different levels of knowledge regarding the instructions, and (c) may not consistently apply the codes. In particular, measuring complex news frames, latent evaluations, and ambiguous meanings continues to present a tremendous challenge, and cross-cultural research compounds the problems associated with this measurement. This applies to both computer-assisted and human coding. In an ideal setting, coder training is intense and repeated until intercoder reliability is sufficiently satisfactory. Moreover, coders, in an optimal situation, are closely supervised and constantly retested to assess their work quality throughout the term of the project. One way for cross-cultural content analysis to achieve these standards is to concentrate the coding, to as great a degree as possible, in one professionally managed setting and to monitor all satellite/distance coders via efficient means of communication from the testing center.

Avoiding sampling bias

Sample equivalence refers, in surveys, to an equivalent selection of respondents and, in media content analyses, to an equivalent selection of news outlets.
While surveys strive for probability samples, cross-national content analyses typically rely on systematic rather than representative samples and examine either the most widely distributed media in a market (as measured by circulation or ratings), the most influential outlets in the inter-media agenda-setting process (as measured by news tenor leadership and media citations), or the media most relevant to the issue being studied (as measured by amount of coverage or expert assessment). As Rössler (2012) notes, any selection based on these criteria must also be discussed with reference to proportionality if the media markets or relevant market sectors differ in size. Sectors refer to the relative significance of broadcast versus print, public versus private television, daily versus weekly newspapers, right versus left leaning, upmarket versus mass market, online versus offline, and so forth. Unfortunately, no reliable catalogues exist that classify international media outlets according to these categories, a situation that emphasizes the need for clear justification. Despite these difficulties, these considerations must be factored into the choices made on the basis of the most similar and most different systems design rationale to avoid skewed samples, which, in turn, might cause misinterpretations of findings. In sum, the structure of different media systems must be considered when drawing samples for cross-national content analyses. Similarly, when drawing samples for cross-national surveys, external statistical data on the structure of a country's population must be considered. In conclusion, well-written manuals, clear instructions, and the commitment of all participating researchers to these instructions at the outset of a comparative study are crucial to the establishment of method equivalence.
While equivalence relies on the cultural expertise of collaborators and their analytical abilities to develop a unified theoretical framework, methodological and administrative equivalence largely entails managerial capacities.

Types of research questions and appropriate statistical analyses

As already mentioned, more mature comparative research is explanatory in nature. Vliegenthart (2012) distinguishes four types of research questions: descriptive, basic explanatory, comparison of relation, and comparative explanatory questions. These four types differ in the degree of sophistication of their explanatory ambitions and have different requirements in terms of quality and quantity of cases.

Descriptive comparisons

The most basic research questions are often descriptive in nature and seek to describe the occurrences of certain phenomena and how these occurrences vary between cases. For example, a study may examine how newspapers and television reports differ across two countries, namely Sweden and Belgium, with respect to the framing of an election campaign (Strömbäck & Van Aelst, 2010). In this example, the cases being compared are two countries. In this study and in similar studies, the analyses are descriptive in nature, and as such, they involve comparisons regarding the presence of issue framing and game framing in various newspapers. However, very often, the overarching question of the study is (implicitly or explicitly) framed in an explanatory way: how can one account for differences and similarities across cases? This is also the case in the study by Strömbäck and Van Aelst. They hypothesize that, due to the similarities between political and media system characteristics and their selection of cases based on a most similar systems design, differences between Belgium and Sweden will be minimal.
When they do find differences, the authors find it hard to explain them, and they do not advance much beyond noting that country or political communication system is "what matters" (pp. 56–57). Similarities and differences in election campaign coverage between the two countries might be consistent with expectations derived theoretically from different political and media system characteristics, but this relationship is not tested statistically. As a result, there may be multiple explanations for differences between two cases, even if they are comparable, as in the case of Sweden and Belgium. Additionally, similarities might occur due to general journalistic practices rather than similar system characteristics. Statistically, the descriptive comparison of two (or more) countries is not too difficult, and comparisons of means (e.g., t-tests) and analyses of variance (e.g., ANOVAs) are often sufficient. In some instances, especially those with a mid-range number of cases, one might be interested in a more systematic grouping of cases, for example, to identify two or more clusters of countries that are highly similar. In those instances, techniques such as multidimensional scaling, correspondence analysis, or cluster analysis may be warranted. These three techniques share the underlying logic of positioning cases in comparison to each other and highlighting those cases that are similar or different based on a specific set of criteria or variables. While multidimensional scaling has similarities with the more widely applied factor analysis, it imposes fewer restrictions on the data. Factor analysis uses correlation matrices and requires interval variables with (roughly) normal distributions and linear associations. Multidimensional scaling, however, can be based on any similarity/dissimilarity matrix.
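A minimal sketch of this flexibility, using scikit-learn's MDS implementation on an invented dissimilarity matrix for five countries (the country labels and distances are hypothetical, not taken from any published study):

```python
# Metric MDS on a hypothetical dissimilarity matrix among five countries.
# In practice the distances would be derived from content-analytic or
# survey indicators; here they are invented for illustration.
import numpy as np
from sklearn.manifold import MDS

countries = ["US", "UK", "DE", "FR", "IT"]
D = np.array([
    [0.0, 0.3, 0.6, 0.7, 0.8],
    [0.3, 0.0, 0.5, 0.6, 0.7],
    [0.6, 0.5, 0.0, 0.4, 0.5],
    [0.7, 0.6, 0.4, 0.0, 0.3],
    [0.8, 0.7, 0.5, 0.3, 0.0],
])  # symmetric dissimilarities with a zero diagonal

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
coords = mds.fit_transform(D)  # one 2D position per country
for name, (x, y) in zip(countries, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

The resulting two-dimensional coordinates can then be plotted and the axes interpreted post hoc, as described below.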
Most commonly, the outcome of a multidimensional scaling analysis is two or three dimensions on which each individual case can be positioned. Each dimension must be interpreted post hoc based on its underlying variables. An application of the technique can be found in a comparative survey of journalists by Hanitzsch and colleagues (2010), in which they compare similarities in perceived influences on journalists across 17 countries. A similar technique is correspondence analysis, in which specifically nominal variables are used to construct dimensions. Esser (2008) applies this technique and identifies, based on a comparative content analysis of television election news, three different political news cultures across five Western countries. In a follow-up correspondence analysis of six national press systems over a 40-year time span, he finds similar political news cultures (Esser & Umbricht, 2013). A somewhat different technique, cluster analysis, aims to divide cases into several similar groups. Again, the input consists of the scores of a mid-range number of cases on a predefined set of variables. A common application of this technique is found in political science, where it is used, for example, to compare party manifestos and investigate which parties take similar stances on certain issues (see, for example, Pennings, Keman, & Kleinnijenhuis, 2006). Various techniques can be used to calculate the distance between the cases and the best way to cluster them into groups. The study by Brüggemann and colleagues (2014) provides a good application wherein, based on a cluster analysis of 17 Western countries and relying on a wide variety of data sources, they suggest an adjustment to Hallin and Mancini's (2004) classification of Western media systems. Among other things, the analysis points to the existence of four rather than three groups of countries.
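A hierarchical cluster analysis of this kind can be sketched as follows; the countries, indicators, and figures are invented for illustration and do not reproduce any published classification.

```python
# Hierarchical (Ward) clustering of countries on standardized media-system
# indicators. All indicator values are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

countries = ["SE", "DK", "DE", "UK", "IE", "IT", "ES", "FR"]
# Hypothetical columns: press reach, public-TV share, professionalism index
X = np.array([
    [0.85, 0.40, 0.9],
    [0.80, 0.38, 0.9],
    [0.70, 0.35, 0.8],
    [0.60, 0.30, 0.7],
    [0.55, 0.28, 0.7],
    [0.30, 0.20, 0.5],
    [0.28, 0.22, 0.5],
    [0.35, 0.25, 0.6],
])

Xz = zscore(X, axis=0)           # standardize indicators before clustering
Z = linkage(Xz, method="ward")   # agglomerative clustering (Ward linkage)
groups = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 groups
for c, g in zip(countries, groups):
    print(c, g)
```

The choice of distance measure, linkage method, and number of clusters is itself a substantive decision and should be justified theoretically, not only statistically.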
Basic explanatory analysis

The second type of research question addressed in comparative research is a basic explanatory one. The key question is whether certain variables at the unit level impact other variables measured at the same level. Schuck et al. (2013), for example, study how political system characteristics (closed-list proportional system or not) affected the level of conflict framing in national outlets during the 2009 European Parliamentary election campaign. In such instances, multivariate analyses, such as regression analysis, can be applied. An issue for many of the studies, however, is the limited number of cases. To conduct multivariate statistical techniques, a considerable number of cases are required, and in many instances, data for a sufficient number of cases are unavailable. In such situations, two solutions exist: to introduce an additional longitudinal component to the design, or to rely on the comparative logic of QCA, as mentioned previously, and its fuzzy-set extension. When one is not able to extend the sample to include more countries (or other types of units), one may be able to obtain data from the same cases at multiple points in time. In such instances, not only does the number of observations increase, but both cross-sectional and cross-time variance in the dependent variable can be considered. In this case, one deals with a pooled dataset and can subject it to pooled time series analysis. For example, Vliegenthart et al. (2008) examine the impact of the presence of various frames in national newspapers on aggregate-level EU support as measured in the biannual Eurobarometer. They consider seven countries over an extended period of time—almost 20 years. Thus, each biannual observation can be considered a separate case.
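Such a country-by-wave pooled dataset, together with a simple pooled regression that uses country dummies and a lagged dependent variable (two common first responses to unit heterogeneity and autocorrelation), can be sketched as follows. All data and effect sizes are simulated for illustration only.

```python
# Sketch of a pooled time-series setup: country-halfyear observations,
# country dummies to absorb unit heterogeneity, and a lagged dependent
# variable to model temporal dependence. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
countries = ["NL", "DE", "FR", "IT", "ES", "DK", "IE"]
rows = []
for c in countries:
    level = rng.normal(50, 5)           # country-specific baseline support
    for t in range(40):                 # 40 biannual waves (~20 years)
        frames = rng.uniform(0, 1)      # hypothetical share of frames in coverage
        support = level + 8 * frames + rng.normal(0, 2)
        rows.append({"country": c, "wave": t,
                     "frames": frames, "support": support})
df = pd.DataFrame(rows)

# Lag the dependent variable within each country
df["support_lag"] = df.groupby("country")["support"].shift(1)
df = df.dropna()

model = smf.ols("support ~ frames + support_lag + C(country)",
                data=df).fit()
print(model.params["frames"])  # should recover an effect near 8
```

This is only a baseline specification; the diagnostic issues discussed next (non-stationarity, contemporaneous correlation, panel heteroscedasticity) would require further checks and possibly panel-corrected standard errors.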
While this solves the issue of too few observations, it also poses additional challenges related to the particular characteristics of pooled time series, in particular, the fact that the observations are not independent. Accordingly, there are four issues to be considered. The first is heterogeneity, which refers to differences in levels of the dependent variable across units (in this example, countries) that cannot be explained by the independent variables included in the model. The second, autocorrelation and stationarity, concerns the temporal dependence of observations, that is, the degree to which the current observation depends on the previous observation (autocorrelation). If this dependency is high, it might be a sign of non-stationarity, indicating that the mean of the dependent variable is not stable over time. The third issue is contemporaneous correlation, which indicates correlation between observations in different units measured at the same time. This occurs, for example, due to external events that affect the different units simultaneously. The fourth issue is unit-level heteroscedasticity, which means that the model explains variance for one unit better than for others. How to address these issues is part of ongoing scientific debate in, for example, political science (e.g., Beck & Katz, 1995; Wilson & Butler, 2007), and thus, a detailed discussion of the topic is beyond the scope of this entry. A second alternative to address the limited number of observations is to rely on qualitative comparative analysis, or QCA. As previously mentioned, this method assumes that a constellation of factors (independent variables) results in a certain outcome (dependent variable) and that different constellations (paths) may yield the same outcome. This alternative is mainly applied in neighboring fields such as sociology and political science.
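The truth-table logic at the heart of QCA can be illustrated with a small, entirely hypothetical example in which dichotomized country conditions are grouped into configurations and scored for how consistently they co-occur with the outcome. The conditions loosely echo the personalization example discussed below, but the country scores are invented.

```python
# Crisp-set truth-table sketch in the spirit of QCA: dichotomized
# conditions, configurations, and a simple consistency score.
# All country scores are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "presidential":     [1, 1, 0, 0, 0, 1, 0, 0],
    "low_party_id":     [1, 1, 1, 1, 0, 0, 0, 1],
    "tabloid_strength": [0, 1, 1, 1, 0, 0, 1, 0],
    "personalization":  [1, 1, 1, 1, 0, 0, 0, 0],  # outcome
})

# Group cases into configurations of conditions and compute, for each,
# the share of cases showing the outcome (a crude consistency measure)
truth_table = (df.groupby(["presidential", "low_party_id", "tabloid_strength"])
                 ["personalization"]
                 .agg(consistency="mean", n="size")
                 .reset_index())

# Configurations whose members (nearly) all show the outcome
sufficient = truth_table[truth_table["consistency"] >= 0.8]
print(sufficient)
```

Dedicated QCA software additionally handles logical minimization of such configurations and, in the fuzzy-set variant, calibration of intermediate membership values; this sketch covers only the truth-table step.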
One key characteristic of QCA is that it dichotomizes the variables included, such that a certain phenomenon (whether an explanatory or an outcome variable) is either absent (out) or present (in). Recent years have seen an extension of this method that allows for more variation where phenomena are not fully in or out of a category, thus permitting several intermediate values. This fuzzy-set logic (fsQCA) has, for example, been applied by Downey and Stanyer (2010) in their investigation of the personalization of political communication in 20 countries. Their analysis suggests that there are two paths to personalization of political communication. The first combines a presidential system with low party identification, and the second combines low party identification with professionalized campaigns and strong tabloid media.

Comparison of relation

A third type of research question is the comparison of relation, which involves investigating the relationship between an independent and a dependent variable in different contexts. The comparison of contexts serves as a robustness check to determine whether a relationship holds in various situations. Holtz-Bacha and Norris (2001), for example, test the effects of public television preferences on political knowledge and find that in 10 out of the 14 countries they studied, a positive and significant relationship was present. They rely on a set of regression analyses, one for each country. Alternatively, one can pool the data and use dummy variables for the countries and interaction terms between the independent variable of interest and the dummy variables. If these interaction terms are not significant, the relationship is similar across countries.

Comparative explanatory

The final type of question is labeled comparative explanatory.
It goes one step beyond the comparison of relation question in that it addresses explanations for different relationships across units by taking characteristics of those units into consideration. An example of a comparative explanatory question is found in the study by Schuck, Vliegenthart, and de Vreese (2016). They investigate the effect of exposure to conflict framing on turnout in the 2009 European parliamentary election campaign. This relationship is positioned at the individual level, wherein the individuals are nested within the various EU member states. Schuck et al. hypothesize and find that the strength of the effect depends on a country characteristic, namely, the overall evaluation of the EU in media coverage. More specifically, the more positive the coverage, the stronger the effect of conflict framing. In this case, two levels are combined, the individual (micro) level and the macro (country) level, where the former is nested within the latter. In comparable cases, even additional levels can be considered, such as journalists nested in organizations nested in countries (Hanitzsch & Berganza, 2012). In these instances, it makes sense to rely on multilevel modeling, though alternative strategies can also be considered when the number of higher level units is limited (e.g., clustered standard errors). As with pooled time series, the main challenge posed by the nested structure of many comparative datasets is that observations are not independent, which violates one of the main assumptions of many multivariate analyses, such as regression analysis. If we take the example of citizens in various EU countries, respondents from the same country are likely to have many commonalities: their scores on certain variables will be more highly correlated, and they might display particular relationships between variables.
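A sketch of such a nested design, using simulated data and hypothetical variable names (a continuous vote-intention outcome rather than binary turnout, to keep the model linear), might look as follows with statsmodels' mixed-effects interface. The variable names and effect sizes are assumptions for illustration, not values from the study described above.

```python
# Multilevel sketch: respondents nested in countries, with a cross-level
# interaction between individual-level exposure and a country-level
# moderator. All data are simulated; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for c in range(20):                      # 20 countries (higher-level units)
    eu_tone = rng.uniform(-1, 1)         # country-level tone of EU coverage
    intercept = rng.normal(0, 0.5)       # country-specific random intercept
    for _ in range(50):                  # 50 respondents per country
        exposure = rng.uniform(0, 1)     # exposure to conflict framing
        vote_intention = (intercept + 0.2 * exposure
                          + 0.6 * exposure * eu_tone + rng.normal(0, 1))
        rows.append({"country": c, "exposure": exposure,
                     "eu_tone": eu_tone, "vote_intention": vote_intention})
df = pd.DataFrame(rows)

# Random-intercept model with a cross-level interaction term
md = smf.mixedlm("vote_intention ~ exposure * eu_tone", df,
                 groups=df["country"])
fit = md.fit()
print(fit.params["exposure:eu_tone"])  # cross-level interaction estimate
```

A significant `exposure:eu_tone` term is what answers the comparative explanatory question: the individual-level effect varies systematically with the country-level moderator.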
It is exactly these relationships, and how they potentially differ across countries, that lead scholars conducting comparative research to rely on multilevel modeling. To conduct a multilevel model, one needs a reasonable number of higher level units; specifically, a minimum of 10–15 units is required. Otherwise, it is more appropriate either to work with dummies for the higher level units (see previous section) and reason theoretically about different relations across countries, or to use standard errors clustered at the higher level. When conducting a multilevel analysis, two models exist. The first relies on fixed effects of the independent variables, wherein the effects are modeled as being the same across all higher level units and only the intercept varies across these units (random intercept). The second relies on random effects, wherein relationships between variables measured at the lower level differ across higher level units. When the independent variable interacts with a variable measured at the higher level, a comparative explanatory question is addressed. A final important assumption in multilevel modeling is that the selection of higher level units resembles a true random sample of the larger population. In many instances, it may be possible to obtain data for a relatively solid sample of Western countries, but it may be more difficult for countries from other parts of the world. This is important to keep in mind when considering the generalizability of findings. A hands-on primer on how to conduct multilevel models is available in Hayes (2005).

Suitable theoretical frameworks

The micro-macro links must be integrated into the theoretical framework that underlies the comparative analysis. Norris (2009) states that without a guiding theoretical map, comparativists "remain stranded in Babel" (p.
323) and that only the development of widely shared core theoretical concepts and standardized operational measures can reduce the "cacophonous Babel" in comparative communication research (p. 327). Unfortunately, even today, many comparativists fail to explicate their objectives and theoretical foundations, and hence, they end up with little more than merely descriptive findings. There are, of course, exceptions. Esser and Hanitzsch (2012b), in their Handbook of Comparative Communication Research, introduce several suitable frameworks, such as the political communication system, media system, media market, media audience, media culture, journalism culture, election communication system, and news-making system, among others. It is now vital that these concepts and frameworks be used, criticized, amended, and refined, as beginning from scratch in each new publication will not advance the field of comparative communication research. Moreover, using complaints about the alleged immaturity of the field as an excuse for delivering yet more immature studies will only serve to negatively affect the advancement of research in the field. Accordingly, given that considerable progress has been made in this field, we must continue to build on it.

Adjusting research designs to account for effects of globalization

Norris and Inglehart (2009) produced a ranking that vividly illustrates the extent to which the world's countries have become cosmopolitan, that is, absorbent of transborder influences. Esser and Pfetsch concluded some time ago that "[i]n times of growing globalization and supranational integration … it is becoming increasingly difficult to treat societies and cultures as isolated units" (2004, p. 401).
For this reason, Kohn (1989) created, with great vision, another model for international comparison, which he called the transnational model, a model that treats countries as loci of border-transgressing trends. As a consequence, two sub-approaches can be identified. The objective of the first sub-approach is to investigate transnational phenomena and determine how they can be observed in different countries. An example would be to investigate transnational broadcasters, such as Al Jazeera or the BBC World Service, transnational Internet platforms, such as Facebook or YouTube, and transnational entertainment formats, such as Big Brother, Who Wants to Be a Millionaire, or Disney productions. This sub-approach is interested in how transnational media products are domesticated and how local settings differently influence the reception and interpretation of transnational products. Thus, the comparative research question is, "How are transnational or transcultural phenomena revealed in different countries?" (Hasebrink, 2012, p. 386). Accordingly, this approach acknowledges that countries are exposed to similar trends and developments but that those developments play out differently in different contexts. This sub-approach further acknowledges that producers, products, and audiences are no longer primarily defined by membership in national communities. Instead, other forms of belonging come to the forefront. This brings us to the second sub-approach. Many examples suggest that in a globalized world, transnational flows of communication intersect in new spaces that do not necessarily correspond with national boundaries. These new spaces are de-territorialized, that is, they are no longer confined to territorial borders and have been variously called translocal mediascapes (Appadurai, 1996), "emerging transnational mediated spheres" (Hellman & Riegert, 2012), "new forms of public connectivity" (Voltmer, 2012) or, more briefly, "media cultures" (Couldry & Hepp, 2012).
Examples of de-territorialized communities include international movements of online activism, audiences of global crisis events and international entertainment programs, fans of international celebrities, followers of international religious movements, and satellite television viewerships of ethnic diasporas dispersed over many countries. Whereas the first sub-approach, glocalization, asks how global media phenomena are appropriated within distinct national borders, the second sub-approach, de-territorialization, questions the idea of fixed national borders and asks what new border-transgressing scapes and spheres have emerged. The goal of the second sub-approach is to develop new classification schemes that may serve as a foundation for new forms of comparison. It is ironic, however, that just at the point when the communication discipline has gained a firmer grip on methodological approaches, useful frameworks, and role-model studies (see Esser & Hanitzsch, 2012b), the processes of transnationalization are apparently undermining the comparative rationale. For this reason, comparativists may wish to update their research strategies to account for the challenges of transnationalization. Accordingly, we offer four necessary extensions. First, comparativists must realize that explanatory variables for certain communication outcomes will no longer be derived from domestic contexts alone, but will also come from foreign models. Additional variables must be incorporated in comparative designs, namely, those that represent international relations.
These external influences can express structural power or dependency relations between media systems (i.e., the hegemonic impact of core powers on peripheral systems in a given network), cultural imperialism of values ("Americanization"), penetration of ideological or economic values (from the West to the South or East), or more neutral processes of interconnectivity and diffusion of ideas. Here, the longitudinal aspect might become more important than the cross-sectional aspect. Second, in addition to incorporating the linking mechanisms between individual cases and transnational structures, comparativists must study the interplay between external (border-transgressing) and internal (domestic) factors, as it will help them understand how media systems respond to transnational influences. Media systems are not empty containers, and journalists and news organizations are not passive receivers of outside stimuli. Thus, the ways in which the various media systems respond are likely to reveal valuable information about the specific conditions of the media system in question. Put differently, transborder influences are likely to trigger cultural shifts and structural transformations within media systems. However, as these processes still occur within national contexts, these national pathways can still be subjected to comparative analysis. This notion of path-dependency is also reflected in the concept of glocalized hybrid cultures and hybrid media formats. An early framework that attempted to account for the complex relationships between supranational forces and individual cases is Tilly's (1984) idea of encompassing comparisons.
It is a concept that requires the researcher to explicitly detail the relationship of an individual system to a larger, more potent connecting structure—such as its membership in a European film industry, a shared and border-transgressing journalism culture, an Asian media market, and so on—that affects the behaviors of its parts. "With this logic," as Comstock (2012) explains, "the encompassing method selects cases on the basis of their representativeness of common positions in the overall system" (p. 376). Thus, the goal of the analysis is "to identify patterns of difference in how hierarchically related localities respond to the same system-level dynamics and perpetuate systemic inequality" (p. 376), for example, between more and less powerful components. A third innovation, in addition to incorporating external variables and examining their interplay with domestic variables, is to integrate de-territorialization into comparative designs. In this sense, it may no longer suffice to compare one nation's journalists with another national sample. Rather, it may also be necessary to compare both to a third emerging type, specifically, a transnationally oriented community of journalists working in different countries for transnationally oriented media, including Al Jazeera, Financial Times, The Wall Street Journal, International Herald Tribune, TIME, The Economist, BBC World Service, and so on. Thus, comparativists may need to increase the number of cases in their designs by including additional globalized control groups to allow for a better assessment of how relevant the national is in relation to the transnational (Reese, 2008). A fourth innovation, which was mentioned previously, is the adoption of a multilevel approach in comparative communication research where the national level is merely one among many levels.
As the nation-state has long ceased to be the only meaningful category, additional levels of analysis, both above and below the nation-state, must be included, depending on the research question at hand. With these modifications, the comparative approach will continue to contribute substantially to the progression of knowledge in the communication discipline.

Outlook

Only in the past decade have communication scientists slowly begun to more widely integrate comparative elements into their research. Nonetheless, substantial progress has been made due to the increased application of comparative conceptual frameworks and the availability of comparative, mainly cross-national, data. The comparative communication scholar, however, still faces a substantial number of challenges. One such challenge is the rigorous application of comparative logics, in line with work on most similar and most different systems designs as found in political science. The second challenge is to increase the number of cases included in the analyses by moving beyond two- or three-country comparisons. This would offer opportunities to answer comparative explanatory questions and to more fully understand the role of context in the effects of communicative processes. Finally, trends such as internationalization and globalization require the researcher to consider multiple units of analysis and integrate them into a single empirical design to better capture today's complex reality.
SEE ALSO: Comparative Research Methods; Content Analysis, Quantitative; Description and Explanation; Emic Approach to Qualitative Research; Etic Approach to Qualitative Research; Generalizability; Longitudinal Data Analysis, Panel Data Analysis; Measurement Invariance (Time, Samples, Contexts); Multilevel Modeling; Qualitative Methodology; Quantitative Methodology; Regression Analysis, Linear; Reliability; Sampling, Content Analysis; Sampling, Nonrandom; Survey Methods, Traditional, Public Opinion Polling; Triangulation

References

Appadurai, A. (1996). Modernity at large: Cultural dimensions of globalization. Minneapolis, MN: University of Minnesota Press.
Beck, N., & Katz, J. (1995). What to do (and not to do) with time-series-cross-section data in comparative politics. American Political Science Review, 89(3), 634–647. doi:10.2307/2082979
Brüggemann, M., Engesser, S., Büchel, F., Humprecht, E., & Castro, L. (2014). Hallin and Mancini revisited: Four empirical types of Western media systems. Journal of Communication, 64(6), 1037–1065. doi:10.1111/jcom.12127
Büchi, M. (2016). Measurement invariance in comparative Internet use research. Studies in Communication Sciences, 16(1), 61–69. doi:10.1016/j.scoms.2016.03.003
Comstock, S. C. (2012). Incorporating comparison. In S. Babones & C. Chase-Dunn (Eds.), Handbook of world-systems analysis (pp. 375–376). London: Routledge.
Couldry, N., & Hepp, A. (2012). Comparing media cultures. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 249–261). London: Routledge.
Davidov, E., Meuleman, B., Cieciuch, J., Schmidt, P., & Billiet, J. (2014). Measurement equivalence in cross-national research. Annual Review of Sociology, 40, 55–75. doi:10.1146/annurev-soc-071913-043137
Downey, J., & Stanyer, J. (2010). Comparative media analysis: Why some fuzzy thinking might help.
Applying fuzzy set qualitative comparative analysis to the personalization of mediated political communication. European Journal of Communication, 25(4), 331–347. doi:10.1177/0267323110384256
Esser, F. (2008). Dimensions of political news cultures: Sound bite and image bite news in France, Germany, Great Britain, and the United States. International Journal of Press/Politics, 13(4), 401–428. doi:10.1177/1940161208323691
Esser, F., & Hanitzsch, T. (2012a). On the why and how of comparative inquiry in communication studies. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 3–22). London: Routledge.
Esser, F., & Hanitzsch, T. (Eds.). (2012b). Handbook of comparative communication research. London: Routledge.
Esser, F., & Pfetsch, B. (2004). Meeting the challenges of global communication and political integration: The significance of comparative research in a changing world. In F. Esser & B. Pfetsch (Eds.), Comparing political communication: Theories, cases, and challenges (pp. 384–410). New York: Cambridge University Press.
Esser, F., & Umbricht, A. (2013). Competing models of journalism? Political affairs coverage in U.S., British, German, Swiss, French and Italian newspapers. Journalism, 15(8), 989–1007. doi:10.1177/1464884913482551
George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. Cambridge, MA: MIT Press.
Hallin, D., & Mancini, P. (2004). Comparing media systems: Three models of media and politics. Cambridge, UK: Cambridge University Press.
Hanitzsch, T., Anikina, M., Berganza, R., Cangoz, I., Coman, M., Hamada, B. I., … Yuen, K. W. (2010). Modeling perceived influences on journalism: Evidence from a cross-national survey of journalists. Journalism & Mass Communication Quarterly, 87(1), 7–24. doi:10.1177/107769901008700101
Hanitzsch, T., & Berganza, R. (2012).
Explaining journalists' trust in public institutions across 20 countries: Media freedom, corruption and ownership matter most. Journal of Communication, 62(5), 794–814. doi:10.1111/j.1460-2466.2012.01663.x
Hantrais, L. (1999). Cross contextualization in cross-national comparative research. International Journal of Social Research Methodology, 2(2), 93–108. doi:10.1080/136455799295078
Hasebrink, U. (2012). Comparing media use and reception. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 382–399). London: Routledge.
Hayes, A. F. (2005). Statistical methods for communication science. Mahwah, NJ: Erlbaum.
Hellman, M., & Riegert, K. (2012). Emerging transnational news spheres in global crisis reporting: A research agenda. In I. Volkmer (Ed.), The handbook of global media research (pp. 156–174). Malden, MA: Wiley-Blackwell.
Holtz-Bacha, C., & Kaid, L. L. (2011). Political communication across the world: Methodological issues involved in international comparisons. In E. P. Bucy & R. L. Holbert (Eds.), Sourcebook for political communication research: Methods, measures, and analytical techniques (pp. 395–416). New York: Routledge.
Holtz-Bacha, C., & Norris, P. (2001). To entertain, inform, and educate: Still the role of public television? Political Communication, 18(2), 123–140. doi:10.1080/105846001750322943
Kohn, M. L. (1989). Cross-national research as an analytic strategy. In M. L. Kohn (Ed.), Cross-national research in sociology (pp. 77–102). Newbury Park, CA: SAGE.
Kühne, R. (2018). Measurement invariance. In J. Matthes (Gen. Ed.), C. S. Davis & R. F. Potter (Assoc. Eds.), The international encyclopedia of communication research methods. Malden, MA: John Wiley & Sons, Inc.
Landman, T. (2008). Issues and methods in comparative politics (3rd ed.). London: Routledge.
Mancini, P., & Hallin, D. C. (2012). Some caveats about comparative research in media studies. In H. A.
Semetko & M. Scammell (Eds.), The SAGE handbook of political communication (pp. 509–517). Thousand Oaks, CA: SAGE.
Mill, J. S. (1843). A system of logic. London: Longman.
Norris, P. (2009). Comparative political communications: Common frameworks or Babelian confusion? Government and Opposition, 44(3), 321–340. doi:10.1111/j.1477-7053.2009.01290.x
Norris, P., & Inglehart, R. (2009). Cosmopolitan communications: Cultural diversity in a globalized world. New York: Cambridge University Press.
Pennings, P., Keman, H., & Kleinnijenhuis, J. (Eds.). (2006). Doing research in political science: An introduction to comparative methods and statistics (2nd ed.). London: SAGE.
Peter, J., & Lauf, E. (2002). Reliability in cross-national content analysis. Journalism and Mass Communication Quarterly, 79(4), 815–832. doi:10.1177/107769900207900404
Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. New York: John Wiley & Sons, Inc.
Ragin, C. C. (1987). The comparative method: Moving beyond qualitative and quantitative strategies. Berkeley and Los Angeles, CA: University of California Press.
Ragin, C. C. (2008). Qualitative comparative analysis using fuzzy sets (fsQCA). In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods: Qualitative comparative analysis and related techniques (pp. 87–122). Thousand Oaks, CA: SAGE.
Reese, S. (2001). Understanding the global journalist: A hierarchy-of-influences approach. Journalism Studies, 2(2), 173–187. doi:10.1080/14616700118394
Reese, S. D. (2008). Theorizing a globalized journalism. In M. Loeffelholz & D. Weaver (Eds.), Global journalism research: Theories, methods, findings, future (pp. 240–252). Chichester, UK: Wiley-Blackwell.
Rihoux, B. (2006). Qualitative comparative analysis (QCA) and related systematic comparative methods: Recent advances and remaining challenges for social science research. International Sociology, 21(5), 679–706. doi:10.1177/0268580906067836
Rössler, P. (2012).
Comparative content analysis. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 459–468). London: Routledge.
Sartori, G. (1994). Compare why and how? In M. Dogan & A. Kazancigil (Eds.), Comparing nations: Concepts, strategies, substance (pp. 14–34). Oxford: Blackwell.
Schuck, A., Vliegenthart, R., & de Vreese, C. (2016). Who's afraid of conflict? The mobilizing effect of conflict framing in campaign news. British Journal of Political Science, 46(1), 177–194. doi:10.1017/S0007123413000525
Schuck, A. R. T., Vliegenthart, R., Boomgaarden, H. G., Elenbaas, M., Azrout, R., Van Spanje, J., & de Vreese, C. H. (2013). Explaining campaign news coverage: How medium, time and context explain variation in the media framing of the 2009 European Parliamentary elections. Journal of Political Marketing, 12(1), 8–28. doi:10.1080/15377857.2013.752192
Strömbäck, J., & Van Aelst, P. (2010). Exploring some antecedents of the media's framing of election news: A comparison of Swedish and Belgian election news. International Journal of Press/Politics, 15(1), 41–59. doi:10.1177/1940161209351004
Tilly, C. (1984). Big structures, large processes, huge comparisons. Los Angeles, CA: SAGE.
Van de Vijver, F., & Leung, K. (1997). Methods and data analysis of comparative research. In J. W. Berry, Y. P. Poortinga, & J. Pandey (Eds.), Handbook of cross-cultural psychology (2nd ed., Vol. 1, pp. 257–300). Needham Heights, MA: Allyn & Bacon.
Vliegenthart, R. (2012). Advanced strategies for data analysis: Opportunities and challenges of comparative data. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 486–500). New York: Routledge.
Vliegenthart, R., Schuck, A. R. T., Boomgaarden, H. G., & de Vreese, C. H. (2008). News coverage and support for European integration, 1990–2006. International Journal of Public Opinion Research, 20(4), 415–439.
doi:10.1093/ijpor/edn044
Voltmer, K. (2012). The media in transitional democracies. Cambridge, UK: Polity Press.
Wilson, S. E., & Butler, D. M. (2007). A lot more to do: The sensitivity of time-series cross-section analyses to simple alternative specifications. Political Analysis, 15(2), 101–123. doi:10.1093/pan/mpl012
Wirth, W., & Kolb, S. (2004). Designs and methods of comparative political communication research. In F. Esser & B. Pfetsch (Eds.), Comparing political communication: Theories, cases, and challenges (pp. 87–111). New York: Cambridge University Press.
Wirth, W., & Kolb, S. (2012). Securing equivalence: Problems and solutions. In F. Esser & T. Hanitzsch (Eds.), The handbook of comparative communication research (pp. 469–485). London: Routledge.

Further reading

Esser, F., & Hanitzsch, T. (Eds.). (2012). Handbook of comparative communication research. London: Routledge.
Landman, T. (2008). Issues and methods in comparative politics (3rd ed.). London: Routledge.

Frank Esser is professor of international and comparative media research at the University of Zurich, where he co-directs an 80-person strong National Research Center on the Challenges to Democracy in the 21st Century (NCCR Democracy). He has held visiting positions at the Universities of Oklahoma, Texas–Austin, and California–San Diego. His research focuses on cross-national studies of news journalism and political communication. His books include Comparing Political Communication (2004, with B. Pfetsch), Handbook of Comparative Communication Research (2012, with T. Hanitzsch), and Comparing Political Journalism (2016, with de Vreese and Hopmann).

Rens Vliegenthart is professor of media and society in the Department of Communication Science and the Amsterdam School of Communication Research at the University of Amsterdam. His research focuses on interactions between media and politics and effects of media coverage on public opinion.