Research Methods: Problems, Questions, and Ethics
These research notes cover topics such as research problems, questions, hypotheses, ethics in nursing, and quantitative research design. Key ethical principles like beneficence, respect for human dignity, and justice are discussed. The document also delves into quantitative research methodologies, including experimental, quasi-experimental, and nonexperimental designs.
CHAPTER 4: Research Problems, Research Questions, and Hypotheses

A research problem is a perplexing or enigmatic situation that a researcher wants to address through disciplined inquiry. Researchers usually identify a broad topic, narrow the problem scope, and identify questions consistent with a paradigm of choice. Common sources of ideas for nursing research problems are clinical experience, relevant literature, quality improvement initiatives, social issues, theory, and external suggestions. Key criteria in assessing a research problem are that the problem should be clinically significant, researchable, feasible, and of personal interest. Feasibility involves the issues of time, researcher skills, cooperation of participants and other people, availability of facilities and equipment, and ethical considerations.

Researchers communicate their aims as problem statements, statements of purpose, research questions, or hypotheses. Problem statements, which articulate the nature, context, and significance of a problem, include several components organized to form an argument for a new study: problem identification; the background, scope, and consequences of the problem; knowledge gaps; and possible solutions to the problem. A statement of purpose, which summarizes the overall study goal, identifies key concepts (variables) and the population. Purpose statements often communicate, through the use of verbs and other key terms, the underlying research tradition of qualitative studies, or whether a study is experimental or nonexperimental in quantitative ones. A research question is the specific query researchers want to answer in addressing the research problem. In quantitative studies, research questions usually concern the existence, nature, strength, and direction of relationships.

In quantitative studies, a hypothesis is a statement of predicted relationships between two or more variables. Directional hypotheses predict the direction of a relationship; nondirectional hypotheses predict the existence of relationships, not their direction. Research hypotheses predict the existence of relationships; null hypotheses, which express the absence of a relationship, are the hypotheses subjected to statistical testing. Hypotheses are never proved or disproved in an ultimate sense—they are accepted or rejected and supported or not supported by the data.

CHAPTER 7: Ethics in Nursing Research

Researchers face ethical dilemmas in designing studies that are both ethical and rigorous. Codes of ethics have been developed to guide researchers. Three major ethical principles from the Belmont Report are incorporated into most guidelines: beneficence, respect for human dignity, and justice. Beneficence involves the performance of some good and the protection of participants from physical and psychological harm and exploitation. Respect for human dignity involves participants’ right to self-determination, which means they are free to control their own actions, including voluntary participation. Full disclosure means that researchers have fully divulged participants’ rights and the risks and benefits of the study. When full disclosure could bias the results, researchers sometimes use covert data collection (the collection of information without the participants’ knowledge or consent) or deception (providing false information). Justice includes the right to fair treatment and the right to privacy.
In the United States, privacy has become a major issue because of the Privacy Rule regulations that resulted from the Health Insurance Portability and Accountability Act (HIPAA). Various procedures have been developed to safeguard study participants’ rights. For example, researchers can conduct a risk/benefit assessment in which the potential benefits of the study to participants and society are weighed against the costs. Informed consent procedures, which provide prospective participants with information needed to make a reasoned decision about participation, normally involve signing a consent form to document voluntary and informed participation. In qualitative studies, consent may need to be continually renegotiated with participants as the study evolves, through process consent procedures. Privacy can be maintained through anonymity (wherein not even researchers know participants’ identities) or through formal confidentiality procedures that safeguard the information participants provide. U.S. researchers can seek a Certificate of Confidentiality that protects them against the forced disclosure of confidential information (e.g., by a court order). Researchers sometimes offer debriefing sessions after data collection to provide participants with more information or an opportunity to air complaints.

Vulnerable groups require additional protection. These people may be vulnerable because they are unable to make a truly informed decision about study participation (e.g., children), because of diminished autonomy (e.g., prisoners), or because circumstances heighten the risk of physical or psychological harm (e.g., pregnant women). External review of the ethical aspects of a study by an ethics committee, Research Ethics Board (REB), or Institutional Review Board (IRB) is often required by either the agency funding the research or the organization from which participants are recruited. In studies in which risks to participants are minimal, an expedited review (review by a single member of the IRB) may be substituted for a full board review; in cases in which there are no anticipated risks, the research may be exempted from review. Researchers need to give careful thought to ethical requirements throughout the study’s planning and implementation and to ask themselves continually whether safeguards for protecting participants are sufficient. Ethical conduct in research involves not only protection of the rights of human and animal subjects but also efforts to maintain high standards of integrity and avoid such forms of research misconduct as plagiarism, fabrication of results, or falsification of data.

CHAPTER 9: Quantitative Research Design

Many quantitative nursing studies aim to elucidate cause-and-effect relationships. The challenge of research design is to facilitate inferences about causality. One criterion for causality is that the cause must precede the effect. Another is that a relationship between a presumed cause (independent variable) and an effect (dependent variable) cannot be explained as being caused by other (confounding) variables. In an idealized model, a counterfactual is what would have happened to the same people simultaneously exposed and not exposed to a causal factor. The effect is the difference between the two. The goal of research design is to find a good approximation to the idealized (but impossible) counterfactual.
Experiments (or randomized controlled trials, RCTs) involve manipulation (the researcher manipulates the independent variable by introducing a treatment or intervention), control (including use of a control group that is not given the intervention and represents the comparative counterfactual), and randomization or random assignment (with people allocated to experimental and control groups at random so that they are equivalent at the outset). Subjects in the experimental group usually get the same intervention, as delineated in formal protocols, but some studies involve patient-centered interventions (PCIs) that are tailored to meet individual needs or characteristics. Researchers can expose the control group to various conditions, including no treatment, an alternative treatment, standard treatment (“usual care”), a placebo or pseudointervention, different doses of the treatment, or a delayed treatment (for a wait-list group). Random assignment is done by methods that give every participant an equal chance of being in any group, such as by flipping a coin or using a table of random numbers. Randomization is the most reliable method for equating groups on all characteristics that could affect study outcomes. Randomization should involve allocation concealment that prevents foreknowledge of upcoming assignments. Several variants of simple randomization exist, such as permuted block randomization, in which randomization is done for blocks of people—for example, six or eight at a time, in randomly selected block sizes (a brief sketch of both procedures follows this passage). Blinding (or masking) is often used to avoid biases stemming from participants’ or research agents’ awareness of group status or study hypotheses. In double-blind studies, two groups (e.g., participants and investigators) are blinded.

Many specific experimental designs exist. A posttest-only (after-only) design involves collecting data after an intervention only. In a pretest–posttest (before–after) design, data are collected both before and after the intervention, permitting an analysis of change. Factorial designs, in which two or more independent variables are manipulated simultaneously, allow researchers to test both main effects (effects from manipulated independent variables) and interaction effects (effects from combining treatments). In a crossover design, subjects are exposed to more than one condition, administered in a randomized order, and thus they serve as their own controls. Experimental designs are the “gold standard” because they come closer than any other design to meeting the criteria for inferring causal relationships.

Quasi-experimental designs (trials without randomization) involve an intervention but lack randomization. Strong quasi-experimental designs include features to support causal inferences. The nonequivalent control group pretest–posttest design involves using a nonrandomized comparison group and the collection of pretreatment data so that initial group equivalence can be assessed. Comparability of groups can sometimes be enhanced through matching on individual characteristics or by using propensity matching, which involves matching on a propensity score for each participant. In a time series design, information on the dependent variable is collected over a period of time before and after the intervention. Time series designs are often used in single-subject (N-of-1) experiments.
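The two randomization procedures described above are easy to see in code. The following is a minimal sketch, not from the source text: it assigns a hypothetical list of participants to two arms, first by simple randomization and then by permuted block randomization with randomly selected block sizes. All names and parameters are illustrative.

```python
import random

def simple_randomization(participants, arms=("experimental", "control")):
    """Give every participant an equal chance of being in any group."""
    return {p: random.choice(arms) for p in participants}

def permuted_block_randomization(participants, arms=("experimental", "control"),
                                 block_sizes=(4, 6, 8)):
    """Randomize in blocks so group sizes stay balanced throughout accrual.

    Each block holds an equal number of slots per arm, shuffled; block
    sizes are chosen at random (each must be divisible by the arm count).
    """
    assignments = {}
    remaining = list(participants)
    while remaining:
        size = random.choice(block_sizes)
        slots = list(arms) * (size // len(arms))
        random.shuffle(slots)
        block, remaining = remaining[:size], remaining[size:]
        assignments.update(zip(block, slots))
    return assignments

if __name__ == "__main__":
    people = [f"P{i:02d}" for i in range(1, 13)]  # 12 hypothetical participants
    print(simple_randomization(people))          # may be unbalanced by chance
    print(permuted_block_randomization(people))  # balanced within each block
```

Note the design contrast the sketch makes visible: simple randomization can yield unbalanced group sizes in small samples, whereas permuted blocks keep the arms balanced as participants accrue.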
Other quasi-experimental designs include quasi-experimental dose-response analyses and the quasi-experimental (nonrandomized) arms of a partially randomized patient preference (PRPP) design (i.e., groups with strong preferences). In evaluating the results of quasi-experiments, it is important to ask whether it is plausible that factors other than the intervention caused or affected the outcomes (i.e., whether there are credible rival hypotheses for explaining the results).

Nonexperimental (or observational) research includes descriptive research—studies that summarize the status of phenomena—and correlational studies that examine relationships among variables but involve no manipulation of independent variables (often because they cannot be manipulated). Designs for correlational studies include retrospective (case-control) designs (which look back in time for antecedent causes of “caseness” by comparing cases that have a disease or condition with controls who do not), prospective (cohort) designs (studies that begin with a presumed cause and look forward in time for its effect), natural experiments (in which a group is affected by a random event, such as a disaster), and path analytic studies (which test causal models developed on the basis of theory). Descriptive correlational studies describe how phenomena are interrelated without invoking a causal explanation. Univariate descriptive studies examine the frequency or average value of variables. Descriptive studies include prevalence studies that document the prevalence rate of a condition at one point in time and incidence studies that document the frequency of new cases over a given time period. When the incidence rates for two groups are estimated, researchers can compute the relative risk of “caseness” for the two groups (e.g., if incidence is 10% among those exposed to a risk factor and 5% among those not exposed, the relative risk is 2.0). The primary weakness of correlational studies for cause-probing questions is that they can harbor biases, such as self-selection into the groups being compared.

Chapter 12: Sampling in Quantitative Research

Sampling is the process of selecting a portion of the population, which is an entire aggregate of cases, for a study. An element is the most basic population unit about which information is collected—usually humans in nursing research. Eligibility criteria are used to establish population characteristics and to determine who can participate in a study—either who can be included (inclusion criteria) or who should be excluded (exclusion criteria). Researchers usually sample from an accessible population but should identify the target population to which they want to generalize their results. A sample in a quantitative study is assessed in terms of representativeness—the extent to which the sample is similar to the population and avoids bias. Sampling bias refers to the systematic over- or underrepresentation of some segment of the population. Methods of nonprobability sampling (wherein elements are selected by nonrandom methods) include convenience, quota, consecutive, and purposive sampling. Nonprobability sampling designs are practical but usually have strong potential for bias. Convenience sampling uses the most readily available or convenient group of people for the sample. Snowball sampling is a type of convenience sampling in which referrals for potential participants are made by those already in the sample. Quota sampling divides the population into homogeneous strata (subpopulations) to ensure representation of subgroups; within each stratum, people are sampled by convenience.
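To illustrate how quota sampling fixes subgroup sizes before convenience recruitment begins, here is a minimal sketch (the strata, counts, and function name are hypothetical, not from the source): it allocates a desired total sample size across strata in proportion to each stratum's share of the population.

```python
# Hedged sketch: proportional quota allocation; all numbers are hypothetical.

def quota_allocation(strata_counts, sample_size):
    """Allocate a total sample size across strata proportionally.

    Rounding may leave the quotas off by one or two in total;
    a real protocol would specify how to reconcile that.
    """
    total = sum(strata_counts.values())
    return {s: round(sample_size * n / total) for s, n in strata_counts.items()}

# Example: a population of 1,000 nurses stratified by unit type.
population = {"ICU": 200, "Medical-surgical": 500, "Pediatrics": 300}
print(quota_allocation(population, sample_size=100))
# -> {'ICU': 20, 'Medical-surgical': 50, 'Pediatrics': 30}
# Each quota would then be filled by convenience within its stratum.
```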
Consecutive sampling involves taking all of the people from an accessible population who meet the eligibility criteria over a specific time interval, or until a specified sample size is reached. In purposive sampling, elements are handpicked for inclusion in the sample based on the researcher’s knowledge about the population. Probability sampling designs, which involve the random selection of elements from the population, yield more representative samples than nonprobability designs and permit estimates of the magnitude of sampling error. Simple random sampling involves the random selection of elements from a sampling frame that enumerates all the population elements; stratified random sampling divides the population into homogeneous strata from which elements are selected at random. Cluster sampling involves sampling of large units. In multistage random sampling, there is a successive, multistaged selection of random samples from larger units (clusters) to smaller units (individuals) by either simple random or stratified random methods. Systematic sampling is the selection of every kth case from a list. By dividing the population size by the desired sample size, the researcher establishes the sampling interval, which is the standard distance between the selected elements (e.g., with a population of 5,000 and a desired sample of 250, every 20th case would be selected). In quantitative studies, researchers should use a power analysis to estimate sample size needs. Large samples are preferable to small ones because larger samples enhance statistical conclusion validity and tend to be more representative, but even large samples do not guarantee representativeness.

Chapter 17: Inferential Statistics

Inferential statistics, which are based on the laws of probability, allow researchers to make inferences about a population based on data from a sample; they offer a framework for deciding whether the sampling error that results from sampling fluctuations is too high to provide reliable population estimates. The sampling distribution of the mean is a theoretical distribution of the means of an infinite number of samples drawn from a population. The sampling distribution of means follows a normal curve, and so the probability that a given sample value will be obtained can be ascertained. The standard error of the mean (SEM)—the standard deviation of this theoretical distribution—indicates the degree of average error of a sample mean; the smaller the SEM, the more accurate are the sample estimates of the population mean. Statistical inference consists of two approaches: estimating parameters and testing hypotheses. Parameter estimation is used to estimate a population parameter from a sample statistic. Point estimation provides a single descriptive value of the population estimate (e.g., a mean or odds ratio). Interval estimation provides the upper and lower limits of a range of values—the confidence interval (CI)—between which the population value is expected to fall at a specified probability. A 95% CI indicates a 95% probability that the true population value lies between the upper and lower confidence limits.
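To make the SEM and CI definitions concrete, here is a minimal sketch (the data values are hypothetical, not from the source): it computes the standard error of the mean from a sample and builds a normal-approximation 95% confidence interval around the sample mean.

```python
import statistics

# Hedged sketch: SEM and a 95% CI for a mean, using hypothetical scores.
scores = [72, 85, 78, 90, 66, 81, 77, 88, 74, 83]

n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)   # sample standard deviation
sem = sd / n ** 0.5             # standard error of the mean: SD / sqrt(n)

# Normal-approximation 95% CI: mean +/- 1.96 * SEM.
# (With a sample this small, a t critical value would be more appropriate.)
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean = {mean:.2f}, SEM = {sem:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

The sketch also shows why larger samples give more accurate estimates: SEM shrinks as the square root of n grows, so the interval around the mean narrows.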
Hypothesis testing through statistical procedures enables researchers to make objective decisions about the validity of their hypotheses. The null hypothesis states that there is no relationship between the research variables and that any observed relationship is due to chance. Rejection of the null hypothesis lends support to the research hypothesis. A Type I error occurs when a null hypothesis is incorrectly rejected (a false positive). A Type II error occurs when a null hypothesis is wrongly accepted (a false negative). Researchers control the risk of a Type I error by establishing a level of significance (or alpha [α] level), which is the probability that such an error will occur. The .05 level means that in only 5 out of 100 samples would the null hypothesis be rejected when it should have been accepted. In testing hypotheses, researchers compute a test statistic and then determine whether the statistic falls at or beyond the critical region on a relevant theoretical distribution. If the value of the test statistic indicates that the null hypothesis is “improbable,” the result is statistically significant (i.e., the obtained results are not likely to reflect chance fluctuations at the specified level of significance). Most hypothesis testing involves two-tailed tests, in which both ends of the sampling distribution are used to define the region of improbable values; a one-tailed test may be appropriate if there is a strong rationale for an a priori directional hypothesis.

Parametric tests involve the estimation of at least one parameter, the use of interval- or ratio-level data, and the assumption of normally distributed variables; nonparametric tests are used when the data are nominal or ordinal or when a normal distribution cannot be assumed—especially when samples are small. Tests for independent groups compare different groups of people (between-subjects designs), and tests for dependent groups compare the same group of people over time or conditions (within-subjects designs). Two common statistical tests are the t-test and analysis of variance (ANOVA), both of which are used to test the significance of the difference between group means; ANOVA is used when there are three or more groups (one-way ANOVA) or when there is more than one independent variable (e.g., two-way ANOVA). Repeated-measures ANOVA (RM-ANOVA) is used when multiple means are being compared over time. The chi-square test (χ2) is used to test hypotheses about differences in proportions. For small samples or small cell sizes, Fisher’s exact test should be used. Statistical tests that measure the magnitude of bivariate relationships and test whether the relationship is significantly different from zero include Pearson’s r for continuous data, Spearman’s rho and Kendall’s tau for ordinal-level data, and the phi coefficient and Cramér’s V for nominal-level data. A point-biserial correlation coefficient can be computed when one variable is dichotomous and the other is continuous.

Confidence intervals can be constructed around almost any computed statistic, including differences between means, differences between proportions, and correlation coefficients. CI information is valuable to clinical decision makers, who need to know more than whether differences are probably real. Power analysis is a method of estimating either the likelihood of committing a Type II error or sample size requirements. Power analysis involves four components: desired significance level (α), power (1 − β), sample size (N), and estimated effect size (ES). Effect size estimates convey important information about the magnitude of effects in a study and are a useful supplement to p values and CIs. Cohen’s d is a widely used effect size index summarizing mean difference effects between two groups.
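As an illustration, here is a minimal sketch of Cohen's d computed with a pooled standard deviation (the two groups' scores are hypothetical, not from the source). Because d expresses the mean difference in standard deviation units, d = 0.5 means the group means differ by half a standard deviation.

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical outcome scores for intervention and control groups.
intervention = [14, 17, 15, 19, 16, 18, 15, 17]
control = [12, 14, 11, 15, 13, 14, 12, 13]
print(f"d = {cohens_d(intervention, control):.2f}")
```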
Chapter 21: Qualitative Research Design and Approaches

Qualitative research involves an emergent design—a design that emerges in the field as the study unfolds. Although qualitative design is flexible, qualitative researchers plan for broad contingencies that pose decision opportunities for study design in the field. As bricoleurs, qualitative researchers tend to be creative and intuitive, putting together an array of data drawn from many sources to develop a holistic understanding of a phenomenon. Qualitative research traditions have their roots in anthropology (e.g., ethnography and ethnoscience), philosophy (phenomenology and hermeneutics), psychology (ethology and ecologic psychology), sociology (grounded theory, ethnomethodology, and semiotics), sociolinguistics (discourse analysis), and history (historical research).

Ethnography focuses on the culture of a group of people and relies on extensive fieldwork that usually includes participant observation and in-depth interviews with key informants. Ethnographers strive to acquire an emic (insider’s) perspective of a culture rather than an etic (outsider’s) perspective. Ethnographers use the concept of researcher as instrument to describe the researcher’s significant role in analyzing and interpreting a culture. The product of ethnographic research is typically a holistic written description of the culture, but sometimes the products are performance ethnographies (interpretive scripts that can be performed). Nurses sometimes refer to their ethnographic studies as ethnonursing research. Other types of ethnographic work include institutional ethnographies (which focus on the organization of professional services from the perspective of frontline workers or clients) and autoethnographies or insider research (which focus on the group or culture to which the researcher belongs).

Phenomenology seeks to discover the essence and meaning of a phenomenon as it is experienced by people, mainly through in-depth interviews with people who have had the relevant experience. In descriptive phenomenology, which seeks to describe lived experiences, researchers strive to bracket out preconceived views and to intuit the essence of the phenomenon by remaining open to the meanings attributed to it by those who have experienced it. Interpretive phenomenology (hermeneutics) focuses on interpreting the meaning of experiences rather than just describing them. In an approach called interpretive phenomenologic analysis (IPA), researchers focus on people’s subjective experiences (their lifeworlds).

Grounded theory aims to discover theoretical concepts grounded in the data. Grounded theory researchers try to account for people’s actions by focusing on the main concern that their behavior is designed to resolve. The manner in which people resolve this main concern is the core variable. The goal of grounded theory is to discover this main concern and the basic social process (BSP) that explains how people resolve it. Grounded theory uses constant comparison: categories elicited from the data are constantly compared with data obtained earlier. A controversy among grounded theory researchers concerns whether to follow the original Glaser and Strauss procedures or the adapted procedures of Strauss and Corbin; Glaser argued that the latter approach does not result in grounded theories but rather in conceptual descriptions. More recently, Charmaz’s constructivist grounded theory has emerged as a method that emphasizes interpretive aspects, in which the grounded theory is constructed from shared experiences and relationships between the researcher and study participants.
Case studies are intensive investigations of a single entity or a small number of entities, such as individuals, groups, organizations, or communities; such studies usually involve collecting data over an extended period. Case study designs can be single or multiple, and holistic or embedded. Narrative analysis focuses on story in studies in which the purpose is to explore how people make sense of events in their lives. Several different structural approaches can be used to analyze narrative data, including, for example, Burke’s pentadic dramatism. Descriptive qualitative studies do not fit into any disciplinary tradition. Such studies may be referred to as qualitative studies, naturalistic inquiries, or qualitative content analyses. Qualitative description has been expanded into a realm called interpretive description, which emphasizes the importance of having a disciplinary conceptual frame, such as nursing.

Research is sometimes conducted within an ideologic perspective, and such research tends to rely primarily on qualitative methods. Critical theory entails a critique of existing social structures; critical researchers strive to conduct inquiries that involve collaboration with participants and foster enlightened self-knowledge and transformation. Critical ethnography applies the principles of critical theory to the study of cultures. Feminist research, like critical research, is designed to be transformative; the focus is on how gender domination and discrimination shape women’s lives and their consciousness. Participatory action research (PAR) produces knowledge through close collaboration with groups or communities that are vulnerable to control or oppression by a dominant social group; in PAR, research methods take second place to emergent processes that can motivate people and generate community solidarity.

Chapter 22: Sampling in Qualitative Research

Qualitative researchers use the conceptual demands of the study to select articulate and reflective informants with certain types of experience in an emergent way, typically capitalizing on early learning to guide subsequent sampling decisions. Qualitative samples tend to be small, nonrandom, and intensively studied. Sampling in qualitative inquiry may begin with a convenience (or volunteer) sample. Snowball (chain) sampling may also be used. Qualitative researchers often use purposive sampling to select data sources that enhance information richness. The various purposive sampling strategies used by qualitative researchers can be loosely categorized as (1) sampling for representativeness or comparative value, (2) sampling special or unique cases, or (3) sampling sequentially. An important purposive strategy in the first category is maximum variation sampling, which entails purposely selecting cases with a wide range of variation. Other strategies used for comparative purposes or representativeness include homogeneous sampling (deliberately reducing variation), typical case sampling (selecting cases that illustrate what is typical), extreme case sampling (selecting the most unusual or extreme cases), intensity sampling (selecting cases that are intense but not extreme), stratified purposeful sampling (selecting cases within defined strata), and reputational case sampling (selecting cases based on the recommendation of an expert or key informant).
Purposive sampling in the “special cases” category includes critical case sampling (selecting cases that are especially important or illustrative), criterion sampling (studying cases that meet a predetermined criterion of importance), revelatory case sampling (identifying and gaining access to a case representing a phenomenon that was previously inaccessible to research scrutiny), and sampling politically important cases (searching for and selecting or deselecting politically sensitive cases or sites). Although many qualitative sampling strategies unfold while in the field, purposive sampling in the “sequential” category involves deliberative emergent efforts and includes theoretical sampling (selecting cases on the basis of their representation of important constructs) and opportunistic sampling (adding new cases based on changes in research circumstances or in response to new leads that develop in the field). Another important sequential strategy is sampling confirming and disconfirming cases—that is, selecting cases that enrich or challenge the researchers’ conceptualizations. A guiding sample size principle is data saturation—sampling to the point at which no new information is obtained and redundancy is achieved. Factors affecting sample size include data quality, researcher skills and experience, and the scope and sensitivity of the problem. Ethnographers make numerous sampling decisions, including not only whom to sample but also what to sample (e.g., activities, events, documents, artifacts); sampling decision making is often aided by key informants who serve as guides and interpreters of the culture. Phenomenologists typically work with a small sample of people (often 10 or fewer) who meet the criterion of having lived the experience under study. Grounded theory researchers typically use theoretical sampling, in which sampling decisions are guided in an ongoing fashion by the emerging theory. Samples of about 20 to 30 people are typical in grounded theory studies.

Generalizability in qualitative research is a controversial issue, with some writers claiming it to be unattainable because of the highly contextualized nature of qualitative findings. Yet most qualitative researchers strive to have their findings be relevant and meaningful beyond the confines of their particular study participants and settings. Two models of generalizability have relevance for qualitative research. In analytic generalization, researchers strive to generalize from particulars to broader conceptualizations and theories. Transferability involves judgments about whether findings from an inquiry can be extrapolated to a different setting or group of people. Thick description—richly thorough depiction of research settings and participants—is needed in qualitative reports to support transferability.

Chapter 24: Qualitative Data Analysis

Qualitative analysis is a challenging, labor-intensive activity with few standardized rules. The first major step in analyzing qualitative data is to organize and index materials for easy retrieval, typically by coding the content of the data according to a coding scheme. Traditionally, researchers organized their data by developing conceptual files—physical files in which coded excerpts of data relevant to specific categories are placed. Computer programs are now widely used to perform indexing functions and to facilitate analysis.
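To show the indexing-for-retrieval idea in the simplest possible terms, here is a hedged sketch (the codes and interview excerpts are invented for illustration): it mimics a conceptual file by mapping each code to the excerpts tagged with it, so all material on a category can be pulled up at once, much as qualitative analysis software does.

```python
from collections import defaultdict

# Hedged sketch: a toy "conceptual file" keyed by code.
# The coding scheme and excerpts below are hypothetical.
coded_excerpts = [
    ("coping", "I just take it one day at a time."),
    ("support", "My daughter drives me to every appointment."),
    ("coping", "Keeping busy stops me from worrying."),
]

conceptual_file = defaultdict(list)
for code, excerpt in coded_excerpts:
    conceptual_file[code].append(excerpt)

# Retrieve everything indexed under one category.
print(conceptual_file["coping"])
```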
The actual analysis of data usually begins with a search for categories and themes, which involves the discovery not only of commonalities across participants but also of natural variation and patterns in the data. Some qualitative analysts use metaphors or figurative comparisons to evoke a visual and symbolic analogy. The next analytic step often involves validating the thematic analysis. Some researchers use quasi-statistics, which involve a tabulation of the frequency with which certain themes or relations are supported by the data. In a final analytic step, analysts weave the thematic strands together into an integrated picture of the phenomenon under investigation. Researchers whose focus is qualitative description may say that they used qualitative content analysis as their analytic method. Content analysis can vary in terms of an emphasis on manifest content or latent content and in the role of induction.

In ethnographies, analysis begins as the researcher enters the field. Ethnographers continually search for patterns in the behavior and expressions of study participants. One approach to analyzing ethnographic data is Spradley’s method, which involves four levels of data analysis: domain analysis (identifying domains, or units of cultural knowledge), taxonomic analysis (selecting key domains and constructing taxonomies or systems of classification), componential analysis (comparing and contrasting terms in a domain), and theme analysis (uncovering cultural themes). Leininger’s ethnonursing method involves four phases: collecting and recording data, categorizing descriptors, searching for repetitive patterns, and abstracting major themes.

There are numerous approaches to phenomenologic analysis, including the descriptive methods of Colaizzi, Giorgi, and Van Kaam, in which the goal is to find common patterns of experience shared by particular instances. In Van Manen’s approach, which involves efforts to grasp the essential meaning of the experience being studied, researchers search for themes using either a holistic approach (viewing the text as a whole), a selective approach (pulling out key statements and phrases), or a detailed approach (analyzing every sentence). Central to analyzing data in a hermeneutic study is the notion of the hermeneutic circle, which signifies a methodologic process of continual movement between the parts and the whole of the text under analysis. Hermeneutic researchers have several choices for data analysis. Diekelmann’s team approach calls for the discovery of a constitutive pattern that expresses the relationships among themes. Benner’s approach consists of three processes: searching for paradigm cases, thematic analysis, and analysis of exemplars.

Grounded theory uses the constant comparative method of data analysis, which involves identifying characteristics in one piece of data and comparing them with those of others to assess similarity. Categories developed in a substantive theory must fit the data and not be forced. One approach to grounded theory is the Glaser and Strauss (Glaserian) method, in which there are two broad types of codes: substantive codes (in which the empirical substance of the topic is conceptualized) and theoretical codes (in which relationships among the substantive codes are conceptualized). Substantive coding involves open coding to capture what is going on in the data and then selective coding, in which only variables relating to a core category are coded.
The core category, a behavior pattern that has relevance for participants, is sometimes a basic social process (BSP) that involves an evolving process of coping or adaptation. In the Glaserian method, open codes begin with level I (in vivo) codes, which are collapsed into a higher level of abstraction in level II codes. Level II codes are then used to formulate level III codes, which are theoretical constructs. Through constant comparison, the researcher compares concepts emerging from the data with similar concepts from existing theory or research to explore which parts have emergent fit with the theory being generated. Strauss and Corbin’s method is an alternative grounded theory method whose outcome is a full preconceived conceptual description. This approach to grounded theory analysis involves two types of coding: open coding (in which categories are generated) and axial coding (in which categories are linked with subcategories and integrated). A controversy in the analysis of focus group data is whether the unit of analysis is the group or the individual participants—some analysts examine the data at both levels. A third analytic option is the analysis of group interactions.

Chapter 29: Systematic Reviews of Research Evidence: Meta-Analysis, Metasynthesis, and Mixed Studies Review

Evidence-based practice relies on rigorous integration of research evidence on a topic through systematic reviews. A systematic review methodically integrates research evidence about a specific research question using carefully developed sampling and data collection procedures that are spelled out in advance in a protocol. Systematic reviews of quantitative studies often involve statistical integration of findings through meta-analysis, a procedure whose advantages include objectivity, enhanced power, and precision; meta-analysis is not appropriate, however, for broad questions or when there is substantial inconsistency of findings. The steps in both quantitative and qualitative integration are similar and involve formulating the problem, designing the study (including establishing sampling criteria), searching the literature for a sample of primary studies, evaluating study quality, extracting and encoding data for analysis, analyzing the data, and reporting the findings. There is no consensus on whether systematic reviews should include the grey literature—that is, unpublished reports. In quantitative studies, a concern is that there is a bias against the null hypothesis, a publication bias stemming from the underrepresentation of nonsignificant findings in the published literature.

In meta-analysis, findings from primary studies are represented by an effect size index that quantifies the magnitude and direction of the relationship between variables (e.g., an intervention and its outcomes). Three common effect size indexes in nursing are d (the standardized mean difference), the odds ratio, and Pearson’s r. Effects from individual studies are pooled to yield an estimate of the population effect size by calculating a weighted average of effects, often using the inverse variance as the weight—which gives greater weight to larger studies. Statistical heterogeneity (diversity in effects across studies) affects decisions about using a fixed effects model (which assumes a single true effect size) or a random effects model (which assumes a distribution of effects). Heterogeneity can be examined using a forest plot.
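Here is a minimal sketch of the fixed-effect, inverse-variance pooling just described (the effect sizes and variances are hypothetical, not drawn from any real review): each study's effect is weighted by 1/variance, so larger, more precise studies pull the pooled estimate harder.

```python
# Hedged sketch: fixed-effect inverse-variance pooling.
# Each tuple is (effect size d, variance of d) for one hypothetical study.
studies = [
    (0.40, 0.04),  # largest study: smallest variance, greatest weight
    (0.25, 0.10),
    (0.60, 0.08),
]

weights = [1 / var for _, var in studies]                     # w_i = 1 / v_i
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                         # SE of pooled d
print(f"pooled d = {pooled:.3f} (SE = {pooled_se:.3f})")      # -> 0.421 (0.145)
```

A random effects model would follow the same weighting logic but add a between-study variance component to each study's variance before computing the weights.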
Nonrandom heterogeneity (moderating effects) can be explored through subgroup analyses or meta-regression, in which the purpose is to identify clinical or methodologic features systematically related to variation in effects. Quality assessments (which may involve formal ratings of overall methodologic rigor) are sometimes used to exclude weak studies from reviews, but they can also be used to differentially weight studies or in sensitivity analyses to test whether including or excluding weaker studies changes conclusions.

Metasyntheses are more than just summaries of prior qualitative findings; they involve a discovery of essential features of a body of findings and, typically, a transformation that yields new insights and interpretations. Numerous approaches to metasynthesis (and many terms related to qualitative integration) have been proposed. Metasynthesis methods that have been used by nurse researchers include meta-ethnography, metastudy, metasummary, critical interpretive synthesis (CIS), grounded formal theory, and thematic synthesis. The various metasynthesis approaches have been classified on various dimensions of difference, including epistemologic stance, extent of iteration, and degree of “going beyond” the primary studies. Another system classifies approaches according to the degree to which theory building and theory explication are achieved. One approach to qualitative integration, meta-ethnography as proposed by Noblit and Hare, involves listing key themes or metaphors across studies and then reciprocally translating them into one another; refutational synthesis and line-of-argument synthesis are two other types. Paterson and colleagues’ metastudy method integrates three components: (1) metadata analysis, the study of results in a specific substantive area through analysis of the “processed data”; (2) metamethod, the study of the studies’ methodologic rigor; and (3) metatheory, the analysis of the theoretical underpinnings on which the studies are grounded.

Sandelowski and Barroso distinguish qualitative findings in terms of whether they are summaries (descriptive synopses) or syntheses (interpretive explanations of the data). Both summaries and syntheses can be used in a metasummary, which can lay the foundation for a metasynthesis. A metasummary involves developing a list of abstracted findings from the primary studies and calculating manifest effect sizes. A frequency effect size is the percentage of studies in a sample of studies that contain a given finding. An intensity effect size indicates the percentage of all findings that are contained within any given report (a small computational sketch appears at the end of this chapter’s notes). In the Sandelowski and Barroso approach, only studies described as syntheses can be used in a metasynthesis, which can use a variety of qualitative approaches to analysis and interpretation (e.g., constant comparison). Mixed methods research has contributed to the emergence of systematic mixed studies reviews, which are systematic reviews that use disciplined procedures to integrate and synthesize findings from qualitative, quantitative, and mixed methods studies. An explicit reporting guideline called PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) is useful for writing up a systematic review of RCTs, and another called MOOSE (Meta-analysis of Observational Studies in Epidemiology) guides the reporting of meta-analyses of observational studies.
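A minimal sketch of the two manifest effect sizes defined above (the studies and findings are hypothetical, not from any real metasummary): the frequency effect size is computed per finding across studies, and the intensity effect size per study across the full list of abstracted findings.

```python
# Hedged sketch: metasummary manifest effect sizes, with hypothetical data.
# Map each primary study to the set of abstracted findings it contains.
study_findings = {
    "Study A": {"F1", "F2", "F3"},
    "Study B": {"F1", "F3"},
    "Study C": {"F1"},
}
all_findings = set().union(*study_findings.values())  # {"F1", "F2", "F3"}

# Frequency effect size: % of studies containing a given finding.
for f in sorted(all_findings):
    pct = 100 * sum(f in s for s in study_findings.values()) / len(study_findings)
    print(f"frequency ES for {f}: {pct:.0f}%")   # F1: 100%, F2: 33%, F3: 67%

# Intensity effect size: % of all findings contained in a given report.
for study, findings in study_findings.items():
    pct = 100 * len(findings) / len(all_findings)
    print(f"intensity ES for {study}: {pct:.0f}%")  # A: 100%, B: 67%, C: 33%
```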
Chapter 30: Disseminating Evidence: Reporting Research Findings

In developing a dissemination plan, researchers select a communication outlet (e.g., journal article versus conference presentation), identify the audience they wish to reach, and decide on the content that can be effectively communicated. In the planning stage, researchers need to decide on authorship credits (if there are multiple authors), who the lead author and corresponding author will be, and the order in which authors’ names will be listed.

Quantitative reports (and many qualitative reports) follow the IMRAD format, with the following sections: introduction, method, results, and discussion. The introduction acquaints readers with the research problem. It includes the problem statement and study purpose, the research hypotheses or questions, a brief literature review, and a description of any framework. In qualitative reports, the introduction indicates the research tradition and, if relevant, the researchers’ connection to the problem. The method section describes what the researchers did to solve the research problem. It includes a description of the study design (or an elaboration of the research tradition), the sampling approach and a description of study participants, the instruments and procedures used to collect and evaluate the data, and the methods used to analyze the data. Standards for reporting methodologic elements now abound. Researchers reporting an RCT follow the CONSORT guidelines (Consolidated Standards of Reporting Trials), which include the use of a flowchart to show the flow of study participants. Other guidelines include STROBE for observational studies and COREQ for certain qualitative studies. Guidelines for reporting aspects of an intervention include CReDECI and TIDieR. In the results section, findings from the analyses are summarized. Results sections in qualitative reports necessarily intertwine description and interpretation; quotes from interview transcripts are essential for giving voice to study participants. Both qualitative and quantitative researchers include figures and tables that dramatize or succinctly summarize major findings or conceptual schemas. The discussion section presents the interpretation of the results, how the findings relate to earlier research, study limitations, and the implications of the findings for nursing practice and future research.

The major types of research reports are theses and dissertations, journal articles, and presentations at professional meetings. Theses and dissertations normally follow a standard IMRAD format, but some schools now accept paper-format theses, which include an introduction, two or more publishable papers, and a conclusion. In selecting a journal for publication, researchers consider the journal’s goals and audience, its prestige, and how often it publishes. Another major consideration is whether to publish in a traditional journal or in an online open-access journal. An advantage of open-access journals is speedy, worldwide dissemination. One proxy for a journal’s prestige is its impact factor, the ratio between citations to a journal and the recent citable items it published (a worked sketch follows this passage). More than 100 nursing journals are now evaluated for their impact factors. Before beginning to prepare a manuscript for submission to a journal, researchers need to carefully review the journal’s Instructions to Authors.
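To make the impact factor ratio concrete, here is a minimal sketch with hypothetical figures, using the standard two-year definition: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years.

```python
# Hedged sketch: two-year journal impact factor; all figures are hypothetical.
citations_2024_to_2022_2023_items = 450  # citations received in 2024
citable_items_2022_2023 = 300            # articles/reviews published 2022-2023

impact_factor = citations_2024_to_2022_2023_items / citable_items_2022_2023
print(f"2024 impact factor = {impact_factor:.2f}")  # -> 1.50
```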
Most nursing journals that publish research reports are refereed journals with a policy of basing publication decisions on peer reviews, which are usually double-blind reviews (the identities of authors and reviewers are not divulged). Nurse researchers can also present their research at professional conferences, either through a 10- to 15-minute oral report to a seated audience or in a poster session in which the “audience” moves around a room perusing information about the studies displayed on posters. Sponsoring organizations usually issue a Call for Abstracts for the conference 6 to 9 months before it is held.