Fundamentals Of Research
Lecture notes (Nanyang Technological University)

These lecture notes cover the Fundamentals of Research for students at Nanyang Technological University. The lecture explains the scientific method; the characteristics, objectives, and cumulative nature of research; and the differences between academic and commercial research and their approaches.
Chapter 1: Science & Research

LECTURE

The Scientific Method
- Science as a formal approach to finding answers (systematic in its design)
- One method of knowing (C. S. Peirce)
- "Scientific research is an organized, objective, and controlled empirical analysis of one or more variables." - Wimmer & Dominick

Characteristics of the Scientific Method
- Public
  o Freely available information, shared routinely
  o Open science movement
- Objective
  o Fact-based and rule-based, meant to minimize errors by researchers
  o Error is a type of noise: sloppiness, lack of precision
- Empirical
  o Concerned with a world that is knowable and measurable
  o Moving from abstract concepts to observable things
  o Use of operational definitions
- Systematic and cumulative
  o Use previous studies as building blocks (sequence and order)
  o Theories guide scientific inquiry (theorizing, hypothesis, testing, analysis, reporting)
  o Cumulative: need for replication; reveal our findings in other contexts and with other subjects
- Predictive
  o Relating the present to the future
  o Data supporting predictions prompt extension of the theory
- Self-correcting
  o Theories can be altered or rejected (subjected to multiple rounds of modifications and alterations; theories evolve)
  o Situations change, approaches change

Commercial vs. Academic Research
- Applied vs. basic (or fundamental) research
  o Problem orientation
  o Applied: focus on a specific task or problem that needs to be improved or ameliorated (commercial)
  o Basic: advance theories (academic)
- Public vs. proprietary (owned by somebody) data
  o Commercial research wants an edge over competitors -> data are made proprietary
- Time and money
  o Commercial research moves quicker; time pressure
  o Money -> commercial research has to be profitable
- Rigor
  o Academic: quality of data and integrity of research
  o Commercial: "quick and dirty", cutting corners where possible

What Makes for a Good Research Topic
- Solves a problem
- Fills a gap in our understanding (typical of basic research)
  o Lacuna (a blank space or missing gap)
- Eight questions:
  o Is the topic too broad?
  o Can it actually be investigated (as stated)?
  o Can the data be analyzed?
  o Is the problem significant?
  o Can the results be generalized?
  o Are the costs or time required too great?
  o Is the approach appropriate?
  o Is there any potential harm to participants?

The Need for Validity
- Research must be valid to be meaningful
- Internal validity
  o Internal = within the study: the way the study was designed and conducted
  o We need to be able to rule out plausible but incorrect explanations of results
  o Extraneous variables: potentially influence the relationship between the variables under investigation
  o Confounding variables: things that actually do influence the relationship being studied
    - A subset of extraneous variables -> extraneous variables can become confounding variables
    - Wimmer & Dominick call them artifacts
  o The extent to which a study is well designed and permits inferences that are warranted
  o Threats to internal validity -> alternative explanations: history; maturation (people grow and learn); testing (the very activity of assessing people influences them, and people learn with repeated tests); instrumentation (degradation of your measurement instrument); statistical regression; experimental mortality (people dropping out of the study); selection (unequal selection); demand characteristics; experimenter bias; evaluation apprehension; causal time order; diffusion of treatments; compensation; compensatory rivalry; demoralization
  o Research design choices affect internal validity
  o Scientific studies need replication
- External validity
  o How well can the results of a study be generalised across populations, settings, and time?
  o Can be affected by interactions among threats to internal validity (idiosyncratic)
- How to improve external validity
  o Use random samples
  o Use heterogeneous samples and replicate
  o Use representative samples
  o Representativeness is key to external validity

READINGS – Chapter 1: Science & Research

Introduction
- Mass communication: any form of communication transmitted through a medium (channel) that simultaneously reaches a large number of people.
- Mass media: any communication channel used to simultaneously reach a large number of people, including radio, TV, newspapers, magazines, billboards, films, recordings, books, the Internet, and smart media.

What is Research?
- Epistemology: a major branch of philosophy concerned with the theory, nature, and scope of knowledge
- Empiricism -> the theory of knowledge at the heart of science and the scientific method
  o Knowledge comes from experience; truth claims must be based on evidence
  o "Knowledge comes from direct observation."
- Research: an attempt to discover something / a process of asking questions and finding answers
- Research is a systematic investigation of a problem that involves gathering evidence to make inferences
- Scientific research attempts to reveal objective reality: to reach a common consensus that A is A and B is B
  o However, certain aspects of research are inherently subjective; the meaning of evidence can involve interpretation
- Research can be very informal, with only a few (or no) specific plans or steps, or formal, where a researcher follows highly defined and exacting procedures

Three-Step Philosophy of Success
1. Find out what the target audience wants (one or more customers, friends, family, colleagues, etc.).
2. Give it to them.
3. Tell them that you gave it to them.

Four Goals of Social Scientific Research
1. Description - of phenomena
2. Explanation - of patterns
3. Causation - establishing that one thing leads to another
4. Prediction - the ultimate goal: using our understanding of relationships to predict what is going to happen in the future

Inductive and Deductive Reasoning
- Reasoning involves deduction; that is, formal steps of logic
- Reasoning involves induction; that is, verification through empirical observation
- Empirical generalisation: generalisations made from observation
- Inductive-deductive cycle: moving from observations to generalisations (theories), then back to implications (hypotheses)

Why Do Research?
- No universal: no single ideal suits all
- Diversity of desires: different people want different things
- Basic questions a researcher must answer: (1) how to use research methods and statistical procedures, and (2) when to use research methods and statistical procedures
- Statisticians generate statistical procedures, or formulas, called algorithms; researchers use these algorithms to investigate research questions and hypotheses
- Research in mass media is used to verify or refute opinions or intuitions for decision makers
- Research is not limited to decision-making situations; it is also widely used in theoretical areas to describe the media, analyze media effects on consumers, understand audience behavior, and so on

Development of Mass Media Research
- Phase 1: there is an interest in the medium itself.
- Phase 2: research begins once the medium is developed; specific information is accumulated about the uses and the users of the medium.
- Phase 3 includes investigations of the social, psychological, and physical effects of the medium.
- Phase 4: research is conducted to determine how the medium can be improved, either in its use or through technological developments.
- Research phases are not linear: once a medium is developed and established, research may be conducted simultaneously in all four phases.
- Hypodermic needle model of communication: suggested that mass communicators need only "shoot" messages at an audience, and those messages would produce preplanned and almost universal effects.

Contributors to the growth of mass media research:
1. WWI
2. The realization by advertisers in the 1950s and 1960s that research data are useful in developing ways to persuade potential customers to buy products and services
3. Increasing interest of citizens in the effects of the media on the public, especially on children
4. Increased competition among the media for advertising dollars
- Increasing dependency on data to support the decisions decision makers make
- Mass media now focus on audience fragmentation: the mass of people is divided into small groups, or niches

Media Research and the Scientific Method
- Scientific research is an organized, objective, controlled, qualitative or quantitative empirical analysis of one or more variables.

Methods of Knowing
Methods of knowing: tenacity, intuition, authority, and science. To this list, we add self-discovery.
- Tenacity: something is true because it has always been true - the idea that nothing changes; what was good before will continue to be good.
- Intuition: a person assumes that something is true because it is "self-evident" or "stands to reason" - people believe that they already know, hence they resist researching.
- Authority: a belief in something because a trusted source, such as a parent, a news correspondent, or a teacher, says it is true.
- Self-discovery: things we learn and know without intervention from an outside source - despite using information gathered from other sources to answer a question, a person synthesises a variety of information to come to a conclusion. It involves one or more methods of knowing, but the discovery was made alone.
- Scientific: approaches learning as a series of small steps.

Characteristics of the Scientific Method
1. Scientific research is public - freely available information, shared routinely.
   a. Researchers need to include detailed information about their research to allow other researchers to independently verify a given study and support or refute the initial research findings.
2. Science is objective - science rules out judgement; it requires that scientific research deal with facts rather than interpretations of facts. Explicit rules and procedures are developed for researchers to follow. Science rejects its own authorities if their statements conflict with direct observation. Fact-based and rule-based, meant to minimise errors by researchers.
3. Science is empirical - researchers must be able to perceive and classify what they study and reject metaphysical and nonsensical explanations of events (research that relies on superstition or non-scientific methods of knowing, such as astrology). Concepts must be strictly defined to allow for objective observation and measurement. A world that is knowable and measurable.
   a. Abstract ideas and notions must be directly or indirectly linked to the empirical world through direct observation, done by framing an operational definition.
   b. A constitutive definition defines a word by substituting other words or concepts for it.
   c. Operational definitions: specific procedures that allow one to experience or measure a concept.
4. Science is systematic and cumulative - astute researchers always use previous studies as building blocks for their own work, and draw on prior research projects to identify problem areas and important factors that may be relevant to the current study.
   a. A theory is a set of related propositions that presents a systematic view of phenomena by specifying relationships among concepts.
   b. A law is a statement of fact meant to explain, in concise terms, an action or set of actions that is generally accepted to be true and universal.
   c. Theories guide scientific inquiry.
5. Science is predictive - relating the present to the future. Data supporting predictions prompt extension of the theory.
6. Science is self-correcting - the scientific method approaches learning as a series of small steps; changes in thoughts, theories, or laws are appropriate when errors in previous research are uncovered. Theories can be altered or rejected.

Research Procedures
The purpose of the scientific method of research is to provide an objective, unbiased collection and evaluation of data. The typical research process consists of eight steps:
1. Select a problem.
2. Review existing research and theory (when relevant). (optional in the private sector)
3. Develop hypotheses or research questions.
4. Determine an appropriate methodology/research design - decide between qualitative research (focus groups or one-on-one interviews with small samples) and quantitative research (e.g. telephone interviews; large samples allow generalisation to the population under study).
5. Collect relevant data.
6. Analyze and interpret the results.
7. Present the results in an appropriate form.
8. Replicate the study (when necessary). (optional in the private sector)

Two Sectors of Research: Academic and Private

Academic (basic):
- Conducted by scholars from colleges and universities
- Has a theoretical or scholarly approach; results are intended to help explain the mass media and their effects on individuals
- Public data: any other researcher or organization that wishes to use the information gathered by academic researchers can do so by asking the original researcher for the raw data
- Time: no specific deadlines are imposed for research projects; work proceeds at a pace that accommodates teaching schedules
- Less expensive: does not need to cover overhead costs for office rent, equipment, facilities, computer analysis, subcontractors, and personnel

Private (applied):
- Conducted by nongovernmental companies or their research consultants
- Generally applied research; results are intended to facilitate decision making or solve a particular problem
- Examples include media content and consumer preferences, acquisitions of additional businesses or facilities, analysis of on-air talent, advertising and promotional campaigns, public relations approaches to solving specific informational problems, sales forecasting, and image studies of the properties owned by the company
- Proprietary data: the sole property of the sponsoring agency; usually cannot be obtained by other researchers - information the company does not want others to know, in order to keep a competitive edge
- Time: operates under some type of deadline, usually imposed by management or by an outside agency or client that needs to make a decision
- More expensive

Academic research and private-sector research are not independent of each other - there is overlap.

Selecting a Research Topic
- Secondary analysis: look at archival data - historical data may be used to investigate questions different from those the data were originally intended to address.
- Saves time and resources; inexpensive.
  o There are no questionnaires or measurement instruments to construct and validate, interviewers and other personnel do not need to be paid, and there are no costs for subjects or special equipment.
- Limited in the types of hypotheses or research questions that can be investigated: the data already exist, and there is no way to go back for more information.
- No guarantee that the data are good: the data may have been poorly collected, inaccurate, fabricated, or flawed.

Determining Topic Relevance
A good research topic:
- Solves a problem
- Fills a gap in our understanding

Eight basic questions:
1. Is the topic too broad?
2. Can the problem really be investigated? (availability of test subjects)
3. Can the data be analyzed? (reliability of the data; ability of researchers to interpret results. A common error - researchers selecting a statistical method without understanding what the method produces - is called the law of the instrument.)
4. Is the problem significant? (does the study have merit? what is the purpose of the study?)
5. Can the results of the study be generalized? (the study must have external validity: it must be possible to generalize the results to other situations)
6. What costs and time are involved in the analysis? (costs also determine whether the research is feasible; a cost analysis is needed)
7. Is the planned approach appropriate to the project?
8. Is there any potential harm to the subjects? (physical or psychological, e.g. fear, embarrassment)

Stating a Hypothesis or Research Question
- Hypothesis: a formal statement regarding the relationship between variables; it is tested directly.
  o The predicted relationship is either true or false.
- Research question: a formally stated question intended to provide indications about something; it is not limited to investigating relationships between variables.
  o Used when a researcher is unsure about the nature of the problem; the intent is to gather information or preliminary data to help define the problem and form hypotheses.
  o Research questions pose only general areas of investigation, whereas hypotheses are testable statements about the relationship(s) between variables.

Data Analysis and Interpretation
The time and effort required for data analysis and interpretation depend on the study's purpose and the methodology used.

Internal Validity
- Does the study really investigate the proposed research question?
- Researchers must be able to rule out plausible but incorrect explanations of results.
- The extent to which a study is well designed and permits inferences that are warranted.
- Threats to internal validity are alternative explanations.
- Extraneous variables: all variables other than the independent variable (not being studied) that could affect the results of the experiment.
- Confounding variable (artifact): any extraneous variable that creates a possible but incorrect explanation of results.

Artifacts (confounding variables) may arise from several sources:
1. History
2. Maturation
3. Testing: pretesting vs. posttesting - a pretest may make subjects sensitive to the material.
4. Instrumentation (instrument decay): deterioration of research instruments or methods over the course of a study.
5. Statistical regression
6. Experimental mortality: the possibility that subjects will drop out for one reason or another (select more subjects than required).
7. Sample selection: subjects must be selected randomly and tested for homogeneity to ensure that results are not due to the type of sample used.
8. Demand characteristics: subjects' reactions to experimental situations.
9. Experimenter bias
10. Evaluation apprehension: subjects are essentially afraid of being measured or tested.
11. Causal time order
12. Diffusion or imitation of treatments: respondents may have the opportunity to discuss the project with someone from another session and contaminate the research project.
13. Compensation: people working with the control group may treat it differently because it is "deprived" of something.
14. Compensatory rivalry: subjects in a control group may work harder or perform differently to outperform the experimental group.
15. Demoralisation: subjects in a control group may lose interest in a project because they are not experimental subjects.
- Research design choices affect internal validity.
- Scientific studies need replication.

External Validity
Refers to how well the results of a study can be generalized across populations, settings, and time.
How to guard external validity:
1. Use random samples: gather information from a variety of subjects rather than from those who may share similar attitudes, opinions, and lifestyles.
2. Use heterogeneous samples and replicate the study several times.
3. Select a sample that is representative of the group to which the results will be generalised.
4. Conduct research over a long period of time, so that the immediate effects of a treatment are negligible.
Keyword: representativeness.

Presenting Results
The format depends on the purpose of the study. All results must be presented in a clear and concise manner appropriate to both the research question and the individuals who will read the report.

Replication
- One study provides an indication of what might exist; replication allows for certainty.
- Research methods and designs must be varied to eliminate design-specific results (results based on the design used).
- Subjects with a variety of characteristics should be studied from many angles to eliminate sample-specific results.
- Statistical analyses need to be varied to eliminate method-specific results.
Four basic types of replication:
1. Literal replication: exact duplicate.
2. Operational replication: duplicate only the sampling and experimental procedures.
3. Instrumental replication: duplicate the dependent measures and vary the experimental conditions.
4. Constructive replication: test the validity of methods by deliberately not imitating the earlier study.

Research Suppliers and Field Services
- Most media researchers do not conduct every phase of every project they supervise; they contract with a research supplier or a field service to perform these tasks.
- Research suppliers provide a variety of services: design of the study, data collection, tabulation of data, analysis of results.
- Field services specialize in conducting telephone interviews, mall intercepts, and one-on-one interviews, and in recruiting (prerecruiting) respondents for group administration (central location testing, or CLT) projects and focus groups.

Lecture 2.2: Science and Research - Overview
- Overview of the scientific method
- What makes a good research topic?
- What the research process entails
- Internal and external validity

Chapter 2: Elements of Research

LECTURE

Elements in Research
- Research is driven by theory; theory is the connection of concepts in very specific ways; concepts are abstract notions that we have
- Constructs as special kinds of concepts
  o Specified in detail
  o Often have dimensions or a hierarchical structure
  o Not directly observable; abstract and in our heads
  o Specific to a research context
- At the empirical level, we have variables, which can be measured in a variety of ways
- Measures are at the empirical level; constructs and concepts are at the abstract level

Concepts, Constructs, Variables
- To be useful in research, concepts require thoughtful specification
  o Delineation and articulation
  o Explication (a very thorough explanation)
- Constructs require operationalization (so they can be measured and observed at an empirical level)
- Once constructs are operationalized, they become variables
- Variables have measures that must be specified, as some are more suitable than others

Example
- Cultural orientation as a concept
- One's relationship with one's parents as a concept
- Both concepts are very broad and need explication
- Filial piety (a narrower concept)
  o In
the context of ATLG (Attitudes towards lesbians and gays) Subdimensions o Saving face/keeping parents from disgrace, FP-Shame o Continuing the bloodline, FP-Succession Operationalized as self-expressed values Variables A phenomenon, characteristic, or event whose value varies within a sample Independent variable (IV) o Presumed to cause/ determine/ influence a dependent variable o Is systematically varied by the researcher Dependent variable (DV) o Presumed to be caused or influenced by another variable o Observed and measured; varies with respect to independent variable (or not) Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 Measured vs. manipulated (or experimental) Discrete vs. Continuous o Discrete variables typically nominal/categorical o For continuous variables, scale and range matter Predictor vs. criterion o Labels used for non-experimental research o Example: contact and ATLG Control variables vs. variables of interest o Left-handedness, gender, educational attainment Variables under study require operationalization Qualitative vs. quantitative research Measurement Stevens (1951): Measurement “is the assignment of numbers to objects or events according to the rules” “the process of linking abstract concepts to empirical indicators” Problematic for social scientists as many measured phenomena are neither objects nor events Quantification allows for o mathematical processing and probabilistic analyses o making inferences about unobservable phenomena Levels of Measurement o Nominal o Ordinal o Interval o Ratio Scales Scale can refer to two things o Response option range (e.g. 
not at all – a lot) o Composite measures (items combined) Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 Response range depends on o The nature of the construct o Desired and available precision o Human capability to differentiate (5 point scale VS 39 point scale) Scales can be transformed o Rescaled (i.e., change range) o Flipped (i.e., reversed) Nobody uses o Thurstone scales o Guttman scaling Lickert scales are very popular o Assess level of agreement with various statements Summation versus averaging o Knowledge measures are usually not composite measures Composite measures can help to increase measurement reliability (e.g., Cronbach’s α) Reliability and Validity of Measures Reliability: consistently giving the same response o Test-retest reliability o Internal consistency o Intercoder reliability Measurement validity o are you measuring what you claim to be measuring? Four types of measurement validity o Face (judgment-based) o Predictive (criterion-based) o Concurrent (criterion-based) o Construct (theory-based) READINGS Concepts and Constructs Concept: term that expresses an abstract idea formed by generalizing from particulars and summarizing related observations E.g. Speech anxiety, advertising effectiveness, media usage, readability Simplifies the research process by combining characteristics/objects/people into general categories Simplifies communication among those who have a shared understanding of them Construct: A concept that has 3 distinct characteristics: Abstract idea usually broken down into lower-level concepts (combination of concepts) Cannot be observed directly Usually designed for a specific research purpose – its meaning relates only to the context which it is found E.g. 
Advertising involvement, authoritarianism Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 Variables Empirical counterpart of a concept/construct, link empirical and theoretical Important variables are labelled marker variables because they tend to define or highlight the construct under study Independent and Dependent Independent Variables: systematically varied by the researcher Dependent variables: observed and their values are presumed to depend on the effects of independent variables (dependent variables what the researcher wishes to explain) Discrete and Continuous Discrete: finite set of values, cannot be divided into subparts (typically nominal/ordinal, note some underlying continuous measurement may exist) Continuous: take on any value including fractions, can be meaningfully broken into smaller subsections Non-experimental research Predicator/Antecedent Variable: used for predictions or assumed to be causal (analogous to the independent variable) Criterion Variable: predicted or assumed to be affected (analogous to the dependent variable) Control Variable used to ensure that the results of the study are due to the independent variables, not to another source Noise: Difficulty in identifying all the variables that may create spurious or misleading results Defining Variables Operationally Operational definitions specify the procedures to be followed to experience or measure a concept Expressed so concretely, they can communicate exactly what the terms represent Two Types of operational definitions: 1. Measured a. Specifies how to measure a variable (E.g. a specific measurement for the term dogmatism) 2. Experimental a. Explains how an investigator has manipulated a variable i. E.g. 
“the violent condition,” could contain scenes from a boxing match, “the nonviolent condition,” could depict a swimming race Qualitive and Quantitative Research Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 Qualitative Research: Involves several methods of data collection, such as focus groups, field observation, in-depth interviews, and case studies (questioning methods are varied) Advantages o allow a researcher to view behaviour in a natural setting without artificiality o Increase a researcher’s depth of understanding of the phenomenon under investigation o Flexible and allow the researcher to pursue new areas of interest Disadvantages o Sample sizes are sometimes too small (sometimes as small as one) to allow the researcher to generalize the data, therefore often the preliminary step to further investigation rather than the final phase of a project o Data reliability can also be a problem, since single observers are describing unique events o May produce nothing of value if not planned properly Quantitative Research: Involves several methods of data collection, such as telephone surveys, mail surveys, and Internet surveys Questioning is static or standardized—all respondents are asked the same questions and there is no opportunity for follow-up questions Requires that the variables under consideration be measured Advantages o Use of numbers allows greater precision in reporting results Tensions: some friction has existed in the mass media field and in other disciplines between those who favour quantitative methods and those who prefer qualitative methods Triangulation Refers to the use of both qualitative and quantitative methods to fully understand the nature of a research problem Measurement A researcher assigns numerals to objects, events, or properties according to certain rules Assignment is the designation of numerals or numbers to certain objects or events (e.g. 
1 to yes, 2 to no, 3 to maybe) Rules specify the way that numerals or numbers are to be assigned (e.g. use stopwatch to measure speed) In social science, usually measure indicators of the properties of individuals or objects rather than the individuals or objects themselves Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 Concepts cannot be observed directly, must be inferred from presumed indicators Isomorphism: means identity or similarity of form or structure Measurement systems strive to be isomorphic to reality More straightforward in physical science (e.g. current and resistance can tell degree of conductivity) For social sciences it is less obvious o E.g. measuring persuasibility/subjective/abstract -> there will be a test score and a true score Levels of Measurement Nominal (Categorical) Weakest form of measurement Numerals or other symbols are used to classify people, objects, or characteristics Properties o Equivalence: all objects in the same category (E.g. boys) are considered equal o Exhaustive and Mutually Exclusive Variable measured at nominal level can be used in higher-order statistics o The results of this process are known as dummy variables Ordinal Usually ranked along some dimension, such as from smallest to largest Properties o Equivalence o Order among categories Interval Has all the properties of an ordinal scale and the intervals between adjacent points on the scale are of equal value Disadvantage o Lacks a true zero point, or a condition of nothingness Can use parametric statistics Ratio Have all the properties of interval scales plus one more: the existence of a true zero point Ratio judgments (comparisons) can be made Can use parametric statistics Measurement Scales (Scales represent a composite measure of a variable) Simple Rating Scales 1-10, 1 being not fond, 10 being very fond Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 More points allows for greater differentiation (e.g. 
1-3 vs 1-10)
Transforming Scales
o Compare data from one scale to other data using a different rating scale
o E.g. Converting a 1-7 scale to a 1-100 scale
Specialised Rating Scales
Thurstone Scales (used more in psychology and education research, less in mass media)
o Equal-appearing interval scales
o Typically used to measure the attitude toward a given concept or construct
o Advantage: Interval measurement scale
o Disadvantage: Time consuming
Guttman Scaling (common in pol sci, sociology, anthropology, less in mass media)
o Also called Scalogram analysis
o Items are arranged along a continuum in such a way that a person who agrees with an item or finds an item acceptable will also agree with or find acceptable all other items expressing a less extreme position
o Disadvantage: requires a lot of time and energy
Likert Scales (most commonly used scale in mass media)
o Also called the Summated Rating Approach
o A number of statements are developed with respect to a topic, and respondents can strongly agree, agree, be neutral, disagree, or strongly disagree with the statements (see Figure 2.1).
Each response option is weighted, and each subject’s responses are added to produce a single score on the topic
Semantic Differential Scales
o A name or a concept is placed at the top of a series of seven-point scales anchored by bipolar adjectives
The bipolar adjectives that typically “anchor” such evaluative scales are pleasant/unpleasant, valuable/worthless, honest/dishonest, nice/awful, clean/dirty, fair/unfair, and good/bad
o Attempts to place a concept in semantic space using an advanced multivariate statistical procedure called factor analysis
Reliability and Validity
Reliability – a measure consistently gives the same answer
Reliable if the ratio of the true component of the score to the total score is high (minimal error)
Stability - consistency of a result or of a measure at different points in time
Internal Consistency - consistency of performance among the items that compose a scale
For instance, suppose a researcher designs a 20-item scale to measure attitudes toward newspaper reading. For the scale to be internally consistent, the total score on the first half of the test should correlate highly with the score on the second half of the test.
This method of determining reliability is called the split-half technique
Cross-Test Reliability – equivalency component of reliability
Assesses the relative correlation between two parallel forms of a test
Two instruments that use different scale items or different measurement techniques are developed to measure the same concept
The correlation between the scores on the two forms of the test is taken as a measure of the reliability
Intercoder Reliability - Assesses the degree to which a result can be achieved or reproduced by other observers
For example, if two researchers try to identify acts of violence in television content based on a given operational definition of violence, the degree to which their results are consistent is a measure of intercoder reliability
Face Validity - Achieved by examining the measurement device to see whether, on the face of it, it measures what it appears to measure
Using accounting problems to measure reading skills – low face validity
Predictive Validity - Checking a measurement instrument against some future outcome
For example, scores on a test to predict whether a person will vote in an upcoming election can be checked against actual voting behaviour
Concurrent Validity – closely related to predictive validity
The measuring instrument is checked against some present criterion
For example, it is possible to validate a test of proofreading ability by administering the test to a group of professional proofreaders and to a group of non-proofreaders: if the test discriminates well, it has concurrent validity
Construct Validity - involves relating a measuring instrument to some overall theoretic framework to ensure that the measurement is logically related to other concepts in the framework
For example, an investigator might expect the frequency with which a person views a particular television newscast to be influenced by his or her attitude toward that program.
Therefore, if an investigator finds a relationship between a measure and other variables that is predicted by a theory and fails to find other relationships that are not predicted by a theory, there is evidence for construct validity

Chapter 3: Research Ethics
LECTURE
Research Ethics
An integral part of doing research in the social sciences
Most interactions with people will need to undergo review
o Used to be called Human Subjects Review
Exemption for journalism
The origins are rather dark
Germany 1933 Law for the Prevention of Genetically Defective Progeny
o Legalized the involuntary sterilization of anyone with a disease claimed to be hereditary (schizophrenia, alcohol abuse, blindness, etc.)
Compulsory Sterilisation Programme
o Within 4 years over 300,000 people had been forcibly sterilized
o Many did not even know they were being sterilized (no informed consent)
o Radiation chambers
People were told to fill out forms in an enclosed room
Radiation treatment made people sterile
Human Experimentation
o Prisoners subjected to a range of hazardous experiments
o Studies designed to help the Nazi military
o Wide range of experiments
Infectious diseases (e.g., typhus)
Toxins (e.g., mustard gas)
Barometric experiments
Experiments on twins (e.g., dye to change eye colour)
Bone transplants
o Freezing experiments (1941)
Around 400 trials resulting in the deaths of over 300 victims
Nuremberg Trials
o Military tribunals conducted by the United States after WWII
23 people were put on trial; seven were sentenced to death for war crimes
o The “Nuremberg Code” established in 1947
o Global precedent for ethical considerations in human experimentation
Human Experimentation in the US
1932-1972 Tuskegee syphilis study
Commissioned by the US Public Health Service
Tracked the natural progression of untreated syphilis, an STD that can cause rash, fever, and blistering over the entire body
Late-stage syphilis can damage the internal organs, including the brain, nerves, eyes, heart, blood vessels, liver, bones, and joints
Tuskegee Syphilis Study
Recruited 600 poor African-American men and told them that they were going to receive free health care
o 399 already had syphilis
o 201 did not
They were never told they had syphilis; rather, they were told they were being treated for “bad blood”
1947 – Penicillin used widely to treat syphilis
The study went on for another 25 years
Many people died and others were infected
Reparations and, much later, an apology
Stanley Milgram Experiments
A series of studies conducted at Yale University in 1961
First published in 1963 as “Behavioral Study of Obedience”
Ethical critique in 1964
Informed consent obtained, but participants were not safe from harm
The Stanford Prison Study
Conducted in 1971 by Philip Zimbardo
Students randomly assigned to be guards or prisoners
Later discussed in Zimbardo’s book “The Lucifer Effect”
Also critiqued for lack of control and harm
Government Response
1973 - Congress passes the National Research Act
o Signed into law in 1974 by President Nixon
The Act authorizes federal agencies (e.g., the NIH and FDA) to develop human research regulations
The regulations require institutions to form Institutional Review Boards (IRBs) to review and oversee research with human subjects
Ethics
Ethics concerns examining, understanding, and enforcing what is “right” and what is “wrong”
Rules of acceptable behaviour based on moral judgment
When applied to most research settings, ethics involves the application of formal rules to the research process to systematically recommend and defend behaviours designed to protect wellbeing and curb misconduct
Ethical Theories
Determining what is right and wrong has been studied for centuries
o Deontological theories (rule-based)
Kant: In order to act morally, one must act from duty
“right” actions are independent of
the outcome of the action o Teleological theories (balancing/utilitarian) Focus on the goodness/badness of the outcome, not the action Relativistic theories (no universal right/wrong) o No one is objectively right or wrong o We ought to tolerate actions of others even when we don’t agree about the morality of it Four Ethical Principles Autonomy Maintain their identity as unique and free individuals Have the right and capacity to decide Have values and dignity respected Have the right to informed consent o Typical legal standard of consent o To give consent, individuals must have the following: The capacity to make rational, mature decisions Adequate information about what is to occur Adequate comprehension of possible effects Freedom from coercion or undue pressure o Consent procedures overseen by IRBs IRB Institutional Review Board o Formally checks the plans of researchers to maintain ethical standards o Researchers must submit applications that must be approved by the IRB prior to the start of the research project o IRBs do several things Oversee treatment of research animals Help to ensure that there is a minimal risk of harm for human participants in research (incl. 
use of deception) Help to ensure research integrity Vulnerable Participants o Some groups need extra consideration when asked to participate in research: children, the elderly, and the mentally ill may be challenged to understand information that would enable them to make an informed decision about study participation o Assent: Used with people who cannot make their own decisions, gain consent of the guardian, ask permission of the participant Nonmaleficence Ensure that your actions will not harm participants Primum non nocere - “first, do no harm” Respect and protect: o Privacy o Physical wellbeing (e.g., discomfort, fatigue) o Psychological wellbeing (e.g., emotional state) Debriefing as a means to reduce harm o Inform participants about the study’s true nature Immediately after participant completes the study Delayed debriefing (full explanation) o Reasons for debriefing Helps return participants to their pre-participation state (before the study affected their beliefs, attitudes, feelings, etc.)
Diminishes anxiety and other unpleasant emotional reactions Gives subjects a sense of true value of his/her participation Explains ethical use of deception (if deception was used) Promotes perception of researcher honesty Beneficence Actions done to benefit others Weighing risks against benefits o Work to minimize risk and maximize benefits o Risks should be reasonable given the expected benefit to the individual and society Justice Holds that people who are equal in relevant respects should be treated equally Positive results of research should be shared with all Cook’s Research Commandments Do not involve people in research without their knowledge or consent Do not coerce people to participate Do not withhold from the participant the true nature of the research Do not actively lie to the participant about the nature of the research Do not lead the participant to commit acts that diminish his or her self-respect Do not violate the right to self-determination Do not expose the participant to physical or mental stress READINGS General Ethical Theories 1. Deontological (Rule-based) a. Immanuel Kant, who introduced categorical imperatives – principles that define appropriate actions in all situations (action can be universally implemented) b. Kant’s thinking parallels what we might call the Golden Rule: Do unto others as you would have them do unto you 2. Teleological (Balancing) a. John Stuart Mill’s Utilitarianism – the good that may come from an action is weighed against or balanced against the possible harm b. Maximizes good and minimizes harm, the end may justify the means c. One difficulty with this approach is that it is sometimes difficult, if not impossible, to anticipate all of the harm that might ensue from a given research design 3. Relativistic a. No absolute right or wrong way to behave b. Ethical decisions are determined by the culture within which a researcher is working Ethical Principles 1.
Autonomy (self-determination, roots in categorical imperative) a. This principle is exemplified by the use of informed consent in the research procedure 2. Nonmaleficence a. Wrong to intentionally inflict harm on another 3. Beneficence a. Stipulates a positive obligation to remove existing harms and to confer benefits on others 4. Justice a. People who are equal in relevant respects should be treated equally b. Related to both deontological and teleological theories c. E.g. Benefits should be shared with all who are qualified Summary by Frey, Botan, and Kreps (2000) 1. Provide the people being studied with free choice 2. Protect their right to privacy 3. Benefit them rather than harming them 4. Treat them with respect. Summary by Cook (1976) Do not involve people in research without their knowledge or consent Do not coerce people to participate Do not withhold from the participant the true nature of the research Do not actively lie to the participant about the nature of the research Do not lead the participant to commit acts that diminish his or her self-respect Do not violate the right to self-determination Do not expose the participant to physical or mental stress Do not invade the privacy of the participant Do not withhold benefits from participants in control groups Do not fail to treat research participants fairly and to show them consideration and respect Always treat every respondent or subject with unconditional human regard. (That is, accept and respect a person for what he or she is, and do not criticize the person for what he or she is not) Private-Sector Research VS Academic Research Both valued confidentiality equally, while academic researchers placed a higher value on integrity and beneficence. Private-sector researchers were more sensitive to conflict-of-interest issues. 
Specific Ethical Problems Voluntary Participation and Informed Consent Individuals are entitled to decline participation or to terminate participation at any time Researchers who are in a position of authority over subjects should be especially sensitive to implied coercion Voluntary participation is not a pressing ethical issue in mail and telephone surveys because respondents are free to hang up the phone or to throw away the questionnaire o Implied Consent Informed consent o People need to know enough about the project to make an intelligent choice o Researchers have the responsibility to inform potential subjects or respondents of all features of the project that can reasonably be expected to influence participation Form of Consent o Written consent (can be impractical in some situations, i.e. telephone/mail surveys) Concealment and Deception Concealment: Withholding information Deception: Deliberately providing false information Transforms human beings into manipulated objects More likely to expect to be deceived in future studies Might affect results Some researchers argue that certain studies could not be conducted at all without the use of deception (Utilitarian approach) Consider: (Kelman 1967) o How significant is the proposed study? o Are alternative procedures available that would provide the same information? o How severe is the deception?
Consider: (Elms 1982) o When there is no other feasible way to obtain the desired information o When the likely benefits substantially outweigh the likely harm o When subjects are given the option to withdraw at any time without penalty o When any physical or psychological harm to subjects is temporary o When subjects are debriefed about all substantial deception and the research procedures are made available for public review Debriefing: Investigator thoroughly describes the purpose of the research, explains the use of deception (if it occurred), and encourages the subject to ask questions about the research. Debriefing should be thorough enough to remove any lasting effects that might have been created by the experimental manipulation or by any other aspect of the experiment Protection of Privacy Arises more often in field observation and survey research than in laboratory studies The more public the place, the less a person has an expectation of privacy and the fewer ethical problems are encountered (note some exceptions like eavesdropping) A researcher should violate privacy only to the minimum degree needed to gather the data Respondents have a right to know whether their privacy will be maintained and who will have access to the information they provide 2 Ways to guarantee o Assuring Anonymity Any given respondent cannot possibly be linked to any particular response Encourages honesty o Assuring Confidentiality Assured that even though they can be identified as individuals, their names would never be publicly associated with the information they provide Respondents should be told who will have access to the information they provide Federal Regulations Concerning Research Institutional review boards (IRBs) TBC Chapter 16: Research in Public Relations LECTURE PR Research Related but distinct from advertising Can be qualitative, quantitative,
or both Three types Applied (problem-solving, strategic and evaluative) Basic (Academic, knowledge creation, theoretical) Introspective (examining the profession and its function in society) Alternative way of organizing PR research – how it is used in the process Process Perspective Research helps… 1. to define PR problems 2. with PR program planning 3. with program implementation 4. evaluate PR programs Problem definition techniques 1. Environmental monitoring early warning analyses to identify emerging issues tracking public opinion on major issues Public relations audits and communications audits 2. Program planning Designing a campaign or program to address problem and achieve specific goals Program planning examples i. US Army campaign “Be all you can be” (cliché) to “An army of one” (too individualistic) to “Army strong” ii. STB campaign “Uniquely Singapore” (create the idea that Singapore was special/different from other countries in SEA) to “Your Singapore” (realised everyone comes for a different reason, no one-size-fits-all) 3. Program implementation Gatekeeping research (study of who controls the flow of information, i.e. in journalism the editors are gatekeepers) i. How to get gatekeepers to help, be on your side Output analysis (examining immediate & short-term results) i. Mentions in the media, surveys, etc 4. Program evaluation through all phases Implementation checking In-progress monitoring Outcome evaluation (ongoing, long-term) i.
Awareness, knowledge, relevance, action, and advocacy Qualitative PR Research (linguistic in nature, analyse text, less quantification) Critical incident (when something important/crucial happens) analysis o Combination of in-depth interviewing and case study o Content analysis (putting units of meaning, a quantitative method) of incident descriptions o Accounts might not be accurate due to imperfect retrospection or sensitive nature of incident Discourse analysis o Interpretative approach to understanding language use related to the campaign o Comes from the rhetorical tradition o Differs from quantitative content analysis PR Research and Social Media Increasingly important (information sources) Analytics (scraping the web) UX research (user experience) Copy Testing Important in PR and Advertising (Strat Comm) Developing effective messages o Two key criteria or dependent variables: Recall (memory) Liking (attitudes) o Strat Comm and the hierarchy of effects Knowledge (change their knowledge, learn something first) Attitudes (they cannot like/dislike something they don’t know about; thus knowledge precedes attitudes) Behaviour (Positive/negative attitudes allow for behavioural change) Dimensions of Impact Cognitive o Attention (predicated on exposure, presented to audience) o Comprehension (understand and know what it means) o Recognition/Recall Recall: Asking people to reproduce something with little prompt Affective Responses (Emotions) o Liking or favourability o Involvement/Engagement (feeling of wanting to participate) Conative (Behavioural) – Process o Intention to Act o Action o Consolidation (Repetition, establishment of pattern or habit) o Advocacy (participating in the campaign amongst people, social in nature) Reasons for Copy Testing Copy testing to improve message quality o Comprehensibility o Liking o Involvement Copy testing to avoid unintended outcomes Methods Can be done individually or in
groups Research methods depend on the question asked o Self-reports Surveys CRM (continuous response measurement) Susceptible to demand characteristics (people forced to create an opinion, pseudo-opinion, social desirability) o Direct Measures Unobtrusive (observations) Psychophysiology o Trade-off between ease and validity o Multi-method can be useful, but costly READINGS PR becoming more research-oriented, with quantitative methods seemingly becoming more popular 7 Principles to guide measurement and evaluation of public relations (Barcelona Declaration, 2010) 1. Goal setting and measurement are fundamental to public relations 2. Media measurement requires both quantity and quality 3. Advertising Value Equivalents are not useful measures of public relations effectiveness 4. Social media can and should be measured 5. Measuring outcomes is preferred to measuring media results 6. Where possible, business results should be measured 7. Sound measurement is built on transparency and replicability Types of Public Relations Research, Pavlik (1987) 1. Applied Research a. Examines specific practical issues to solve a specific problem b. Branches: i. Strategic Research is used to develop public relations campaigns and programs – Plans for the future and how to get there Broom and Dozier (1990) ii. Evaluation Research assesses the effectiveness of a public relations program 2. Basic Research a. Creates knowledge that cuts across public relations situations, examining the underlying processes and constructing theories that explain the public relations process i. E.g. Aldoury and Toth (2002) presented the beginnings of a theory that could explain the gender discrepancies in the field 3. Introspective Research a. Examines the field of public relations i. E.g. How women felt about the perceived “glass ceiling” in the PR profession Research in the Public Relations Process 4 Step Model of the PR Process 1.
Defining public relations problems 2. Planning public relations programs 3. Implementing public relations programs through actions and communications 4. Evaluating the program Defining PR Problems Consists of gathering information that helps define and anticipate possible public relations problems. Techniques: Environmental Monitoring Programs (Boundary-scanning) o Observe trends in public opinion and social events, can use both content analysis and surveys o PR practitioners monitor mentions of their clients on both traditional media and social media Message analytics is a group of detailed descriptive content analysis statistics that examine online message volume, tone, and engagement Determine positive or negative mentions Sophisticated programmes can add temporal element of analysis (over time) A qualitative analysis can also reveal underlying themes and context of the messages o Grunig (2006) argued that environmental monitoring should be integrated into a company’s strategic management function. Two phases: Early Warning Phase – an attempt to identify emerging issues An alternative method is to perform panel studies of community leaders or other influential and knowledgeable citizens Monitor via Trigger Event Analysis – an event or activity that might focus public concern on a topic or issue Precursor Event Analysis – assumes that leaders establish trends that ultimately trickle down to the rest of society Second Phase – Tracking public opinion on major issues Longitudinal panel study, in which the same respondents are interviewed several times during a specified interval Cross-sectional opinion poll, in which a random sample is surveyed only once Omnibus Survey – a regularly scheduled personal interview, with questions provided by various clients o Baseline polling—an analysis of the current trends in public opinion in a given state or community that could be helpful for a candidate.
o Threshold polling—surveys that attempt to assess public approval of changes in services, taxation, fees, and so on. Such a poll can be used to establish positions on various issues. o Tracking polls—polls that take place after a baseline poll and that are used to look at trends over time. Public Relations Audits o Comprehensive study of the public relations position of an organization o Useful because the research may unearth basic issues that the organization might not be aware of o First Step: List the segments of both internal and external groups that are most important to the organization. This phase has also been called identifying the key stakeholders in the organization o Second step is to determine how the organization is viewed by each of these audiences Designed to measure familiarity with the organisation Ratings scales usually used, then tabulate average scores o Following which, compare results from the first and second step to identify areas a company falls short Communication Audits o Similar to PR audit, but narrower goals, considers the company’s communication means more than their PR o Internal Audit Interview top management to pinpoint communication problems. Content-analyse a sample of all the organization’s relevant publications and other communication vehicles. Conduct focus groups and intensive interviews with employees that examine their attitudes toward company communications. Use this information to develop a survey questionnaire. Conduct the survey. Analyse and report results to employees. 
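Several of the monitoring and audit techniques above — message analytics in particular, but also the content-analysis step of an internal audit — boil down to counting the volume and tone of a set of messages. The sketch below is a toy Python illustration only: the mention list and the tone keyword sets are invented for this example, and real analytics services use far richer sentiment models than keyword matching.

```python
# Toy sketch of "message analytics": tallying volume and tone for a set
# of media mentions. All data and keyword lists here are hypothetical.
from collections import Counter

POSITIVE = {"praised", "innovative", "trusted"}   # invented tone cues
NEGATIVE = {"criticized", "scandal", "recall"}

def classify_tone(mention: str) -> str:
    """Label a mention positive, negative, or neutral by keyword match."""
    words = set(mention.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def message_analytics(mentions):
    """Return message volume and a tone breakdown for a list of mentions."""
    tones = Counter(classify_tone(m) for m in mentions)
    return {"volume": len(mentions), "tone": dict(tones)}

mentions = [
    "Analysts praised the brand's innovative campaign",
    "Product recall sparks scandal for the firm",
    "The company opened a new office",
]
print(message_analytics(mentions))
# -> {'volume': 3, 'tone': {'positive': 1, 'negative': 1, 'neutral': 1}}
```

A dictionary-based tone measure like this is crude; as the notes say, output analysis "must be based on clearly defined criteria for assessing positives and negatives," which in practice means validated coding schemes rather than ad hoc word lists.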
o External Audit (focus groups, interviews, and survey are done among audience members, shareholders, and other external groups) A review of communication plans and policies A review of communication structure and staffing Analysis of communication vehicles Interviews with senior managers, communication staff, and other key constituents News media analysis Audience surveys An assessment of strengths and weaknesses Social Audits o Small-scale environmental monitoring program designed to measure an organization’s social performance—that is, how well it is living up to its public responsibilities Planning PR Programmes Interpret the information to identify specific problems and opportunities that can be addressed by a systematic public relations program The planning phase also involves research that attempts to determine the most effective media for delivering the program At the most basic level, this research entails finding the reach, frequency, and demographic characteristics of the audiences for the various mass and specialized communication media Media Audit: survey of reporters, editors, and other media personnel that asks about their preferences for stories and how they perceive the public relations agency’s clients Implementing PR Programmes The most common type of research during the implementation phase consists of monitoring the efforts of the public relations program Gatekeeping Research Analyses the characteristics of press releases and video news releases that allow them to “pass through the gate” Content and style variables are typically examined (e.g.
differences between the grammar and syntax of original news releases and published versions) Output Analysis Short-term or immediate results of a particular public relations programme or activity Content analysis – can determine type of story, subtle qualities like tone of article o Must be based on clearly defined criteria for assessing positives and negatives Nonmedia activities can also be studied with output analysis, such as attendance at special events and trade shows Evaluation Research Process of judging the effectiveness of program planning, implementation, and impact. Evaluation should be present in every phase of a programme; the following specific phases are proposed. 1. Implementation checking. This phase investigates whether the intended target audience is actually being reached by the message 2. In-progress monitoring. Shortly after the campaign starts, researchers check to see whether the program is having its intended effects. If there are unanticipated results or if results seem to be falling short of objectives, the program might still be modified 3. Outcome evaluation. When the campaign is finished, the program’s results are assessed. These findings are used to suggest changes for the future Benchmarking – standard of comparison used by a company to track its public relations progress. Research is conducted before the campaign to establish the standards of comparison.
Other ways to establish a benchmark might include examining existing data to find industry averages and looking at past performance numbers Qualitative Methods in PR Critical Incident Technique Combination of in-depth interviewing and the case study approach A critical incident is defined as an incident in which the purpose or intent of the act is clear to the observer and the consequences are definite; the event must have a clearly demarcated beginning and ending, and a researcher must be able to obtain a detailed account of the incident In general terms, a critical incident analysis includes the following characteristics: It focuses on a particular event or phenomenon It uses informants’ detailed narrative descriptions of their experiences with the phenomenon It employs content analysis to analyse these descriptions It derives interpretive themes based on the results of the content analysis. Advantage: Focuses on real-world incidents as seen through the eyes of those who are directly involved Different incidents can be examined and compared -> leads to greater theoretical understanding Disadvantage: Depends on the memory of the informants for its data The sometimes sensitive nature of critical incidents, particularly those that involve negative events, can create an unwillingness to divulge details Discourse Analysis Examines the organization of language at a level of analysis beyond the clause or the sentence; focuses on larger linguistic units like whole conversations or messages. Concerned with the way language is used in social contexts, and how it is received Discourse analysis examines three specific aspects of language: 1. The form and content of the language used 2. The ways people use language to communicate ideas and beliefs 3.
Institutional and organizational factors that might shape the way the language is used Advantage: Can be used to study different situations and subjects Allows public relations researchers to uncover deeply held attitudes and perceptions Disadvantage: Takes large amounts of time and effort Focuses solely on language, rarely tells the whole story (therefore should be supplemented with other qualitative techniques like observation or focus group interviewing) PR and Social Media Research in this field falls into 4 categories 1. Practitioners’ attitudes toward the Internet and social media a. Introspective research 2. The role of social media in public relations a. How social media operate in the full range of public relations activities (how companies make use of Facebook, Twitter, etc) b. “Damage Control”: repair their image using social media as well as traditional media channels i. Audience members were more likely to accept defensive, supportive, and evasive crisis responses via traditional media rather than social media 3. Characteristics of websites used for public relations a. Non-profit organizations’ websites: technical and design aspects were emphasized more than interactive features b. Wiki health websites (collaborative websites that can be edited by anyone with access to them): more likely to use dialogic public relations techniques than non-wiki sites c. Corporate responsibility in U.S. corporate websites: found that presentational features were more developed than interactive features 4. Usability studies a. Website Experience Analysis i. Requires research participants to use a website and answer a series of questions about their experience ii. First impressions are important, so the researcher may interrupt participants’ website experience after about 10 seconds, or at the moment of the first click away iii.
Disadvantage of this technique is that it looks at website usage in an artificial environment Downloaded by xx xx ([email protected]) lOMoARcPSD|30289352 Chapter 5: Qualitative Research “…a process of understanding based on distinct methodological traditions of inquiry that explores a social human problem. the researcher builds a complex, holistic picture, analyzes words, reports detailed views of informants, and conducts the study in a natural setting.” (Creswell, 2003) Method vs Methodology Method: a specific research technique Methodology: study of methods o Reviewing techniques and their suitability o Efforts to improve methods Quantitative methodology examines: o Experimentation o Surveys o Content Analysis Qualitative methodology examines: o Focus groups o In-depth Interviews o Participation Observation When to use Qualitative Research When concepts cannot be quantified easily When concepts are best understood in their natural setting When studying intimate details of roles, processes and groups When the paramount objective is “understanding” Limitations of Qualitative Research Usually not generalizable Time/resource intensive Reliability is not applicable Validity means something different – Strive for verification by leaving audit trail, member checks, etc. 
If the sample size is small, the study has interpretational limits (not representative)

Qualitative Analysis
Data preparation – reduction and display
o Separate ideas
o Order chronologically
Categorize on preliminary themes
o Note: some categorization can occur during the interview
Recategorize as more data come in and new themes emerge
Constant comparative technique – part of Grounded Theory
o Categorize via comparison among data
o Interpret data to refine categories
o Identify relationships and specify themes among categories
o Simplify and interpret data into a coherent theoretical structure
Analytic induction strategy – blends analysis and “hypothesis generation”
1. Formulate general questions – “From which media sources do people seek information?”
2. Examine a case (participant) to see if the question is appropriate
a. “I watch some television, but I prefer newspapers and the Internet.”
3. Refine the question in reference to other cases
a. “Information seekers engage with many different media sources, so the question is why they go to different sources.”
4. Look for negative cases
a. “I love to seek information, and mostly get it from talking with my friend.”
5. Continue until the question is adequately addressed
a. Are negative cases typical?
b. What are the most common themes?

Readings
Qualitative analysis – relies mainly on the analysis of visual data (observations) and verbal data (words) that reflect everyday experience.
Aims and Philosophy: There is no commonly accepted definition of “qualitative.” Qualitative refers to:
1. A broad philosophy and approach to research
2. A research methodology
3. A specific set of research techniques
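The negative-case search at the heart of the analytic induction strategy above can be sketched in a few lines of code. The cases and the media-term list below are invented for illustration (loosely adapted from the lecture's media-sources example); real analytic induction works on rich narratives and researcher judgment, not keyword matching.

```python
# Sketch of the analytic induction loop (hypothetical data and terms):
# test a working hypothesis against cases one at a time and collect the
# negative cases that would force the hypothesis to be reformulated.

cases = [
    "I watch some television, but I prefer newspapers and the Internet.",
    "I get my news from Twitter and a few news apps.",
    "I love to seek information, and mostly get it from talking with my friends.",
]

# Hypothetical vocabulary standing in for "media sources".
MEDIA_TERMS = {"television", "newspapers", "internet", "twitter", "apps", "radio"}

def supports_hypothesis(case: str) -> bool:
    """Working hypothesis: information seekers rely on media sources."""
    words = {w.strip(".,").lower() for w in case.split()}
    return bool(words & MEDIA_TERMS)

negative_cases = [c for c in cases if not supports_hypothesis(c)]

# A negative case (interpersonal sources, no media mentioned) signals
# that the hypothesis needs refinement before further testing.
print(len(negative_cases))  # 1 negative case: the "talking with my friends" example
```

The loop mirrors steps 2-4 of the strategy: examine a case, keep cases that fit, and flag the ones that do not so the question can be reformulated.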
3 Distinct Approaches to Social Science Research: Paradigms for the Social Sciences
A paradigm is an accepted set of theories, procedures and assumptions about how researchers look at the world
Positivist (objectivist)
o Oldest and most widely used in mass media research
o Most used in the natural and social sciences
o Involves concepts such as quantification, hypotheses, and objective measures
Interpretive (constructivist)
o Focus on the creation of meaning
o Aims to understand how people in everyday natural settings create meaning and interpret events
Critical
o Focus on power distribution and dynamics
o Distribution of power in society and political ideology
o Normative theories
o Draws on analysis models used in the humanities
Each represents a model/paradigm for research – an accepted set of theories, procedures, and assumptions about how researchers look at the world
Paradigms are based on axioms (statements universally accepted as true) related to the selection of research methodologies

Paradigmatic Differences
Distinguishing the positivist and interpretive paradigms:
Nature of reality
o Positivist: Objective; reality exists apart from the researcher and can be seen by all; reality can be divided into components, and knowledge of the whole is gained by looking at the parts
o Interpretive: Subjective; no single reality exists apart from the observer; each observer creates reality as part of the research process; examines the entire process, as reality is holistic and cannot be subdivided
Nature of the individual
o Positivist: Focus on commonalities; all human beings are basically similar; looks for general categories to summarize their behaviours or feelings
o Interpretive: Focus on uniqueness; human beings are fundamentally different and cannot be pigeonholed
Focus
o Positivist: General trends; generate general laws of behaviour that explain many things across many settings; aggregation and breadth
o Interpretive: Particular cases; produce a unique explanation about a given situation or individual; individuation and depth

Paradigms differ in their ontology (the study of existence and the nature of being), epistemology (the theory of knowledge with respect to methods, validity, scope, and the distinction between belief and opinion) and methodology

5 Practical Differences between Positivist and Interpretive Approaches
Role of the researcher
o Positivist: Objective observer; analyst; the researcher is separated from the data
o Interpretive: Active participant; interpreter; the researcher is an integral part of the data – without the active participation of the researcher, no data exist
Research design
o Positivist: Pre-planned – the design is determined before the research begins
o Interpretive: Evolving – the design can be adjusted or changed as the research progresses
Research setting
o Positivist: Control prioritised – limit contaminating and confounding variables by conducting investigations in controlled settings
o Interpretive: Naturalness prioritised – conducts studies in the field, in natural surroundings, trying to capture the normal flow of events without controlling extraneous variables
Measurement
o Positivist: The measurement instrument is distinct from the researcher; another party could use the instruments to collect data in the researcher’s absence
o Interpretive: The researcher is the instrument; no other individual can substitute
Theory building
o Positivist: Hypothetico-deductive – the researcher uses research to test, support, or reject theory
o Interpretive: Inductive (data-emergent) – the researcher develops theories as part of the research process; theory is “data driven” and evolves from the data as they are collected

A researcher’s paradigm has a great influence on the specific research methods the researcher uses
o Positivist: quantitative methods – content analysis, surveys, experiments
o Interpretive: qualitative methods
Many researchers are not conscious of the philosophy that influenced their selection of methods
(Positivist researchers can use qualitative methods as well, and vice versa)
Many researchers now use a combination of the quantitative and qualitative approaches to understand fully the phenomenon they are studying.

Mixed Methods Research
Qualitative and Quantitative
Qualitative and quantitative are not completely separate domains
o Can be used to answer the same research questions
o Can be complementary in a single research project
Concurrent mixed methods
Sequential mixed methods
o Quantitative then qualitative
o Qualitative then quantitative
Mixed methods approach: the researcher collects, analyzes, and integrates both quantitative and qualitative data in a single study, or in multiple studies in a sustained program of inquiry.
Draws from the strengths of both qualitative and quantitative techniques
Concurrent: Both qualitative and quantitative data are collected at the same time, and both are weighted equally in analysis and interpretation
o E.g.: A survey containing both open-ended and closed-ended questions
Sequential: One method precedes the other
o E.g.: A researcher might conduct focus groups that generate items to be used in a subsequent survey

Advantages of the Mixed Methods Approach
1. The technique can produce stronger evidence for a conclusion through a convergence of findings
2. The researcher can answer a broader range of research questions because the research is not confined to a single method
3. Can provide information and insight that might be missed if only a single method were used

Disadvantages of the Mixed Methods Approach
1. More time and effort, because the researcher is actually conducting two studies
2. Requires the researcher to be skilled in both qualitative and quantitative methods; if these skills are lacking, it might require a research team
3. Data analysis might be more difficult, particularly if the methods yield conflicting results
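A concurrent design of the kind described above (one survey, closed-ended and open-ended questions together) can be sketched in code: the same responses feed a quantitative summary and a qualitative theme tally side by side. The field names, Likert scores, and theme codes here are entirely hypothetical.

```python
# Sketch of a concurrent mixed-methods analysis (invented survey data):
# one instrument yields a closed-ended Likert item (quantitative strand)
# and researcher-coded open-ended answers (qualitative strand).
from collections import Counter
from statistics import mean

responses = [
    {"satisfaction": 4, "open_code": "trust"},          # hypothetical respondents
    {"satisfaction": 5, "open_code": "trust"},
    {"satisfaction": 2, "open_code": "slow response"},
    {"satisfaction": 3, "open_code": "trust"},
]

# Quantitative strand: summarise the closed-ended Likert item.
avg_satisfaction = mean(r["satisfaction"] for r in responses)

# Qualitative strand: tally the themes coded from open-ended answers.
theme_counts = Counter(r["open_code"] for r in responses)

print(avg_satisfaction)             # 3.5
print(theme_counts.most_common(1))  # [('trust', 3)]
```

In interpretation the two strands are weighted equally: the mean gives the trend, while the dominant theme suggests why respondents feel that way, and divergence between the two would itself be a finding.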
Data Analysis in Qualitative Research
Data analysis in qualitative studies is done early in the collection process and continues throughout the project.
o Quantitative: analysis does not begin until all the numbers are collected
Inductive model: data are collected relevant to some topic and are grouped into appropriate and meaningful categories; explanations emerge from the data
o Quantitative: deductive – hypotheses are developed prior to the study, and relevant data are then collected and analyzed to determine whether the hypotheses are confirmed

Phases of Qualitative Data Analysis:
Data reduction and data display
o Large amount of data – information is organised along a temporal dimension (chronological order according to the sequence of events that occurred)
o Each piece of information is coded to identify its source
o Data are organised into a preliminary category system
o Categories arise from the data, or they might be suggested by prior research or theory
o E.g.: in a study of teenage radio listening, pages of transcript might be categorised under “peer group pressure” or “escape”
o OR make multiple copies of the data, cut them into coherent units of analysis, and physically sort them into as many categories as might be relevant
o OR use commercial software programs
o The researcher is the main instrument in qualitative data collection and analysis, and must prepare before the investigation
o Epoche: the process by which the researcher tries to remove, or at least become aware of, prejudices, viewpoints, or assumptions that might interfere with the analysis
Conclusion drawing
o Constant comparative technique (4 steps)
1. Comparatively assigning incidents to categories
Place each unit of analysis into a category and compare it to the other units already in that category
New categories can be created
Units that fit into more than one category should be copied and included in each
Emphasis: comparing units and finding similarities among the units that fit into the category
2.
Elaborating and refining categories
The researcher writes rules or propositions that attempt to describe the underlying meaning that defines the category
Rules for inclusion might be rewritten and revised throughout the study
Rules help to focus the study and allow the researcher to explore the theoretical dimensions of the emerging category system
Value of rules: they reveal what the researcher learns about a chosen topic and help determine the research outcome
3. Searching for relationships and themes among categories
Searching for relationships and common patterns/meaningful connections across categories
Goal: to generate assertions that can explain and further clarify the phenomenon under study
4. Simplifying and integrating data into a coherent theoretical structure
The researcher writes a report that summarizes the research
Results are integrated into some coherent explanation of the phenomenon
Goal: to arrive at an understanding of the people and events being studied
o Analytical induction technique (5 steps)
1. Define a topic of interest and develop a hypothesis.
2. Study a case to see whether the hypothesis works. If it doesn’t work, reformulate it.
3. Study other cases until the hypothesis is in refined form.
4. Look for “negative cases” that might disprove the hypothesis. Reformulate again.
5. Continue until the hypothesis is adequately tested.
Verification – credibility of research
Threats to credibility:
o Completeness of the data (sloppy notes/incorrect interpretations)
o Selective perception (dismissing data that don’t fit a favoured interpretation)
o Reactivity – when the act of observing some situation changes the situation itself
4 methods to increase credibility:
1. Multiple methods of data collection (interviews, field observations, existing documents -> different perspectives)
2. Audit trail (a permanent record of the original data used)
3. Member checks (asking research participants to read the researcher’s notes and check them for accuracy)
4. Research team (team members keep each other honest and on target when describing and interpreting data)

In-person vs Online
The choice of method depends on the research question and the objectives; a combination of both is possible
In-person:
o The data are “richer” – can observe physical responses, surroundings, body language, facial expressions
Online:
o Coverage of wide geographic areas is possible