Summary

This document contains course notes on science and psychology research methods, covering different ways of knowing and the goals of science (description, explanation, prediction, and control), as well as characteristics of science such as accuracy, objectivity, and public reporting. Later chapters cover conducting psychological research, research ethics, defining and measuring variables, correlation and correlational research, case studies and observational research, and survey research. The notes should serve as a useful guide for undergraduate students learning about research methods in psychology.

Full Transcript

Chapter 1: Science and Psychology ○ How do we know? Tenacity ⇒ Believing something because it is what we have long believed ⇒ Knowing by force of habit ⇒ Denying information that threatens firmly held beliefs ⇒ Not a good way to “know” about the world ⇒ Typically regarded as the worst way of knowing ⇒ Tenacity = “confirmation bias” ⇒ Has the benefit that we’re not jumping all over the place ⇒ Different from the absolute truth ⇒ Religion, comments from parents affects your opinion Authority ⇒ Rely on other people as the source of knowledge and beliefs ⇒ Reliance on others depends on credibility: ➞ Believe the person has subject expertise ➞ Perceive as trustworthy ⇒ Authority is efficient ⇒ Problems with Authority: ➞ Disagreement among experts/experts can be wrong ➞ Give expert status to non-experts/non-expert status to experts ↳ nobel prize winner not believing in climate change ➞ Believe an untrustworthy source ⇒ Authority figures: parents, teachers, professors, friends ⇒ We can use authority figures to give broad concepts of the world Reason ⇒ Use of logic and rational argument to reach a conclusion about how things must be ⇒ Reason is integral to science ➞ Theory construction based on facts ➞ Deriving testable hypotheses from theories or facts ⇒ Limitation: Different starting points (initial premises) can result in different conclusions ➞ Opposing conclusions can be sound/valid ⇒ Using rationality to make sense of the data ⇒ Theory building, understanding basic facts ⇒ How we weigh pieces of evidence Empiricism ⇒ The process of acquiring knowledge based on observation and experience ➞ Sensory experience ⇒ Foundational aspect of science ⇒ Limitations of (non-systematic) Empiricism ➞ Nobody experiences everything ➞ Unrepresentative experience & confirmation bias ↳ Heavily weigh things we’ve experienced ➞ Alternative explanations ↳ Things we haven’t experienced yet ⇒ Not just interpretation, independently verifiable Systematic Empiricism (Science) ⇒ Science relies on empirical evidence ➞ Empirical evidence: pieces of information that comes through our sensory apparatuses ⇒ Evidence is collected systematically ➞ According to a predefined plan ⇒ Is not gathered or interpreted haphazardly ⇒ Evidence needs to be evaluated ➞ Measurement = always imperfect ➞ Reasoning ⇒ Most bias = least aware of biases ⇒ “Best” way of knowing ○ Goals of Science Scientists want to describe, explain, predict and control events ⇒ Understand associations between variables Variables: any factor or attribute that can assume two or more values (i.e., a factor that varies). Psychologists study a variety of variables: ⇒ Personality characteristics, Mental States, Attitudes and Beliefs, Behaviour and Decision Making Description ⇒ One of the most fundamental tasks of science ⇒ Identifies and provides an account of some phenomenon of interest, and its characteristics ⇒ Takes many forms across the sciences ➞ Identifying stars, categorizing animals, measuring the prevalence of behavior, etc. ⇒ Psychologists often describe what humans think and how they behave ⇒ Develop Coding Systems ⇒ Want to provide a basic account of what is actually observable ⇒ Saying what the rates of things are or the average of a certain population ⇒ Describing the world in some way Explanation ⇒ Scientists seek to understand why phenomena occur ➞ What causes people to think, respond, and behave as they do? 
↳ Causal relationships ⇒ Hypothesis: a tentative proposition about the causes or outcome of an event or, more generally, about how variables are related. ⇒ Theory: a set of formal statements that specifies how and why variables or events are related. ➞ Broader than a hypothesis ➞ Theories differ in scope ➞ Very specific ➞ Could mostly come from theories ➞ falsifiable ⇒ The term “Theory” is often misunderstood ➞ It is not merely a “guess” ➞ Gives rise to the hypothesis ➞ An explanatory framework that incorporates all of the available evidence ➞ Differ in scope ➞ Challenges of theory formation ↳ Causes of behavior are viewed from different perspectives ⤷ Different people taking different approaches will produce different results = big mess ↳ Multiple causes may affect behavior simultaneously ↳ Distinction between distal causes (remote) and proximate causes (immediate) need to be clarified ↳ Determining Causation ⤷ Causal Inference: to conclude that one variable had a causal effect on another variable ⇾ Three Conditions for Causal Inference ⤍ Covariation: as X varies, Y varies ⤍ Temporal Order ⤍ Absence of plausible alternative explanations Prediction ⇒ Use knowledge about events or variables to predict an outcome of interest ⇒ Prediction is used: ➞ To test hypotheses and theories ➞ In applied settings ⇒ Example: what factors could influence road rage? Hot weather? Short temper? Traffic jam? ⇒ Testing Hypotheses and Theories ➞ Strongest means to determine the correctness of event explanations ➞ Theory makes a general claim ↳ Theory Example: Proximity to unhealthy food is a cause of poor dietary patterns ➞ Generate a specific (if-then) hypothesis to test theory ↳ If-then example: If people live within walking distance of fast-food restaurants and convenience stores, but not within walking distance of grocery stores, then they will have poorer dietary patterns. ➞ Form a tentative conclusion based on results ⇒ Applied Settings ➞ We use knowledge about one (or more) variable(s) to predict an outcome of interest ↳ When the two variables are statistically associated with one another ➞ Prediction without established causality Control ⇒ To exert influence over research settings procedures, and over the application of scientific knowledge. ⇒ Two Contexts of Control ➞ Research Activities ↳ Variables studied ⤷ Measurement: we control the variables we want to study ↳ Participants: some control but not full control ↳ Experimental Setting ⤷ E.g., does viewing nature pictures cause people to be happier? 
⤷ We have control over the experimental setting ➞ Application of Scientific Knowledge ↳ Apply scientific techniques and knowledge to address real world problems and to improve lives ↳ We can control the techniques and knowledge we use ↳ Examples: ⤷ Discover how workplace settings can improve performance and job satisfaction ⤷ Test effectiveness of therapies in treating depression ⤷ Design health education programs aimed at increasing vaccination, improving diet, or prevention of disease ○ Characteristics of Science Science Involves Assumptions about the Natural World: ⇒ Events are not random; regular patterns ⇒ These patterns have underlying causes ⇒ The causes can be discovered ⇒ If you’re religious then you might not believe the last point as it is outside the bounds of psychology ⇒ Assumptions are always made Empirical and Systematic ⇒ Claims are based on observable evidence collected and evaluated in a systematic manner ⇒ Come up with a predefined plan before starting to minimize biases and specify what to do ➞ Empirical = observable ➞ Systematic = plan Focus on Testable Questions ⇒ Can we measure the phenomenon of interest? ➞ Tested using current technology but in psych we have certain measurement tools ⇒ Falsifiability: an assertion is testable if some type of empirical evidence could reveal it to be false Accuracy and Objectivity ⇒ Variables are measured accurately ⇒ Research methodologies are constructed to minimize biases ➞ We can all engage in biased creation ➞ Corporate research is quite biased Clear Definitions ⇒ Operational Definitions are central ➞ Describes a variable or a construct in terms of the procedures used to measure or manipulate it ➞ Essentially a survey tool Public Reporting ⇒ Other scientists view and critique the work ⇒ Reasons to Publicly Report Findings: ➞ Evaluation of the quality of the evidence ➞ Grow the body of knowledge ➞ Replication Scientific Knowledge is Tentative, Not Absolute ⇒ Science provides evidence, not proof Science is Self-Correcting ⇒ New information leads to a revision of theories or the creation of new ones ⇒ Studies can produce erroneous results: ➞ Chance, methodological flaws, and bias ⇒ Replication: The process of repeating a study to determine whether the original findings will be upheld Science has Limitations ⇒ Science is thought of as the best method for acquiring knowledge about the world ⇒ Cannot address Non-empirical questions ⇒ Studies and theories are limited; measurement is imperfect ⇒ Other forms of knowledge ○ Basic and Applied Research Basic Research: ⇒ Examines the fundamental nature of phenomena ⇒ Contributes to a core body of knowledge ⇒ The nuts and bolts of the real world ⇒ Test theories ⇒ Cognitive Psych: could examine response time ⇒ How things work the way they work ⇒ Ex. Perceiving colors and how they could affect the perception of traffic lights Applied Research: ⇒ Focuses on helping to solve or evaluate a specific real-world problem ⇒ Research often encompasses both types ⇒ To solve a real-world problem ⇒ Ex. 
Finding a healthy lunch for schools; treatment for depression ○ Benefits of Learning About Research Methods Enhanced Critical Thinking ⇒ Generate precise questions ⇒ Recognize vague questions/ideas of others ⇒ Convert vague problems into specific ones ⇒ Search and evaluate information ⇒ Issues in social media = a lot of broad, aspect ideas turned to a concrete thing ➞ Doesn’t work in terms of thinking Tools for other Psychology Courses ⇒ Research is central to all fields of psychology General Evaluation of Research ⇒ Evidence-based practice ⇒ Daily life: news, marketing claims, political claims, etc. ○ Skepticism, Science, and Everyday Life An outlook that involves questioning the validity of claims before deciding whether to accept them How do you know? Show me your evidence! Skepticism is not cynicism. Be open to claims supported by evidence while refraining from accepting inadequately supported claims. Scientists are skeptical Anecdotal Evidence: ⇒ Brief stories or descriptions about personal experiences, other people, and events (i.e., anecdotes) are used as factual evidence to either support or refute a claim ⇒ Ex. A “regular person” tells us how well _______ worked for them. ⇒ Often a compelling testimony ⇒ Problems with anecdotes: small sample (n= ~1) with no systematic collection of information ➞ Atypical case ↳ Could work for one but not for all ➞ Other possible explanations ➞ Biased memory Chapter 2: Conducting Psychological Research (09/18/24) ○ Generating Research Ideas Personal interests, concerns, and daily interactions Follow Current Events & Media Reports Prior Research & Theory ⇒ Research builds on previous work ➞ Test a question that arises from other research ➞ Conduct a series of studies on the same topic ↳ Ex. Distracted Driving: Different types of distractions and different contexts ↪ Does distracted driving negatively impact driving performance? ➞ Existing research adds core details ⇒ Using a theory to derive a testable research question ➞ Usually supervisors would want you to work from a theory Real World Problems ⇒ Develop practical research questions that may solve a problem ⇒ Ex. crime or depression ⇒ Evidence-based Treatments: interventions that scientifically controlled studies have demonstrated to be effective in treating specific conditions. Serendipity ⇒ Accidental discovery of something important ⇒ Ex. Lithium ○ Gathering Background Information Have a research idea? Get more information… ⇒ Has the topic been studied? ➞ There’s a lot of repeats, play around with keywords ⇒ Develop specific questions ⇒ Learn about existing operational definitions and methodologies ➞ Specify how things are measured ⇒ Discover the limitations of existing research ⇒ Do theories exist in your area? 
Important Considerations: ⇒ A good literature search takes time ⇒ Use Scientific Sources ➞ Peer-reviewed journals ↳ Not Wikipedia, YouTube, or other websites ↳ Best scientific source but not perfect, better because they are reviewed by experts ➞ Primary research: original research conducted and written-up for a peer-reviewed journal Searching Scientific Databases ⇒ U of R Library Search Engine ➞ Basic search ➞ Crude, so often you must use other tools ⇒ APA Databases ➞ PsycINFO ➞ PsycArticles ➞ *psychology specific ⇒ Google Scholar ➞ More precise ⇒ Targeted Searches ➞ Mine References Sections ➞ Web of Science Obtaining Articles ⇒ Link provides you with access to the article ⇒ Use one search to find articles, then look up in another OR in the particular journal ⇒ Get the key information: Title, year, authors’ names, journal ⇒ Still cannot find what you are looking for? ➞ Interlibrary loan ➞ Contact author ➞ Pay for it Structure of a Research Article ⇒ Abstract ➞ Short summary of a research article that summarizes key points ➞ Useful for knowing the relevance ⇒ Introduction ➞ Summary of related research ➞ Justifies the research questions and hypotheses of current project, often by highlighting gaps in the existing literature ➞ Lists hypotheses ⇒ Method ➞ Divided into subsections: sample, procedures, measures ➞ How was the construct of interest measured? ➞ Shows specific methodology and tools to figure out how/why things are the way they are ⇒ Results ➞ Describes how the data was analyzed and presents the results ➞ Data is summarized in tables and graphs ⇒ Discussion ➞ Were the hypotheses supported? ➞ Implications of the results ➞ Limitations of the research ⇒ References ➞ List of all sources referenced in the article Tips for Understanding Research Articles ⇒ Take your time ⇒ Start with the abstract ➞ A simple overview of the whole study ⇒ Read the methods! ➞ How were things measured? ➞ Which procedures were used? 
⇒ Results ➞ Browse and get a sense of the technicality of the analyses ➞ Read between the numbers ➞ Use the discussion section to help ➞ Look at the tables ➞ A long-term game: understanding over time ⇒ Discussion ➞ Summary of key results ➞ Understand the context ➞ Understand the limitations ⇒ References ➞ See what other research has been done on the topic to broaden your understanding ○ Forming a Hypothesis Hypothesis ⇒ A tentative proposition about the causes or outcome of an event or, more generally, about how variables are related ⇒ Based on reasoning and available evidence Characteristics of a good hypothesis ⇒ Testable ⇒ Specific ⇒ Supported by data Two ways to form a hypothesis ⇒ Inductive reasoning ➞ Inductive Reasoning: using specific “facts” to form a general conclusion or general principle ➞ Extend facts beyond the current situation ↳ Example: In our study at the U of R, 65% of psychology undergraduates said they fell asleep at some point during a research methods course ↳ Therefore, roughly 65% of Canadian psychology students will report falling asleep in a research methods course ➞ Abductive Reasoning: inferring what was not observed ↳ Inference to the best explanation ↳ Example: Some people talk a lot in social situations, others do not talk much ↪ The people who talk a lot report being energized by social situations, while the people who do not talk much report getting tired in them ↪ In most cases, people report “this is how I always am in social situations” ↪ We theorize that there are stable personality types that are at least partially responsible for the differences we observe ⇒ Deductive reasoning ➞ Deductive Reasoning: using a general principle to reach a more specific conclusion ↳ Truth preserving ↳ E.g., All dogs are mammals; Abbey is a dog; Therefore Abbey is a mammal. ➞ Example: Testing a theory to make a prediction about an outcome; form a hypothesis based on the theory ↳ (Fake) Boredom theory states: when bored, people will fall asleep more often ↳ When the instructor is talking, people will be more likely to be bored ↳ Therefore, people fall asleep more when the instructor is talking vs. when doing activities like tests and discussions. ○ Designing & Conducting a Study Approaches to Conducting Research: ⇒ Quantitative and Qualitative Research ➞ Quantitative: Relies on numerical data and numerical (statistical) analysis to describe and understand behavior ↳ Ex. Response time, how many drinks were consumed at a football game ➞ Qualitative: Gathers non-numerical data and uses non-statistical analyses to understand behavior ↳ Ex. How do people feel when their team scores ➞ Mixed-Methods Research ↳ Combines quantitative and qualitative research methods to explore a research problem ⇒ Experimental and Descriptive Research ➞ Experimental Research: The researcher(s) manipulates one or more variable, attempts to control extraneous factors, and then measures how the manipulated variables affect participants responses ↳ Variables ↪ Independent Variable (IV): manipulated ↪ Dependent Variable (DV): measured ↳ Between-participants Design: Each participant engages in only one of condition of the independent variable ↪ Example: Rote memorization vs. Mental Imagery on memory test ↪ Hold everything constant: we need equal groups ↳ Random Assignment: a procedure in which each participant has an equal probability of being assigned to any one of the conditions in the experiment. ↳ Within-participants Design: Every participant engages in all conditions of the IV. 
↪ Advantage: removes individual differences between conditions ↪ Disadvantage: Order effects ↳ Counterbalancing: a procedure in which the order of conditions in an experiment is varied so that no condition has an overall advantage relative to the other conditions ↳ Confounding Variables: Extraneous factors that systematically vary along with the variables we are studying and therefore provide a potential alternative explanation for our results ➞ Descriptive Research (non-experimental research): ↳ Researchers measure variables but do not manipulate them ↳ Describe characteristics of variables, constructs, phenomena ↳ Describe associations between variables ↳ Types of Descriptive Research: Surveys, Case studies, Observational studies ⇒ Laboratory and Field Research ➞ Laboratory ↳ Offers maximum control ↳ Extraneous Variable: a factor that is not the focus of interest in a study, but that could influence the outcome of the study if left uncontrolled ↳ Internal Validity: the degree to which we can confidently infer that our study demonstrated that one variable had a causal effect on another variable. ➞ Field Study ↳ Conducted in a real-world (field) setting ↳ External Validity: inferences about the generalizability of the findings beyond the circumstances of the present study. ↳ Field studies are typically NOT experiments ↳ Field Experiment: a study in which researchers manipulate an independent variable in a natural setting and exercise some control over extraneous factors. ⇒ Cross-sectional and Longitudinal Research ➞ Cross-Sectional Research Design: People of different ages are compared at the same point in time. ↳ Advantage: Gather data from many age groups at once ↳ Disadvantage: cannot disentangle age effects from cohort effects ↳ Not always with age cohorts ➞ Longitudinal Research Design: the same participants are tested across different time periods. ↳ Advantage: study same people at different ages (watch aging unfold) ↳ Disadvantages: takes a long time, attrition (losing participants over time), limited knowledge about other cohorts ➞ Cohort-Sequential research design: several age cohorts are tested longitudinally ↳ Combines cross-sectional and Longitudinal designs ↳ Following different age groups over time ↳ Gives more insights into age effects Planning and Performing the Study ⇒ Select Topic and Generate Research Question(s) ⇒ Consider Ethics ⇒ Design Choice: Quantitative, Qualitative, or mixed-methods? ➞ What type of study: survey, experiment, etc.? ➞ Participants ⇒ Measurement ➞ Operational Definitions ➞ Measurement tools ⇒ Research Protocol: Standardized set of procedures that the researcher will follow with each participant The Role of Sampling ⇒ Population: consists of all the cases or observations of interest to us in a given study ➞ Who is of interest, who are we targeting ⇒ Sample: A subset of cases or observations from a population ➞ Representative Sample: reflects or matches the characteristics of the population ➞ Ex. UofR Students– can’t just be the people in Psych class because it wouldn’t capture the “overall” experience at the UofR ○ Analyzing Data & Drawing Conclusions Qualitative Analysis ⇒ Non-mathematical and often involves identifying, classifying, and describing different types of characteristics, outcomes, or behaviors Quantitative Analysis ⇒ Mathematical and typically involves using statistics to aid in summarizing and interpreting data Descriptive Statistics: organize and summarize a set of data ⇒ E.g. 
Averages, percentages Measures of Central Tendency ⇒ Describe the “typical values” or center of a distribution of scores ➞ Distribution of scores: 0,0,0,0,0,1,3,3,4,5,6 ⇒ Mean ➞ The arithmetic average of a distribution of scores ➞ The sum of all scores divided by the total number of scores ↳ E.g. class average; mean is two for the previous distribution ➞ Advantage: includes all numerical information in a dataset ➞ Disadvantage: heavily influenced by outliers ⇒ Median ➞ The middle score ➞ E.g. 1 in the previous distribution Data Analysis ⇒ Range ➞ Describes the highest and lowest scores in a distribution; it can also be expressed as the distance between them ➞ The range is affected only by the extreme scores ⇒ Variance & Standard Deviation ➞ Reflect how much the scores in a distribution are spread out in relation to their mean. ➞ The larger the variance/SD the more the scores are spread out Inferential Analysis and Statistical Significance ⇒ Inferential Analysis ➞ Allows us to infer that a result is unlikely to be due to chance and can be generalized to the population ⇒ Statistical Significance ➞ Unlikely to be due to chance ➞ Conventional threshold 5% (.05) ➞ Statistical Significance does not equal practical significance ○ Reporting the Findings Where is research reported?: ⇒ Peer-Reviewed Scientific Journals ⇒ Conferences ➞ Oral Presentations ➞ Poster presentations ⇒ Books ⇒ Media/Websites APA Publication manual ⇒ Established standards for reporting research in psychology Good reporting of research is: ⇒ Clear and concise ⇒ Organized ⇒ Logical ○ Building Knowledge and Theories Evidence builds up – what to do with it? ⇒ Form a theory! Benefits of Forming a Theory: ⇒ Provide a unifying framework to organize existing knowledge ⇒ Understand and make predictions about a phenomenon ⇒ Generate interest in research on the topic Characteristics of a Good Theory: ⇒ Testable and Specific ⇒ Clear and Internally Consistent ⇒ Based on Empirical Support ⇒ Parsimonious ⇒ Advances Science ○ Proof and Disproof The accumulation of evidence for a theory does not prove that the theory is true ⇒ Scientific knowledge is tentative Logical Problem: Affirming the consequent ⇒ If X, then Y does not mean if Y, then X ⇒ E.g., If theory X predicts that Y will occur does not mean if Y occurs then X is true Evidence can disprove a theory ⇒ Especially if the theory makes strong predictions Chapter 3: Conducting Ethical Research ○ The Importance of Research Ethics Ethics: represent a system of moral principles and standards. ⇒ Ethics are particularly important for psychologists (and other social scientists) ➞ Psychologists study sentient beings: humans and animals Importance of Research Ethics: ⇒ Progress in psychology depends on willing participants ⇒ Public and Government: view research as acceptable and valuable ⇒ Ethics can impact methodology ⇒ Samples must be obtained ethically ⇒ Deception in research ○ Why regulate research?
Historical Examples: Nazi experiments violate human rights ⇒ Led to establishing the Nuremburg Code post-war ➞ Principles of the Nuremburg Code: ↳ Consent is always voluntary; can withdraw at anytime ↳ Prior to giving consent, people must be informed about the purpose and potential risks to their personal welfare ↳ All unnecessary risks to participants should be avoided ↳ The study should yield results whose benefit to society should outweigh any potential risks to participants ↳ Only qualified scientists should conduct the research Tuskegee Syphilis Study ⇒ 1932 – 1972 (USA) ⇒ How does syphilis spread through the body? ⇒ Recruited 600 financially poor black men ⇒ 2/3 had advanced syphilis – were not informed or treated ⇒ Led (in part) to the Development of the Belmont Report ➞ An ethics code that provides the foundation for U.S. federal regulations governing research on humans. ➞ Principles: 1) Respect for persons; 2) Beneficence; 3) Justice ○ Modern Codes of Ethics for Psychologists American Psychological Association (APA) Ethics code Canadian Psychological Association (CPA) Ethics code Tri-council Policy Statement (TCPS-2) ○ TCPS-2: Ethical Conduct for Research Involving Humans Tri-Council Research Agencies ⇒ Canadian Institutes of Health Research (CIHR) ⇒ Natural Sciences and Engineering Research Council of Canada (NSERC) ⇒ Social Sciences and Humanities Research Council (SSHRC) For an institution to receive grant funding, all researchers at the institution must adhere to the ethical guidelines Mandate: “To promote research that is conducted according to the highest ethical standards.” Principle: Respect for human dignity– “Research must be conducted in a way that is sensitive to the inherent worth of all human beings and the respect and consideration that they are due.” ○ TCPS-2 Core Principles Respect for Persons ⇒ Respect Autonomy ⇒ Protect those with developing, impaired, or diminished autonomy ⇒ Autonomy: ➞ Ability to deliberate and act based on that deliberation ➞ Must seek free and informed consent for participation ⇒ Concerns for Autonomy ➞ Limited Information ➞ Coercion ➞ Youth, cognitive impairment, mental health issues, general health issues Concern for Welfare ⇒ Quality of a person’s experience of life ⇒ Includes: physical, mental, and spiritual health ⇒ Researchers and Research Ethics Boards should: ➞ Aim to protect the welfare of participants ➞ In some cases promote welfare in view of foreseeable risks ⇒ Welfare of Groups: ➞ Groups can benefit ➞ Groups can be harmed: stigmatization; discrimination ➞ Risks versus benefits for society and groups are difficult issues that require careful attention and potentially the input of the group Justice ⇒ Fair and equitable treatment ⇒ Treat all people with equal respect and concern ⇒ Equitable distribution of benefits and burdens of research ⇒ Equity does not mean identical treatment ⇒ Threat to Justice: Imbalance of power between researchers and participants ○ Adhering to Ethics Codes Ambiguities and Dilemmas ⇒ The researcher may have difficulty in deciding how a principle or standard applies to a particular case ⇒ Disagreement between researchers and Research Ethics Board ⇒ Disagreement between Research Ethics Boards ⇒ Adhering to one principle or standard might conflict with another ⇒ Different aspects or applications of the same principle may conflict ⇒ Different parties might have competing interests ○ Research Ethics Boards (REB) An independent institutional committee that evaluates whether proposed research projects with human 
participants comply with the TCPS-2 principles and guidelines. Committee includes people from a variety of backgrounds A Research Ethics Board may: ⇒ Approve the proposed study ⇒ Require revisions and a resubmission ⇒ Disapprove the study Other approvals may be needed to conduct research ○ Ethical Standards in Human Subjects Research Consider the “Risk to Benefit Ratio” ⇒ Risks must be eliminated or minimized ⇒ Benefits must outweigh the risks Minimal Risk: ⇒ Probability and magnitude of possible harms implied by participation in the research are no greater than those encountered by participants in those aspects of their everyday life that relate to the research (TCPS-2, p. 22) Above Minimal Risk: ⇒ Potential benefits of the research must be greater than the probability and magnitude of risk to participants Types of Risks and Harms ⇒ Physical: Potential for physical injury, health consequences, or discomfort ⇒ Psychological: experience negative emotions ⇒ Threats to self-esteem, self-doubt, frustration/anger, anxiety ⇒ Social: Breach of confidentiality & stigma ⇒ Legal: disclosure of illegal activities ⇒ Economic: Job costs; opportunity costs ○ Informed Consent The principle that people have the right to make a voluntary, informed decision about whether to participate in a study REB Requirements: ⇒ Information ➞ Purpose/Nature of Research ➞ Anticipated Risks, Discomforts, Adverse Effects ➞ Anticipated Benefits ➞ Confidentiality & Limits ➞ Incentives & Compensation ➞ Voluntary Participation ➞ Researchers’ Contact Information ⇒ Consent for Participation ⇒ Consent for Use of Information/Data ○ Waiving Consent Consent may be waived or altered only when: The research involves no more than minimal risk to subjects, The waiver or alteration is unlikely to adversely affect the rights and welfare of the subjects, The research could not practicably be carried out without the waiver or alteration, Whenever possible and appropriate, the subjects will be provided with additional pertinent information after participation, The waived or altered consent does not involve a therapeutic intervention. ○ Deception Passive Deception ⇒ Researchers intentionally withhold information from potential participants that might influence their decision to provide informed consent Active Deception ⇒ Researchers intentionally mislead participants about some aspect of a study Deception may be permissible: ⇒ If the study is likely to yield significant potential benefits ⇒ There is no feasible non-deceptive approach to obtain the same benefits ⇒ Participants are unlikely to experience harm ○ Debriefing A conversation or communication with the participant that conveys additional information about the study. Goals of Debriefing ⇒ Provide complete information about the study, including about deception that was used ⇒ Give participants a chance to ask questions ⇒ To learn about participants’ perceptions of the study & correct any misperceptions about it ⇒ Minimize adverse effects ⇒ Maximize likelihood that they will feel positively about participation ⇒ To ask for cooperation in not discussing the study with others who might participate ○ Animal Research In Psychology ⇒ Thorndike: Learning/Operant Conditioning ⇒ Pavlov: Learning/Classical Conditioning ⇒ Köhler: Chimpanzee Insight ⇒ Tolman: Spatial Learning/Cognitive Mapping in Rats ⇒ Skinner & Behaviourists: Operant Conditioning ⇒ Contemporary Research: Further investigation of learning & the role of the brain (neuroanatomy & neurochemistry) Why study animals?
⇒ To learn about other species ⇒ To learn about human behavior ➞ Animals mature faster ➞ Experimental control not possible in humans ➞ Investigate questions that would not be permitted in humans Ethics of Animal Research ⇒ Ethical Perspectives ➞ Inherent Rights Perspective: ↳ All sentient beings have inherent value and moral standing ↳ Animals cannot give informed consent and therefore cannot be subjected to research that requires it. ➞ Utilitarian Perspective: ↳ Animals are worthy of moral consideration, but this is not equivalent to human beings ↳ Moral standing increases with the capacity to experience pleasure and pain, and with increased self-awareness ↳ Research is justified according to animal capacities – higher capacities require more justification ➞ Pro-use Perspectives ↳ Humans should treat animals as humanely as possible ↳ Animals do not have the same moral standing as humans ↪ Humans have the authority to decide the status of animals ↳ Humans have the strongest moral obligation to other humans, thus research on animals is justified by the benefits that the human species can gain from animal research ○ Scientific Integrity Ethical researchers must be knowledgeable about research ethics codes and willing to implement them in practice. Researchers must avoid false or deceptive statements Research results must be reported honestly ⇒ Falsified data/results could: ➞ Direct research/funding ➞ Confuse the public ➞ Directly harm people Questionable Research Practices ⇒ Gray area practices that can influence research results Plagiarism ⇒ Give other people credit where credit is due ⇒ Careful paraphrasing ⇒ Avoid “self-plagiarism” Chapter 4: Defining and Measuring Variables ○ Why do we need to talk about variables? Basics of measurement Specify the characteristics of our abstract or hypothetical constructs Specify what we are measuring Daily life: clear communication ○ Variables Any factor or attribute that can assume two or more values (i.e., a factor that varies) An event or behavior that has at least two outcomes An aspect of a testing condition that can be changed or changes as the result of a manipulation Something that varies Examples: ⇒ Number of times a person looks at their phone while driving; different types of looking at the phone (e.g., read a text, send a text, take a selfie) ⇒ Intelligence scores and grades across university students ○ Qualitative & Quantitative Variables Qualitative Variables (i.e., categorical variables): ⇒ Represent properties that differ in “type” ⇒ E.g., biological sex; country of birth; job type; religion Quantitative Variables: ⇒ Represent properties that differ in “amount” ⇒ E.g., Height; intelligence; working memory capacity; quantity of junk food eaten per day ➞ Both qualitative and quantitative variables can generate numerical data and be statistically analyzed ↳ Qualitative: counts ↳ Quantitative: information is numerical ○ Discrete & Continuous Variables Discrete Variables: ⇒ Between any two adjacent values no intermediate values are possible ⇒ Whole number units or categories ⇒ E.g., How many children are in your family? 
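To make the qualitative/quantitative distinction above concrete, here is a minimal Python sketch; the participants, variable names, and values are invented purely for illustration. A qualitative (categorical) variable yields numerical data as category counts, while quantitative variables support direct numerical summaries such as the mean.

```python
from collections import Counter
from statistics import mean

# Hypothetical data for five participants (illustrative values only)
country_of_birth = ["Canada", "India", "Canada", "Nigeria", "Canada"]   # qualitative (categorical)
num_children     = [0, 2, 1, 3, 2]                                      # quantitative, discrete
height_cm        = [171.5, 162.0, 180.25, 158.75, 169.0]                # quantitative, continuous

# Qualitative variables generate numerical data in the form of counts per category
print(Counter(country_of_birth))   # e.g. Counter({'Canada': 3, 'India': 1, 'Nigeria': 1})

# Quantitative variables can be summarized directly with arithmetic
print(mean(num_children))          # 1.6
print(mean(height_cm))             # 168.3
```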
Continuous Variables ⇒ In principle, between any two adjacent scale values intermediate values are possible ⇒ Exist on a continuum; fractional amounts ⇒ In practice, continuous variables often are converted to discrete variables ➞ E.g., psychological scales ○ Independent & Dependent Variable Independent Variable ⇒ The presumed cause in a cause-effect relationship ⇒ The factor that is manipulated (systematically varied) in experiments to assess its influence on the behavior or outcome of interest Dependent Variable ⇒ The presumed effect in a cause-effect relationship ⇒ In an experiment, this is the behavior or outcome that is measured to test the effect of the independent variable Extraneous Variable ⇒ A factor that is not the focus of interest in a study, but that could influence the outcome of the study if left uncontrolled. ○ Hypothetical Constructs Underlying characteristics or processes that are not directly observed but instead are inferred from measurable behaviors or outcomes. Examples: ⇒ Intelligence ⇒ Motivation ⇒ Compassion ⇒ Happiness ⇒ Love ⇒ Depression How do we empirically measure “hypothetical” constructs? ○ Mediator and Moderator Variables Mediator Variable: ⇒ A variable that provides a causal link in the sequence between an independent variable and a dependent variable ⇒ Provides an explanation; why one thing influences another ⇒ E.g., Why does cell phone use while driving reduce performance? ⇒ Visual and manual distractions ⇒ Attention (cognitive distraction) Moderator Variable: ⇒ A factor that alters the strength or direction of the relation between an independent and dependent variable ⇒ E.g., When does cell phone use affect driving performance ➞ In heavy traffic conditions, cell phone use greatly impairs driving performance ➞ In light traffic conditions, cell phone use slightly impairs driving performance ○ Defining Variables in Research Conceptual Definitions ⇒ How do we specify our phenomenon or construct of interest? 
⇒ E.g., happiness; driving performance ⇒ Describes what something means– concept Operational Definition: ⇒ Defining a variable in terms of the procedures used to measure or manipulate it ⇒ Convert an abstract, hypothetical, or non-observable construct into things that can be measured ⇒ Example: Driving Performance ⇒ Lane departures & following distance ⇒ Crashes ⇒ Response time ⇒ Operational Definition in Everyday Life: Terms of service/contracts, job performance, understanding each other, politics ○ Scales of Measurement Measurement ⇒ The process of systematically assigning values (numbers, labels, or other symbols) to represent attributes of organisms, objects, or events ⇒ Scales in everyday life: ➞ Movies ➞ Product/Services ⇒ We assign values to variables with four common scales of measurement: nominal, ordinal, interval, and ratio scales ➞ NOIR Nominal Scales: ⇒ The scale values represent only qualitative differences (i.e., differences in type rather than amount) of the attribute of interest ⇒ Objects or individuals are assigned to categories that have no numerical properties ➞ Labels ⇒ Examples: biological sex, religion, political affiliation, job areas, country of birth ⇒ Important for categorization ➞ Numerical information in the form of counts ➞ E.g., number of driving errors by type ⇒ Weakest level of measurement Ordinal Scales ⇒ The different scale values represent relative differences in the amount of some attribute ⇒ Objects, individuals, or categories are rank-ordered along a continuum ➞ E.g., Size: small, medium, large ⇒ Provides some information about the differences between attributes or categories ⇒ Does not provide information about the degree or magnitude of difference between categories ⇒ For example: Categorize the following animals into small, medium, and large categories: ➞ Whales ➞ Humans ➞ Cat Interval Scales ⇒ Equal distances between values on the scales reflect equal differences in the amount of the attribute being measured ⇒ The units of measurement on the scale are equal in size ⇒ Example: temperature (in Fahrenheit or Celsius) ⇒ Provide more information than nominal or ordinal scales ⇒ No absolute zero point on interval scales ⇒ 0 represents a value Ratio Scales ⇒ Equal distances between values on the scale reflect equal differences in the amount of the attribute being measured and the scale has a true zero point ⇒ Provide the most information of the four scales of measurement ⇒ Examples:Height, length, weight, exam scores, time ⇒ Ratios are meaningful ⇒ 0 means absence or “nothingness” ○ Measurement Accuracy, Reliability, and Validity Accuracy of a measure: ⇒ Represents the degree to which the measure yields results that agree with a known standard ⇒ Example: Weight ⇒ Systematic error (a.k.a. 
bias): a consistent degree of error that occurs with each measurement ➞ E.g., bathroom scale adds 10 lbs to weight for every measurement Reliability ⇒ The consistency of a measure under conditions where consistency would be expected ⇒ E.g., weighing yourself 5 times in a row, without eating/drinking/exercising ⇒ Measures can be reliable, but inaccurate ⇒ Random measurement error: random fluctuations that occur during measurement and cause the obtained scores to deviate from a true score ➞ E.g., True weight = 150 ➞ Scale readouts: 150.4; 148.9; 150.2; 149.6; 150.6 Test-Retest Reliability: ⇒ Determined by administering the same measure to the same participants on two or more occasions, under equivalent test conditions ⇒ Psychological measures are tested from one time to the next, especially for stable characteristics or attributes. ⇒ Scores from time 1 and time 2 should be correlated ⇒ E.g., Intelligence ⇒ If conditions are not equivalent, we introduce error ➞ Random error ➞ Systematic error: carry-over effects Split-Half Reliability: ⇒ The items that compose a test are divided into two subsets, and the correlation between subsets is determined ⇒ Test reliability with one administration of test ⇒ E.g., 40-item test for assessing verbal intelligence ➞ Split into two subsections & test correlation between them ⇒ Internal-Consistency Reliability: tests the interrelatedness of different types of items on the test Interobserver (Interrater) Reliability: ⇒ Represents the degree to which independent observers show agreement in their observations ⇒ Ratings may be qualitative (categorization) or quantitative (scaled) ⇒ Examples: ➞ Raters observe children playing and rate behaviors as passive, assertive, or aggressive ➞ Judges’ ratings of gymnast or figure skating performance Validity: ⇒ Can we truthfully infer that a measure actually measures what it is claimed to do? ⇒ The extent to which a measurement tool measures what it is supposed to measure ⇒ E.g., Does our measure of shyness actually measure shyness, or does it measure other psychological constructs? Face Validity: ⇒ The degree to which items on a measure appear to be reasonable ⇒ How well a measurement tool appears to measure what it is supposed to measure ⇒ E.g., A test of reading comprehension includes passages of text ⇒ Not a scientific form of validity ⇒ Practically, face validity can be important Content Validity ⇒ Represents the degree to which the items on a measure adequately represent the entire range or set of items that could have been appropriately included ⇒ Does the content of the test adequately measure all relevant content ⇒ E.g., Does our measure of leadership abilities measure all relevant aspects of leadership? Criterion Validity ⇒ Addresses the relation between scores on a measure and an outcome ⇒ Two Subtypes: ➞ Concurrent Validity ↳ The relation between scores on a measure and an outcome are assessed at the same time (i.e., concurrently) ↪ E.g., Do scores on a new measure of “leadership potential” correlate with performance of current managers? ➞ Predictive Validity ↳ A measure taken at one time predicts a criterion that occurs in the future. ↪ E.g., Does our new measure of “leadership potential” predict success in being a manager? 
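As a rough illustration of the test-retest and split-half reliability ideas above, here is a minimal sketch that treats each reliability estimate as a correlation; the scores are invented for illustration and numpy is assumed to be available.

```python
import numpy as np

# Hypothetical scores on a trait measure for 8 participants, tested on two occasions
time1 = np.array([12, 18, 25, 31, 22, 15, 28, 20])
time2 = np.array([14, 17, 27, 30, 21, 16, 26, 22])

# Test-retest reliability: correlation between Time 1 and Time 2 scores
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r_test_retest:.2f}")

# Split-half reliability: correlate scores on two halves of a single administration.
# item_scores[i, j] = participant i's response to item j (hypothetical 6-item scale)
item_scores = np.array([
    [3, 4, 3, 5, 4, 3],
    [1, 2, 2, 1, 2, 1],
    [5, 5, 4, 5, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [4, 4, 5, 4, 3, 4],
])
odd_half  = item_scores[:, 0::2].sum(axis=1)   # items 1, 3, 5
even_half = item_scores[:, 1::2].sum(axis=1)   # items 2, 4, 6
r_split_half = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half reliability: r = {r_split_half:.2f}")
```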
Construct Validity ⇒ Demonstrated when a measure truly assesses the construct that it is claimed to assess ➞ Broad, theoretical type of validity Two general considerations: ⇒ Convergent Validity: ➞ Scores on a measure should correlate highly (i.e., converge) with scores on other measures of the same construct ⇒ Discriminant Validity ➞ Scores on a measure should not correlate too strongly with scores on measures of other constructs Chapter 5: Correlation and Correlational Research (Midterm 2) ○ Correlation: Basic Concepts Correlation ⇒ A statistical association between variables ⇒ Scores are associated in a non-random fashion ⇒ Examples ➞ Taller people tend to weigh more than shorter people ➞ Positive psychological well-being is associated with better cardiovascular health ➞ Ice cream sales are associated with higher rates of drowning Correlational Research ⇒ Different from experimental research, which analyzes whether the manipulation of X changed Y ➞ Experiments control experimental conditions to reduce the effects of confounding variables ⇒ Correlational research analyzes whether there is an association between X and Y ➞ Reduces the influence of confounding variables through statistical control Key Concept: VARIABLES ARE MEASURED NOT MANIPULATED ⇒ Possible sources of association ➞ X and Y are characteristics of the same people ➞ X and Y are characteristics of different, but related, sets of people ➞ X is a personal characteristic and Y is an environmental characteristic Positive and Negative Correlation ⇒ Positive Correlation: higher scores/levels of one variable are associated with higher scores/levels of another variable ➞ As X increases Y increases, as X decreases Y decreases ⇒ Negative Correlation: higher scores/levels of one variable are associated with lower scores/levels of another variable ➞ As X increases Y decreases, as X decreases Y increases Measuring and Graphing Correlations ⇒ Pearson’s r ➞ Measures the direction and strength of the linear relationship between two variables that have been measured on an interval or ratio scale ➞ Values range from -1.00 to +1.00 ↳ + = positive correlation ↳ - = negative correlation ↳ -.51 is a stronger relationship than +.29 (strength depends on the absolute value; the closer to 0, the weaker the relationship) ↳ A value of +1.00 or -1.00 indicates a perfect correlation ⇒ Spearman’s rho ➞ Measures the relation between two quantitative variables when one or both variables have been measured on an ordinal scale (when the scores represent ranks) ⇒ Understanding correlations ➞ format/coding of scales matters for the direction of the correlation ➞ Recoding is sometimes necessary to keep things logically consistent ➞ The way a researcher conceptualizes a variable affects whether the correlation is positive or negative ⇒ Scatter Plot ➞ A graph in which data points portray the intersection of X and Y variables ⇒ Interpreting the strength of a correlation ➞ Can be ‘small’ ‘medium’ or ‘large’ – distinction is arbitrary ➞ Cohen’s (1988) guidelines ↳ Small: r = 0.10 to 0.29 ↳ Medium: r = 0.30 to 0.49 ↳ Large: r = 0.50 to 1.00 ➞ Useful guidelines: 0.1 difference between categories; interpret relative to the investigation ⇒ What does an r value mean? ➞ Absolute value of a correlation reflects its strength (RE: Cohen’s guidelines) ➞ Does not represent the percentage that two variables are related ↳ I.e., r = .85 does not mean the variables are 85% related ➞ r2 is the percentage of variance accounted for by the relationship between the variables.
↳ r2 is sometimes called the coefficient of determination ○ Correlation and Causation Correlation does not establish causation Causal Inferences require: ⇒ Covariation of X and Y = As X changes, Y changes ⇒ Temporal Order: Changes in X occur before changes in Y ⇒ Absence of plausible alternative explanations – X must be the only (or best) explanation for changes in Y The bi-directionality problem ⇒ Ambiguity about whether X has caused Y, or Y has caused X ⇒ E.g., Does playing recreational sports lead to higher levels of well-being, or does having higher levels of well-being lead to playing more sports? The Third-Variable Problem: ⇒ A third variable, “Z”, may be the true cause of why X and Y appear to be related (i.e., why they vary together) ➞ If we fail to measure Z, we only see X and Y varying together Spurious Correlation: an association that is not genuine ⇒ Ice cream sales & drowning Statistical Approaches ⇒ Partial Correlation: a correlation between X and Y is computed while statistically controlling for their individual correlations with a third variable, Z. ➞ Measure correlation between X & Z ➞ Measure correlation between Y & Z Research Design Approaches ⇒ Cross-sectional research design: ➞ Each person participates on one occasion, and all variables are measured at one time ➞ Bi-directionality problem is relevant ⇒ Longitudinal research design: ➞ Data are gathered on the same individuals or groups on two or more occasions over time. ➞ Prospective design: X is measured at an earlier time than Y ⇒ Cross-Lagged Panel Design ➞ Measure X and Y at Time 1 ➞ Measure X and Y at Time 2 ➞ Examine Correlations ↳ E.g., Preferences for violent TV and violent behaviour Combining Statistical and Design Approaches ⇒ Eron et al., 1972: Used partial correlation to examine the relationship between Stronger Preference for Violent TV in 1960 (X1) and Greater Aggression in 1970 (Y2) while controlling for: ➞ Relationship between Preference for Violent TV in 1960 (X1) & Greater Aggression in 1960 (Y1) ➞ Greater Aggression in 1960 (Y1) and Greater Aggression in 1970 (Y2) Now can we draw clear causal conclusions? ⇒ No list of third variables is complete (can’t statistically control for everything) Correlation in the Media ⇒ Can be well-reported and poorly reported ⇒ Even when well-reported, people might not read past the sensationalized headline ○ Correlation and Prediction Regression Analysis ⇒ Explores the quantitative, linear relation between two variables ⇒ Used to predict the scores of one variable based on the scores of another variable ➞ E.g., Time spent studying Final Course Grades Criterion Variable (a.k.a. outcome variable; dependent variable): ⇒ The variable that we are trying to estimate or predict Predictor Variable: ⇒ A variable whose scores are used to estimate the scores of a criterion variable Regression: Scatter Plots ⇒ Accuracy of the prediction depends on the strength of the correlation ⇒ Conditions between samples must be similar Multiple Regression ⇒ Explores the linear relation between one variable (the outcome or criterion variable) and a set of two or more other variables. ⇒ New predictors must improve regression model’s prediction ➞ Strength of correlation ➞ Minimal overlap between predictor variables ○ Benefits of Correlational Research and Special Issues Prediction in daily life ⇒ We do not need to know about causal relationships or mechanisms to make useful predictions Test validation ⇒ Does a measure/test predict the outcomes it is supposed to? 
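Tying together the Pearson’s r, r², and regression-prediction material above (using the notes’ study-time example), here is a minimal numpy sketch; the data are invented for illustration.

```python
import numpy as np

# Hypothetical data: hours spent studying (predictor) and final course grade (criterion)
hours = np.array([2, 5, 8, 10, 12, 15, 18, 20])
grade = np.array([55, 62, 68, 70, 75, 78, 85, 88])

# Pearson's r: direction and strength of the linear relationship
r = np.corrcoef(hours, grade)[0, 1]
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")   # r^2 = proportion of variance accounted for

# Simple linear regression: predict the criterion variable from the predictor variable
slope, intercept = np.polyfit(hours, grade, deg=1)
predicted_grade = slope * 14 + intercept   # predicted grade for 14 hours of studying
print(f"Predicted grade for 14 hours of studying: {predicted_grade:.1f}")
```

Multiple regression extends the same idea by fitting the criterion variable on two or more predictors at once, with each new predictor retained only if it improves the prediction.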
Hypothesis and model testing ⇒ Test the predictions of theories and models; test specific hypotheses Venturing where experiments cannot tread ⇒ Experimentation may be impractical or unethical for many research questions Convergence with experiments ⇒ External validity ○ Special Issues in Correlation Non-linear relations ⇒ Pearson’s r may underestimate or fail to detect a correlation if the relationship is non-linear Range restriction: the range of scores obtained for a variable has been artificially limited in some way ⇒ E.g., Personnel Selection Test & Job Performance Chapter 6: Case Studies and Observational Research ○ Case Studies Case Studies: Basic Characteristics ⇒ Case Study ➞ An in-depth analysis of an individual, social unit, event or other phenomenon ➞ Researchers conduct a comprehensive examination of a single case ➞ Examples: ↳ H.M. & N.A. ↳ Hospital Staff ↳ Riders Playoff Game Why Conduct Case Studies? ⇒ Flexibility ➞ Use many different techniques to collect data ➞ Address limitations during the study ⇒ In-depth Exploration of the phenomenon of interest ⇒ Case studies are narratives of the case ⇒ Can provide insight into the causes of behaviour and lead to hypotheses ⇒ Provide supporting or disconfirming evidence ⇒ Provide support for the external validity of other findings Types of Case Studies ⇒ Qualitative Case Study: ➞ Examine an individual case in depth, within its real-life context (Creswell et al., 2007; Crowe et al., 2011) ⇒ Quantitative Case Study: ➞ Researchers rely primarily on numerical assessments and analysis to describe and understand a case ⇒ Mixed-Methods Case Study: ➞ Researchers rely substantially on both qualitative and quantitative data and analyses to explore a case ⇒ Single-Case Study Design: ➞ Researchers analyze one case in depth ➞ Allows for a specific focus on one case ⇒ Multiple-Case Study Design: ➞ Researchers examine two or more cases and perform an in-depth analysis of each case ➞ Two or more cases are investigated under the same umbrella theme ➞ Disadvantage: less resources may be devoted to any particular case ➞ Advantages: If done sequentially, allows researchers to learn from one case before conducting the next. Cross-case comparison Gathering Qualitative Data ⇒ Semi-structured Interview ➞ A researcher identifies in advance a set of topics or themes to be discussed with the interviewee, but the way and sequence in which questions are asked remain flexible. ➞ Open-ended questions ⇒ Focus Group ➞ A moderator leads a group of people through an interview and discussion of a set of topics ⇒ Other techniques ➞ Naturalistic/Participant observation ➞ Interview related people ➞ Documents Gathering Quantitative Data ⇒ Psychological Assessment Tools ➞ Personality tests; intelligence tests ➞ Scales relevant to the research topic ⇒ Neuropsychological Tests ➞ Tasks: N.A. performance on mirror reading ↳ H.M. on mirror drawing ⇒ Physiological Measures ➞ Brain imaging Limitations of Case Studies ⇒ Difficulty Drawing Causal Conclusions ➞ A case is explored in-depth; things are not held constant ➞ E.g., N.A.’s irritability with things being out of place ⇒ Generalizability ➞ Depends on the nature of the research topic ➞ Atypical cases may not generalize ⇒ Observer Bias: occurs when researchers have expectations or other predispositions that distort their observations ○ Observational Research Observational Research: Basic Characteristics ⇒ Encompasses different types of nonexperimental studies in which behaviour is systematically watched and recorded. 
➞ E.g., Observe the frequency of men vs. women running red lights ⇒ Can be qualitative, quantitative, or mixed-methods in nature ⇒ Behaviour is observed and systematically measured in real time (or is recorded). Why Conduct Observational Research? ⇒ Describe behaviour ➞ Experimentation is not always an option ⇒ Examine relationships among naturally occurring variables ➞ Exploratory ➞ Hypothesis and theory testing ↳ E.g., are men more aggressive drivers? ⇒ Establish generalizability of principles previously discovered in experiments Types of Observational Research ⇒ Key ideas – Observational research varies: ➞ In the naturalness of the setting ➞ In whether participants/subjects are aware of the observation ➞ In the degree to which the observer intervenes in the situation ⇒ Naturalistic Observation ➞ Researchers passively observe behaviour in a natural setting ➞ Two Types: ↳ Undisguised: participants are aware they are being observed ↳ Disguised: participants are unaware they are being observed ➞ Advantages ↳ Ecological validity ↳ Presumed External Validity ➞ Disadvantages ↳ Lack of control: Cannot explore the causes of behaviour ↳ Complex behaviour ↳ Cannot observe all behaviours ↳ Reactivity ➞ Ethics ↳ Voluntary consent cannot be obtained for disguised naturalistic observation ↳ Naturalistic observation without consent may be permitted if: ↪ The study is not expected to cause participants harm or distress ↪ Confidential information is protected ↪ If responses/behaviours were to become known, participants would not be exposed to social, economic, or legal risks ⇒ Participant Observation ➞ The observer becomes part of the group or social setting being studied. ➞ Disguised Participant Observation: ↳ Researcher becomes part of the group and withholds the fact that research is being conducted ↪ E.g., UFO Cult membership ↳ Ethical Issues ↪ No informed consent ↪ Researcher’s presence may influence behaviour ➞ Undisguised Participant Observation: ↳ The group is made aware of the researcher’s presence ↳ Avoids ethical issue of deception ↳ Disadvantage: Researcher’s presence could influence group ↪ E.g., Aggressive behaviour in children at recess ➞ Ethnography: a qualitative research approach that often combines participant observation with interviews to gain an integrative description of social groups ➞ Advantages of Participant Observation ↳ Study behaviour from the viewpoint of the insider ↳ Use multiple forms of inquiry (unless using disguised observation) ➞ Disadvantages of Participant Observation ↳ Interaction with group may influence behaviour ↳ Ethical problems with undisguised observation ↳ Researchers may cross boundaries with group members ⇒ Structured Observation: ➞ Is where a researcher fully or partly configures the setting in which behaviour will be observed ➞ Expose people to certain conditions or particular tasks ➞ Advantages: ↳ Efficiency: can prompt behaviour ↳ Control: same conditions for everyone ➞ Disadvantage: ↳ Artificial conditions do not equate to the real world Recording Observations ⇒ Narrative Records: provide an ongoing description of the behaviour being observed for later analysis ➞ Written or recorded account of the researcher’s observations ➞ Video or audio of the participants’ behaviour ⇒ Field Notes: are used to record important impressions or instances of behaviour ➞ Less comprehensive than narrative records ⇒ Behavioural Coding Systems: are used to classify participants’ responses or behaviours into mutually exclusive categories ➞ Clearly defined categories ➞ 
Carefully trained observers ⇒ Rating Scales and Ranking Scales: Used to evaluate participants’ behaviour or other characteristics ⇒ Diaries: participants record their behaviours or experiences for defined periods of time or whenever certain events take place ⇒ Observer Training and Reliability ➞ Interobserver (interrater) reliability: represents the degree to which independent observers show agreement in their observations. ➞ Cohen’s Kappa: represents the percentage of times that two observers agree, beyond the degree of agreement expected by chance ➞ Development of coding systems and training are difficult and time consuming ↳ How to define categories? – no overlap ↳ How many categories? Sampling Behavior ⇒ Focal Sampling ➞ Select a particular member (or unit) who will be observed at any given time ➞ Then another member is observed, until all have been ⇒ Scan Sampling ➞ At preselected times the observer rapidly scans each member of a group so that the entire group is observed within a relatively short period ⇒ Situation Sampling ➞ Used to establish diverse settings in which behaviour is observed ⇒ Time sampling ➞ Used to select a representative set of time periods during which observations will occur. Limitations of Observational Research ⇒ Problems with Drawing Causal Conclusions ➞ Complex causes of behaviour ➞ Alternative explanations ⇒ Observer Bias ➞ Expectations or predispositions affect observations ➞ Minimize bias: ↳ Well-developed coding system/definitions & rigorous training ↳ Blind observation ⇒ Reactivity ➞ Behaviour is changed as a result of being observed/measured ○ Unobtrusive Measures Unobtrusive measure: assesses behaviour without making people aware that the behaviour is being measured ⇒ E.g., Disguised observation techniques ⇒ Physical trace measures: unobtrusively examine the traces of behaviour that people create or leave behind ➞ E.g., garbage on the ground after a music festival gives an indication of environmental values Archival Records ⇒ Previously existing documents or other data that were produced independently of the current research. ⇒ Include any type of previously created material: ➞ Government documents: statistics, policies, etc. ➞ Photographs/Videos ➞ Corporate Materials: advertisements, emails, bylaws, etc. ➞ Literature ➞ Personal letters ➞ Media Materials: articles, previous research ⇒ Useful to track changes over time ⇒ Limitations of Archival Research: ➞ Ethics – how were the data collected? ➞ Limited availability of archives and data ➞ Interpretation ➞ Did the original research cause reactivity? Chapter 7: Survey Research ○ Basic Characteristics of Survey Survey: uses questionnaires and interviews to gather information about people ⇒ Can be basic or applied ⇒ Can be descriptive or used to test hypotheses ⇒ Example: ➞ Ask people to report how often they text and drive so we can describe the problem ➞ Test a hypothesis: people with busier work lives will text and drive more often. 
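Before moving on to populations and samples, here is a minimal sketch of Cohen’s kappa from the interobserver-reliability notes in Chapter 6 above, computed by hand for two observers coding the same behaviours; the codes are invented for illustration.

```python
from collections import Counter

# Hypothetical behaviour codes assigned by two independent observers to 10 children
observer_a = ["passive", "assertive", "aggressive", "passive", "assertive",
              "passive", "aggressive", "assertive", "passive", "assertive"]
observer_b = ["passive", "assertive", "aggressive", "assertive", "assertive",
              "passive", "aggressive", "assertive", "passive", "passive"]

n = len(observer_a)

# Observed agreement: proportion of cases where both observers assigned the same code
p_observed = sum(a == b for a, b in zip(observer_a, observer_b)) / n

# Chance agreement: expected overlap given each observer's marginal category proportions
counts_a, counts_b = Counter(observer_a), Counter(observer_b)
p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)

# Kappa: agreement beyond the level expected by chance
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Observed = {p_observed:.2f}, chance = {p_chance:.2f}, kappa = {kappa:.2f}")
```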
Chapter 7: Survey Research
○ Basic Characteristics of Surveys
Survey: uses questionnaires and interviews to gather information about people
⇒ Can be basic or applied
⇒ Can be descriptive or used to test hypotheses
⇒ Example:
➞ Ask people to report how often they text and drive so we can describe the problem
➞ Test a hypothesis: people with busier work lives will text and drive more often
Populations and Samples
⇒ Population: refers to all the cases or observations of interest to us
⇒ Sample: a subset of cases or observations from the population
⇒ Sampling Frame: a list – of names, phone numbers, addresses, or other units – from which a sample will be selected
➞ I.e., the “operational definition” of the population
⇒ Representative Sample: reflects the important characteristics of the population
⇒ Nonrepresentative (biased) sample: does not reflect the important characteristics of the population
⇒ Response Rate: the percentage of cases who participate in a survey out of all those who were selected to participate
➞ Response rate (%) = (# responded / # asked) × 100
Why Conduct Surveys?
⇒ Efficiency and Scope!
⇒ To describe the characteristics of a population
⇒ To describe and compare the characteristics of different populations or different demographic groups within a population
⇒ To describe population time trends
⇒ To describe relations among psychological variables, self-reported behaviours, and other characteristics
⇒ To test hypotheses, theories, and models
Limitations of Surveys
⇒ Cannot draw inferences about causal relationships
⇒ Problematic Samples
➞ Poor sampling methods & nonrepresentative samples
➞ Low response rate
⇒ Validity of measurement
➞ Poor measures of behaviour & biases
➞ Social desirability bias: a tendency to respond in a way that a person feels is socially appropriate rather than as the person truly feels
○ Selecting a Sample
Probability Sampling
⇒ Each member of the population has a chance of being selected into the sample, and the probability of being selected can be specified
Non-probability Sampling
⇒ Each member of the population either does not have a chance of being selected into the sample, the probability of being selected cannot be determined, or both
Selecting a Sample: Probability Sampling
⇒ Simple Random Sampling:
➞ Every member of the sampling frame has an equal probability of being chosen at random to participate in the survey
⇒ Stratified Random Sampling:
➞ A sampling frame is divided into groups (called strata) and then within each group random sampling is used to select the members of the sample
⇒ Cluster Sampling:
➞ Units that contain members of the population are identified and then randomly sampled
⇒ Multistage Sampling:
➞ The use of two or more stages to select progressively smaller samples
Selecting a Sample: Non-probability Sampling
⇒ Convenience Sampling:
➞ Members of a population are selected nonrandomly for inclusion in a sample on the basis of convenience
⇒ Quota Sampling:
➞ A sample is nonrandomly selected to match the proportion of one or more key characteristics of the population
⇒ Self-Selected Samples:
➞ When participants place themselves in a sample, rather than being selected for inclusion by a researcher
⇒ Purposive Sampling:
➞ Researchers select a sample according to a specific goal or purpose of the study, rather than at random
Margin of Sampling Error and Confidence Level
⇒ Sampling Variability:
➞ Chance fluctuations in the characteristics of samples that occur when randomly selecting samples from a population
⇒ Variability in the sample relative to the population is sampling error
➞ Any sample is an imperfect representation of the population
⇒ Margin of Sampling Error:
➞ A range of values within which the true population value is presumed to reside (a worked example appears after the Wording the Questions notes below)
➞ E.g., Is Justin Trudeau doing a good or poor job as Prime Minister?
↳ 35% of our sample said “good”
↳ +/-3% margin of error; 95% confidence level
↳ Estimate: 95% confident that 32%-38% of Canadians think Justin Trudeau is doing a good job
○ Constructing the Questionnaire
Steps in Developing a Questionnaire
⇒ Reflect on the research goals and convert them into a list of more specific topics that will be investigated
⇒ Identify variables of interest within each topic
⇒ Consider the practical limitations of the survey
⇒ Develop questions, decide on their order, and get feedback
⇒ Pretest the questionnaire
⇒ Revise
Types of Questions
⇒ Open-Ended Questions: ask people to respond in their own terms
➞ Maximize freedom to respond
➞ Difficult and time-consuming to analyze
⇒ Closed-Ended Questions: provide specific response options
➞ Responses are usable (already coded)
➞ Relevant to the research question
➞ Limited responses do not necessarily capture “true answers”
⇒ Multiple Choice: several choices are presented
➞ E.g., Why did you take this class:
↳ It is required for my degree, It seems interesting, I like to torture myself
⇒ Ranking Scales: present a list of items and ask people to rank them along some dimension (e.g., importance)
⇒ Forced-Choice Questions: choose between two options
➞ E.g., The Liberal Government has done a good job managing the economy – Yes or No?
⇒ Rating Scales
➞ Provide participants with a graded scale on which they respond to questions
➞ How many points on a scale?
↳ 3 – 100+
↳ Most are 3-10 points (usually at least 5 points)
➞ Even or odd number of points: force the choice?
➞ Likert Scales
↳ Measure attitudes by combining scores on several items, each of which records how positively or negatively a person feels about a statement
↳ Reverse score items when necessary (a scoring sketch appears after the Wording the Questions notes below)
↳ Scale must be balanced and dimension valid
Wording the Questions
⇒ Basics
➞ Use straightforward wording
➞ Simple questions are typically better than complex ones
➞ Consider the difficulty of the questions in conjunction with the overall length of the questionnaire
⇒ Common Issues
➞ Leading Questions: items are presented in an unbalanced way that can overtly or subtly suggest that one viewpoint or response is preferable to another
➞ Loaded Questions: items that contain emotionally charged words that suggest one viewpoint or response is preferable to another, or they contain assumptions with which the option to disagree is not provided
➞ Double-barreled Questions: items that ask about two issues within one question, forcing respondents to combine potentially different opinions into one judgment
➞ Double Negatives: items whose phrasing contains two negative words
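The margin-of-sampling-error example above (35% saying “good”, +/-3%, 95% confidence) can be reproduced with the usual formula for a proportion from a simple random sample, margin ≈ z × sqrt(p(1 − p)/n). The sketch below is a rough illustration: the sample size of 1,000 is an assumed value (not given in the notes), chosen because it yields roughly a +/-3% margin.

```python
# Minimal sketch: margin of sampling error for a proportion from a simple random sample.

import math

p = 0.35   # sample proportion answering "good" (from the example in the notes)
n = 1000   # hypothetical sample size (assumed, not given in the notes)
z = 1.96   # z-value associated with a 95% confidence level

# Margin of error for a proportion
margin = z * math.sqrt(p * (1 - p) / n)

lower, upper = p - margin, p + margin
print(f"Margin of error: +/-{margin:.1%}")
print(f"95% confidence interval: {lower:.1%} to {upper:.1%}")
```

Running this prints a margin of about +/-3.0% and an interval of roughly 32% to 38%, matching the estimate in the notes; a larger sample would shrink the margin, a smaller one would widen it.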
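The Likert scale notes above mention reverse scoring items when necessary. The sketch below shows one common way to do this on a hypothetical 5-point, four-item scale (all item names and responses are invented): a reverse-scored item is flipped with (scale minimum + scale maximum) − response before the items are summed.

```python
# Minimal sketch: scoring a short Likert scale with one reverse-scored item.

SCALE_MIN, SCALE_MAX = 1, 5          # hypothetical 5-point scale: 1 = strongly disagree ... 5 = strongly agree
REVERSE_SCORED = {"item_3"}          # hypothetical item worded in the opposite direction

responses = {"item_1": 4, "item_2": 5, "item_3": 2, "item_4": 4}

def score_item(item, value):
    # Flip reverse-scored items so that higher always means a more positive attitude
    if item in REVERSE_SCORED:
        return (SCALE_MIN + SCALE_MAX) - value
    return value

scored = {item: score_item(item, value) for item, value in responses.items()}
total = sum(scored.values())

print(scored)                        # {'item_1': 4, 'item_2': 5, 'item_3': 4, 'item_4': 4}
print("Likert scale score:", total)  # 17
```

Here the reverse-worded item_3 has a raw response of 2, which becomes 4 after flipping, so the summed score consistently treats higher values as a more positive attitude.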
Order of Questions
⇒ The order of questions can influence people’s responses
➞ Fatigue & Boredom
➞ Context Effects
⇒ Context Effects: responses to a survey item are influenced by the particular items that occur directly or soon before it
⇒ The questionnaire should:
➞ Be coherent & visually pleasing
➞ Have continuity
↳ Put similar items together
↳ Within similar sets of questions, put open-ended first
↳ Put more general items before more specific ones
↳ Logically organize the items and sections
↳ Put demographic questions last & start with an interesting question
○ Administering the Survey
Face-to-Face Interviews
⇒ Participants fill out a questionnaire or take part in interviews in person
⇒ Advantages:
➞ Achieve a high response rate
➞ Establish rapport: sensitive information and motivation
➞ Provide clarification when necessary
⇒ Disadvantages:
➞ Cost & Logistics
➞ Interviewer Effects: presence of interviewer may distort participants’ true responses
➞ Interviewer bias
➞ Interviewer characteristics
Telephone Interviews
⇒ Participants are contacted and take part in the interview or survey by telephone
⇒ Advantages:
➞ “Human touch”
➞ Can provide clarification if necessary
➞ More cost-effective & logistically simpler than face-to-face
➞ More oversight of interviewers
⇒ Disadvantages:
➞ Cannot establish the same rapport as in face-to-face methods
➞ Mentally taxing for participants
➞ Interviewer effects
Mail Surveys
⇒ Survey is mailed to selected participants, who return it to the researchers
⇒ Advantages:
➞ Lower cost than other methods
➞ Participants can view items for as long as they want
⇒ Disadvantages:
➞ Low response rate (and potentially unrepresentative sample)
➞ People may start to fill out the survey but not finish
➞ No clarification of items
Online Surveys
⇒ Surveys are delivered on websites or by email
⇒ Advantages:
➞ Low cost & simple logistics
➞ Simplified data collection
➞ Quick sampling and data collection
⇒ Disadvantages:
➞ Convenience samples
➞ Limited attention
Response Rate & Nonresponse Bias
⇒ Response Rate: the percentage of cases who participate in a survey out of all those who were selected to participate
⇒ Nonresponse Bias: when people who were selected but did not participate in a survey would have provided significantly different answers (or other data) from those provided by participants
➞ Introduces error
➞ We shouldn’t conclude that a sample is not representative simply because the response rate is low
○ Being a Smart Survey Consumer
Survey research is widely used and important!
⇒ Government
⇒ Academics
⇒ Politics
⇒ Business
Basic Questions
⇒ Who conducted the survey, and for what purpose?
⇒ Was probability sampling used?
⇒ What method was used to administer the survey?
⇒ Are the survey questions appropriate?
⇒ Are the results interpreted properly?
Being Aware of Bogus Surveys
⇒ Sugging: selling under the guise of research
⇒ Frugging: soliciting donations or fundraising under the guise of research
⇒ Push Polls: an attempt to influence people’s opinions under the guise of conducting a poll
⇒ E.g., Who do you trust more to protect America from foreign and domestic threats?
➞ President Trump
➞ A Corrupt Democrat
Chapter 8: Single-Factor Experimental Designs
○ The Logic of Experimentation
Experimental Control:
⇒ Manipulate one or more independent variables
⇒ Choose dependent variables to be measured and how they will be measured in order to assess the effects of the independent variable(s)
⇒ Regulate other aspects of the research environment, including the manner in which participants are exposed to the various conditions in the experiment
The Goal: Rule out alternative explanations to determine causation
Causal Inference and Experimental Control
⇒ Criteria for Causal Inference
➞ Covariation of X and Y. As X varies, Y varies
➞ Temporal Order. The variation in X occurs before the variation in Y
➞ Absence of plausible alternative explanations
⇒ Extraneous variables (factors other than the independent variable that could affect the dependent variable) must be controlled so they cannot serve as alternative explanations
