
PSYC 306 Lecture Notes - Fall 2024 PDF


Summary

These lecture notes cover the first nine lectures of PSYC 306 (Fall 2024), moving from understanding data presentation and analysis and the basics of the scientific method through literature searching, defining and measuring variables, research strategies and validity, sampling, and research ethics. The early notes detail common data presentation techniques, including line graphs, bar charts, scatterplots, and pie charts, and when each type is used; a short plotting sketch follows.
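As a concrete companion to the graph types summarized above, here is a minimal plotting sketch. It assumes Python with numpy and matplotlib installed; all variable names and data are invented for illustration and are not from the course materials.

```python
# Minimal illustration of the graph types discussed in Lecture 1 (invented data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Line graph: two continuous variables, with time (or similar) on the x-axis.
weeks = np.arange(1, 11)
scores = 50 + 3 * weeks + rng.normal(0, 2, size=weeks.size)
axes[0, 0].plot(weeks, scores)
axes[0, 0].set(title="Line graph", xlabel="Week", ylabel="Mean score")

# Scatterplot: both variables continuous, one point per subject, no connecting lines.
hours = rng.uniform(0, 10, size=40)
exam = 55 + 4 * hours + rng.normal(0, 8, size=40)
axes[0, 1].scatter(hours, exam)
axes[0, 1].set(title="Scatterplot", xlabel="Study hours", ylabel="Exam score")

# Bar chart: categorical x, bar height = number or proportion in each category.
groups = ["Control", "Drug A", "Drug B"]
means = [12, 18, 15]
axes[1, 0].bar(groups, means)
axes[1, 0].set(title="Bar chart", ylabel="Mean errors")

# Pie chart: slices are the categories, slice sizes are percentages summing to 100,
# conventionally ordered largest to smallest, clockwise.
labels = ["Agree", "Neutral", "Disagree"]
percent = [55, 30, 15]
axes[1, 1].pie(percent, labels=labels, startangle=90, counterclock=False)
axes[1, 1].set(title="Pie chart")

fig.tight_layout()
plt.show()
```

Each panel keeps a single scale and a labelled axis, which is the same readability point the notes make about clear labels and stating what the y-values actually show.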

Full Transcript


Ryan Stainsby Fall 2024 PSYC 306 LECTURE NOTES Lecture 1 - Introduction - August 29 General Course Information - All office hours will be on zoom - Textbook is REQUIRED - Check syllabus for reading plan - Final exam is non cumulative - Exams during class will have more rooms to spread out Understanding Data Introduction - Being a savvy data consumer - Graphics - aspect of presentation and analysis of a study’s design - Helps reveal patterns not seen without a figure - Graph interpretation - Visual decoding of quantitative and qualitative info on graphs - Common data presentation - X-values are independent variable (what was manipulated) - Y-values are the dependent variable (outcome) - Primary types of graphs - Line graph - x is on the horizontal axis - Bar graph - similar - Pie chart - y variable is the percentages - When each type of graph is used - Continuous data - Quantitative - In an ordered sequence - Can have unlimited possible values - Categorical data - Qualitative - Not necessarily (or usually) numerical - Can only have certain possible values - Ordinal data - Qualitative (ish) - Definite ordering of categorical data - Treated as continuous data (usually) - Continuous Variables: Line Graphs - Line graphs are used to graph two continuous variables against each other - Can compare changes over time of many groups - Usually have time period or similar on x-axis - Continuous variables: scatterplot - Points plotted without connected lines - X and y must both be continuous - Each subject is represented by one point - Can also compare multiple groups - Categorical variables: bar charts - Data displayed as a series of bars whose heights indicate number or proportion of values in each category - Can be horizontal or vertical - Categorical variables: pie charts - Each “slice” of the pie is the x-variable - The size of the slices are the y-variables - Should be in proportions that add to 100% - Are also organized in ordered by size - Largest to smallest, clockwise - Multiple categorical variables: stacked bar charts - Can put each bar of different variables on top of each other - Multiple variables works specifically when measuring the exact same dependent variable for all - Other categorical graphs - If the x-axis is time, the data can be presented categorically - Ie: can use a bar chart or line graph for time - What makes a graph hard to interpret - If there are no meaningful relationships between items - Order of percentages don’t make sense - Color does not help - Proportions are not equal - Clear labels - Explain what the y-values actually show - What is the time period shown - Considerations when reading graphs - What are the types of values - What does the caption say - Are there multiple conditions in the graph - How are values shown - does it manipulate data Lecture 2 - The Scientific Method - September 3 Why Study Research Methods? 
- Conduct Research and Evaluate Work of Others - Results require full understanding of methods, as core of psychology - Evaluate False Claims - Science differs from pseudoscience in many ways - Testable and refutable hypotheses vs negative results accepted - Objective, unbiased evaluation vs focusing on examples of success and ignoring samples of failure - Challenging and adapting theories vs unchanging theories (lasting) - Grounded in past findings vs ignoring past history - Learn to Think like a Scientist - Scientific method keeps track of both hits and misses - Cases where results support or don’t support the theory - In order to prove a relationship, need a high accuracy for both true negatives and true positives - Things need to be standardized across studies - Eg: electrodes in EEGs have standard placements Sources of Knowledge - Non-scientific approaches - Method of tenacity - habit or superstition - Method of intuition - hunch or feeling - Method of authority - from an expert - Rational method - reasoning/logical conclusion - Method of empiricism - direct sensory observation The Scientific Method - Key Elements - Minimize errors - Use several methods to answer questions systematically - Systematic examination - Follows clear methodology - Techniques - To acquire to knowledge and correct previous, standard - Knowledge is evolving - Anything can be falsified - What it can do - Allows us to make a informed guess - Can never say something is proven - In support or not in support - Evidence tends to be probabilistic - Strength and vulnerability is that there is never a definitive answer - Assumptions - World is orderly and governed by natural laws - Eg: similar outcomes for different groups tested (with controls) - There are associations between events - There exist cause and effect relationships - Discover laws of nature through logical thinking - Methods help to arrive at understanding of cause and effects relationship - Steps - It is a cycle - Observation → topic → hypothesis → test/experiment → analyze data → adjust hypothesis → observation - Start by observing behavior and develop a theory - Form a hypothesis (tentative answer/explanation) - Use hypothesis to generate a testable prediction - Make systematic, planned observation, to evaluate - Use observations to support/refute/adjust the hypothesis - Most hypotheses usually have some correct elements - Step 1: Observation - Observe a behavior that leads to a question - Then develop a theory - Inductive reasoning - generalization based on few observations - Observations leads to theories - Arrive at explanation about cause and effect relationship b/w variables - Theory is a statement(s) that: - Organizes observations/ideas - Explains them - Predicts new events - Differing evidence can provide different results - Eg: what causes the greatest death rate? - Observations need to be shareable and public! - Next need to examine the literature that is available - Next lecture! - Conditions for a good theory - Parsimony - Theories gain power when explains many results with few concepts - Precision - Different investigators can agree about its predictions - Testability - Must make predictions that can be test empirically - Falsifiability - needs to be able to disproven - Step 2: Formulating a Hypothesis - What is it? - Hypothesis - proposed relationship between variables - Tentative answer or explanation to question - Needs to be tested and critically evaluated - Should have a directional explanation! 
- Variable - any characteristic that can change/take different values - Hypotheses contain two types of variables - Independent variable (IV) - variable we think is the cause - Doesn’t depend on other variables - Manipulated in experiments - Dependent variable (DV) - variable we measure, the outcome or effect - Results from manipulation - Generating hypothesis - In nonexperimental research, we can’t imply causality! - This is because there are potential alternative explanations - Eg: third variable problem - Step 3: Generating a Testable Prediction (Statement) - Prediction - Statement about expected relationship between variables in specific situation - Should use the rational/logical method - Deductive reasoning - making a specific conclusion from general premises - Begin with a general hypothesis - Make a specific prediction about a specific situation - Requirements of a good hypothesis/prediction - Logical - Founded in established theories/comes from previous research - Testable - Possible to observe and measure all variables involved - Refutable/falsifiable - Possible to get results contrary to hypothesis/prediction - Step 4: Make Systematic, Planned Observations - Collect and analyze data - Provide fair and unbiased empirical test of hypothesis - This is what we commonly think of as the “research” stage - Evaluate prediction using direct observation - Many ways to collect data - Will be reviewed this term - Step 5: Evaluating the Research Hypothesis - Based on statistics applied to observations, hypothesis will be supported, refuted, or adjusted - Not the focus of this course - PSYC 305! The Research Process - The Scientific Method Cycle - Repeating cycle between theories, hypotheses, and data - Continuous - Based on testing and correcting - Based on how data fits predictions - Communicating findings should come after numerous tests and cycles - Important Principles of scientific method - Empirical - Observations are: - Systematic - Performed under specified conditions - Enable us to accurately answer question - Public - Methods used must be available to others - Transparency - anyone can see all involved steps - Replication - another lab can reproduce results from steps - Objective - Observations / conclusions must be free of bias and personal opinions - Basic vs Applied Research - Basic research - to understand a type of behavior - Most research in this course will be basic! 
- Applied research - addresses a particular problem APA Research Reports - APA Scientific Article Format - Format used for psychology - Medicine format has different rules (eg: neuroscience) - Abstract - Brief summary of article (150 words or less) - Introduction - Goal of paper - Past theories and findings - Hypothesis to be tested - Methods - Study design - Participants - Materials, design, and procedure - Results - Data analysis - Statistics to test hypothesis - Discussion - Interpretation of results relative to past research - Potential future research and areas of exploration - References, figures, and tables - Other sections (differ by paper vastly) - APA Writing Style - General elements - Impersonal (objective) style (avoid we/I) - Terse (concise) dry style - Use past verb tenses - Citations follow APA format - For literature review… - Be conservative - select only those studies that are relevant and contribute to arguments - General rule: paraphrase a point using own words, rather than quoting - Abstract - About 100-200 words - One sentence statement of the problem or research question - Brief description of participants - Brief description of methods and procedures - Report of the results - Statement about the conclusions or implications - Introduction - General introduction to topic of the paper - Review only relevant literature - Need to use the right terminology - Statement of problem or hypothesis with relevant variables defined - Description of research strategy used to evaluate hypothesis or obtain an answer to research question - Methods - Must have enough information to be able to replicate the study in another lab - Participants - Who they are, any relevant biases, …etc - Design - IV, what conditions participants were in, order of conditions/controls, then dependent variables - Procedure - Describe what was done - Include materials (may be in another section) - Results - Immediately after method - Description of data and statistical analyses - Provides complete and unbiased reporting of findings - No discussion! 
- Discussion - Restates hypothesis - Summarizes findings - Discuss interpretations and implications - References - List complete references from all sources - Organize alphabetically by first authors’ last name - Need to cite studies from which ideas came from - 1-to-1 match between references in text and in reference list - If important enough to be in list, should be in the paper - Should appear in both - Tables and Figures - Should contain more information than is listed in the text - Each should be mentioned in text by number - Text should highlight important aspects of figure - Caption should explain figures clearly - Peer Review Process - Most journals send submitted articles to 2-3 reviewers - Editors will assign to specific articles - Reviews must be arms-length away - Objective, no conflict of interest with authors - Should have reasonable amount of knowledge in the subject - Review process takes 3-6 months before Editor sends a decision - Reviewers are unpaid (no incentive) - Most common decision is to revise and resubmit - Or accept or reject paper - Same reviewers are likely to read another submission - 2nd review usually takes 3-4 months - Total time from paper submission to acceptance is about 1 year - Up to 2 years if resubmission to a new journal - Higher impact journals (eg: Science, Nature) - Authors encouraged to use them - Lower acceptance rates (usually only very impactful work) - Scientists are always under pressure to publish - Grant funds, awards, and tenure influenced by publications - Issues returned to in ethics later Lecture 3 - Library Workshop (Literature Searches) - September 5 Steps to Follow (main topics below) Research Topic/Question - From observation (in one of the ways discussed) - Want to find published research that discuss issues Choose Key Concepts & Brainstorm Terms - Identify core concepts from the overall research question - Brainstorm related terms and synonyms to key terms - Goal is to obtain all articles that use different terminology - Some external research for these terms is helpful! Select Database - Mcgill has many databases (depending on topic) - PsycINFO - main psych database - Medline - main medical one, contains many psych papers - Scopus or Web of Science Core Collection - multidisciplinary databases - Helps find papers that might be published under other topics Conduct Search - Combine with boolean operators - AND - When need search to cover all topics connected by AND - Decreases results - OR - Need search to cover any listed terms - Increases results - Truncation - At the end of a word - Replaces arbitrary character or characters - Usually use *, ?, or ! 
- Eg: psycholog* = any term that starts with this - Can also put in the middle of word for alternate spellings - Manage Article - Zotero and EndNote (workshops and videos available) - Used for citation management - McGill pays for EndNote so might be more familiar with that - Library guide - http://libraryguides.mcgill.ca/psychology - PsycINFO - Search word term - Use scope to help learn about terms - If searching for [term].mp - will look for that word anywhere in articles - Click on the term of interest to see general overview of papers - Then click continue to keep searching terms - There will be a menu at the top - can select multiple terms and choose how to combine them - Lower down is a box called limits - Click on additional limits box - Allows for filtering by peer reviewed journals, age groups..etc - Then can export all search results - Into EndNote or Zotero…etc - Can also email the list of articles to self - To read articles - Click “find full text at mcgill” - Look for “view pdf” - Then can read/download it - EndNote/Zotero also allows for finding pdfs - Scopus (or Web of Science) - From mcgill library homepage - Click on databases - Then search for scopus - Search within: article title, abstract, and keywords - To search for multiple words, put in quotes for exact terms - Need to type every way to spell every option - Can add a search field - Use each field for ORs - Use new fields for each AND - Use truncations! (eg: child* = child, children, childhood…etc) - Some limits will help narrow it down more - Can see authors and locations that are publishing the most - Can also filter for subject area - Can also look at citations - See articles that are highly cited - Then look at articles that cite - Doesn’t filter for peer review publication - Use Ulrich’s Global Serials Directory - Find via McGill Library databases - Search for the journal it was published in - Look for referee jersey icon beside the journal (indicates peer review status) - Can also find other information Screen - Use methods described for PsycINFO for screening - Use Ulrich’s to determine if journals are peer reviewed - APA citation courses/workshops are available - See slides for other resources Synthesize the Results - Scan for key information - Look at title and read abstract - Assess impact of an article - Look at number of citations - Article level metrics (citation number…etc) - Journal level metrics (quality and importance of journal work) Publish Work in Open Access Journal - Why open access? - Makes free to general public - Increased visibility, usage, and impact of research - Retention of some or all of copyright - Fulfills grant requirements for many government funds - General societal good - How to make open access - Archive in open access repository - Mcgill shares and makes available - Publish in an open access journal (directory available) - Many charge processing charges! 
(grants often cover them) - Able to limit directory to journals without fees - “Creative commons license - what’s right for me?” - search - Finding legitimate open access publishers - Indicators to look for (to ensure legit): - Articles have Digital Object Identifier - Registered in Ulrich’s - Included in McGill library - Clearly indicates rights - Indicators to watch out for - Website difficult to locate/identify - Publisher information is absent on journal website - Instructions to authors information not available - Peer review and copyright info is unclear on website - Also watch out to sign away thesis copyright to a solicitor - If suspect legitimacy of a publisher, there are ways to report Peer Review Process - - Know Your Terms - Pre-print - Almost always reserve copyright of this version - Author’s original, submitted version - Usually what can be released to open access - Author’s accepted manuscript (AAM) - Accepted version - Post-print - Publisher’s final version - Publisher’s PDF - Version of Record (VoR) - Make sure to archive all versions and all author agreements Lecture 4 - Defining & Measuring Variables - September 10 Why Measure? - Comparison - present stimuli, compare items - Classification - looks for diagnoses or categorization - Diagnosis - using clinically validated scales to determine disorders…etc (use DSM) - Decision-making - using data to make a determined decision Types of Variables - Variables - Characteristics or conditions that change or take different values - Many types in research - Well defined, easily observed and measured (height, age…etc) - Intangible, abstract attributes (motivation, love…etc) - Independent variable - what is manipulated - Dependent variable - what is observed (influenced by IV) - Confounding variable - uncontrolled variable, varies systematically with IV and has potential to influence DV - If yes to both possibilities, it is confounding - ask questions about these! - Extraneous variable - uncontrolled variable that could influence DV (but not related to IV) OR could influence the IV but not related to DV - Not the same as confounding - only has one of the two! How to Measure? 
- Each variable must have at least 2 levels in order for relationships to be established - Qualitative data - any information that can be captured and isn’t numerical - Quantitative data - information represented by numerical values - The variable is the characteristic of interest and quantity we wish to measure - If we want to turn variable that are usually qualitative into one that is quantitative we can use a Likert scale - Need to specify what the endpoints mean for context - Score - particular value that is measured - Data - series of scores, observations, measurements or facts used for analysis, to reason, or to make decisions - Unit (of measurement) - standard scale or quantity in which values are measured - Repeatable by other scientists (across studies) - Direct measurements - Straightforward, classic, such as height or heart rate - Inferred states - More complicated if not directly observable, need to infer from observable behavior - Never 1-to-1, ie: never able to determine entire variable from behavior Constructs & Operational Definitions - Inferred states = constructs - Construct - unobservable internal mechanism that accounts for externally observed behavior - Influenced by changes in environment, and influence behavior - Abstract internal constructs can be reflected in concrete behaviors - Need to consider the environmental impact and behavioral outcomes - Operational definitions - Goal - convert abstract entity (construct) into a concrete variable that can be directly observed and measured - Procedure - First identify a behavior associated with construct - Then specify a measurement procedure - Finally use procedure as a definition and measurement of construct - Precise description of what you will measure, how to measure, and when to measure - Testing protocol, find in Design subsection of Methods - Defines operations that allow us to link unobservable construct with observable behaviors - Good operational definitions - Clear and precise - Make replication possible - They often have norms in psychology - look for them! - Limits on operational definitions - Not the same as the construct - only external manifestations (we think) - May not capture all internal components - May not consider any extraneous factors How Good Are Our Measures? - Need to meet two important criteria - Validity - accuracy of measure - Reliability - consistency/repeatability of the measure Validity - Main question: does the measurement capture the variable it is intended to? 
- Conceptual definition - Qualities and attributes about abstract concept/construct in mind - Operational definition - Defining construct by concrete methods use to measure it - If conceptual and operational definitions are close, there is high validity - Importance - Conclusions are formed based on what researchers think they are measuring - If not what they are actually measuring - conclusion will be wrong - Types of validity - Face, predictive, concurrent, construct, convergent & divergent - Internal validity - Extent to which you can conclude that changes in IV have caused observed changes in DV - How closely researcher’s experimental design follows cause and effect explanation - Depends on appropriate control of other variables - External validity - Extent to which results can generalize to other settings and population - If effect remains even with different groups, has good EV Reliability - Consistency of a measure over repeated applications under same conditions - High reliability assumes variable is stable - Measurement usually fluctuates (ie: error) so this error should be as small as possible - Concept of reliability - Inconsistency of a measurement comes from error - Measurement = true score + error - High reliability = greater consistency = less randomness/error - Low reliability = less consistency = more randomness - Test-Retest Reliability - Take the same measurement/score of behavior at two different times and calculate correlation between scores - Reliability coefficient - Answers whether survey generates same response over time - Reflects the consistency of measure - scores should be highly correlated - Inter-Rater Reliability - Used for measurements that involve human judgments - Compare the scores from two or more raters and calculate correlation - Split-Half Reliability - Typically used for clinical scales and questionnaires - Internal consistency - Take scores from half of items (sometimes in a set) and correlate them with scores from other half - Halves should correlate highly if measuring same variable - Validity and Reliability - Partially related (and partly independent) - Measurement can procedure can be reliable, valid, both, or neither - Though unreliable, valid measures are often considered impossible Types of Measurements - Recognizing - Statistics - numerical values that rely on data - Facts - statements that can be proven or disproven with data - Arguments - reasons or explanations given for data - Opinions - personal judgments or thoughts not based on proof or certainty from data - Facts go beyond arguments based on truth value Lecture 5 - Measurement Issues Cont - September 12 Sources of Measurement Error - Measurement Error - Observed scores may not be a true reflection of the variable or construct being measured - There is always some error present - Observed score = true score + error - Types of measurement error - Random error - Changes randomly on each trial, unpredictable - Beyond experimenter’s control - Can work to distribute these across conditions - Systematic error - Changes nonrandomly - Biased in a particular direction - Experimenters can work to reduce - Sources of Systematic Error - Four common sources - The participant - The equipment or apparatus - The testing environment - Experimenter bias and scoring guidelines - The Participant - Reactivity - Occurs when participants modify natural behavior in response to fact that they are being measured - Response set - Readiness to answer in a particular way - Reactivity - subject roles - Good - 
wants to produce answers that support hypothesis - Negativistic - answers contrary to hypothesis - Apprehensive - believes they will be judged, social desirability - Faithful - follows instructions, answers truthfully - Response set/bias - Acquiescence - tendency to say yes - Disagreeing - tendency to say no - High and low scores can reflect this bias, but we can’t tell - To address - Phrase items so they are more acceptable - Include measures of these and then control for them later - The Equipment / Apparatus - Quality - Sensitivity - detect small enough changes? - Clarity of instruction - Appropriateness, length, vocabulary, intrusiveness…etc - Range effect - Ceiling effect - measurement is not sensitive enough to detect difference at high end of the scale - Too many scores all at high end - Little possibility to increase scores - Floor effect - measurement not sensitive enough to low end of the scale - Inverse of ceiling effect - Can fix both by adjusting scale - Testing Environment - Comfort - Presence of others - social facilitation - Distractions and interruptions - Experimenter Bias - Experimenter bias - measurement influenced by experimenter’s expectations or beliefs of outcomes - Can be intentional or unintentional - Includes how instructions given, body cues…etc - Example - rat study - Rats labeled as bright performed better than ones labeled dull - Students unconsciously influenced performance - Reducing experimenter bias - Standardizing or automating experiment - Single blind - Participant not aware of condition they are assigned to - Double blind - Neither participant or researcher knows condition - Third assistant conducts differentiation and doesn’t inform which group people are in until data analysis How to Reduce Them - General Strategies - To reduce error as much as possible, need to try and minimize effects of possible confounds - Confounding variable - uncontrolled variable that varies systematically with independent and can influence dependent variables - Needs to affect both - Can standardize the same 4 sources of error - Standardizing Participants - Decide in advance what inclusion and exclusion should be - Have age, gender, educational level, health status, ethnicity…etc - Standardizing Test Protocol - Test needs to be consistent - Includes instructions to and treatment of participants - As well as administration of tests/measures and their order - Standardizing the Environment - Choose best environment conducive to testing and try to repeat environment in the future - Note factors such as time of day/week/year, temp, noise level, accessibility…etc - Standardizing Scoring - Scoring should be as objective as possible - Marking criteria should be as clear and precise as possible - True for both participants and raters - Should also conduct some practice runs to become familiar with scoring procedures - Especially if using different raters Scales and Types of Measurement - Types of Measurement - Can be either - Qualitative - categorical information - Quantitative - numerical information - Measure using - Self report measurements - Physiological or neural measurements - Behavioral measurements - Quantitative Measures - Assign numerical value to a variable - Variable is anything that can have different values - Must have at least 2 to determine differences - Depending on properties of the variable, there are different scales to use - Scales of Measurement - Nominal - qualitative - Ordinal - quantitative - Interval - quantitative - Ratio - quantitative - Consider 4 
factors - Names - items given names - Rank order - there is an ordering of the items - Equal intervals - space between two scores is constant - Meaningful zero - zero means absence of variable - Nominal Scale - Classification into discrete categories - Attributes are given names - Tells us individuals are different, but gives no comparison between them - No relation in systematic way - Only has names - Ordinal Scale - Categories have different names and ordered sequentially - No assumption of equal differenced - Tells that scores are difference and direction of difference - Doesn’t indicate how different - Have names and rank order - Interval Scale - Categories are the same width - Can be organized/ordered sequentially - Compatible with basic arithmetic - can calculate difference between points - Zero point is assigned for convenience - Have names, rank order, and equal intervals - Ratio Scale - Same as interval scale, but there is an absolute zero - Here zero indicates a lack of that measurement - Can compare measurements in terms of ratios - Have names, rank order, equal intervals, and absolute zero - Summarizing Data from 4 Scales - Measures of central tendency - Identify single score that defines center of distribution - Goal is to identify value that is most representative - Mean - Arithmetic average of set of scores - Sum of values / number of values - Median - Midpoint of distribution - Half of values above and below - Mode - Most commonly occurring scores - Skewed distribution - If negatively skewed, mean < median < mode - If positively skewed, mode < median < mean - Scales and central tendencies - Nominal → use mode - Ordinal → median (or mode) - Ratio → mean (or median or mode) - When Do People Use Central Tendencies - We are drawn to repeated patterns - Things like a winning streak - Here we are using mode - recent pattern dominates - Arithmetic means most common used with interval/ratio scale data - But sometimes person will rely on what is salient - Ie we will look at mode rather than median/mean - Self Report - Ask participant to report or rate a construct - Most direct way to assess construct - But easy to distort, and can affect validity - Still benefit from self knowledge and self awareness - Physiological / Neural - Physiological manifestations of underlying construct - Indirect measurement of construct - Objective - Accurate, reliable and well defined - Not subject to interpretation - But can be expensive, and unnatural situation may influence results - Behavioral - Behaviors can represent constructs - May be of interest as actual variable - Can be natural or structured - Need to select appropriate behavior - Consider operational definition - Does it represent the construct? 
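To close out Lectures 4 and 5, here is a small numerical sketch tying together three ideas from the measurement material: observed score = true score + error, test-retest reliability as the correlation between two testing sessions, and the mean/median/mode ordering in a skewed distribution. It assumes Python with numpy; the data are simulated for illustration only.

```python
# Sketch of measurement ideas from Lectures 4-5, using simulated data.
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Observed score = true score + error; smaller random error means higher reliability.
true_score = rng.normal(100, 15, size=n)
time1 = true_score + rng.normal(0, 5, size=n)   # first testing session
time2 = true_score + rng.normal(0, 5, size=n)   # same people, second session

# Test-retest reliability: correlation between the scores from the two sessions.
r_test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability ~ {r_test_retest:.2f}")   # near 1 when error is small

# Central tendency in a positively skewed distribution: mode < median < mean,
# as the notes state for positive skew.
skewed = rng.exponential(scale=2.0, size=10_000)
mean = skewed.mean()
median = np.median(skewed)
counts, edges = np.histogram(skewed, bins=50)              # crude mode estimate:
mode = (edges[counts.argmax()] + edges[counts.argmax() + 1]) / 2   # tallest bin midpoint
print(f"mode {mode:.2f} < median {median:.2f} < mean {mean:.2f}")
```

Split-half reliability would be computed the same way as the test-retest line above, only correlating scores from one half of a scale's items with scores from the other half.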
Lecture 6 - Research Strategies & Validity - September 17 Descriptive Research - Purpose - Produce description of individual variables as they exist within specific samples - Observations without manipulations of variables - Data - List of scores obtained by measuring each individual - Can do basic calculations - averages/min/max - Designs and statistical analysis - Descriptive studies usually summarize single variables for specific group of individuals - Numerical (interval/ratio) data - Analyzed by statistical calculation of mean score - Non-numerical (nominal) data - Evaluated by a report of percentage associated with each category - Or can find the mode Correlational Research - Purpose - Produce a description of relationship between two variables - Doesn’t attempt to explain the relationship - Data - Take measures/scores for each individual in the group - Strategy - Consistent patterns are better seen in graphed scatterplot - Each individual is represented by a single point - Correlation doesn’t imply causation - Describes relationship but doesn’t explain it - Relationships between two variables - Line type - Linear association - positive or negative - Curvilinear (exponential, or simply both increase/decrease but nonlinear) - No association - Numerical scores - Usually analyzed with correlation calculation - This is only appropriate for linear relations (doesn’t work for curvilinear) - Strength + direction - Correlation coefficient r is based on the distance of each point from the line of best fit - Smaller distances means a better fit, higher correlation - Lies between -1.0 to +1.0 - Closer to 1 means a perfect relationship - Closer to 0 means no relationship - Measuring correlations - Correlation values are computed from the sum of squared distances of each point to best fitting line - Means each point can contribute a lot to correlation value - Outliers can greatly affect correlation value - Outlier are values that stand out more than other values - Outliers matter both for small and large samples (more for small) - Any correlation scatterplot can also be a bar chart - Compares mean values and variability - Lines above each bar are “standard error” bars - Measure spread of the data - Gives different information, but allows to compare whether scores are vastly different - Only makes sense if on the same scales Experimental Research - Compare two or more groups of scores - One variable differentiates groups, called IV (usually manipulated) - Measured variable is DV (usually the outcome) - Experimental - Purpose - Define a cause and effect explanation for the relationship between two variables - Design - Create 2 or more conditions by changing the level of one variable - Then measure data for participants in each condition - Quasi-Experimental - Purpose - Define cause and effect explanation but falls short - Study of IVs in settings where true experimental designs are not possible - Often used because participants are a special case - Or for ethical reasons we should not manipulate the IV - Or over such a long time that many other confounds are possible - Threat to validity: can we generalize beyond the single condition - Non-Experimental - Purpose - Produce description of relationship between two variables - Doesn’t attempt to explain relationship - Design - Measure scores for two different groups of participants - Or for one group at two different times - Compared to Correlational - Have the same goal - Both designed to demonstrate relationship exists - Both vulnerable to internal 
validity concerns - Do not explain causality - Types of data differ - Correlational research tends to sample participants broadly - Nonexperimental research usually targets specific groups Research Design and Procedure - Design decisions - Experimenters make decisions about three basic aspects of a study: - 1. Group vs individual - Classic study, case study, single subject design - 2. Same individuals vs different individuals - Within subject, between subject designs - 3. Number of variables to be included - One or two levels, factorial designs - These decisions provide general research methods framework for conducting studies - Will cover in second half of course - Research Procedures - Includes details about how study is to be conducted - Exact, step-by-step description of specific research study - Should contain enough info for another lab to replicate - Includes determination of - Exactly how variables are manipulated, regulated, and measured - Exactly how many individuals will be involved - Exactly how individuals will proceed through the course of the study - What instructions will be given - Consider all W’s: - Who collected data - Where and when was data collected - How was data collected - What processes and instructions did participants experience - Confirmation bias - Interpreting information in a way that confirms one’s pre-existing beliefs or hypotheses - Scientists can unknowingly share biases with participants - How to avoid confirmation bias - Make open and transparent atmosphere - Data and experimental designs are examined and evaluated by every lab member - Encourage and carefully consider critical views - Also discuss with the team - Question internal validity - alternative explanations - Question external validity - other situations where this isn’t true - Ensure all team members examine primary data - Don’t rely on analysis and summary from single individual - More than one way to analyze relationships - See if people reach the same conclusion in the same way - Ensure experiment actually tests hypothesis - Not just give evidence to support idea - Potential outcomes should be able to both prove and disprove working hypothesis - Set standard for - Which results provide support for hypothesis - Which results will disprove the hypothesis - Which results will fail to provide useful information - Guidelines safeguard against bias in conducting experiment and interpreting results Lecture 7 - Research Strategies and Validity - September 19 Validity - Validity of the research study - Does the study answer the question it intends to? 
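Stepping back for a moment to the correlational research material from Lecture 6, before the validity notes continue: a minimal sketch of the correlation coefficient r and of how a single outlier can move it, especially in a small sample. It assumes Python with numpy; the data are invented.

```python
# Correlation coefficient r and the effect of one outlier (Lecture 6), invented data.
import numpy as np

rng = np.random.default_rng(1)
n = 20

x = rng.uniform(0, 10, size=n)
y = 2.0 * x + rng.normal(0, 3, size=n)           # roughly linear, positive relationship

r = np.corrcoef(x, y)[0, 1]
print(f"r without outlier: {r:+.2f}")             # fairly strong positive correlation

# Add one extreme point: because the correlation is driven by each point's distance
# from the best-fitting line, a single outlier can pull r down substantially.
x_out = np.append(x, 10.0)
y_out = np.append(y, -30.0)
r_out = np.corrcoef(x_out, y_out)[0, 1]
print(f"r with one outlier: {r_out:+.2f}")
```

As the notes say, this calculation is only appropriate for linear relationships; a curvilinear pattern can produce an r near zero even when the two variables are strongly related.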
- Truth of study, accuracy of results - External validity - extent to which results can generalize to other settings and populations - Internal validity - extent to which can confirm changes in X cause changes in Y - Threats to validity - Any factor that raises questions about accuracy or interpretation of results - Impossible to eliminate all threats - Need awareness of what can control in study Threats to external validity - Any characteristic of study that limits generalizability of results - Category 1: generalizing to other participants - Extent research results can be generalized to individuals who differ from those in study - Include selection bias, volunteer bias…etc - Characteristics unique to specific group of participants will limit external validity - Selection bias - Favoring selection of some individuals - Sample has characteristics different from those of population - Convenience samples can threaten external validity - University students - Will be getting certain volunteers - WEIRD samples - Volunteer bias - There are some characteristics that tend to be true of volunteers in studies - Tables are available of how likely these are - Participant characteristics - Age, gender, socioeconomic status, education…etc - Cross species generalization - Even more difficult - Category 2: generalizing across studies - Extent results can be generalized to other procedures - Characteristics unique to specific procedures/experimenters can limit generalizability - Specifically to situations where other procedures are used - Novelty effect - New situation elicits artificial responses - Behavior is different, potentially due to excitement or anxiety - Eg: propose music can be comforting in high stress scenarios - Not generalizable as people may only be comforted by familiar songs - So would need to know familiarity in order to be consistent - Multiple treatment interference - Carry over effects - Can include fatigue and practice effects - Experimenter characteristics - Different experimenters may produce different results - Based on personality, demographics, appearance…etc - Gauge based on different results in the same study - Category 3: generalizing across features of the measurement - Extent results can be generalized to other methods of measuring in the study - Characteristics unique to specific measurement procedure may limit ability to generalize results to studies with different procedures - Sensitization - When process of measurement alters participants - Usually due to familiarity/lack thereof with testing methods - Find results with assessment measurements that differ - Use standard and the study specific - Pre-test measures - Can sensitize participants - More aware of attitude/behavior - Have increased awareness - Therefor can affect behavior in experiment - Can have another group perform task without pre-test and compare - Self-monitoring - Act of monitoring participants - Measurements affects scores, not treatment itself - Generalizability across other measures - Results may be different if measured in different ways - Time of measurement - Time of day and time in the life to apply tests might impact results Threats to Internal Validity - Any factor of the study that allows an alternate explanation of the results - Confounding variable - Variable not being measured - Changes systematically with IV and DV - When IV and DV and confounding change together systematically, we can’t conclude cause and effect - Confounding variables can provide alternate explanations - Environmental 
Variables - Any characteristic in physical environment that differs between treatment conditions - Must be no systematic differences in environment between conditions - Otherwise have 2 alternative explanations - Individual Differences - Characteristics of individuals that differ from one person to another - Assignment bias - Participants assigned to different treatment groups have noticeably different characteristics - Gives rise to alternative explanation - Time-Related Variables - When testing the same participants multiple times - Any factor that can affect participants or scores between testing sessions - Can no longer identify if time impacted change, or experiment - Changes can be environmental or participant related - Four types - History - any outside event that changes over time - Must influence at least one treatment condition and enough participants to cause difference - Maturation - changes in participants characteristics between treatments - Physical or psychological - Problem with long term studies, especially for young/old people - Instrumentation - changes in measuring instrument throughout study - Score changes might be from instruments - Problem with long term studies, sensitive equipment - Lots of equipment will only last 1-2 years - Testing effects - Practice effects - people get better with repeated testing - Fatigue effects - cognitive capacities’ decreases that result in worse performance - Carryover - when previously experienced condition affects response to current condition, can reduce reaction time, raise accuracy, or the reverse - To control, counterbalance condition orders (equally) Artifacts - Experimenter bias - Results influenced by experimenter - Threat to external - results specific to that experimenter may not generalize - Threat to internal - results caused by experimenter and not by IVs - Demand characteristics - Features of a study that tell participants purpose and influence their behavior - External threat - results may not generalize from design - Internal threat - results caused by demand characteristics, not IVs - Participant reactivity - Participants’ roles (how they behave when they are in a study) - External threats - results specific to participant features may no generalize - Internal threats - roles may explain results, not IVs Internal and External Validity - Must balance - As one increases, the other decreases Lecture 8 - Selecting Participants - September 24 Sampling From Populations - Sampling - To decide who to recruit for study, need to determine population - Population - Group sharing some common characteristics - Researcher is interested in phenomenon of entire group - Sample - Subset of population - Participate in the study - Who to generalize results to? 
- Usually want to apply to entire population - Use a sample to map back to population - Dependent on representativeness of sample - Sampling Error - Naturally occurring differences between population and sample - Researchers want to ensure samples are good representations of population - Want sample to be similar - If is similar enough, results can be generalized - Populations - Target population - Entire set of individuals sharing the characteristic of interest - Accessible population - Portion of target population that are accessible to be recruited - Sample - Individuals selected for study - much smaller than target population - Can’t include everyone for many varying reasons - Population records - Way of getting hold of every member of the population - Include: census report by the government - From residential population, from workday population - Also include: - Birth, hospitalization, education, immigration, and crime records - Even if population is known, hard to reach each member with equal probability - Even if can be reached, can be expensive/time consuming to enroll all members Sampling Biases - Representativeness - Want to be sure people in sample are a true representation of those in the population of interest - Larger samples are usually more representative - Larger and more representative the sample, more confidence we have that results can be generalized - Biased Sample - Biased sample - When participants in sample differ from population on a given characteristic - Can result from way participants were selected - Called sampling bias - If selection favors the inclusion of certain people over others then it is no longer random - Method we use to select participants affect representativeness - Larger the sample → the more accurately it will represent the population - Law of large numbers - There are practical limitations - can’t always get largest samples - Researchers compromise between large samples and requirements needed for testing a large sample - Based on mathematical probability - Discrepancy between sample and population decreases in relation to square root of N (size of sample) - Means there is minimal benefits to increasing sample size above 25-30 - Statistical power - formula that identifies minimum number of participants needed to detect expected effect in study Sampling Procedures - Probability sampling - Based on random sampling - Everyone has equal chance of being selected - Good but difficult - Selection process must be unbiased (random) - Non-probability sampling - Not randomly selected - Everyone does not have equal chance of being selected - We tend to use this most in behavioral sciences - Comparing the two - Probability sampling requires exact size of population known - Ie: can list all members - Non-probability doesn’t require this - Odds of selecting individuals is - Known / calculated in probability - Not known in non-probability - Probability Sampling Estimates - Estimating population size is very difficult - Need a best estimate of population - Sampling frame - list of cases in a population or best estimate of it - This is as good as we can get Probability Sampling Procedures - Simple Random Sampling - Entire population is represented - Each individual has equal chance - Each selection is - Random - Independent - To do it - Define population - List all members - Use random process to select individuals - Two principal methods to use: - Sampling with replacement - Individual selected is recorded then returned to population for next selection - 
Guaranteed to have each selection independent - Sampling without replacement - Removes each selected individual after selection - Small changes in probability (depends on population size) - Is fair and unbiased, but not a guarantee of being representative - Which to use? - If population is small, use sampling with replacement - If population is large, then outcomes are similar even after removal, so can use either - Important since some computer programs and predictions are based on results form various samples - Bootstrapping datasets - Sampling with replacement - Resample from the same dataset - Cross validation studies - Sampling without replacement - Split train test - Despite all this, there is still no guarantee the sample is representative - Can still get a very distorted sample - Systematic Random Sampling - Every nth participant is selected from a list containing the total population - n = population size / desired sampling size - Random starting position is chosen - This violates the principle of independence - Once determine the starting point, all other choices are set - But this ensures a high degree of representativeness - Technique is less random since no independence - Stratified Random Sampling - Selection procedure divides population into subgroups called strata - Random sample selected from each stratum - Samples from each are the same size - This guarantees each subgroup has adequate representation - Overall sample usually not representative of population - Strata aren’t equal in population, but are equal in sample - Proportionate Stratified Random Sampling - Deliberately sample to ensure proportion of subgroups/strata in the sample matches proportions in population - Otherwise the same as stratified - Requires a lot of work - However sample will be representative of population composition Non-Probability Sampling Procedures - Used when population isn’t completely known - Common in behavioral sciences - These have no evidence that they are representative of population - Convenience Sampling - Subjects selected on basis of accessibility & convenience - Drawn from part of population close by - Also called accidental sampling - No attempt to know population details - Why we use it? 
- Easy, less costly, more timely - How to limit loss of validity - Use large sample with broad cross-section of individuals - Provide detailed description about sample - This lets people draw inferences about generalizability - Quota Sampling - Identify relevant categories of people - Select sample size for each based on predetermined number of participants - These are the ‘quotas’ for number in each subgroup - Can adjust quota to reflect proportions in population or to ensure subgroups are equally represented - Reflects population, but not randomly selected - Still selected by convenience - Sample can be biased, but more representative - Snowball Sampling - Using current participant to reach other participants - Usually due to difficulty of recruitment with special populations - Sample members will know each other - Less representative - Less anonymity Lecture 9 - Research Ethics - September 26 Origins of Research Ethics - Nuremberg Trials & Nuremberg Code - 1947 - Atrocities committed by Nazi scientists investigated by tribunal - Led to development of Nuremberg Code - Emphasis placed on notion of consent - Voluntary consent (early notions) - Should yield fruitful results - Must avoid unnecessary harm - Tuskegee Syphilis Study - Ran from 1932-1972 - Funded by NHS in USA - Sought to track progression of untreated syphilis - Biomedical observational study - no direct benefit to participants - Involved 600 African-American tenant farmers (399 with syphilis) - Were lied to - told they were being treated for “bad blood” - 1932: recorded natural history of syphilis for 6 months - Ethical problems - Researchers deliberately withheld treatment - Penicillin was standard care by 1950s but withheld - No scientific benefit (no need to study if cure available) - No informed consent obtained (deliberate deception used) - Mislead to think they were getting a treatment - Legacy - Lead to commission of Belmont Report - Numerous spouses and children contracted the disease - Destroyed trust in US government and scientific research - Stanley Milgrim & Behavioral Study of Obedience - Study in 1961 - Wanted to conduct experiment to test “following orders” - Defense of Eichmann in Nuremberg - Determine how far people would follow orders against morals - Roles - Experimenter - part of study, ran experiment - Teacher - study participant - Deceived and told they were assessing a learner - Learner - another member of the study team - Pretended to be another study participant - Set-Up - Teacher and learner introduced and separated so they could communicate but not see each other - Teacher had to teach list of word pairs to learner - Teacher then tested learner with multiple choice test - If answer was incorrect, they would be shocked - Shocks increased intensity, up to 450V labeled “danger” - Teachers believed learner was being shocked - If teacher wanted to stop, experimenter would give verbal cues - In order: - Please continue or Please go on - The experiment requires that you continue - It is absolutely essential that you continue - You have no other choice, you must go on - Nuu-chah-nulth (Nootka) Blood Sample - Took place in British Columbia - Gather blood for research into disease in this nation - Samples were later used for other research without consent REB Review Process - Research Ethics Regulatory Framework: The Tri-Council Policy Statement - Regulatory policy in Canada (not a law) - System of principles that guide research ethical decision making - Created research ethics board (REB) system in Canada - 
Influenced by Nuremberg code, Belmont report..etc - To be eligible for grants, institutions must comply with policy - What is the REB? - Independent body - Composed of institution-affiliated staff and members of community - Four categories of members - Disciplinary (research focused) - Ethics - Legal (independent members) - Community members - Mandate is to review ethical acceptability of research with humans - Done by… - Protecting rights and welfare of participants - Adopting proportionate approach to ethics review - Striking balance between potential benefits and protection - Reviewing from perspective of participant - REB can approve, reject, require modifications to, or terminate any proposed or ongoing research - TCPS requires research with humans undergo review and approval - Structure ensures independent decision - Research Involving Humans - Research - Undertaking intended to extend knowledge - Through disciplined inquiry of systematic investigation - Undertaking must have research as a purpose for REB review - Human participant - Living person whose data or responses to measures are relevant to answering research question - Human biological materials - Include embryos, fetuses, fetal tissue, reproductive materials, and stem cells - Research Excluded from REB Review - Research relies exclusively on information - Publicly available through mechanism protected by law - In public domain and information has no reasonable expectation of privacy - Eg: social media - Research involving observation of people in public where - No intervention or direct interaction - Individuals or groups have no reasonable expectation of privacy - Result sharing doesn’t allow identification - Reliance on secondary use of anonymous information - So long as no way to link - Research with individuals who aren’t a focus of research - Who, What, When of REB Approval - Who - Any McGill faculty, staff, or student acting in connection with institutional role and research involves humans - What - All research conducted under McGill in any way - Including if any researcher is under McGill, even if principal researcher is external and has REB approval - Any activities conducted by external researchers - Research involving human participants must receive REB review before research activities - This is true for many kinds of research, including independent studies - McGill’s REBs - REB 1 reviews most sciences and arts - REB 2 reviews the rest of arts, education, and psychology - REB 3 reviews all research involving anyone who can’t give full consent - FMHS REB reviews all medical or biological research - McGill REB Review Process and Types - Uses electronic submission and management system - Minimum initial review takes 5-6 weeks - Depends on many factors - Complexity - Quality of application - Every question must be answered well - Attach anything participants would see - The higher the quality, the faster the turnaround time - Types of review - Full board - Everyone reviewing is the default - Delegated review - Review by a subset for research with minimal risk - Review Decisions - Approval - Modifications required - Any concerns will be sent back - Approval granted once all issues are addressed - Disapproved - Very rare, but occurs if study is fundamentally flawed Research Ethics and The TCPS2 - Definition of Minimal Risk - Where probability and magnitude of possible harms from participation is no greater than those they experience in everyday life - REB have to consider groups that might be put at higher levels of 
- Fundamental Principles of Research Ethics & TCPS2
  - Underlying value of human dignity
    - Shapes the entire policy
    - Inherent worth of all human beings and the respect and consideration they are all due
  - 3 principles derived from the core value
    - Respect for persons
    - Concern for welfare (beneficence)
    - Justice
  - Aim to balance potential benefits with protection of participants
    - Take a proportionate approach
- Respect for Persons
  - Action-oriented guidance
  - Concepts of autonomy
    - Voluntary, informed, and ongoing consent
  - Moral obligation to respect autonomy, but also to protect those with limited autonomy
    - For those who have limited ability to consent
      - Get assent from the participant (try to explain so they understand)
      - Get consent from a legal guardian
- Concern for Welfare
  - Awareness that participation in research can affect the welfare of the individual
    - Means no unnecessary exposure to risk
  - Aspects include
    - Attention to overall welfare and circumstances
    - Protect confidentiality
    - Treatment of biological materials according to consent
    - Possible effect of the research on the welfare of people related to the participant
  - Eg: can't coerce participants with too much benefit
- Justice
  - Refers to the obligation to treat everyone fairly and equitably (and impartially)
  - Fairness - equal respect, not elimination of differences
  - Equity - even-handed distribution of benefits or burdens
  - Results in equal distribution of benefits and burdens

What Happens During an Ethics Review?
- Study Design
  - Balance risks and benefits
    - Risk - function of the magnitude and probability of possible harms
    - Harms - of any kind; can affect an individual or a group
    - Magnitude - ranges from minimal to substantial
    - Probability - likelihood of experiencing those harms
  - Think about risks throughout the study protocol
    - From questioning to the actual experiment to location
  - Think about how to reduce risks as much as possible
    - Informed consent is not a means to reduce risks
      - Risk minimization doesn't consider whether people agree to it
  - Benefits must outweigh harms
- Recruitment
  - Consider who, what, where, and when recruitment will take place
  - Consider situational or intrinsic vulnerability of potential participants
  - Be attentive to all contexts and potential conflicts of interest
- Informed Consent
  - Informed
    - All information must be given to make a decision
    - Covers as many aspects of the research as possible
    - Any areas not covered must be available or given after the study (in cases of deception)
  - Comprehension
    - Information must be understandable for the audience
      - Consider age, literacy level, etc.
    - Need time to consider their consent
  - Documentation
    - Written consent is most common
    - Verbal consent is acceptable with lower-literacy populations, for very minimal risk studies, or if written consent may pose a risk to the participant
      - Always need to document it
  - Free and Voluntary
    - Free - ability to make a decision
    - Voluntary - one's capacity of will to make a decision without coercion
    - Factors that undermine voluntariness
      - Incentives that are too large
      - Conflicts of interest
      - Research involving deception
      - Capacity to understand information

Responsibilities as a McGill Researcher
- Researcher's Responsibilities
  - Primary responsibility - ensure research is carried out in an ethical manner
  - Responsible for protection of participants
  - Responsible for ensuring research receives review
  - Responsible for ensuring research is conducted in accordance with University policies
  - Respect laws governing access to personal information and privacy
  - Visit the University Secretariat's webpage for policies
    - Ethical conduct of research involving human participants
    - Policy on the conduct of research
    - Regulation on information technology resources
    - Policy on responsible use of the McGill cloud directive
  - Ethics approval happens before activities begin
  - Approvals are valid for a maximum of 1 year
    - Continuing review for ongoing projects
  - Modifications must be approved by the REB before they can be introduced
    - True for almost all possible changes to a study
  - Any unanticipated issues that have any ethical implications must be reported immediately
  - The TCPS online tutorial is mandatory before submitting an application to the REB

Pro Tips
- Submitting an Excellent REB Application
  - Take time
    - Re-read the application
  - Comprehensive, Coherent, Concise
    - Read the questions and answer them
  - Use/Modify/Adapt the McGill REB consent form template!
  - Don't confuse anonymous and anonymized
    - Anonymous - never have personal information
    - Anonymized - removal of personal information later (after data collection)
  - Ask for advice before submitting
  - Use procedural language, be detailed and explicit
    - Include how to do things as well as what is being done
    - Need to focus on information and facts
    - Do not make any assumptions
  - The golden questions
    - What are participants being asked to do
    - What information is being asked about participants
    - Who is doing what and with whom
    - When are they going to do this
    - Where is it being done
    - How will you do it

Lecture 10 - Research Ethics - October 1

Research Ethics
- Help scientists define what is legitimate to do or not
  - What a "moral" research procedure involves
- Includes concerns, dilemmas, and conflicts that arise over the proper way to conduct research
  - Applies to both participants and experimenters
- Encompasses everything
  - Measurement techniques
  - Participant selection
  - Which designs/strategies can be used with certain populations
  - How data is analyzed
  - How results are reported
- Need to go through the institution's research ethics board
- Acting ethically requires the researcher to balance the value of advancing knowledge against the value of non-interference in people's lives
  - Protecting research participants and upholding human rights (more important)
  - Vs gaining knowledge and finding a clear answer

Historical Roots
- Nuremberg Code
  - After WWII
  - Foundation of the ethical guidelines used today for all research
  - 10 important guidelines
    1. Consent
    2. Benefit of knowledge for society
    3. Knowledge of anticipated results (animal studies)
    4. No unnecessary physical/mental suffering
    5. No risk of death
    6. Risk must be lower than the importance of the problem
    7. Adequate facilities
    8. Competence of researchers
    9. Participants' withdrawal allowed
    10. Termination of the study by the researcher
- Belmont Report
  - 1979, in the USA
  - Prompted by studies such as the Milgram experiments
  - Outlines 3 core principles
    - Respect for persons
      - Individuals should consent
      - Those who cannot must be protected
    - Beneficence
      - Act of mercy, kindness, doing good for others
      - No harm, minimize risks, maximize benefits
    - Justice
      - Fairness in procedures for selecting participants
- Tri-Council Policy Statement
  - Three core principles
    - Respect for persons
      - Intrinsic value of participants as people
      - Informed and ongoing consent
      - Accountability and transparency
    - Concern for welfare
      - Ensure privacy and control of information (confidentiality)
      - No or minimal harm
      - Advise of any and all risks
    - Justice
      - Equal respect and concern for all participants
      - No segment of the population unduly burdened by harms
      - Selection has inclusion criteria that are justified

Canadian Code of Ethics for Psychologists
- Overview
  - CPA, 2017
  - Respect for the dignity of persons (MOST IMPORTANT)
    - Do no harm
    - Informed consent
    - Protection of privacy
    - Protection of vulnerable populations and individuals
  - Responsible caring
    - Competence
    - Maximize benefit
    - Minimize harm
  - Integrity in relationships
    - Accurate and honest result reporting
    - Minimize deception and debrief any that occurs
  - Responsibility to society
    - Contribute to psychology and society
- Informed Consent
  1. Participants must be informed of what will be done and why
     a. Sometimes all info cannot be shared beforehand
        - Especially the why - behavior may change with knowledge
        - In these cases, say what will be done but not why
     b. Special or vulnerable populations
        - Individuals lacking the cognitive capacity or full freedom to give truly informed consent
        - Additional protection required
          - Children, developmental disability
          - Also those susceptible to undue influence
        - In these cases, researchers must seek assent from participants and consent from guardians
  2. Participants must have complete understanding
     a. Informing is not enough; they need to understand
  3. Participation is voluntary and not coerced
     a. Right to withdraw at any time without consequences
- Observation & Consent
  - Consent is not required for observation of people in public places where
    - No intervention is staged by the researcher
    - There is no direct interaction with individuals/groups
    - There is no reasonable expectation of privacy
    - Sharing results doesn't allow identification of specific individuals
- Deception
  - When participants are not given complete and accurate information
  - Two types
    - Passive - leave out information
    - Active - alter information (false feedback) or use confederates
  - Must be justified
    - Consider all alternatives and justify their rejection to the REB for approval
  - Cannot conceal information about physical pain or severe emotional distress
  - Must immediately debrief with a complete explanation
- Confidentiality and Anonymity
  - Practice of keeping all individual information obtained during a study in a private and secure location
  - Anonymity
    - Names not associated with data
    - Code numbers rather than participant names are used (see the small sketch below)
    - Only group data reported (like averages)
      - Some exceptions - especially case studies
  - Access to data is limited to research team members listed on the protocol
  - Many ways to protect data, especially when traveling with laptops!
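To make the "code numbers" idea concrete, here is a minimal sketch (not from the lecture) of replacing participant names with codes using pandas; the column names and example data are hypothetical.

```python
# Minimal sketch: replace participant names with code numbers and keep the
# linking key separate. Column names and example data are hypothetical.
import pandas as pd

def assign_codes(df: pd.DataFrame, id_col: str = "name"):
    """Return (coded copy of df, separate linking key)."""
    codes = {n: f"P{i:03d}" for i, n in enumerate(df[id_col].unique(), start=1)}
    key = pd.DataFrame({"code": list(codes.values()), id_col: list(codes.keys())})
    coded = df.copy()
    coded[id_col] = coded[id_col].map(codes)        # names -> code numbers
    coded = coded.rename(columns={id_col: "code"})
    return coded, key

raw = pd.DataFrame({"name": ["Alice", "Bob", "Alice"], "score": [12, 15, 14]})
coded, key = assign_codes(raw)
# Store `key` securely and separately (confidentiality); only `coded` travels
# with the laptop. Deleting the key after collection is what distinguishes
# anonymized data from merely coded data.
```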
Animal Research Ethics
- Historical Roots
  - Animal Welfare Act
    - 1966
    - General standards for animal care
      - A minimal standard
    - Last amended in 2008
  - Canadian Council on Animal Care
    - 1968
    - Standards for the care, treatment, and use of animals in science in Canada
    - Prevent harm to animals
- Canadian Council on Animal Care
  - Governs all animal research in Canada
  - Laws and guidelines for all animal care
    - Require no to little pain or stress
  - Universities have Animal Care Committees that review proposals
  - 3 principles
    - Replacement
      - Replacing animals with an alternative
    - Reduction
      - Minimizing the number of animals used
    - Refinement
      - Modifying procedures to minimize distress or stress
  - Most animal research is not for comparison to humans
    - Most is to help and better the animals

Ethics in Research - Applied
- University Research Ethics Boards: Decision Process
  - Any research team must apply to the REB committee for permission and approval
    - Committees exist anywhere there is government funding
    - At McGill there are 4-5 REBs
  - Steps
    - Start by assessing benefits and potential risks on your own
    - If benefits outweigh risks - conduct the research
    - If not - the protocol must be modified
- Scientific Ethics/Integrity Publication Issues
  - Mistake versus fraud - for mistakes:
    - Erratum - update that fixes the mistake
    - Retraction - removal of the paper
  - Data fabrication and falsification of findings
  - Plagiarism of sources
    - Taking ideas from other sources (without citing)
  - Ghost-writing and fake peer reviews
    - Ghost writer - someone else writes the paper and doesn't appear on the publication (need to report these!)
  - Safeguards - replication, peer review, and watchdogs
- Scientific Integrity
  - Ethics violations can occur at the data analysis and publication phase
  - Scientific misconduct
    - Violating basic and generally accepted standards of honest research
    - Includes research fraud, plagiarism, or suppressed findings
  - Research mistakes
    - Reporting values or analyses mistakenly made, caught after publication
    - Can publish an erratum - errors inadvertently created
  - Research fraud
    - Invent, falsify, or distort study data to lie about what was conducted
    - Can publish a retraction for important offenses
- Research Fraud
  - Suppressed findings
    - Studies try to conceal findings
    - Can be due to multiple reasons
  - Plagiarism
    - Using another's words or ideas without giving proper credit
      - Passing them off as one's own
    - More easily detected nowadays
  - Safeguards - replication, peer review, and watchdogs
    - Peer review process
    - Retraction Watch
      - Blog started in 2010 by health journalists - keeps track of bad science
      - Founded the Center for Scientific Integrity
      - Searchable database of 18K retractions
        - 50% from fabrication, falsification, plagiarism; 10% due to fake publishing
- Pressures for Unethical Research
  - Publish or perish
    - Career-building pressure
      - Need to publish, build prestige, etc.
    - Must obtain significant findings to publish
    - Ethical research takes time and money
  - Need for success and admiration
- Fake News
  - Information pollution affects our capacity to make informed decisions
  - Incorrect information creates mistrust in public institutions
  - Disinformation
    - Information that is false and deliberately created to harm a person, social group, organization, or country
    - Aka fake news
  - Misinformation
    - Information that is false but not deliberately created with the intention of misleading or harming
  - Malinformation
    - Information based on real facts but deliberately manipulated to inflict harm
    - Hardest to discriminate from true news

Why is Disinformation Effective?
- Cherry picking
  - Presenting only the facts that support a certain view
  - Often used to support contrary evidence (evidence that supports the fake news)
  - Presented as a challenge to scientific consensus
- Double standard
  - Holding evidence that supports the scientific consensus to a higher standard than evidence that challenges it
  - Ie: heavily critiquing information that doesn't support the false news
- Reliance on false experts
  - Amplifying the opinions of a few scientists who challenge the consensus
  - Ie: those who support the false news
- Conspiracy Theories
  - Attempt to explain harmful/tragic events as the result of the actions of a small, powerful group
  - Reject accepted evidence or explanations - often based on easily falsifiable evidence
  - Driven by a desire to make sense of social forces that are:
    - Self-relevant, important, and threatening
  - Pizzagate Conspiracy Theory
    - Hillary Clinton had emails leaked
    - Conspiracy theorists claimed they were linked to human trafficking
  - Causes
    - Attribution error
      - Overestimate causes that arise from human motives
      - Underestimate causes related to the environment
    - Confirmation bias
      - Tendency to focus on/trust more the evidence that fits with existing beliefs
- Confirmation Bias
  - Experiment with students
    - Those who did vs. did not think that capital punishment reduces the crime rate
    - They then read stories that supported each idea
    - Then rated their belief after reading the stories
    - Both groups only agreed with the story that aligned with their perspective
    - Both were then more committed to their original perspective
- Twitter study
  - Looked at false and true news stories
  - Found false news spread more quickly
  - Looked at size, depth, and time of articles
  - Found false news
    - Spread to more people
    - Spread over more iterations
    - Spread faster
- Disinformation Spreads
  - Like an echo chamber
    - Echo chamber - groups of users who share a strong opinion
    - Align themselves in a group where they are exposed to content similar to their beliefs
    - Beliefs become strengthened by repeated interaction
    - Algorithms select content that matches
  - Study of echo chambers on Reddit and Twitter
    - Amount of hate didn't matter - what a post said made no difference
    - Posts by high-hate users spread more quickly
    - So who says it matters
  - Source attribution
    - Users disseminate news more often from a trusted source than from a trusted fact
  - Spoofing
    - Disguising communication from unknown sources as originating from a known, trusted source (reliance on false experts)
    - Can take many forms
      - Caller ID spoofing
      - Email spoofing
      - Online news spoofing
- Image Editing
  - Many examples of editing photos for political purposes
  - If images send strong emotional messages, they may be edited
  - Can look for it on Google Images
  - Also look using your eyes
  - Finally, check the metadata (a small sketch below shows one way to do this)
    - Cache of information stored in image files
    - Should reveal information about where/when it was taken
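As an illustration of the metadata point (not from the lecture), here is a minimal sketch using the Pillow library to print an image's EXIF tags; the file name is a placeholder, and many web images have had their metadata stripped.

```python
# Minimal sketch: read whatever EXIF metadata an image still carries.
# "photo.jpg" is a placeholder file name.
from PIL import Image, ExifTags

with Image.open("photo.jpg") as img:
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found (it may have been stripped).")
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # numeric tag -> readable name
        print(f"{tag}: {value}")                 # e.g. DateTime, Model, Software
```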
Lecture 11 - Research Ethics Cont + Descriptive Methods and Surveys - October 3

- Image Editing - What to look for
  - Image cloning
    - An image is superimposed on the original image
    - Look for changes in the number of pixels
  - Lighting
    - Inconsistencies in shadows, reflections, and light sources
  - Facial features
    - Whether the irises of a person's eyes match
    - Similar facial inconsistencies
- When Scientific Findings are False News
  - Some journals pretend to be real by hijacking journal information
  - Predatory journals
    - Accept articles for publication along with authors' fees
    - Don't perform the usual quality checks for plagiarism/ethical approval
    - Claim to follow a regular peer-review process but don't
    - Often named like bona fide peer-reviewed journals
    - There are websites to check if a journal is bona fide or predatory
  - Vanity journal - publishes for a fee with no review
- Retractions of Science Papers
  - A retracted article can keep circulating as false news for a long time
    - Startling findings are talked about more than retractions
    - Follow-up news on old topics is less interesting to consumers
    - Some scientists don't read retractions
- How to Evaluate Disinformation
  - There are websites to check
  - Can compare with other credible sources
  - Ask questions
    - Is the perspective biased?
    - Are the experts bona fide?
    - Are other credible sources reporting the same story?

Descriptive Research Strategy
- Quantitative Research
  - Grouped under three categories
    - According to research goals
  - The first category is descriptive research
- Descriptive Strategy
  - Goal - to describe a single variable, or a set of variables when several are involved
  - Measure variables in the natural environment to obtain a description of that variable
  - Not interested in relationships/associations between variables
  - Results capture interesting, naturally occurring behavior
- Types of Descriptive Strategies
  - Observational study
    - Naturalistic, participant, and contrived observations
  - Case study
    - Rare and unusual cases, counterexamples
  - Survey study
    - Online or in person - participants choose the study instead of vice versa
- Methods for Quantifying Observations (see the coding sketch at the end of these notes)
  - Frequency method
    - How often a specific behavior occurs
  - Duration method
    - How much time was spent in a particular behavior
  - Interval method
    - Dividing the observation period into a series of intervals and recording the number of intervals in which the behavior is observed
    - Requires less time and shorter concentration by experimenters
    - Used to generalize the rate/number of behaviors per interval to other intervals

Observational Research Designs
- Types
  - Naturalistic observation - no researcher intervention
  - Participant observation - interact with participants/become one
  - Contrived observation - set up a situation likely to produce the desired behavior
- Naturalistic Observation
  - When the researcher observes in a natural setting
    - As inconspicuous as possible
  - Advantages
    - Provides insight into real-world behavior
    - High external validity
    - Allows examination of behaviors that can't be manipulated
  - Disadvantages
    - Time consuming
    - Need to ensure the observer is not influencing behavior
- Participant Observation
  - When the researcher interacts with and becomes part of the participant group
    - Can be overt or covert
  - Used when natural or inconspicuous observation is impossible
  - Advantages
    - Gives the researcher a unique perspective by having the same experience
    - Can gather information otherwise not available
    - High external validity
  - Disadvantages / ethical considerations
    - Extremely time consuming
    - Expensive
    - Bias
      - Interpreted by a single observer
      - Views influence interpretation
      - May lose objectivity by being part of the group
    - Reactivity
      - The observer's participation may influence behavior
- Contrived Observation
  - When the researcher arranges a setting to specifically facilitate or elicit a particular behavior
    - Can be a laboratory or a public area
  - Gets the behavior to occur in a more timely fashion, but it is less natural
  - Two main types
    - In a public setting
      - People must not have an expectation of privacy in the observed behaviors
    - In a more artificial environment
      - Can have less expectation of privacy
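To make the frequency, duration, and interval methods above concrete, here is a minimal sketch (not from the lecture); it assumes each occurrence of the target behavior was logged as a (start, end) time in seconds, and the event data, observation period, and interval length are made up.

```python
# Minimal sketch of the three quantification methods described above,
# assuming each occurrence of the target behavior was logged as a
# (start_sec, end_sec) pair within a 300-second observation period.
# All numbers here are made up for illustration.

events = [(12, 20), (45, 50), (130, 170), (210, 215)]   # hypothetical log
period = 300            # total observation time in seconds
interval_len = 30       # interval length for the interval method

frequency = len(events)                                  # frequency method
duration = sum(end - start for start, end in events)     # duration method

# Interval method: count intervals in which the behavior occurred at all.
n_intervals = period // interval_len
hits = sum(
    any(start < (i + 1) * interval_len and end > i * interval_len
        for start, end in events)
    for i in range(n_intervals)
)

print(f"Frequency: {frequency} occurrences")
print(f"Duration:  {duration} s out of {period} s")
print(f"Interval:  behavior observed in {hits} of {n_intervals} intervals")
```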
