PSYC 4780 Midterm - Lecture Notes Compiled

January 31st, 2011

Daryl Bem experiment
- Supposedly, participants had a better memory during the test for words they would study after the test
- I.e., participants had psychic powers

Confirmation Bias
- The Journal of Personality and Social Psychology (which published Bem's paper) had a policy: articles would not be published if they disconfirmed previous publications
- E.g., the card-flipping experiment
  - Hypothesis: if there is a D on one side of a card, there will be a 3 on its other side
  - Flipping D = confirmation choice
  - Flipping 7 = falsification choice - it can provide evidence to falsify the hypothesis

Karl Popper
- If a theory could not be falsified, it was not a scientific theory
- Science disconfirms; pseudo-science confirms
- It's easy to find confirmation if you're looking for it

Scientific Method
1. Testable
2. Refutable
3. Falsifiable

Reproducibility - can someone use the exact same process as you and get the same numbers?
- Example - Molecular Brain (journal)
  - Of 41 manuscripts, 21 were withdrawn (the authors did not want to provide raw data) and in 19 the authors were unable to reproduce the same data
  - Only 1 of 41 manuscripts (about 3%) could be reproduced; 97% were rejected
- Example - "The prevalence of statistical reporting errors in psychology (1985-2013)" (article)
  - 250,000 p-values were evaluated using the R package statcheck, which recomputes each p-value from its reported test statistic and compares the two
  - Half of all published papers had at least one inconsistency between a p-value and its test statistic (a sketch of this kind of check follows below)
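As a hedged illustration of the kind of recomputation statcheck performs (statcheck itself is an R package; this Python sketch is not its actual implementation, and the reported values below are hypothetical):

    from scipy import stats

    def check_t_report(t_value, df, reported_p, tol=0.0005):
        """Recompute a two-tailed p-value from a reported t statistic
        and flag it if it disagrees with the reported p-value."""
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)
        return recomputed_p, abs(recomputed_p - reported_p) < tol

    # Hypothetical reported result: t(28) = 2.10, p = .04
    recomputed, ok = check_t_report(t_value=2.10, df=28, reported_p=0.04)
    print(f"recomputed p = {recomputed:.4f}, consistent = {ok}")
    # t(28) = 2.10 gives p of roughly .045, so a report of p = .04 would be
    # flagged (real checkers must also allow for rounding of the statistic)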
Replicability
- Example - Many Labs 2
  - Specifically designed to address criticisms of Many Labs 1
  - Sample sizes were 60x larger, and the studies were repeated in labs all over the world
  - Result: only about half of psychological studies could be successfully repeated
  - "Published and true are not synonyms"

What does good science look like?
- Key point: what was true for people before the replication crisis is not true afterwards
- What you see is based on what you know; when what you know changes, your universe changes

John A. Bargh - priming research
- His paper on priming people with old-age words to make them walk slower was cited more than 3,800 times
- Many researchers had problems replicating it, and those papers were not published
  - E.g., Pashler, 2008 - failed to find the effect; posted the result on the PsychFileDrawer website
  - Doyen failed to replicate BUT got the paper published in PLOS ONE, which publishes based on methodological rigor, NOT "do I think this will be interesting?"
- Bargh responded by attacking the people who called him out
  - E.g., these people have "nothing in their heads"
  - E.g., Doyen is "incompetent and ill informed"
  - E.g., Ed Yong produces "superficial online science journalism"
- This eventually led to the creation of the Replicability Index - what percentage of the findings in a journal replicate?
  - E.g., a specific article titled "Replicability Audit of John A. Bargh"

Examples of other popular studies that do not replicate:
- Power posing - Amy Cuddy
- Pen in mouth (facial feedback hypothesis)
- Ego depletion

Kate's Story
- The PI conducted a new experiment with 2 undergrad students, but not Kate
- The PI positioned equipment to provide incorrect, biased measurements, biasing the results to support the PI's hypothesis
- The PI asked the undergrads to draw diagrams showing the equipment was in the correct spot - i.e., asked them to lie
- The undergrads went to Kate for advice; she suggested they go to the department chair
- Kate was fired from the lab
  - She had to finish her Ph.D. elsewhere; her career was delayed 10 years
- The person most likely to suffer the consequences of a QRP is the whistleblower

Derren Brown - the garden of forking paths
- Khadisha: Derren Brown used "the system" to create a reality for Khadisha - i.e., he used p-hacking, or finding the story
- Academics use the same system to create a reality for the reader of their research
- Researchers present one path (e.g., a run of winning horse bets), but you can't see any of the other many, many paths - only the one being presented to you
- You have no knowledge of the broader paths when you read academic research

Problem with exploratory research: you need to distinguish it from confirmatory research
- Confirmatory = the outcome is a conclusion about something that might be true
  - Commits to a specific hypothesis or research question
- Exploratory = the outcome is not a conclusion, but a hypothesis
  - No set hypothesis before the study
  - Needs to be followed up by a confirmatory analysis
- In brief: confirmatory = scientific conclusions; exploratory = hypotheses (must be tested through further research)

Joseph Simmons, Leif Nelson, Uri Simonsohn: Data Colada
- "A researcher may run a regression with/without outliers, with/without a covariate, with one and then another DV, and only report the significant analyses in the paper"
- Method: undisclosed flexibility
- E.g., the "When I'm Sixty-Four" (Beatles) study
- Result: everyone thought p-hacking was wrong, but wrong the way jaywalking is wrong; "False-Positive Psychology" was written to reveal that it is actually wrong the way robbing a bank is wrong (a simulation of undisclosed flexibility is sketched below)
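A minimal simulation of the flexibility described above, assuming no true effect anywhere (a sketch, not the authors' code): the same comparison is run several ways, and only the best p-value is kept.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def best_p(n=20):
        """Two groups, no true effect; try several analysis variants
        (two DVs, with and without 'outlier' exclusion) and keep the
        smallest p-value, as a p-hacker would."""
        ps = []
        for _ in range(2):                                  # two interchangeable DVs
            a, b = rng.normal(size=n), rng.normal(size=n)
            ps.append(stats.ttest_ind(a, b).pvalue)
            a2, b2 = a[np.abs(a) < 2], b[np.abs(b) < 2]     # drop 'outliers', re-test
            ps.append(stats.ttest_ind(a2, b2).pvalue)
        return min(ps)

    sims = 2000
    rate = sum(best_p() < .05 for _ in range(sims)) / sims
    print(f"false-positive rate with undisclosed flexibility: {rate:.3f}")
    # Comes out well above the nominal .05, despite pure noise.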
Saving Science from the Scientist - Dorothy Bishop
- P-hacking is comparable to dealing a magic poker hand: researchers are dealing many hands and just picking the ones that look exciting (similar to Khadisha)
- Ignorance: people have gotten used to looking for evidence of something interesting
  - Researchers don't grasp the idea that they need to present the entire set of data
  - It has become the culture to conduct research like this
- Strong incentive: journals won't publish things without a low p-value

Example - optional stopping
- There are ways to conduct optional stopping appropriately
  - E.g., deciding in advance how many times you are going to 'peek' at your data
  - Peeking 3 times = dividing the significance level (p = .05) by 3, giving p = .0167 - much more difficult to achieve

How common are QRPs?
- Note: 9% admitted to falsifying data - literally making stuff up

Example - mathematically identical situations, but a better look
- Researchers sample 100 people in 1 study and find no effect
  - What if I throw out 40 people (data points) and find an effect? That is unethical
- BUT suppose researchers sample 100 people across 5 studies (n = 20 each)
  - 3 samples give significant effects (n = 60)
  - 2 samples give no effect (n = 40)
  - What if I only report the 3 significant findings? It is the exact same as the first example (see the sketch below)
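A minimal sketch of why the two situations above are identical, assuming no true effect exists: splitting one null sample of 100 into five small studies makes it likely that at least one comes out 'significant' by chance, and reporting only those is silently discarding the rest of the data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def significant_studies():
        """Split 100 null participants into 5 studies of n = 20
        (10 per group); count how many come out 'significant'."""
        return sum(
            stats.ttest_ind(rng.normal(size=10), rng.normal(size=10)).pvalue < .05
            for _ in range(5)
        )

    sims = 5000
    rate = sum(significant_studies() >= 1 for _ in range(sims)) / sims
    print(f"chance of having a 'significant' study to report: {rate:.2f}")
    # About 1 - 0.95**5 = 0.23, even though no effect exists anywhere.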

Data Detectives
- Treat a paper as a 'crime scene': look for evidence of data manipulation, fabrication, or fraud
- It may be possible to find fraud by looking solely at the sample statistics

Example - identified using descriptive statistics
- Simonsohn (2013) was able to find issues in Sanna, Chang, Miceli & Lundberg (2011) from just the standard deviations

Example - identified using raw data
- Francesca Gino - Harvard researcher
- Someone moved participants from one condition to another to ensure significance
- 'Under the hood', an Excel file keeps metadata on the order in which cells were calculated (in a part of the file called the calcChain), which can reveal how the data file was changed; see the sketch after this section
- This can then be applied to certain statistics
  - E.g., Data Colada was able to show that 6 rows were out of sequence, and that the variables with large effects on the DV had been manually manipulated
  - Result: the manipulated results were significant; the original ones were NOT

Francesca Gino
- 4 retracted published papers showed evidence of this data manipulation
- Harvard commissioned an investigation; the resulting report was 1,300 pages long
- Gino sued Harvard for $25 million, plus the guys at Data Colada
  - The case against Data Colada was dismissed: "Scientists cannot effectively sue other scientists for exposing fraud/errors in their work"
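An .xlsx workbook is a ZIP archive, and the calculation-chain metadata lives in xl/calcChain.xml (present when the workbook contains formulas). The Python sketch below simply extracts and lists it; the filename is hypothetical, and this only shows where the metadata lives, not Data Colada's actual analysis.

    import zipfile
    import xml.etree.ElementTree as ET

    # Hypothetical workbook; any .xlsx file is a ZIP archive inside.
    with zipfile.ZipFile("study_data.xlsx") as xlsx:
        with xlsx.open("xl/calcChain.xml") as f:
            root = ET.parse(f).getroot()

    # Each <c> element records a cell reference in calculation order;
    # cells appearing out of row order can hint that rows were moved.
    ns = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"
    print([c.attrib.get("r") for c in root.iter(ns + "c")][:20])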
Tri-Agency Statement of Principles on Digital Data Management
- Grants come from taxpayer money!
- The agencies believe that research data collected with the use of public funds (i.e., any grant you can get in Canada) belong, to the fullest extent possible, in the public domain and should be available for reuse by others
  - Other people can reuse your data - you don't own it!
- The Tri-Agency has huge influence over university grant money; everyone is required to follow Tri-Agency ethics, EVEN if your grant isn't under any of the three agencies
- Responsibilities (of research agencies): "recognizing data as an important research output and fostering excellence in data management"
  - You need to treat the data itself as a research output, NOT just published papers