Converging Evidence & Meta-Analysis (Psychology 210-2)

Summary

These notes summarize the concepts of converging evidence and meta-analysis in research, particularly in psychology. They discuss how research findings are often presented as major breakthroughs and why multiple sources must be considered when evaluating scientific advances. The notes also explain the gradual synthesis model of scientific progress, along with related topics from the broader field: heuristics, probability and chance, expected utility, multiple causation and interactions, and clinical versus statistical decision making.

Full Transcript

11/5/24 Converging evidence & meta-analysis

Connecting: past theories and evidence -> current theory and evidence -> predictions
Converging: multiple lines of evidence -> current theory

“Breakthrough” headlines
Media headlines often present research findings as “major breakthroughs”
- May spread misinformation and exaggeration
- Sources must be considered when evaluating scientific advances
Main issue: “breakthrough” headlines imply that:
- Problems are solved with a single, crucial experiment that completely decides the issue
- A single critical insight overturns all previous knowledge
Scientific progress is usually more complex and gradual
“Scientists are not dependent on the ideas of a single person but on the combined wisdom of thousands”

Connectivity principle
Science obeys the connectivity principle:
- A new theory must account for new evidence as well as previously established empirical facts
- It may explain observations (old and new) in a different or unconventional way, but it must explain them all to be a true scientific advance
This reflects the cumulative process of science

Beware of violations of connectivity
- Pseudoscience often dismisses previous data as irrelevant in light of a new “breakthrough” theory; novelty and “radical departures” are emphasized
- Pseudoscience relies on long-held “wisdom” AND on novelty and “radical departures”

“Great-leap” model VS.
Gradual synthesis model
- People tend to believe that scientific advances occur in “great leaps”
- Reality: progress and setbacks
- Gradual synthesis: progress occurs as the community of scientists gradually comes to agree that the evidence supports one explanation over another

Review: limitations in research
Case studies
- In-depth research about one specific person or group
- Not representative
Correlational studies
- Imply relationships between variables
- Do not determine cause and effect
- Issues of directionality and third variables
Quasi-experiments
- Like a correlational study in that variables aren’t directly manipulated
- Have other features of an experiment
- Show group differences
Experiments
- Can determine cause and effect
- Manipulate one variable (IV), control all other variables, and measure another variable (DV)
- Often use random assignment
- Issues of confounding variables and external validity

Limitations in research
No study is perfectly designed
- Flaws and limitations; threats to internal/external validity
- Ambiguity in interpreting data from any one study
To overcome this problem:
- Must assess overall trends across many “flawed” studies

Converging evidence
Principle of converging evidence: support for theories comes about when the preponderance of evidence from many different types of studies points to the same answer
What constitutes converging evidence?
Two key factors:
- The results of many studies consistently support one theory
- The results also collectively eliminate competing theories
When evidence from a wide range of studies all points in a similar direction, the evidence has converged: scientific consensus

Methods & convergence
- Should expect many different methods to be used in research
- The field of psychology should be careful not to be over-reliant on one method of study
- The research process often proceeds from weaker methods to more powerful ones: case study -> correlation -> quasi-experiment -> experiment
- Meta-analysis: a statistical method for combining the results of many studies

11/12/24 Converging evidence and meta-analysis

Meta-analysis
A statistical method of combining findings from multiple studies of the same topic to determine an overall pattern
Procedure:
- Define the effects of interest
- Search the published literature
- Code the characteristics of the studies you find
- Calculate effect sizes (e.g., correlation coefficient, standardized mean difference, odds ratio)
- Interpret the effect size
Two reasons why meta-analyses are better than primary studies:
- They condense volumes of research into a single meaningful investigation
- Their results are more reliable than those of a single primary study
Problem: the “file-drawer problem”
- Non-significant results are less likely to be published
- Journals have a bias and tend to publish only new findings, especially when there is a significant result
- Solution: fail-safe N, the number of non-significant results needed to change the conclusion of a significant meta-analysis

Effect size
Cohen's d
- Most common measure of effect size
Effect size represented in numbers:
- Small: d = .2
- Moderate: d = .5
- Large: d = .8
Effect sizes can also be represented as overlapping distributions
Why Cohen's d?
- Fairly simple to calculate: based on means and standard deviations
- Even if group means/SDs are not reported, they can often be extracted from the results
- Used to analyze group differences, i.e., a categorical IV and a continuous DV
- For a continuous IV and DV, Pearson’s r is commonly used
- For a dichotomous DV, odds ratios are commonly used
- Developed by one of the most important developers of and advocates for meta-analysis: Cohen! Alternatives exist

Heuristics, probability and chance
Making decisions under conditions of uncertainty
- Uncertainty: lacking knowledge about which events will occur
- Making decisions under this condition requires estimating the probability that an event will occur

Heuristics in decision making
Heuristic: an informal, intuitive, speculative strategy for making a decision
- Heuristics make decision making more efficient but can lead to errors (e.g., driving to school despite the chance of getting in a crash)
Availability heuristic:
- Evaluating the probability of an event by judging the ease with which relevant instances come to mind
Representativeness heuristic:
- Evaluating the probability of an event in terms of how well it represents, or matches, a prototype

Availability heuristic
Example: overestimating how many people die from tornadoes/fireworks because they are on the news more, and underestimating how many die from asthma/drowning because those causes are more common and therefore less newsworthy

Representativeness heuristic
Base rates:
- The relative proportion of different groups/classes in the population (what a person is likely to be like based on their description)
- The best guess is the option with the greater base rate
Conjunction rule:
- The probability of a conjunction of two events cannot be greater than the probability of either event alone
Conjunction fallacy:
- The incorrect assumption that two or more specific conditions are more likely than one general condition

Fallacies
Fallacy: a mistaken belief, especially one based on unsound argument (e.g., attacking the person rather than their argument)

The REAL random
Which sequence looks more representative of a fair coin toss? In other words, which looks real and which looks fake?
The clumpiness of randomness
- People underestimate the frequency and size of the clumps
- Contributes to the gambler's fallacy: the belief that the probability of a hit is greater after a long series of misses
Seeing patterns in randomness
- People are predisposed to impose patterns on random events (finding meaning in randomness)
- Contributes to the hot-hand fallacy: the belief that the probability of a hit is greater after a hit than after a miss (if you just won, you’ll keep winning); there is no such thing as a hot-hand effect in basketball

Law of large numbers
- The more times you run an experiment, the closer the average of the results approximates the expected/population value
- Also applies to literal experiments: sample size and meta-analysis

Expected utility hypothesis/theory
According to the expected utility (EU) hypothesis, optimal decision-making is based on an outcome's utility and the probability of it being achieved
Prediction: when deciding among several alternatives, we choose the option that yields the highest EU
EU = P x U
- The EU hypothesis says rational people will choose to pursue opportunities that provide the greatest EU
- Probability: a person's belief that an event will occur
- Utility: the subjective value of an outcome; more valuable outcomes have greater utility and produce greater satisfaction
- The EU for gains is identical, the EU for losses is identical, but we show a definite preference for one outcome over another

Framing
Framing refers to the perspective from which an outcome is viewed
- An outcome can be viewed as achieving a gain or avoiding a loss
- How a choice is framed, coupled with the probability of achieving the outcome, determines a person's decision
Framing and risk
- A 90% chance is more of a “sure thing” than a 45% chance, which is more risky
- Risk aversion: when framed as a gain, people are risk averse; they'd rather take a 90% chance than a 45% chance of gaining $
- Risk taking: when framed as a loss, people are risk takers; they'd rather take a 45% chance than a 90% chance of losing $
- These findings also demonstrate that EU doesn't explain human behavior well

Outcomes as losses and gains
Losses loom larger than gains
- A loss is more dissatisfying than the gain of that same outcome is satisfying
- I.e., at the same outcome delay interval, the subjective value of an outcome loss is greater than the subjective value of an outcome gain

The effects of time
Delay interval: the time between current behavior and the availability of a future outcome
- As a consequence of this delay, the outcome loses value
- Delay discounting (aka temporal discounting) occurs when a future outcome is represented in the present at a marked-down value
- People prefer large outcomes over small outcomes, and immediate outcomes over delayed outcomes
Preference reversal: when amount and delay interact
- If the delay interval is long, we prefer a larger delayed outcome over a smaller, more immediate outcome
- As the delay interval shortens, we shift preference to the smaller immediate outcome

Multiple causation and interactions
The search for the magic bullet
- What is the magic bullet? The (one and only, all-purpose) cause of something
- But psychological phenomena are more complex than that!
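The EU = P x U computation and the delay-discounting preference reversal described above can be sketched in a few lines. The dollar amounts, the delay values, and the hyperbolic discounting form V = A / (1 + kD) are illustrative assumptions on my part; the notes don't specify a discounting equation or a value of k.

```python
# Sketch of expected utility and delay discounting (illustrative values only).

def expected_utility(p, u):
    """EU = P x U: probability of an outcome times its subjective value (utility)."""
    return p * u

# A 90% chance of $100 and a 45% chance of $200 have identical EUs,
# yet people prefer the "sure thing" for gains and the gamble for losses,
# which is why EU alone doesn't explain human behavior well.
print(expected_utility(0.90, 100))  # 90.0
print(expected_utility(0.45, 200))  # 90.0

def discounted_value(amount, delay, k=0.2):
    """Present value of a delayed outcome, assuming a hyperbolic form
    V = A / (1 + k*D); k is a hypothetical discount-rate parameter."""
    return amount / (1 + k * delay)

# Preference reversal: $50 is available 10 time units sooner than $100.
# When both outcomes are near, the smaller immediate reward wins ...
print(discounted_value(50, 0) > discounted_value(100, 10))   # True
# ... but when both are pushed far into the future, preference reverses.
print(discounted_value(50, 30) < discounted_value(100, 40))  # True
```

The equal-EU pair mirrors the framing result in the notes; the reversal mirrors the amount-by-delay interaction.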
Multiple causation
- When not one but many constructs/variables contribute to a psychological phenomenon
- Different from converging evidence (where multiple studies contribute to similar conclusions)
- Many phenomena are influenced or “caused” by more than one factor; we saw this as a “limitation” before: the third-variable problem

Interactions
Sometimes those multiple causes interact with one another
Interaction: the joint effect of two or more variables on another variable
Some ways to think about this:
- The effect of one variable on another depends on the level of a third variable
- The relationship between two variables depends on the quantity of a third variable
Interaction effects occur when variables have combined effects on an outcome
Problem: the more factors you measure as IVs or predictors, the harder it is to interpret the effect on the DV

Types of variables that can be tested
The last example dealt with categorical IVs and group differences
- I.e., smiley face (or not) on a check; female vs. male server
Can also test for interactions with continuous IVs and patterns of association
- E.g., a study on Facebook use, face-to-face interaction, and personality
- I.e., people who are more introverted spend more time interacting face to face with others when they use social media more often

The “statistics don't apply to the individual” argument
The argument:
- Because people are unique, statistics don't directly apply to anyone individually
Clinical vs. statistical approaches to decision making
- Clinical = an informal, impressionistic, intuitive decision-making strategy that may or may not reflect expertise
- Statistical = mechanical, formal, algorithmic, mathematical models based on quantitative data
- Which to put more faith in?
First:
- People have biases, use heuristics, and fall prey to other common cognitive fallacies
- Statistical models don't have the same degree of these problems, and they have additional transparency
Second:
- People are bad at combining information, especially when phenomena are measured in different ways, which results in inaccurate decisions
- Statistical models can combine information more effectively
Third: agreement within and between (expert) decision makers
- Within: does a decision maker use a single decision strategy in every case?
- Between: do multiple decision makers make similar decisions about the same case(s)?
- Given the same data on repeated occasions, statistical models generate identical decisions
- Judgmental bootstrapping: the development of a statistical model that assimilates an expert's forecasts by inferring the rules the expert appeared to use in making them; note that experts’ judgments are used to create the statistical model
Fourth: many variables are interrelated
- Redundant information is weighed more heavily by human judges and less heavily by statistical models
- Statistical models weigh unique information more heavily than redundant information

Are there any cases in which the clinical approach outperforms the statistical approach?
The “broken leg case”
- Prof. A goes to the movies regularly on Tuesday nights
- A statistical model predicts: “if it's Tuesday night, Prof. A is 90% likely to go to the movies”
- But Prof. A broke his leg Tuesday morning and is in a hip cast that won't fit into a theater seat
- A clinician will predict that Prof. A will not go to the movies
- This is a special power of the clinician that cannot be duplicated by a statistical model, unless the model is updated!

Why do we adhere to the clinical approach?
1. Deficits in understanding judgment fallacies
2. Fear of replacement
3. Belief in the efficacy of one's own judgment
4.
The “dehumanizing” feel of statistical models

Clinical versus statistical approaches to decision-making
- The two approaches are not mutually exclusive
- Ultimately, you will have to use your own judgment to evaluate all of the information that is available to you
- Key to evaluating and synthesizing these kinds of information: understanding where the info comes from and what it means
- Even with science, there are limitations to what any study can tell you

Review
Converging evidence and meta-analysis
- Advances in science occur in fits and starts, progress and setbacks. This concept is known as the gradual synthesis model
Multiple causation and interactions
- An interaction occurs when there are combined effects of two or more variables on another variable. This is true
Heuristics, probability and chance
- If I flip a coin 10 times, it's entirely possible that the flips land on all tails or nearly all tails. However, if I flipped the coin many more times, I would likely be close to a 50-50 split between heads and tails. This illustrates the law of large numbers (the more you run an experiment, the closer the results come to what's happening in the real world)
The “stats don't apply to the individual” argument
- Given the same data on repeated occasions, which of the following is most likely to generate an identical decision? A statistical model (we want anticipated, not unexpected, decisions)
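Two quantitative ideas from these notes, Cohen's d and the law of large numbers, can be sketched as follows. The group data and the flip counts are made-up illustrations, not figures from the course.

```python
import random
import statistics

def cohens_d(group1, group2):
    """Cohen's d: the standardized mean difference between two groups,
    i.e., the difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# By convention: d = .2 is small, .5 moderate, .8 large.
d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
print(round(d, 2))  # 1.55 -- a very large (illustrative) effect

def heads_proportion(n_flips, seed=1):
    """Law of large numbers: the proportion of heads in n fair flips
    gets closer to the expected value of 0.5 as n grows."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

# 10 flips can easily stray far from 50-50; 100,000 flips will not.
print(heads_proportion(100_000))  # close to 0.5
```

The same logic is why sample size matters in meta-analysis: pooling many studies behaves like running many more flips.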
