Introduction to Evidence Based Medicine
Summary
This document introduces evidence-based medicine (EBM) and the GRADE framework for assessing evidence quality. It covers the different stages of evidence evaluation, including systematic review and guideline development, and discusses the role of clinical reasoning in evidence-based decision-making.
Full Transcript
INTRODUCTION TO EVIDENCE BASED MEDICINE & CASE ASSIGNMENTS

GRADE SYSTEM: Formulation of recommendations based on evidence and more

A little reminder: we are in a new era. Under the traditional, paternalistic approach, you went to the doctor, explained your issue, he proposed a solution, and you accepted it. But there could be different approaches to that issue, so now the decision no longer depends on each physician alone. For decisions applied to groups of patients, guidelines and policies would usually be developed by committees of experts without any formal process. There was an implicit assumption that decision makers would incorporate evidence into their thinking appropriately, based on their education, experience, and ongoing study of the applicable literature. We would like to say that this is the past, but not entirely: we are in a continuous fight against it, and it is a slow process.

Alvan Feinstein identified biases in clinical reasoning. Archie Cochrane published "Effectiveness and Efficiency", describing the lack of supporting evidence for practices assumed to be effective. John Wennberg and David Eddy pointed out that there was wide variation in how techniques were applied; you should rely on the evidence to establish that one technique is better for a specific purpose. They said there were gaps in this area, that this was not being done correctly. David Sackett published books on epidemiology, translating epidemiological methods into physician decision-making. RAND.

Evidence-based medicine (EBM) is an approach to medical practice intended to optimize decision-making by emphasizing the use of evidence from well-designed and well-conducted research. Broadly, evidence-based medicine is the application of the scientific method to healthcare decision-making.
Evidence-based medicine is a set of principles and methods intended to ensure that, to the greatest extent possible, medical decisions, guidelines, and other types of policies are based on and consistent with good evidence of effectiveness and benefit.

The evidence-based medicine (EBM) triad: In essence, EBM is a bridge connecting information (evidence) with patient care. EBM does not limit choice; rather, it increases and optimizes choice. David Sackett defined EBM as "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients."

GRADE: Grading of Recommendations Assessment, Development and Evaluation
- The GRADE working group began in the year 2000.
- The working group has developed a common, sensible and transparent approach to grading the quality (or certainty) of evidence and the strength of recommendations.
- Many international organizations have provided input into the development of the GRADE approach, which is now considered the standard in guideline development.

Formulate and grade recommendations using GRADE. Health technology assessment and/or the clinical guidelines development process needs good-quality evidence in order to take the best decisions based on the evidence, its quality, and other factors that can affect the decision in a specific context.

The image shows a two-part process: 1. Systematic Review (blue section) and 2. Guideline Development (peach-colored section).

1. Systematic Review (Blue Part): In this part, the GRADE framework is used to assess the quality of evidence. It starts with a PICO framework, which helps formulate research questions and select outcomes. The outcomes are then ranked by importance (Critical, Important, or Not Important). For each outcome, evidence is gathered across multiple studies, and the quality of evidence is assessed based on five criteria that can lower or raise confidence.
GRADE criteria (for rating evidence):
- Risk of bias: If studies have methodological issues, the quality is downgraded.
- Inconsistency: If results vary widely across studies, confidence is lowered.
- Indirectness: If the population, intervention, or outcomes studied don't match exactly what is being looked for, it lowers confidence.
- Imprecision: Small sample sizes or wide confidence intervals reduce the reliability of the results.
- Publication bias: Studies with negative outcomes may not be published.

Once these factors are considered, the overall quality of evidence for each outcome is rated as High, Moderate, Low, or Very Low.

2. Guideline Development (Peach Part): Once the systematic review is completed, this information is passed to the guideline development panel. The panel uses the summary of findings to formulate recommendations based on the quality of evidence and other considerations, such as values, preferences, and the balance of benefits vs. harms. Recommendations can be strong or conditional/weak, and can be for or against a particular intervention. Decision making follows the EtD (Evidence to Decision) framework.

HOW DO YOU BECOME A CRITICAL THINKER

1. FORMULATE AN APPROPRIATE PICO QUESTION
In children with headache (patient), is paracetamol (intervention) more effective than placebo (comparison) against pain (outcome)?
- P: patient, problem, population
- I: intervention
- C: comparison, control, comparator
- O: outcomes
Optional:
- T: timing
- S: study type (randomized controlled trial, cohort study...)
There are different types of questions.

2. CONDUCT AN APPROPRIATE SELECTION OF THE EVIDENCE
Hierarchy of evidence: ranking study types by their strength and reliability. Strongest evidence at the top: meta-analyses & systematic reviews and randomized controlled trials (RCTs), which provide the most reliable conclusions. Moderate evidence in the middle: cohort studies, case-control studies, and cross-sectional studies, which observe groups but are more prone to bias.
Weakest evidence at the bottom: animal studies, case reports, and opinion papers, which are less reliable and preliminary. Non-scientific evidence includes anecdotal information from sources like YouTube videos and unreliable websites, which lack scientific rigor and should not be used for serious conclusions.

For the project, search for articles in at least two databases (e.g., PubMed and the Cochrane Library). Don't look for garbage, because you will find garbage. Use MeSH terms in PubMed: MeSH is a label assigned to each type of paper. Identify key articles, look across their abstracts for keywords, and include those keywords in your next search. Combine terms with the boolean operators AND, OR, and NOT. In the report we need to record the number of results retrieved and the date on which each search was run. Read the abstracts and select: you need to select and reject studies, giving reasons for each exclusion.

CRITICAL APPRAISAL AND RISK OF BIAS

Comparative synthesis of evidence:
- Identify the most relevant and useful evidence: achieved through a systematic and exhaustive literature search.
- Enhance knowledge: by synthesizing and analyzing the identified data.
- Support informed decision-making: through the use of review results and conclusions.

Randomized clinical trial: In research, the best evidence is often obtained through randomized clinical trials. These trials compare two interventions to determine which is more effective. A group of patients is randomly assigned to receive one of the interventions, and the outcomes are measured. This approach allows us to observe the main differences between treatments.

Uncertainty vs. sample size: One limitation of randomized trials is that they only involve a sample of the population, and the sample size may be small, leading to uncertainty about the true effects of the treatment. As the sample size increases, the reliability of the statistics also improves.

Systematic review and meta-analysis: By combining different trials into a systematic review, we can work with a larger sample size.
However, the trials may differ in their procedures, so it is essential to carefully screen and select those that are most suitable for inclusion in the review.

Steps in a systematic review:
1. Start with the PICO question.
2. Create a review protocol.
3. Conduct a literature search.
4. Screen studies and select relevant ones.
5. Extract data.
6. Assess the risk of bias in the studies.
7. Synthesize the data (with or without meta-analysis).
8. Write the discussion and conclusions.

GRADE approach: In the GRADE approach, we balance the harms and benefits of interventions. It involves rating the importance of outcomes and endpoints, categorizing them as: critical for decision-making; important but not critical for decision-making; not important or of lower relevance for patients.

2. Risk of Bias

Why do we assess the risk of bias? The findings of a systematic review depend on the validity of the studies it includes. Flawed studies can lead to reviews with misleading results. Biases can either overestimate or underestimate the effect. Although we cannot measure bias directly, we should focus on methods that minimize its risk.

What is bias? Bias refers to systematic error, or deviation from the truth. Bias is not the same as:
- Imprecision: random errors that occur due to variations in sampling, which are reflected in the confidence interval.
- Quality: even well-designed studies can still be biased.
- Reporting quality: while the methods used in a study may be appropriate, they may not always be clearly described.

Types of bias:
- Bias from the randomization process: when randomization is flawed, it can lead to imbalances between groups.
- Bias due to deviations from the intended intervention: this can happen when the study is not double-blinded, meaning the participants or researchers know who received the placebo or the intervention.
- Bias from missing outcome data: missing data, such as when participants drop out of the trial, should be avoided.
- Bias in outcome measurement: double-blinding also helps prevent bias in measuring outcomes, ensuring no group is favored because of preconceived expectations.
- Bias in selection of the reported result: selecting only favorable results for reporting can also skew the review.

Cochrane risk of bias assessment tool for randomized clinical trials:
1. Allocation sequence concealed? The allocation sequence was concealed, meaning that the people enrolling participants didn't know in advance which group a participant would be assigned to. If the allocation sequence was concealed, the process continues. If there is No Information (NI) on whether the sequence was concealed, this leads to Some Concerns about bias.
2. Allocation sequence random? The sequence used to allocate participants into the different groups (treatment or control) was random. If the allocation sequence is random (Y or PY), the process continues. If the allocation sequence is not random (PN or N), this leads to High Risk of bias.
3. Baseline imbalances suggest a problem? Imbalances in the characteristics of participants between groups might suggest a problem with the randomization process. If participants in the treatment and control groups have significantly different characteristics at the start, this could indicate issues with randomization. If there are No imbalances (N) or Probably No imbalances (PN), the judgment is Low Risk of bias. If there are Yes (Y) or Probably Yes (PY) imbalances, this raises Some Concerns about bias.

Meta-analysis process:
1. Estimate the effect and variance for each study separately: In a meta-analysis, each included study has its own result. In this case, we are looking at the hazard ratio (HR), which is used to compare how effective Lenalidomide (a drug) is versus a placebo. The hazard ratio compares the probability of an event (such as disease progression or death) occurring between two groups.
In this case, we are comparing Lenalidomide with a placebo.
2. Assign weights to each study: Not all studies are equally reliable. Some are more precise than others, usually because they have larger sample sizes or less variability in their results. To take this into account, each study is given a "weight." Studies that are more reliable (with smaller variances or errors) get a higher weight, meaning they have more influence on the overall result. For example, the study by Attal 2012 is weighted 23.7%, while Weber 2007 has the highest weight at 27.6%.
3. Combine the estimators into a single result: After assigning weights to all the studies, we combine them to get an overall estimate. In the example, the combined HR is 0.68 [95% CI: 0.55, 0.83]. This number means that people taking Lenalidomide have a 32% lower risk of experiencing the negative outcome (such as death or disease progression). The 95% confidence interval (CI) shows how certain we are about this estimate. Since the interval doesn't cross 1.0, the reduction is statistically significant: the result is unlikely to be due to chance.

Understanding the forest plot: A forest plot is a visual representation of a meta-analysis. Each horizontal line in the plot represents a study's result, showing the hazard ratio and the confidence interval (CI) for that study. The red boxes show the hazard ratio estimates, and their size reflects each study's weight: larger boxes mean the study is more influential in the overall result. At the bottom of the plot, a diamond shape represents the combined result from all the studies. 1.0 is the neutral point, indicating no effect (neither benefit nor harm). If the CI does not cross 1.0 (as in this case, [0.55, 0.83]), we can be fairly confident that there is a significant difference between the groups. A narrower CI indicates that the results are more precise, while a wider CI shows more uncertainty.
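The inverse-variance weighting and pooling described in steps 2 and 3 can be sketched in a few lines of Python. This is a minimal illustration using hypothetical study results (not the actual trial data from the Lenalidomide example): each hazard ratio is converted to the log scale, its standard error is recovered from the width of the 95% CI, and more precise studies receive larger weights.

```python
import math

# Hypothetical per-study results: HR with 95% CI bounds (illustrative only).
studies = {
    "Study A": (0.60, 0.42, 0.86),
    "Study B": (0.75, 0.55, 1.02),
    "Study C": (0.68, 0.48, 0.96),
}

def pooled_hazard_ratio(studies):
    """Fixed-effect, inverse-variance pooling on the log-HR scale."""
    weights, weighted = [], []
    for hr, lo, hi in studies.values():
        log_hr = math.log(hr)
        # SE recovered from the 95% CI width on the log scale (width = 2 * 1.96 * SE).
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2          # more precise studies get more weight
        weights.append(w)
        weighted.append(w * log_hr)
    total_w = sum(weights)
    pooled_log = sum(weighted) / total_w
    pooled_se = math.sqrt(1.0 / total_w)   # variance of the pooled estimate = 1 / sum(weights)
    return (math.exp(pooled_log),
            (math.exp(pooled_log - 1.96 * pooled_se),
             math.exp(pooled_log + 1.96 * pooled_se)))

hr, (lo, hi) = pooled_hazard_ratio(studies)
print(f"Pooled HR = {hr:.2f} [95% CI {lo:.2f}, {hi:.2f}]")
```

Note how the pooled CI is narrower than any single study's CI: combining trials increases the effective sample size, which is exactly the point of a meta-analysis.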
Assessing the Certainty of the Evidence

Certainty of the evidence: In the GRADE framework (used to evaluate evidence quality), "certainty of the evidence" refers to how confident we are that the results (effect estimates) from clinical trials or studies reflect the true effect of an intervention. Essentially, it is about judging how trustworthy the evidence is.

Factors that can lower the certainty of the evidence. There are five key factors that might reduce our confidence in the evidence:
1. Risk of bias: Are the studies reliable, or could they be biased due to poor study design? If studies have a high risk of bias (for example, issues with randomization or blinding), the certainty in the results decreases.
2. Inconsistency: Do the results of the studies all point in the same direction, and are they of similar magnitude? If different studies show conflicting results (for example, one shows a large effect, more than doubling the initial effect, and another shows no effect), the certainty of the evidence decreases.
3. Indirectness: Are the studies directly applicable to the population, intervention, and outcomes we are interested in? If studies involve different patient groups or use different measurements that aren't quite what we need, our confidence drops.
4. Imprecision: Is there uncertainty in the results due to small sample sizes or wide confidence intervals? If the results are not precise (e.g., the confidence interval is too wide, meaning we aren't sure what the true effect is), the certainty decreases.
5. Publication bias: Are there important studies missing (perhaps because only positive results were published)? If certain studies are missing (especially negative or neutral ones), this can bias the results and reduce certainty.

Questions to ask when assessing certainty:
- Risk of bias: Are the methods used in the studies sound? Are the studies well conducted?
- Inconsistency: Do the studies show similar results? Are the magnitudes of the effects similar?
- Indirectness: Do the studies match the PICO question (Population, Intervention, Comparison, Outcome)?
- Imprecision: Are the estimates of the effect precise, or is there uncertainty?
- Publication bias: Could studies be missing, leading to biased results?

Other factors that can influence certainty (especially for observational studies):
1. Dose-response relationship: Does a larger dose of the intervention lead to a greater effect? This can increase confidence in the evidence.
2. Effect size: Is the effect large or very large? A larger effect size generally increases confidence in the evidence.
3. Confounding: Are there other variables that might affect the results? If confounders (other factors that could explain the results) are well controlled, we have more confidence.

Downgrading certainty of evidence due to risk of bias

Inconsistency: Inconsistency happens when results vary significantly across different studies, making it hard to trust a single conclusion. For example, if one study shows a strong benefit and another shows none, this variation reduces confidence. We look at:
- Do the point estimates (main results) of the studies vary a lot?
- Do the confidence intervals (CIs) from different studies overlap? (If there is no overlap, it suggests inconsistency.)
- Does a statistical test of heterogeneity (testing whether the studies agree) show a low p-value? A low p-value means the variation between studies is unlikely to be due to chance, i.e., the studies are probably not estimating the same underlying effect.
- Is the I² statistic high? This number shows how much of the variation in results is due to differences between studies rather than random chance. Higher values indicate more inconsistency. I² values are roughly classified as: around 40%: moderate heterogeneity; around 60%: substantial heterogeneity; around 90%: considerable heterogeneity.
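The I² statistic just described can be computed from Cochran's Q, a weighted sum of squared deviations of each study's effect from the pooled effect: I² = max(0, (Q − df) / Q) × 100, where df is the number of studies minus one. A minimal sketch, using hypothetical log hazard ratios and variances:

```python
def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of
    study effects (e.g. log hazard ratios) with their variances."""
    weights = [1.0 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted squared deviations from the pooled effect
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log-HRs and variances for four fairly consistent studies.
effects = [-0.51, -0.29, -0.39, -0.45]
variances = [0.033, 0.025, 0.031, 0.028]
q, i2 = i_squared(effects, variances)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

With these consistent results Q falls below its degrees of freedom, so I² is 0%: the variation between studies is no more than would be expected by chance. Conflicting results would push Q well above df and drive I² toward the moderate or substantial range.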