HEM Notes: Systematic Reviews and Meta-Analysis

Summary

This document provides an overview of systematic reviews and meta-analysis, focusing on key characteristics, benefits, and methodologies. It details the process of selecting, evaluating, and synthesizing research evidence. The document also discusses different types of analyses used in health economics, including the calculation of odds ratios and relative risk.

Full Transcript

**Literature review:** Important features of a (traditional) literature review are that studies are mostly selected non-systematically and/or subjectively, that this selection process is often not clearly described, that the results are often only described qualitatively, and that a quality assessment of the included studies is usually not performed.

**Systematic literature review:** Important features of a systematic review are that studies are selected in a systematic and/or objective way and that this process of selecting and including studies is clearly described. The results can be described both qualitatively and quantitatively. In addition, a systematic review often incorporates a quality assessment of the included studies. Systematic reviews are therefore an essential source of evidence: in health economic modelling, it is preferred to use evidence from systematic reviews in your model rather than other sources of information, such as results from individual studies.

**The key characteristics of a systematic review can be defined as:**

- a clearly stated set of objectives with pre-defined eligibility criteria for studies;
- an explicit, reproducible methodology;
- a systematic search that attempts to identify all studies that would meet the eligibility criteria;
- an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias;
- a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

![](media/image2.png)

**As a systematic review summarizes results from individual studies, some of its main benefits are:**

1. Reduction in information: a systematic literature review summarizes the results from multiple studies into one single study, reducing the amount of information.
2. Efficient to perform: a systematic literature review is (generally) less time-consuming than a (clinical) study and prevents unnecessary research when sufficient evidence may already be available.
3. Less costly: conducting a systematic literature review is (generally) less costly than performing a clinical study.
4. Improved generalizability: the results are often generalizable to broader patient populations, as evidence from multiple smaller populations (i.e. multiple studies) is summarized.
5. Can create subgroups: since the overall sample size is larger when results from multiple studies are combined, subgroup analyses are often possible.
6. Higher accuracy: because of the larger sample size compared to individual studies, the results of a systematic review are more accurate than the results of individual studies.

**Meta-analysis, a specific type of systematic review**

**Meta-analysis:** A meta-analysis is a quantitative assessment of the outcome parameter of interest; not every systematic literature review contains a meta-analysis. In a meta-analysis, results from individual studies are combined into one overall result.

**Prerequisites for pooling individual studies into a meta-analysis are:**

- The studies should focus on (approximately) the same intervention, the same patients, and the same outcome measures.
- Studies should have acceptable methodological quality.

(When studies are more heterogeneous, approaches such as a random effects method or a stratified method can be considered; see the discussion of heterogeneity below.)
**METHODS FOR META-ANALYSIS**

Three statistical methods to remember:

1. **Inverse variance-weighted method (IV):** suitable for all kinds of outcome measures (OR, RR, RD, MD, etc.), for both binary (dichotomous) and continuous data. ![](media/image4.png)
2. **Mantel-Haenszel (MH):** used when data are sparse, both in terms of low event rates and small trials. Only for binary (dichotomous) outcomes!
3. **Peto:** only suitable for odds ratios, and suitable when the event of interest is rare. Peto's method fails when treatment effects are very large.

**FIXED EFFECTS MODEL**

The most commonly used statistical methods for meta-analysis are based on a fixed effects model. Assumption: the true treatment effect is fixed, i.e. it has the same value in each study, and the differences between study results are solely due to chance. So, in the fixed effects model it is assumed that all studies estimate the same effect size. The fixed effects assumption can be tested with a test of homogeneity.

**RANDOM EFFECTS MODEL**

In the random effects model it is assumed that effect sizes may differ between studies: differences between study results are not purely due to chance / random error. In other words, random effects models incorporate more uncertainty. DerSimonian and Laird is a simple method for incorporating heterogeneity. Consequently, the confidence intervals of random effects models are wider than those of fixed effects models.

Note the importance of using a **logarithmic scale** when plotting **ratios** (such as **odds ratios** or **risk ratios**) in studies and meta-analyses, especially on forest plots.

**Key concepts:**

1. **Ratios are asymmetrical:**
   - **0.0 -- 1.0**: ratios less than 1 indicate a **decrease** in the outcome or risk (e.g., a treatment that reduces the likelihood of an event).
   - **1.0**: a ratio of **1** indicates **no change** or no difference between groups (e.g., treatment and control).
   - **>1.0**: ratios greater than 1 indicate an **increase** in the outcome or risk (e.g., a higher likelihood of the event happening in the treatment group).

**Common issues of meta-analyses**

Three common issues in performing a meta-analysis are:

1. Heterogeneity
2. Difference in methodological quality
3. Publication bias

**1. Heterogeneity:** Ideally, the studies whose results are combined in a meta-analysis would all be undertaken in the same way, using the same experimental design and protocols. In that situation, differences between the observed outcomes would only be due to measurement error, and the studies would be homogeneous. In contrast, study heterogeneity refers to variability in study outcomes that goes beyond what would be expected (or could be explained) by measurement error alone.

*How to test for heterogeneity?* Different measures of heterogeneity can be used, but the most commonly used metric is I². This metric describes the percentage of the variability in effect estimates that is due to heterogeneity rather than sampling error (chance):

- 0% to 40%: not important
- 30% to 60%: moderate heterogeneity
- 50% to 90%: substantial heterogeneity
- 75% to 100%: considerable heterogeneity

If statistical heterogeneity is identified when you aim to pool studies into a meta-analysis, there are several potential solutions. For example, you could decide that the studies are not suitable for pooling into a meta-analysis, or you could conduct a random effects meta-analysis to incorporate the heterogeneity among studies.
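To make the inverse variance-weighted method and the I² metric concrete, here is a minimal R sketch of a fixed effects meta-analysis; the log odds ratios and standard errors are hypothetical, not taken from the notes.

```r
# Fixed-effect inverse-variance pooling of log odds ratios,
# with Cochran's Q and the I^2 heterogeneity statistic.
log_or <- c(0.45, 0.20, 0.65, 0.35)  # hypothetical study estimates (log OR)
se     <- c(0.20, 0.15, 0.30, 0.25)  # hypothetical standard errors

w         <- 1 / se^2                 # inverse-variance weights
pooled    <- sum(w * log_or) / sum(w) # pooled log OR
pooled_se <- sqrt(1 / sum(w))
ci        <- pooled + c(-1.96, 1.96) * pooled_se  # 95% CI on the log scale

Q  <- sum(w * (log_or - pooled)^2)                  # homogeneity test statistic
I2 <- max(0, (Q - (length(log_or) - 1)) / Q) * 100  # % variability beyond chance

exp(c(OR = pooled, lower = ci[1], upper = ci[2]))   # back-transform to OR scale
```

Pooling on the log scale and back-transforming at the end is what makes the ratio symmetric around 1, which is also why forest plots use a logarithmic axis.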
**2. Difference in methodological quality:** The studies included in a meta-analysis may differ in methodological quality. Consequently, you may be in doubt about whether or not to include a study in your analysis, for example when the study is of mediocre quality, or when it makes the comparison between studies more heterogeneous (for example because one study has a somewhat different design than the other studies in your meta-analysis). In such situations, a sensitivity analysis can be used to determine how robust the effect estimate is:

- One reason to perform a sensitivity analysis is to consider the impact of a single study. For example, a study of mediocre quality can be removed from the pooled analysis, and the remaining studies pooled again. Then consider whether the results are comparable; if not, consider pooling and reporting the studies separately.
- A sensitivity analysis can also be used to examine the effect of an intervention when only studies that fulfill certain methodological criteria are included. If the result differs strongly from the original result (in which all studies were pooled), it is recommended to exclude the methodologically weak studies from the combined analysis and include only the methodologically better studies.
- Alternatively, you could (for example) exclude all studies of a certain type, for example exclude case-control studies and keep all RCTs.

**3. Publication bias:** Publication bias refers to the fact that research studies with positive findings are more likely to be published than studies with negative findings.

**Network meta-analysis** is an extension of standard pairwise meta-analysis that includes multiple pairwise comparisons across a range of interventions. In a traditional meta-analysis, all included studies compare the same intervention with the same comparator. Network meta-analysis is a technique for comparing multiple treatments (three or more) simultaneously in a single analysis, by combining direct and indirect evidence within a network of clinical studies. Subtypes:

- Mixed treatment comparison meta-analysis (MTC): at least one closed loop in the network.
- Indirect treatment comparison (ITC): treatments compared via a common anchor.

![](media/image6.png) TRANSITIVITY

**Study designs:** Elements of **PICO**: Patient, Intervention, Comparison, Outcome.

**Observational study designs**

- Case-control study
- Cross-sectional study
- Cohort study / follow-up study

**Characteristics of an observational study** (advantages and disadvantages)

- Allows measuring unintended effects and identifying side effects
- Low costs
- High susceptibility to confounding
- High susceptibility to bias
- Reflects daily clinical practice
- Observes but does not intervene

**Experimental study designs:** randomized controlled trial (RCT).
**Characteristics of an experimental study** (advantages and disadvantages)

- Measures intended effects
- Exposure is allocated by the researcher
- There is uncertainty about treatment effectiveness
- Less susceptible to bias
- Less susceptible to confounding
- More costly
- Less accurate reflection of clinical practice
- Example study: the impact of statins (medication) on cardiovascular events

**The gold standard of experimental study designs is the randomized controlled trial (RCT).**

![](media/image8.png)

**RCT:** Always has a treatment/experimental group (new treatment) and a control group (placebo). For validity reasons, both groups should be similar, so that any difference in outcomes can be attributed to the treatment. The study needs a sufficient number of participants (larger numbers give better representation).

- Single-blinded study: only the participants are blinded; they are not told whether they receive the treatment or the placebo. ![](media/image10.png)
- Double-blinded study: both the participants and the clinicians are blinded to who receives the treatment and who receives the placebo. This ensures clinicians act objectively.

**Intention-to-treat vs. per-protocol analysis:** Next, we briefly address how results from experimental studies are analyzed. After assigning patients to the intervention and control groups and administering the intervention, the data are collected and need to be analyzed. This analysis can be performed in two ways:

1. **Intention-to-treat analysis:** once a member, always a member. For example, if a patient is randomized to receive the intervention but has to quit treatment prematurely, for example because of adverse events, the patient nevertheless remains in the intervention group for the analysis.
2. **Per-protocol analysis:** only includes individuals who followed the treatment entirely, excluding patients who had major deviations from the study protocol. Thus, patients should have successfully received and completed the treatment they were assigned to. In this type of analysis, we can no longer speak of randomized treatment, because the two groups have become incomparable.

The intention-to-treat analysis is the most commonly used and recommended principle.

Notes on randomization in an RCT:

1. Allocation of interventions is decided by chance.
2. The groups only differ in the intervention at the time of randomization.
3. Ethical issues may prevent randomization.
4. The groups are NOT necessarily comparable during the entire study.

**Cross-sectional study:** Data are collected at only one point in time. High risk of bias.

**Cohort study (follow-up, longitudinal):** Selection is based on exposure: participants are grouped by a common characteristic, namely their exposure. Usually prospective.

1. Suitable for studying rare exposures
2. Possible to study more than one outcome of interest
3. In prospective follow-up studies, the exposure is measured well
4. Absolute risks can be calculated
5. Inefficient for studying rare diseases
6. Prospective follow-up studies are costly and time-consuming

**Case-control study:** Selection is based on outcome; there are two groups, cases and controls. Retrospective only.

1. Case-control studies are efficient for studying rare diseases
2. Case-control studies are relatively fast and cheap to perform
3. Case-control studies can be used to study more than one exposure
4. The outcome (e.g. disease) of interest is measured well
5. A case-control study is NOT efficient for studying rare exposures
6. The quality of the data about the exposure may be limited
7. Absolute risks cannot be calculated
8. Case-control studies are at risk of bias and confounding
9. Also referred to as a case-referent study

**Association measures:** To determine whether a certain exposure (e.g. smoking) is associated with experiencing a disease or outcome of interest (e.g. lung cancer), you need to calculate an association measure. An association measure is a *measure of the strength of the association between a factor/exposure and a disease/outcome.*

![](media/image12.png)

**Relative risk (cohort studies only):** To calculate the risk associated with an exposure, we compare the risk among the exposed to the risk among the unexposed. It uses probabilities. Be aware that there are two types of relative risk (RR): the risk ratio and the rate ratio.

![](media/image14.png)

Interpretation of the RR:

- RR = 1: no increased risk, so no association
- RR > 1: increased risk, so a positive association
- RR < 1: decreased risk, so a negative association

The RR is reported alongside a p-value; if p > 0.05, the result is not statistically significant, regardless of how small or large the RR is.

![](media/image16.png)

The RR cannot be calculated in a case-control study! Because the numbers of cases and controls are fixed by design (e.g. 50% cases and 50% controls are selected), absolute risks cannot be estimated; therefore the odds ratio is used instead.

**Odds ratio (both cohort and case-control studies):**

![](media/image18.png)

Interpretation of the OR:

- OR = 1: exposure not associated with the disease
- OR > 1: exposure positively associated with the disease
- OR < 1: exposure negatively associated with the disease

Note that **rate ratios** and **risk ratios** tend to be numerically similar for rare diseases. Both are called a **relative risk**, but specify which measure of effect you are using!

![](media/image20.png)

Based on the information provided:

- Cases who visited: 179
- Total cases: 288
- Controls who visited: 56
- Total controls: 459

First, calculate the number of cases and controls who did **not** visit:

- Cases who did not visit: 288 − 179 = 109
- Controls who did not visit: 459 − 56 = 403

Now apply the formula: OR = (179 × 403) / (109 × 56) ≈ 11.8.
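A minimal R sketch of these association measures: it reproduces the odds ratio from the worked example above, while the cohort counts used for the risk ratio are hypothetical (a risk ratio cannot be computed from the case-control data).

```r
# Odds ratio for the case-control worked example above
# (exposure = visiting; 2x2 table of cases/controls x visited/not visited).
cases_exp   <- 179   # cases who visited
cases_unexp <- 109   # cases who did not visit (288 - 179)
ctrl_exp    <- 56    # controls who visited
ctrl_unexp  <- 403   # controls who did not visit (459 - 56)

or <- (cases_exp * ctrl_unexp) / (cases_unexp * ctrl_exp)
or   # ~ 11.8: exposure positively associated with being a case

# The risk ratio requires cohort data, where absolute risks can be estimated.
# Hypothetical cohort counts (events / group size):
events_exp <- 30; n_exp <- 100       # exposed group (hypothetical)
events_unexp <- 10; n_unexp <- 100   # unexposed group (hypothetical)

rr <- (events_exp / n_exp) / (events_unexp / n_unexp)
rr   # = 3: the exposed have three times the risk of the unexposed
```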
**Bias:** In short, bias refers to any systematic error that results in an incorrect conclusion about the true effect of an exposure on the outcome of interest. Bias undermines the internal validity of research; bias in research means deviation from the truth.

1. Selection bias: may occur when the study groups are NOT similar in all important aspects. Examples: sampling bias, attrition bias, volunteer bias.
2. Information bias: misclassification or incorrectly reported information. Examples: reporting bias, recall bias, observer bias.

**Confounding variable:** Confounding refers to real but misleading associations; it involves the mixing or blurring of effects. A confounding variable is related to both the cause AND the effect under study. Stratification can be used to decrease the impact of confounding variables on your research.

**Internal and external validity:** Internal and external validity are two commonly used concepts that reflect whether the results of a study are trustworthy and meaningful. Internal validity refers to whether the conclusions drawn about the cause-and-effect relationship identified within a study are valid: no variables other than the independent variable should have changed. External validity reflects the extent to which study results can be applied or generalized to another context: the sample should represent the wider population, the sample should be large, and exclusion criteria should relate to the research question.

**Sensitivity and specificity:** In both observational and experimental study designs, the aim may be to evaluate how accurately a diagnostic test detects the presence or absence of a certain condition. Such studies are called diagnostic accuracy studies. The primary outcome measures in these studies are sensitivity, specificity, the positive predictive value (PPV), and the negative predictive value (NPV).

**Sensitivity:**

- The **ability of a test to correctly identify individuals with the condition** (true positives).
- A test with **high sensitivity** has few **false negatives**, meaning it is good at catching people who truly have the condition.
- Sensitivity = TP / (TP + FN)

**Specificity:**

- The **ability of a test to correctly identify individuals without the condition** (true negatives). It answers the question: "Out of all the people who do not have the condition, how many did the test correctly identify as negative?"
- A test with **high specificity** has few **false positives**, meaning it is good at correctly ruling out people who do not have the condition.
- Specificity = TN / (TN + FP)

**Positive predictive value (PPV):**

- **How likely it is that someone with a positive test result actually has the condition.**
- PPV = TP / (TP + FP)

**Negative predictive value (NPV):**

- **How likely it is that someone with a negative test result actually does not have the condition.**
- NPV = TN / (TN + FN)

![](media/image24.png)

- High-sensitivity tests (few false negatives) are used for screening; high-specificity tests (few false positives) are used for confirmation.
- PPV and NPV depend on the prevalence of the disease (how much disease there is in the population).
- Sensitivity concerns the percentage of ill people correctly identified; specificity concerns the percentage of not-ill people correctly identified.
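A minimal R sketch computing all four measures from a hypothetical 2×2 test table (the counts are invented for illustration; prevalence here is 10%):

```r
# Diagnostic accuracy measures from a hypothetical 2x2 test table.
tp <- 90;  fn <- 10    # people with the condition:    test + / test -
fp <- 45;  tn <- 855   # people without the condition: test + / test -

sensitivity <- tp / (tp + fn)  # 0.90: few false negatives
specificity <- tn / (tn + fp)  # 0.95: few false positives
ppv <- tp / (tp + fp)          # ~0.67: depends on prevalence
npv <- tn / (tn + fn)          # ~0.99

c(sens = sensitivity, spec = specificity, ppv = ppv, npv = npv)
```

Rerunning this with a lower prevalence (fewer diseased people, same sensitivity and specificity) drives the PPV down, which is why PPV and NPV cannot be quoted without stating the prevalence.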
**HEALTH ECONOMIC EVALUATIONS**

A health economic evaluation always involves a comparative analysis of two or more alternative investment possibilities (called interventions, strategies, or policies) in order to perform an incremental (cost-effectiveness) analysis. The goal is to systematically identify, measure, value, and compare the costs and effects (consequences) of the alternative policies/interventions. These are the fundamental principles of health economics.

**TYPES OF HEALTHCARE DECISIONS:**

1. Market approval decisions - EU regulation
2. Market access and pricing decisions for new pharmaceuticals - companies
3. Reimbursement of new pharmaceuticals and medical devices - minister of health, national insurance
4. Physicians deciding about medical treatments - physician/patient

![](media/image26.png)

**HIERARCHY OF EVIDENCE DEVELOPMENT**

- IV: Expert opinions
- III: Non-experimental studies
- IIb: Quasi-experimental studies
- IIa: Controlled studies without randomization
- Ib: Well-performed RCTs
- Ia: Systematic reviews of well-performed RCTs

**MARKET AUTHORIZATION**

![](media/image28.png)

**DEVELOPMENT OF A VALUE DOSSIER**

1. Collecting all evidence (systematic review)
2. Synthesizing the evidence: meta-analysis, health economic model
3. Interpreting the health economic outcomes

**REQUEST REIMBURSEMENT**

Key considerations: budget, affordability, pricing, access, return on investment.

1. Why do we often allow drugs on the market, and reimburse drugs, for which effectiveness is uncertain? We do not want to withhold potential benefits from patients, and there are national interests (the pharmaceutical industry).
2. Which aspects/outcomes are typically most uncertain when the cost-effectiveness of a drug is first assessed? Long-term effectiveness, and effectiveness in the target population.
3. When is an innovation or drug too expensive? When it is unaffordable (the budget impact is too high), or when the health benefits do not outweigh the costs.

![](media/image30.png) ![](media/image32.png)

A **Markov model** is a mathematical model used to simulate the progression of individuals or systems through various states over time.

**Key components of the model:**

1. **Mutually exclusive states:** you can only be in one state at a time.
2. **Cycle:** individuals move between health states at each time period.
3. **Transition probabilities:** these define the likelihood of moving from one state to another in a given cycle. For example, the probability of transitioning from "Healthy" to "Sick" might be 0.1 (10%) per year, while the probability of remaining "Healthy" could be 0.9 (90%).
4. **Memoryless property:** the probability of transitioning to another state depends only on the current state, not on the patient's prior history.

**Example: a simple Markov model for a disease**

Let's model a simple chronic disease where individuals can be **Healthy**, **Sick**, or **Dead**.

**1. Health states:**

- Healthy
- Sick
- Dead (absorbing state: once in this state, no transitions occur out of it)

**2. Transition probabilities:**

- From **Healthy**: 90% chance to remain Healthy, 10% chance to become Sick, 0% chance to go directly to Dead.
- From **Sick**: 80% chance to remain Sick, 10% chance to recover to Healthy, 10% chance to transition to Dead.
- From **Dead**: 100% chance to stay in Dead (absorbing state).

**3. Cycle length:** assume each cycle represents one year.

**4. Costs and utilities:**

- Healthy: cost = €0, utility = 1.0 (perfect health)
- Sick: cost = €10,000, utility = 0.5 (reduced quality of life)
- Dead: cost = €0, utility = 0.0

A cohort simulation of this example is sketched below.
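A minimal R sketch of a cohort (trace) simulation for this example, using the transition probabilities, costs, and utilities listed above; the 10-cycle horizon is an assumption, and discounting and half-cycle correction are deliberately omitted to keep the sketch short.

```r
# Markov cohort simulation of the Healthy / Sick / Dead example above.
states <- c("Healthy", "Sick", "Dead")

# Transition probability matrix (rows = from, columns = to), per 1-year cycle
P <- matrix(c(0.9, 0.1, 0.0,    # from Healthy
              0.1, 0.8, 0.1,    # from Sick
              0.0, 0.0, 1.0),   # from Dead (absorbing state)
            nrow = 3, byrow = TRUE, dimnames = list(states, states))

cost    <- c(Healthy = 0,   Sick = 10000, Dead = 0)   # per cycle
utility <- c(Healthy = 1.0, Sick = 0.5,   Dead = 0.0) # per cycle

n_cycles <- 10  # assumed time horizon
trace <- matrix(NA, nrow = n_cycles + 1, ncol = 3,
                dimnames = list(0:n_cycles, states))
trace[1, ] <- c(1, 0, 0)   # whole cohort starts Healthy

for (t in 1:n_cycles) {
  trace[t + 1, ] <- trace[t, ] %*% P   # one Markov cycle
}

total_cost  <- sum(trace %*% cost)     # undiscounted, for illustration only
total_qalys <- sum(trace %*% utility)
trace
```

Each row of `trace` gives the share of the cohort in each state at the start of a cycle, which is exactly what gets multiplied by costs and utilities in a health economic model.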
**MARKOV MODEL: EVIDENCE SYNTHESIS**

The evidence in our model typically comes from multiple sources. The effect of new tests or treatments often acts on the transition probabilities in the model: a relative risk of death, a relative risk of complications, an increased chance of treatment, a decreased risk of progression, and so on. The absolute probabilities and rates therefore need to be synthesized with these relative effects in our model.

**Rates ≠ probabilities**

- Probabilities may be reported for time periods of days, months, or years.
- Rates may also be reported, for example hazard rates from survival models.

Rates (incidence rates, hazard rates, ...):

- Instantaneous measure
- Range 0 to infinity
- Rate = number of events / person-time at risk

Probabilities (annual mortality risk, ...):

- Likelihood of an event (for a single individual) in a given time period
- Range 0 to 1
- Probability = number of events / persons at risk

Probabilities are hard to manipulate directly; manipulating rates is much more convenient. Rates can be added and subtracted (over the same time period), and rates can be divided and multiplied. You should not do this with probabilities (above roughly 0.10 the error becomes substantial). To adjust or convert probabilities:

1. Transform the probabilities to rates
2. Adjust the rates
3. Retransform the rates back into probabilities
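These three steps rest on the standard constant-rate conversion formulas, r = −ln(1 − p) / t and p = 1 − e^(−rt). A minimal R sketch with hypothetical numbers (the 0.20 annual probability and the relative effect of 0.5 are invented for illustration):

```r
# Convert between probabilities and rates (constant-rate assumption),
# following the three steps above: prob -> rate, adjust, rate -> prob.
prob_to_rate <- function(p, t = 1) -log(1 - p) / t
rate_to_prob <- function(r, t = 1) 1 - exp(-r * t)

p_annual <- 0.20                 # hypothetical annual event probability
r     <- prob_to_rate(p_annual)  # 1. transform the probability to a rate
r_adj <- r * 0.5                 # 2. adjust the rate, e.g. apply a relative effect
p_adj <- rate_to_prob(r_adj)     # 3. back to a probability (~0.106, not 0.10!)

# The same machinery rescales an annual probability to a monthly cycle:
p_monthly <- rate_to_prob(r, t = 1 / 12)
```

Note that halving the rate does not halve the probability (0.106 rather than 0.10 here), which is exactly why probabilities above ~0.10 should not be multiplied directly.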
**Deterministic Decision Analysis (DDA)**

**Definition:** In a deterministic analysis, we define mean values for each input parameter (probabilities, rates, relative risks, odds ratios, costs, utilities), assuming that we know the exact value of each input parameter. As a result, the final output of the deterministic analysis consists of one incremental cost-effectiveness ratio (ICER).

**Advantages:**

- Easy to implement and interpret.
- Useful for getting an initial understanding of the model's behavior.

**Disadvantages:**

- Does not capture the **true uncertainty** in the model parameters (real-world data usually have variability and are not perfectly certain).
- Results may be overly simplistic, and decision-makers may miss important risk or variability factors.

**Probabilistic Sensitivity Analysis (PSA)**

**Definition:** Probabilistic sensitivity analysis, in contrast, recognizes that there is **uncertainty** in the model parameters. It assigns **probability distributions** to key inputs (e.g., costs, utilities, transition probabilities) instead of fixed values, and then uses **Monte Carlo simulations** to model the variability and uncertainty in the outcomes.

**Advantages:**

- **More realistic:** reflects the uncertainty in real-world data and generates more robust insights about risk and variability.
- **Uncertainty captured:** it accounts for all uncertainties at once, giving a better understanding of the range of possible outcomes.
- **Confidence intervals:** PSA provides confidence intervals or probabilities associated with outcomes, which are useful for decision-makers to assess risk.

**Disadvantages:**

- **Complexity:** requires more statistical knowledge and computational power, as it involves running many simulations.
- **Interpretation:** the outputs may be more complex and require careful interpretation, especially for non-technical stakeholders.

Distributions commonly used in a PSA:

- **rgamma():** for parameters that can take values from 0 to infinity. These can be used for cost parameters.
- **rbeta():** for parameters that can take values between 0 and 1. These can be used for utilities, probabilities, and sometimes rates.
- **rlnorm():** for parameters that follow a lognormal distribution. These can be used for hazard ratios, risk ratios, and odds ratios.
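A minimal R sketch of drawing PSA samples with these three functions; the means and standard errors are hypothetical, and the method-of-moments formulas used to turn them into distribution parameters are one common choice, not prescribed by these notes.

```r
# Drawing PSA samples with the distributions listed above.
set.seed(42)
n <- 1000   # number of Monte Carlo draws

# Gamma for a cost parameter (range 0 to infinity), via method of moments
cost_mean <- 10000; cost_se <- 2000        # hypothetical
shape <- (cost_mean / cost_se)^2
rate  <- cost_mean / cost_se^2
cost_sick <- rgamma(n, shape = shape, rate = rate)

# Beta for a utility (range 0 to 1), via method of moments
u_mean <- 0.5; u_se <- 0.05                # hypothetical
k     <- u_mean * (1 - u_mean) / u_se^2 - 1
alpha <- u_mean * k
beta  <- (1 - u_mean) * k
utility_sick <- rbeta(n, shape1 = alpha, shape2 = beta)

# Lognormal for a relative risk: parameters are given on the log scale
rr <- rlnorm(n, meanlog = log(0.75), sdlog = 0.10)   # hypothetical RR ~ 0.75

summary(cost_sick); summary(utility_sick); summary(rr)
```

Feeding each of the `n` draws through the Markov model yields `n` cost/QALY pairs, from which the cost-effectiveness plane and acceptability curves below are built.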
![](media/image34.png)

Strategy AB has the highest expected costs and QALYs; standard of care (SoC) has the lowest expected costs and QALYs. Strategy B is both more effective and less costly than strategy A, so strategy A is a strongly dominated strategy.

![](media/image36.png)

At WTP thresholds below $80,000 per QALY gained, strategy SoC has both the highest probability of being cost-effective and the highest expected NMB. This switches to strategy B for WTP thresholds between $80,000 and $120,000 per QALY gained, and to strategy AB for WTP thresholds greater than or equal to $120,000 per QALY gained.

**Willingness-to-pay (WTP) and reimbursement**

**Question:** What is the likelihood that a new cancer treatment costing €100,000 per patient and providing 0.5 QALYs will be considered cost-effective at a WTP threshold of €200,000/QALY?

**Calculation:**

- Incremental cost-effectiveness ratio (ICER) = 100,000 / 0.5 = €200,000/QALY
- The ICER is exactly equal to the WTP threshold (€200,000/QALY).
- **Answer: C) 100%**

![](media/image38.png)

Let's calculate how many patients remain in the "Stable" state after two cycles.

**Transition probabilities:**

- Probability of staying in the "Stable" state = 1 − (0.3 + 0.1) = 0.6
- This means that in each cycle, there is a 60% chance that a patient remains in the "Stable" state.

**After the first cycle:**

- Initial number of patients in "Stable" = 300.
- After the first cycle, the number of patients remaining in the "Stable" state is 300 × 0.6 = 180.

**After the second cycle:**

- Now 180 patients are in the "Stable" state, and after the second cycle 60% of these will remain: 180 × 0.6 = 108 patients remaining in "Stable".

**Interpreting the model:**

- **Pasta:**
  - Probability of staying in the "Pasta" state = **0.50**.
  - Probability of transitioning from "Pasta" to "Burrito" = **0.30**.
  - Probability of transitioning from "Pasta" to "Dahl" = **0.20**.
- **Burrito:**
  - Probability of staying in the "Burrito" state = **0.50**.
  - Probability of transitioning from "Burrito" to "Pasta" = **0.25**.
  - Probability of transitioning from "Burrito" to "Dahl" = **0.25**.
- **Dahl:**
  - Probability of staying in the "Dahl" state = **0.20**.
  - Probability of transitioning from "Dahl" to "Pasta" = **0.40**.
  - Probability of transitioning from "Dahl" to "Burrito" = **0.40**.
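A minimal R sketch of this meal-choice Markov chain: it assembles the transition matrix from the probabilities above, checks that each row sums to 1, and (as an assumed starting point) evolves a cohort that begins in "Pasta" for two cycles.

```r
# Transition matrix for the meal-choice Markov chain above
# (rows = current state, columns = next state).
meals <- c("Pasta", "Burrito", "Dahl")
P <- matrix(c(0.50, 0.30, 0.20,    # from Pasta
              0.25, 0.50, 0.25,    # from Burrito
              0.40, 0.40, 0.20),   # from Dahl
            nrow = 3, byrow = TRUE, dimnames = list(meals, meals))

rowSums(P)   # each row must sum to 1 for a valid transition matrix

# Distribution after two cycles, assuming everyone starts with Pasta:
start <- c(1, 0, 0)
start %*% P %*% P
```

The same row-vector-times-matrix step underlies the "Stable state after two cycles" calculation above: 300 × 0.6 × 0.6 is just the diagonal entry applied twice.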
