Integrative Methods Midterm - Heshynee Mae Aiko Tagaro PDF

Summary

This document appears to be a midterm presentation or study material on integrative methods in healthcare. It covers topics such as primary and secondary data methods, systematic literature reviews, meta-analysis, and the assessment of evidence quality, and it introduces modeling techniques such as Markov models and Monte Carlo simulation.

Full Transcript

MIDTERM: Integrative Methods
Heshynee Mae Aiko Tagaro
PCARE 3208

Primary Data Methods
Primary data methods involve collection of original data, ranging from more scientifically rigorous approaches for determining the causal effect of health technologies, such as randomized controlled trials (RCTs), to less rigorous ones, such as case series.

Primary data methods are categorized based on multiple attributes or dimensions:
– Comparative vs. non-comparative
– Separate (i.e., external) control group vs. no separate (i.e., internal) control group
– Participants (study populations/groups) defined by a health outcome vs. by having been exposed to, or received or been assigned, an intervention
– Prospective vs. retrospective
– Interventional vs. observational
– Experimental vs. non-experimental
– Random assignment vs. non-random assignment of patients to treatment and control groups

Types of Validity
Whether they are experimental or non-experimental in design, studies vary in their ability to produce valid findings. Validity refers to how well a study or data collection instrument measures what it is intended to measure.

Internal validity refers to the extent to which the results of a study accurately represent the causal relationship between an intervention and an outcome in the particular circumstances of that study. External validity refers to the extent to which the results of a study conducted under particular circumstances can be generalized (or are applicable) to other circumstances.

Assessing the Quality of Primary Data Studies
To assess internal validity:
– Prospective, i.e., following a study population over time as it receives an intervention or exposure and experiences outcomes, rather than retrospective design
– Experimental rather than observational
– Controlled, i.e., with one or more comparison groups, rather than uncontrolled
– Randomized assignment of patients to intervention and control groups
– Blinding of patients, clinicians, and investigators as to patient assignment to intervention and control groups

To assess external validity:
– Flexible entry criteria to identify/enroll a patient population that is representative of the patient diversity likely to be offered the intervention in practice
– Dosing, regimen, technique, and delivery of the intervention consistent with anticipated practice

Integrative Methods
Integrative methods are secondary or synthesis methods: they involve combining data or information from existing sources, including from primary data studies. Most HTA programs rely on integrative methods (especially systematic reviews), particularly to formulate findings based on available evidence from primary data studies that are identified through systematic literature searches.
Methods used to combine or integrate data from primary sources include the following:
– Systematic literature review
– Meta-analysis
– Modeling (e.g., decision trees, state-transition models, infectious disease models)
– Group judgment (“consensus development”)
– Unstructured literature review
– Expert opinion

Four major types of integrative methods:
– Systematic literature reviews
– Meta-analysis
– Decision analysis
– Consensus development

Systematic Literature Reviews
A systematic literature review is a form of structured literature review that addresses one or more evidence questions (or key questions) that are formulated to be answered by analysis of evidence. It often includes a meta-analysis and involves:
– Objective means of searching the literature
– Applying predetermined inclusion and exclusion criteria to this literature
– Critically appraising the relevant literature
– Extraction and synthesis of data from the evidence base to formulate answers to key questions

“PICOTS” Format
Population: e.g., condition, disease severity/stage, comorbidities, risk factors, demographics
Intervention: e.g., technology type, regimen/dosage/frequency, technique/method of administration
Comparator: e.g., placebo, usual/standard care, active control
Outcomes: e.g., morbidity, mortality, quality of life, adverse events
Timing: e.g., duration/intervals of follow-up
Setting: e.g., primary, inpatient, specialty, home care

Working with Best Evidence
Evidence of internal validity should be complemented by evidence of external validity wherever appropriate and feasible to demonstrate that a technology works in real-world practice. The “best evidence” may be the best available evidence, i.e., the best evidence that is currently available and relevant for the evidence questions of interest.

Meta-Analysis
Meta-analysis refers to a group of statistical methods for combining the data or results of multiple studies to obtain a quantitative estimate of the overall effect of a particular technology on a defined outcome.

Evidence collected for HTA often includes studies with insufficient statistical power to detect any true treatment effects. By combining the results of multiple studies, a meta-analysis may have sufficient statistical power to detect a true treatment effect if one exists, or at least narrow the confidence interval around the mean treatment effect. A minimal example of this pooling is sketched below.
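To make the pooling idea concrete, here is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis in Python. It is an illustration only: the study effect estimates and standard errors are invented, not taken from the slides, and a real analysis would usually also examine heterogeneity and consider a random-effects model.

```python
# Illustrative fixed-effect (inverse-variance) meta-analysis.
# The effect sizes and standard errors below are hypothetical numbers
# used only to show how pooling narrows the confidence interval
# relative to the individual studies.
from math import sqrt

# (effect estimate, standard error) for three hypothetical small studies,
# e.g., mean differences on some outcome scale
studies = [(0.30, 0.25), (0.45, 0.30), (0.20, 0.28)]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))                  # SE of the pooled estimate

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
for (est, se), w in zip(studies, weights):
    print(f"  study effect {est:.2f}, CI half-width {1.96 * se:.2f}, weight {w:.1f}")
```

Because the pooled standard error shrinks as studies are combined, the pooled confidence interval is narrower than that of any single study, which is the statistical-power point made above.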
Guidelines for Reporting Primary and Secondary Research
The conduct of systematic reviews, meta-analyses, and related integrative studies requires systematic examination of the reports of primary data studies as well as other integrative methods.

Instruments for Assessing the Reporting of Research
– AMSTAR (Assessment of Multiple Systematic Reviews) (Shea 2009)
– CHEERS (Consolidated Health Economic Evaluation Reporting Standards) (Husereau 2013)
– CONSORT (Consolidated Standards of Reporting Trials) (Turner 2012)
– GRACE (Good ReseArch for Comparative Effectiveness) (Dreyer 2014)
– MOOSE (Meta-analysis of Observational Studies in Epidemiology) (Stroup 2000)
– PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) (Moher 2009)
– QUOROM (Quality Of Reporting Of Meta-analyses) (Moher 1999)
– STARD (Standards for Reporting of Diagnostic Accuracy) (Bossuyt 2003)
– STROBE (Strengthening the Reporting of OBservational Studies in Epidemiology) (von Elm 2008)
– TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) (Des Jarlais 2004)

Assessment of Multiple Systematic Reviews (AMSTAR)
AMSTAR is one of the tools developed to assess the quality of systematic reviews; it was derived using the nominal group technique and factor analysis of previous instruments. Its eleven items are:
1. Was an ‘a priori’ design provided?
2. Was there duplicate study selection and data extraction?
3. Was a comprehensive literature search performed?
4. Was the status of publication (e.g., grey literature) used as an inclusion criterion?
5. Was a list of studies (included and excluded) provided?
6. Were the characteristics of the included studies provided?
7. Was the scientific quality of the included studies assessed and documented?
8. Was the scientific quality of the included studies used appropriately in formulating conclusions?
9. Were the methods used to combine the findings of studies appropriate?
10. Was the likelihood of publication bias assessed?
11. Was the conflict of interest stated?

Modeling
Quantitative modeling is used to evaluate the clinical and economic effects of health care interventions. Models are often used to answer “What if?” questions: they are used to represent (or simulate) health care processes or decisions and their impacts under conditions of uncertainty, such as in the absence of actual data or when it is not possible to collect data on all potential conditions, decisions, and outcomes of interest.

The high cost and long duration of large RCTs and other clinical studies also contribute to the interest in developing alternative methods to collect, integrate, and analyze data to answer questions about the impacts of alternative health care interventions. Indeed, some advanced types of modeling are being used to simulate clinical trials.

Among the main types of techniques used in quantitative modeling are:
– Decision analysis (*midterm)
– Markov modeling
– Monte Carlo simulation

Markov Models
Markov models are analytical frameworks that represent disease processes evolving over time. They are suited to modeling the progression of chronic disease, as this type of model can handle disease recurrence and estimate long-term costs and life years gained/QALYs. Time spent in each disease state for a single model cycle (and transitions between states) is associated with a cost and a health outcome.

Monte Carlo Simulation
A Monte Carlo simulation uses sampling from random number sequences to assign estimates to parameters with multiple possible values, e.g., certain patient characteristics. A small sketch combining a Markov cohort model with Monte Carlo sampling follows.
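The sketch below assumes a hypothetical three-state chronic-disease process (Well, Sick, Dead) with invented transition probabilities, per-cycle costs, and QALY weights; none of the numbers come from the slides. It shows how a Markov cohort model accumulates costs and health outcomes cycle by cycle, and how Monte Carlo sampling of an uncertain parameter can be layered on top to reflect uncertainty.

```python
# Minimal three-state Markov cohort model (Well -> Sick -> Dead) with
# Monte Carlo sampling of one uncertain transition probability.
# All states, probabilities, costs, and utilities are hypothetical.
import random

STATES = ["Well", "Sick", "Dead"]
CYCLES = 20                                           # e.g., 20 yearly cycles
COST = {"Well": 500.0, "Sick": 5000.0, "Dead": 0.0}   # cost per cycle in a state
UTILITY = {"Well": 0.90, "Sick": 0.60, "Dead": 0.0}   # QALY weight per cycle

def run_cohort(p_well_to_sick):
    """Track the cohort distribution across states over all cycles and
    accumulate (undiscounted) total costs and QALYs per patient."""
    dist = {"Well": 1.0, "Sick": 0.0, "Dead": 0.0}    # everyone starts Well
    total_cost = total_qalys = 0.0
    for _ in range(CYCLES):
        # transition matrix rows sum to 1 (Well also has a 1% death risk,
        # Sick has an 85% chance of staying Sick and 15% of dying)
        dist = {
            "Well": dist["Well"] * (1 - p_well_to_sick - 0.01),
            "Sick": dist["Well"] * p_well_to_sick + dist["Sick"] * 0.85,
            "Dead": dist["Well"] * 0.01 + dist["Sick"] * 0.15 + dist["Dead"],
        }
        total_cost += sum(dist[s] * COST[s] for s in STATES)
        total_qalys += sum(dist[s] * UTILITY[s] for s in STATES)
    return total_cost, total_qalys

# Monte Carlo: sample the uncertain Well -> Sick probability many times
random.seed(0)
results = [run_cohort(random.uniform(0.05, 0.15)) for _ in range(1000)]
mean_cost = sum(c for c, _ in results) / len(results)
mean_qalys = sum(q for _, q in results) / len(results)
print(f"Mean cost per patient: {mean_cost:.0f}, mean QALYs: {mean_qalys:.2f}")
```

Running the same cohort model for a comparator strategy (with its own costs and transition probabilities) would allow the difference in costs and QALYs, and hence cost-effectiveness, to be estimated under uncertainty.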
Archimedes Model
The Archimedes model is a large-scale simulation system that models human physiology, disease, and health care systems. In diabetes, for example, the Archimedes model has been used to predict the risk of developing diabetes in individuals, determine the cost-effectiveness of alternative screening strategies to detect new cases of diabetes, and simulate clinical trials of treatments for diabetes.

Models and their results are only aids to decision making; they are not statements of scientific, clinical, or economic fact. The report of any modeling study should carefully explain and document the assumptions, data sources, techniques, and software. Assumptions and estimates of variables used in models should be validated against actual data as such data become available, and the models should be modified accordingly.

Assessing the Quality of a Body of Evidence
– Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group (Balshem 2011)
– Cochrane Collaboration (Higgins 2011)
– US Agency for Healthcare Research and Quality Evidence-based Practice Centers (AHRQ EPCs) (Berkman 2014)
– Oxford Centre for Evidence-Based Medicine (OCEBM Levels of Evidence Working Group 2011)
– US Preventive Services Task Force (USPSTF) (US Preventive Services Task Force 2008)

Factors when assessing the quality of a body of evidence:
– Risk of bias
– Precision
– Consistency
– Directness
– Publication (or reporting) bias
– Magnitude of effect size (or treatment effect)
– Presence of confounders that would diminish an observed effect
– Dose-response effect (or gradient)

Risk of Bias
Risk of bias refers to threats to internal validity, i.e., limitations in the design and implementation of studies that may cause some systematic deviation in an observation from the true nature of an event, such as the deviation of an observed treatment effect from the true treatment effect.

Precision
Precision refers to the extent to which a measurement, such as the mean estimate of a treatment effect, is derived from a set of observations having small variation (i.e., observations that are close in magnitude to each other). Precision is inversely related to random error. Small sample sizes and few observations generally widen the confidence interval around an estimate of an effect, decreasing the precision of that estimate and lowering any rating of the quality of the evidence.

Consistency
Consistency refers to the extent to which the results of studies in a body of evidence are in agreement. Consistency can be assessed based on the direction of an effect, i.e., whether the results are on the positive or negative side of no effect, or on whether the magnitudes of effect sizes across the studies are similar.

Directness of Comparison
Directness of comparison refers to the proximity of comparison in studies, that is, whether the available evidence is based on a direct comparison of the intervention and comparator of interest, or whether it must rely on some other basis of comparison. Where there is no direct evidence pertaining to intervention A vs. comparator B, evidence may be available for intervention A vs. comparator C and for comparator B vs. comparator C; a sketch of such an indirect comparison follows.
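One commonly used approach for this situation (not detailed in the slides), sometimes called the Bucher adjusted indirect comparison, differences the two direct estimates on the log scale and adds their variances. The odds ratios and standard errors below are invented, purely for illustration.

```python
# Adjusted indirect comparison of A vs. B through a common comparator C.
# On the log scale: logOR(A vs B) = logOR(A vs C) - logOR(B vs C),
# and the variances of the two direct estimates add.
# All numbers are hypothetical, not data from the slides.
from math import exp, log, sqrt

log_or_ac, se_ac = log(0.70), 0.15    # hypothetical A vs. C direct evidence
log_or_bc, se_bc = log(0.90), 0.20    # hypothetical B vs. C direct evidence

log_or_ab = log_or_ac - log_or_bc
se_ab = sqrt(se_ac**2 + se_bc**2)     # indirect estimate is less precise

or_ab = exp(log_or_ab)
ci = (exp(log_or_ab - 1.96 * se_ab), exp(log_or_ab + 1.96 * se_ab))
print(f"Indirect OR, A vs. B: {or_ab:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

Because the variances add, indirect evidence of this kind is rated as less direct and generally less certain than a head-to-head comparison of A and B.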
Directness of Outcomes
Directness of outcomes refers to how many bodies of evidence are required to link the use of an intervention to the impact on the outcome of interest.
– Direct evidence: for determining whether a screening test has an impact on a health outcome, a single body of evidence that randomizes patients to the screening test or to no screening and follows both populations through any detection of a condition, treatment decisions, and outcomes.
– Indirect evidence: requiring multiple bodies of evidence to show each of detection of the condition, impact of detection on a treatment decision, impact of treatment on an intermediate outcome, and then impact of the intermediate outcome on the outcome of interest.

Publication Bias
Publication bias refers to unrepresentative publication of research reports that is not due to the quality of the research but to other characteristics. It includes tendencies of investigators and sponsors to submit, and publishers to accept, reports of studies with “positive” results, such as those that detect beneficial treatment effects of a new intervention, as opposed to those with “negative” results (no treatment effect or high adverse event rates).

Managing publication bias:
– Prospective registration of clinical trials (e.g., in ClinicalTrials.gov)
– Adherence to guidelines for reporting research
– Efforts to seek out relevant unpublished reports

Magnitude of Effect Size
Magnitude of effect size can improve confidence in a body of evidence where the relevant studies report treatment effects that are large, consistent, and precise.

Plausible Confounding That Would Diminish an Observed Effect
This refers to instances in which plausible confounding factors, for which the study design or analysis has not accounted, would likely have diminished the observed effect size. Example: if a group of patients receiving the new treatment has greater disease severity at baseline than the group of patients receiving standard care, yet the group receiving the new treatment has better outcomes, it is likely that the true treatment effect is even greater than the observed treatment effect.

Dose-Response Effect
Dose-response effect refers to an association, in an individual study or across a body of evidence, between the dose, adherence, or duration of an intervention and the observed effect size.

Strength of evidence grades and definitions for the approach used by the AHRQ EPCs.

Consensus Development
Consensus development refers to particular group processes or techniques that generally are intended to derive best estimates of parameters or general agreement on a set of findings or recommendations. It is used to set standards, make regulatory recommendations and decisions, make payment recommendations and policies, make technology acquisition decisions, formulate practice guidelines, define the state of the art, and for other purposes.

Consensus development is qualitative in nature and involves group methods such as the Delphi technique, conducted with face-to-face meetings or with video and web conferencing and related telecommunications approaches.

In HTA, consensus development is not used as the sole approach to deriving findings or recommendations, but rather is supported by systematic reviews and other analyses and data. Virtually all HTA efforts involve some form of consensus development at some juncture, including one or more of three main steps of HTA: interpret evidence, integrate evidence, and formulate findings and recommendations.
Consensus development also can be used for ranking, such as to set assessment priorities, and for rating, such as drawing on available evidence and expert opinion to develop practice guidelines.