Critical Appraisal - Ross University

Summary

These lecture notes from Ross University cover critical appraisal, a vital skill in evidence-based medicine. The lecture series details the importance of evidence-based medicine and outlines a critical appraisal protocol for different study designs, emphasizing the evaluation of research quality and the significance of patient values in practice.

Full Transcript


Critical Appraisal
Presented by Dr. Rashida Daisley MBBS MPH PGDIP, Assistant Professor

Learning Objectives
By the end of this lecture, students should be able to:
• Discuss the importance of evidence-based medicine.
• Develop a critical appraisal protocol for various study designs, including clinical trials, case-control studies, cohort studies, and systematic reviews.
• Evaluate the quality of research publications.
• Evaluate the methods of study designs for strengths and weaknesses.
• Apply existing critical appraisal tools, including the Critical Appraisal Skills Programme (CASP) checklists.

Evidence-Based Medicine
• Evidence-based medicine (EBM) is the process of systematically finding, appraising, and using results obtained from well-designed and well-conducted clinical research to optimize clinical decisions.
• EBM is meant to complement, not replace, clinical judgment tailored to individual patients.
• EBM must also be culturally, socially, and individually acceptable within each unique context.

Evidence-Based Medicine
• Applying EBM means relating individual clinical signs and individual clinical experience to the best scientific evidence obtained from clinical research.
• The practice of evidence-based medicine is a process of lifelong, self-directed, problem-based learning in which caring for one's own patients creates the need for clinically important information about diagnosis, prognosis, therapy, and other clinical and health care issues.
• Evidence-based medicine "converts the abstract exercise of reading and appraising the literature into the pragmatic process of using the literature to benefit individual patients while simultaneously expanding the clinician's knowledge base."

Patient Values
• It is important to consider patient preferences and values in evidence-based practice.
• Patient preferences can be religious or spiritual values, social and cultural values, thoughts about what constitutes quality of life, personal priorities, and beliefs about health.
• Even though healthcare providers know and understand that they should seek patient input into decisions about patient care, this does not always happen because of barriers such as time constraints, literacy, previous knowledge, and gender, race, and sociocultural influences.
• Eliciting patient values requires effective communication and a physician who listens attentively to the patient.
• Patient values include:
  • patient perspectives
  • beliefs
  • expectations and goals for health and life
  • the processes that individuals use in weighing the potential benefits, harms, costs, and inconveniences of the management options against one another

Patient Values
• Informed patients may choose not to follow a guideline that does not incorporate their preferences. The ATP III guideline (Adult Treatment Panel III), for example, recommended statins for all patients with diabetes. After receiving information about the small absolute reduction in coronary risk that statins could afford them, patients with diabetes at low cardiovascular risk were 70% less likely to opt for a statin than patients receiving guideline-directed care.
• Where the use of statins in patients with diabetes is linked to quality measures or performance incentives, clinicians face the conflict of following either the guideline or the informed patient.

Clinical Expertise
• Clinical expertise includes the general basic skills of clinical practice as well as the experience of the individual practitioner.
• Good clinical judgment integrates our accumulated wealth of knowledge from patient care experiences as well as our educational background.
• We learn quickly as healthcare professionals that one size does not fit all. What works for one patient may not work for another.
What we can do is draw from our clinical expertise and past experiences to inform our decisions going forward.
• Clinicians must not only stay on top of the research evidence; they must also acquire and hone the skills needed both to interpret the evidence and to apply it appropriately to the circumstances.

Components of Evidence-Based Medicine
The practice of EBM involves five essential steps.

EBM Step 1 – Generating a Clinical Question
• As future clinicians, you will identify information needs/knowledge gaps during clinical encounters in practice.
• The first step of EBM requires translating those information needs/knowledge gaps into a scientifically sound, answerable question.
• Failure to find valuable clinical evidence is unlikely to be due to a lack of information; the most probable cause is a poorly constructed question.

EBM Step 1 – Generating a Clinical Question
• Specify the clinical question as precisely as possible by applying the standard PICO(T) model.
• This model acts as a framework to refine each area of the question and helps make the question more answerable.

EBM Step 1 – Generating a Clinical Question
• An attending endocrinologist is seeing a 37-year-old female who was recently diagnosed with type 2 diabetes mellitus and was placed on the biguanide metformin, but after three (3) months of adherence there was no improvement in her HbA1c. As an evidence-based practitioner, he developed the following clinical question: What is the best treatment for diabetes?
• Is this question "PICO compliant"? Think about the following questions:
  • Is there a clear study population? … NO
  • Is there a specified intervention? … NO
  • Is there a defined comparison? … NO
  • Do we know the desired outcome?
… NO

EBM Step 1 – Generating a Clinical Question
• An attending endocrinologist is seeing a 37-year-old female who was recently diagnosed with type 2 diabetes mellitus and was placed on the biguanide metformin, but after three (3) months of adherence there was no improvement in her HbA1c. As an evidence-based practitioner, he developed the following clinical question: Does the GLP-1 agonist dulaglutide result in a greater improvement in HbA1c compared with metformin in patients with type 2 diabetes aged 35–60 over a 3-month period?
• Is this question "PICO compliant"? Think about the following questions:
  • Is there a clear study population? … YES (patients with T2DM aged 35–60)
  • Is there a specified intervention? … YES (dulaglutide is the intervention)
  • Is there a defined comparison? … YES (the usual-care comparator is metformin)
  • Do we know the desired outcome? … YES (improvement in HbA1c in 3 months)

EBM Step 2 – Find the Best Evidence
• The evidence acquired in EBM should be:
  • attainable
  • obtained externally, from research or from an expert
  • up to date
  • timely
  • of high quality
  • applicable to individual patients

Linking EBM to Critical Appraisal
• The third step of EBM is critical appraisal of the literature found.
• The goal of critical appraisal is to assess the quality of the publication.

Review – Hierarchy of Evidence
• The hierarchy of evidence is a core principle of evidence-based practice (EBP).
• EBM hierarchies rank study types based on the strength, rigor, and precision of their research methods.
• Most experts agree that the higher up the hierarchy a study design is positioned, the more rigorous the methodology, and hence the more likely it is that the study design can minimize the effect of bias on the results of the study.

What is Critical Appraisal?
• Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings.
• A well-conducted critical appraisal:
  • is an explicit, systematic process, rather than an implicit, haphazard one;
  • involves judging a study on its methodological, ethical, and theoretical quality;
  • is enhanced by a reviewer's practical wisdom, gained through having undertaken and read research.

Usefulness of Critical Appraisal
• According to Carl Sagan (1996), "the method of science, as stodgy and grumpy as it may seem, is far more important than the findings of science."
• The extent to which readers can have confidence in research findings is influenced by:
  • the methods that generated, collected, and manipulated the data;
  • how the investigators employed and reflected on these methods.
• Critical appraisal can be used in many different settings, including:
  • conducting literature reviews for grant proposals for new projects;
  • evaluating the effectiveness, costs, and benefits of health programs, interventions, etc.;
  • establishing new innovations in health programs;
  • addressing gaps when implementing health policies and making public health decisions.

Usefulness of Critical Appraisal
• While most of us know not to believe everything we may read on social media, it is also true that we cannot rely 100% on papers written in even the most respected academic journals.
• Critical appraisal allows us to:
  • reduce information overload by eliminating irrelevant or weak studies;
  • identify the most relevant papers;
  • distinguish evidence from opinion, assumptions, misreporting, and belief;
  • assess the validity of the study;
  • assess the usefulness and clinical applicability of the study;
  • recognize any potential for bias.

Example of Irresponsible Use of Data – Wakefield
• In The Lancet medical journal, Wakefield et al.
(1998) reported gastrointestinal disease and behavioral disorders (autism) in twelve previously normal children. In most cases, onset of symptoms occurred after they received the measles, mumps, and rubella (MMR) vaccine. Eight of the cases were attributed to MMR vaccination, either by the child's parents or their physician.
• Wakefield et al. called for further investigations into the "possible relationship between autism and the vaccine" and, at a subsequent press conference, argued that the MMR vaccine should be withdrawn.
• Wakefield's paper produced a flood of letters to the journal pointing out (among other things) that merely observing that A precedes B does not mean that A causes B!
• Observing that few people marry after they die (most marriages occur beforehand) does not show that marriage causes death.

Example of Irresponsible Use of Data – Wakefield
• The Wakefield study was scientifically flawed on numerous counts. Neither the editor nor the reviewers identified these flaws when the paper was submitted for appraisal prior to publishing.
• If rigorous critical appraisal had been done, the public would have been spared the confusion and anxiety caused by the false credibility conveyed by publication of the study in this prestigious journal.

Step 1 – Assess Location of the Study
• For instance, in July 2008 an article was published in the Daily Mail (a tabloid in the UK) claiming that there is a link between a vegetarian diet and infertility (Daily Mail Reporter 2008). The article was based on a cross-sectional study on soy food intake and semen quality published in the medical journal Human Reproduction (Chavarro et al. 2008).
• Behind the Headlines, an NHS service providing an unbiased daily analysis of the science behind the health stories that make the news, issued the following comment:
• The Daily Mail today reports on, "Why a vegetarian diet may leave a man less fertile." It said research has found that eating tofu can significantly lower your sperm count.
• The study behind this news had some limitations: it was small, and it mainly looked at overweight or obese men who had presented to a fertility clinic. It focused only on soy (soya) intake, and the Daily Mail's claim that there is a causal link between eating a "vegetarian diet" and reduced fertility is misleading (NHS Knowledge Service 2008).

Location of the Study
• When reviewing the literature published in scientific/medical journals, we should consider that papers with significant positive results are more likely to be:
  • submitted and accepted for publication (publication bias);
  • published in a major journal written in English (Tower of Babel bias);
  • published in a journal indexed in a literature database, especially in less developed countries (database bias);
  • cited by other authors (citation bias);
  • published repeatedly (multiple publication bias);
  • quoted by newspapers.

Structure of a Research Publication
• Title
• Abstract
• Introduction
• Background/review of literature
• Organizational context
• Methodology
• Results
• Discussion

Five (5) Questions When Evaluating a Paper
1. What health-related question(s) are the authors addressing?
2. What was the study design? Was it appropriate for the question?
3. Was the study valid (free from biases)?
4. Was there a relevant effect size?
5. Is the outcome (population, type of organization) generalizable to your situation?

Publication Title
• A publication title is not always a good indication of the content of the article.
• A good research title addresses three components:
  • What is the purpose of the research?
  • What tone is the paper taking?
  • What research methods were used?

Step 2 – Research Question/Aim
• As previously discussed, the first step of EBM practice is generating a detailed clinical question that follows the PICO outline. The same is true for evaluating an article; the publication's objective should be clearly outlined and follow the same PICO(T) framework.
• Below is the framework again for your review:

Step 2 – Research Question/Aim
• Does the research aim below adequately follow the PICO(T) framework?
• If not, how will this impact the study?

Assessing Study Population (Review from “Overview of Research Studies”)
• How were participants recruited?
• Where were the participants recruited?
• How were participants then selected?
• Was the sample size large enough?
• Was the study adequately powered?

Evaluating Inclusion/Exclusion Criteria
• Have all reasonable inclusion/exclusion criteria been addressed?
• Establishing inclusion and exclusion criteria for study participants is a standard, required practice when designing high-quality research protocols.
• Inclusion criteria are defined as the key features of the target population that the investigators will use to answer their research question.
• Typical inclusion criteria include demographic, clinical, and geographic characteristics.
• Exclusion criteria are defined as features of potential study participants who meet the inclusion criteria but present with additional characteristics that could interfere with the success of the study or increase their risk of an unfavorable outcome.
• Common exclusion criteria include characteristics of eligible individuals that make them:
  • highly likely to be lost to follow-up;
  • likely to miss scheduled appointments to collect data;
  • likely to provide inaccurate data;
  • carriers of comorbidities that could bias the results of the study;
  • at increased risk of adverse events (most relevant in studies testing interventions).

Appropriateness of Study Design
• Different research questions require different study designs.
• The hypotheses that can be tested in any study, particularly regarding cause and effect, will depend on the study design.
• Some study designs may offer benefits in terms of cost, time, and administrative effort, but in general, studies that are quicker and cheaper to perform will provide weaker evidence.
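The study-population checklist above asks whether a study was "adequately powered." The arithmetic behind that question can be made concrete. The following is a minimal sketch (not part of the lecture) of the standard normal-approximation sample-size formula for comparing two proportions; the function name and the illustrative proportions are our own choices.

```python
import math
from statistics import NormalDist


def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a difference
    between two proportions with a two-sided z-test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)


# Hypothetical example: detect an improvement from a 20% to a 30% response rate
print(sample_size_two_proportions(0.30, 0.20))  # → 291 (per group)
```

A reviewer can run numbers like these against a paper's reported sample size: a study enrolling far fewer participants than this kind of calculation suggests is likely underpowered, and a "negative" result may simply reflect that.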
Appropriateness of Study Design
• Review – provided in the previous lecture, “Overview of Research Studies.”

Assessing Bias
• Bias is defined as any tendency that prevents unprejudiced consideration of a question. In research, bias occurs when “systematic error is introduced into sampling or testing by selecting or encouraging one outcome or answer over others.”
• Bias can occur at any phase of research, including study design or data collection, as well as in the process of data analysis and publication.
• NB – bias is covered in more detail in a later lecture.

Assessing Bias
• Response bias: those who participate may be systematically different from those who do not.
• Recall bias: those with the disease may remember differently than those without the disease; the interviewer may prompt cases more.
• Attrition bias: those who are lost to follow-up may be systematically different from those who are not.
• Allocation bias: there is a failure to randomly allocate persons to intervention and control groups.
• Publication bias: studies are more likely to be published if they have positive findings; this may overestimate the effect.

Evaluating Methodology – Cross-Sectional
• Was the sample size justified?
• Could the way the sample was obtained introduce (selection) bias?
• Is the sample representative and reliable?
• Are the measurements (questionnaires) likely to be valid and reliable?
• Was statistical significance assessed?
• Are important effects overlooked?
• Is there high external validity (generalizability)?

Evaluating Methodology – Case Control
• Were the cases and controls defined precisely?
• Was the selection of cases and controls based on external, objective, and validated criteria? (selection bias)
• Are objective and validated measurement methods used, and were they similar in cases and controls? (misclassification bias)
• Did the study incorporate blinding where feasible? (halo effect)
• Could there be confounding?
• Is the size of effect practically relevant?
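Case-control studies typically report their effect size as an odds ratio. As a quick sketch of how an appraiser can check a reported result (this is standard epidemiology arithmetic, not a method from the lecture; the function name and counts are hypothetical), the odds ratio and its approximate 95% confidence interval can be computed from the 2×2 table using Woolf's log-odds method:

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 case-control table.
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi


# Hypothetical counts: 40/100 cases exposed vs 20/100 controls exposed
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # → OR = 2.67, 95% CI 1.42-5.02
```

Because the confidence interval excludes 1, this hypothetical association would be statistically significant at the 5% level; whether an odds ratio of this size is *practically* relevant is the separate judgment the checklist asks for.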
Evaluating Methodology – Cohort Study
• Was the cohort recruited in an acceptable way? (selection bias)
• Was the cohort representative of a defined population?
• Was a control group used? Should one have been used?
• Are objective and validated measurement methods used, and were they similar in the different groups? (misclassification bias)
• Was the follow-up of cases/subjects long enough?
• Could there be confounding?
• Was this a retrospective or prospective cohort? Was this appropriate?

Evaluating Methodology – RCT
• Were subjects randomly allocated to the experimental and control groups? If not, could this have introduced bias?
• Are objective inclusion/exclusion criteria used?
• Were the groups comparable at the start of the study? (This can be assessed by reviewing the baseline table.)
• Are objective and validated measurement methods used, and were they similar in the different groups? (misclassification bias)
• Were outcomes assessed blind? If not, could this have introduced bias?
• What methods were used for blinding and randomization?
• Is the size of effect practically relevant?

Evaluating Methodology – Systematic Review/Meta-Analysis
• Factors to look for:
  • the literature search (Did it include published and unpublished materials as well as non-English-language studies? Was personal contact with experts sought?)
  • quality control of the studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts)
  • homogeneity of studies
  • presentation of results (clear, precise)
  • applicability to the local population

Correlation vs Causation
• When assessing the results of research studies, it is important to evaluate the extent to which causality can be inferred.
• Findings of causality also require judgment. Causal significance is never self-evident, so in critical appraisal, judgments about causation must always be carefully weighed.
• A commonly used set of criteria was proposed by Sir Austin Bradford Hill; it was an expansion of a set of criteria offered previously in the landmark Surgeon General's report on smoking and health. The criteria are known as the Bradford Hill criteria of causation.

Bradford Hill Criteria for Causation

Validity of the Results
• The focus of critical appraisal is judging both internal validity and generalizability (external validity). It is important to assess both when critically appraising a publication.
• Internal validity is defined as the extent to which the observed results represent the truth in the population we are studying and, thus, are not due to methodological errors. The internal validity of a study can be threatened by many factors, including:
  • Bias – any systematic error that can produce a misleading impression of the true effect.
  • Chance – random error, inherent in all observations.
• External validity refers to the question of whether the results of the study apply to patients outside of the study, particularly the specific patient (or population) being studied.

Validity
• Study patients are unlike patients in usual practice. They have been referred to academic medical centers after careful consideration by the physician, meet stringent inclusion criteria, and are free of potentially confounding conditions or disorders.
• To increase internal validity, investigators should ensure careful study planning and adequate quality control and implementation strategies, including adequate recruitment strategies, data collection, data analysis, and sample size.
• External validity can be increased by using broad inclusion criteria that result in a study population that more closely resembles real-life patients and, in the case of clinical trials, by choosing interventions that are feasible to apply.
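Several of the checklists above ask whether the size of effect is practically relevant, and the earlier statin example hinged on the difference between a relative and an absolute risk reduction. As a minimal sketch of that arithmetic (the function name and the 2% vs 1.5% risks are illustrative assumptions, not figures from the lecture):

```python
def effect_sizes(risk_control, risk_treated):
    """Absolute risk reduction (ARR), relative risk reduction (RRR),
    and number needed to treat (NNT) from two event risks."""
    arr = risk_control - risk_treated
    rrr = arr / risk_control  # the headline figure trials often report
    nnt = 1 / arr             # patients treated per event prevented
    return arr, rrr, nnt


# Hypothetical risks: 2.0% event rate untreated vs 1.5% treated
arr, rrr, nnt = effect_sizes(0.020, 0.015)
print(f"ARR = {arr:.3f}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# → ARR = 0.005, RRR = 25%, NNT = 200
```

A "25% relative reduction" can correspond to an absolute reduction of half a percentage point and an NNT of 200, which is exactly the kind of framing that led the informed low-risk patients in the statin example to decline treatment.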
Validity
• Questions to ask when assessing internal validity include:
  • Were there enough subjects in the study?
  • Was a control group used?
  • Were the subjects randomly assigned?
  • Was the study started prior to the intervention or event?
  • Was the outcome measured in an objective and reliable way?

Validity
• Questions to ask when assessing external validity include:
  • Will this study prove effective if a different population of participants is used?
  • Will this study be effective if used with different types of behavior?
  • If the study was done in a clinic, will it be effective if conducted in a school classroom setting? What about in a home environment?

Critical Appraisal Tools
• CASP (Critical Appraisal Skills Programme) checklists are a series of checklists involving prompt questions to help you evaluate research studies. They are academically acclaimed and used internationally to systematically appraise scientific publications.
• The CASP checklists are usually structured around three main sections, asking:
  • Are the results of the study valid?
  • What are the results?
  • Will the results help locally (in my setting)?
• They are available for download at: https://casp-uk.net/casp-tools-checklists/

Critical Appraisal Tools

Tying it all together!
• Case Presentation
• A 28-year-old female with a history of subfertility due to polycystic ovarian syndrome (PCOS) presented to your office with her partner to discuss their possibilities regarding having a baby. She read about the use of metformin in managing infertility in PCOS and is interested in this medication.
• Past Medical History: irritable bowel syndrome (IBS; causes the patient to have frequent diarrhea)
• Social History: the patient is a taxi driver.

Tying it all together!
• What clinical question would you propose to assist with the management plan of this patient?
• Write suggestions in the Zoom chat or share in person!

Tying it all together!
• Clinical Question
• Does the use of metformin improve fertility in women of reproductive age with polycystic ovarian syndrome (PCOS)?

Tying it all together!
Is this a good study that we can use to support the use of metformin in this patient?

Tying it all together!
1. Inappropriate study design.
2. Population being studied – not human.
3. Outcome – linked to our outcome, but small differences.

Tying it all together!
Is this a good study that we can use to support the use of metformin in this patient?

Tying it all together!
1. Systematic review.
2. Outcome – pregnancy rate.
3. Results are statistically significant.

Tying it all together!
• Patient Values
• Irritable bowel syndrome (IBS) symptoms – diarrhea
• Clinical Expertise
• One of the most common side effects of metformin is gastrointestinal (GI) upset, presenting as nausea and diarrhea.
We can see that although this medication can be useful in this patient, the side effects may affect her adherence and not lead to the expected outcome. We can offer her another fertility medication with similar efficacy but more suitable to her lifestyle. Back to the EBM drawing board!

Assignment
• Critical appraisal is integral to practicing evidence-based medicine. The ability to read and critically evaluate articles and bodies of research is a necessary skill for competent physicians.
• This facilitates our goal of cultivating doctors who meet all of the Accreditation Council for Graduate Medical Education (ACGME) core competencies of a physician. These competencies include patient care (which should be evidence-based) and communication skills (communicating research to patients in plain language, but also communicating appraisals to peers).
• Towards our goal of helping you to achieve milestones within these competencies, this assignment is to select one of the four (4) academic articles related to this assignment and critically appraise your chosen article.
When evaluating the article, please follow the assignment rubric and consider the following:
• appropriateness of the study design to the research question
• results of the study, including effect size and statistical significance
• internal and external validity
• limitations (drawbacks) of the study
• We strongly recommend using the provided critical appraisal tool to guide your critique of this paper, for example an appropriate Critical Appraisal Skills Programme (CASP) checklist. Your assignment must be presented in a word-processed format with in-text referencing using APA citation style.
• Word Limit: 350–700 words.

Thank you!
• For any queries, feel free to contact me at [email protected]
