Critical Appraisal of Research Papers (PDF)
Document Details
The University of Manchester
Dr. Heba El Weshahi
Summary
This document presents a lecture on the critical appraisal of research papers. It defines critical appraisal and explains its importance in evidence-based medicine (EBM), then details the three main aspects of appraisal: validity, results, and relevance in a particular context. The document also outlines common biases and other factors to consider when evaluating research papers.
Full Transcript
Critical Appraisal of a Research Paper (Reading a Research Paper)
Dr. Heba El Weshahi, Assistant Professor of Public Health, Preventive and Social Medicine, Community Medicine Department

By the end of the lecture you will be able to:
1. Define critical appraisal.
2. State the importance of critical appraisal in practicing EBM.
3. Identify tools used to critically appraise a research paper.
4. Define validity and differentiate between internal and external validity.
5. Identify common types of bias that affect the validity of a study.
6. Determine the key points that should be reviewed while appraising any research paper, regardless of the study design employed.
7. Critically appraise a research paper.

Evidence-Based Medicine
Evidence-based medicine (EBM) is the integration of clinical expertise, patient values, and the best research evidence into the decision-making process for patient care.

Critical appraisal
A systematic process used to identify the strengths and weaknesses of a research article, in order to assess the usefulness (clinical impact), applicability and validity of its findings.

Why we need to critically appraise a research paper: the 5 As of practicing EBM
1. Ask focused question(s).
2. Acquire the evidence.
3. Appraise the evidence.
4. Apply the best evidence.
5. Assess your performance.

Critical appraisal is the process of systematically examining the available evidence to judge its:
1. Validity
2. Results
3. Relevance in a particular context

I: Assessment of validity
Validity refers to how accurately a study measures what it is intended to measure, and therefore to what degree the conclusions of a research paper can be trusted.
- Internal validity: the assurance that the study was conducted carefully (the integrity of the study design).
- External validity: the extent to which the study findings are generalizable beyond the limits of the study population to the whole target population.
Biases diminish internal validity; that is, a study that is sufficiently free from bias is said to have internal validity.

Bias
"Any systematic error in the selection of participants, or in the collection, analysis and interpretation of data, that can lead to conclusions that are systematically different from the truth."

Common types of bias
1. Selection bias: occurs at the step of assigning or selecting participants for a study, when the selected participants (cases or controls) have characteristics that differ from those of the target population regarding the probability of exposure and outcome.
2. Information bias: arises from errors in measuring exposure or disease, and can lead to misclassification of participants and to results away from the truth.

Measures to minimize bias
1. Selection bias:
- Random selection of participants.
- Random assignment (in clinical trials): random allocation of participants to study groups (see the sketch after this list).
- Ensuring that study participants meet clearly defined criteria.
2. Information bias:
- Use of valid, accurate and reliable instruments for data collection.
- Training of interviewers, or use of a single interviewer.
- Blinding of participants and outcome assessors in clinical trials.
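To make the idea of random allocation concrete, here is a minimal sketch. It is not part of the lecture; the function name, group labels and participant IDs are hypothetical.

```python
# Minimal sketch of simple random allocation for a two-arm trial.
# Illustrative only: real trials use concealed, often block-based,
# randomization schedules prepared before recruitment starts.
import random

def randomize(participant_ids, groups=("intervention", "control"), seed=2024):
    """Randomly allocate each participant to one of the study groups."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    return {pid: rng.choice(groups) for pid in participant_ids}

if __name__ == "__main__":
    for pid, group in randomize(range(1, 11)).items():
        print(f"Participant {pid}: {group}")
```

Note that simple randomization like this can leave the arms unequal in size; block randomization is often used instead to keep the group sizes balanced.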
Confounding factors
A confounding factor is another threat to the internal validity of a study, as it can lead results away from the truth. It is an extra variable, associated with the exposure, which also influences (confounds) the disease outcome. For any factor to act as a potential confounder, it should be associated with the outcome variable, related to the exposure variable, and it should not lie in the chain of sequence between the exposure and the outcome.

[Diagram: exposure -> outcome, with a third (confounding) variable linked to both. Example: exposure = birth order, outcome = Down syndrome, confounding variable = maternal age.]

Maternal age is correlated with birth order, since high gravidity is usually associated with advanced maternal age, and advanced maternal age is a known risk factor for Down syndrome even when birth order is low. If the proportion of mothers of advanced age is higher among the cases (children with Down syndrome), this distorts the apparent relation between birth order and Down syndrome.

II: Assessment of results
Once the study design and methods are valid and free from systematic errors, think about the results of the study. What does the study show? Are the results statistically significant, and what is the size of the effect?

Consider whether appropriate statistical procedures were used, given the number and type of dependent and independent variables, the number of study groups, the nature of the relationship between the groups (independent or dependent groups), and the objectives of the statistical analysis. Then comment on:
a. The P value (is the difference statistically significant?).
b. The relative risk, absolute risk reduction and number needed to treat (the size of the effect and its confidence interval).

P value
The probability that any particular outcome would have arisen by chance. The smaller the P value, the less likely the data arose by chance and the more likely the result is due to the intervention. Standard scientific practice usually deems a P value of less than 0.05 "statistically significant"; the smaller the P value, the higher the significance.

Confidence intervals
A measure of the precision of the results of a study. For example, "36 [95% CI 27-51]": a 95% CI means that if you were to repeat the same clinical trial a hundred times, 95% of the time the results would fall within the calculated range of 27-51. Wider intervals indicate lower precision; narrower intervals indicate greater precision.
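To make these effect-size measures concrete, here is a minimal sketch with hypothetical trial numbers (20/100 events in the control arm versus 10/100 in the treatment arm). It uses the normal (Wald) approximation throughout; a real analysis would use proper statistical software.

```python
# Worked example: relative risk, absolute risk reduction, number needed
# to treat, a 95% CI for the risk reduction, and a two-sided p-value.
from math import sqrt, erf

events_control, n_control = 20, 100  # hypothetical control arm: 20% event rate
events_treat, n_treat = 10, 100      # hypothetical treatment arm: 10% event rate

p_c = events_control / n_control
p_t = events_treat / n_treat

rr = p_t / p_c        # relative risk: 0.50
arr = p_c - p_t       # absolute risk reduction: 0.10
nnt = 1 / arr         # number needed to treat: 10

# 95% CI for the ARR (Wald approximation)
se = sqrt(p_c * (1 - p_c) / n_control + p_t * (1 - p_t) / n_treat)
ci_low, ci_high = arr - 1.96 * se, arr + 1.96 * se

# Two-sided p-value from a z-test on the risk difference (pooled SE)
p_pool = (events_control + events_treat) / (n_control + n_treat)
se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_treat))
z = arr / se_pool
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"RR = {rr:.2f}, ARR = {arr:.2f}, NNT = {nnt:.0f}")
print(f"95% CI for ARR: ({ci_low:.3f}, {ci_high:.3f}), p = {p_value:.3f}")
```

Here the treatment halves the event rate (RR 0.50), ten patients must be treated to prevent one event (NNT 10), and the 95% CI for the risk reduction (roughly 0.002 to 0.198) excludes zero, matching a p-value just under 0.05.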
III: Clinical relevance
Whether the study is applicable to the clinical outcome for a particular patient or population. Consider any important differences between the participants in the published trial and the patient or population you are interested in, i.e. relevance in a particular context: does your local setting differ much from that of the study? Are the results clinically significant?

Clinical importance is a medical judgment, not a statistical one. Clinicians should change practice only if they believe that the study has definitively demonstrated a treatment difference, and that this difference is large enough to be clinically important.

What helps in critical evaluation
Checklists are available that enable us to evaluate research papers systematically. They break the three issues identified above (validity, results and relevance) down into relatively simple questions that can be addressed step by step. Examples:
- Critical Appraisal Skills Programme (CASP): http://www.casp-uk.net
- CONSORT, which focuses specifically on RCTs: http://www.consort-statement.org/
- STROBE, which provides guidelines for reporting observational studies: www.strobe-statement.org
- The Oxford Centre for Evidence-Based Medicine (CEBM): http://www.cebm.net

Tools used to appraise a research paper
A critical appraisal tool is a form of checklist that identifies the key aspects of a research study which should be present in high-quality work. Some tools are specific to a study design; others are generic. Such a tool provides a standardized way of assessing the methods in a paper. It details six broad questions that you can ask to help you clarify what the results are, whether they are valid, and whether they are applicable to your patients or to the participants in your own study.

Remember: there are important points to consider before starting the checklist. What type of study is it? Is it an appropriate way, or the most appropriate way, to address the clinical question posed? Each study design has different strengths and weaknesses, and sometimes a particular design will be the most appropriate for a question even if it is not high on the hierarchy of evidence. Another reason to identify the type of study early on is that different critical appraisal checklists apply to different study designs.

[Figure: hierarchy of evidence.]

Key points that should be reviewed when assessing the validity of any research paper, regardless of the study design employed:
1. Was the study original? Consider whether the study adds anything to the literature, and whether the clinical issue addressed is of sufficient importance.
2. Who was the study about? Consider the inclusion/exclusion criteria and how participants were recruited.
3. Was the study design sensible? What was the intervention or treatment, what did the authors compare it to, what outcome was measured, and how was it measured?
4. Was bias avoided or minimized? Consider, for example, whether a control group was used (and whether the groups were alike), whether adequate and appropriate randomization was achieved, and whether there were any other sources of bias.
5. Was assessment "blind"?
6. Were preliminary statistical questions addressed? Consider, for example, the size of the sample, the duration and completeness of follow-up, and the statistical significance (or otherwise) of the results.

Study population
- Is there a clear description of the target population studied?
- Were sample size calculations conducted before starting the study? If yes, were these numbers achieved, so that the study had adequate power to detect the proposed effect? (A power-calculation sketch follows at the end of this checklist.)
- Were there eligibility criteria, including clear inclusion and exclusion criteria?

Study methods
A. Intervention/exposure
- Is there a clear description of both the intervention and the comparison?
- Is there a clear description of the exposure, and how was exposure status measured?
- Was the exposure measured using the same approach in all study groups?
B. Outcome
- Is there a clear description of what outcome was measured, and how?
- Was the outcome measured using the same approach in all study groups?
- Is the outcome objective or subjective?
C. Blinding
- Were the researchers and subjects blinded to the treatment/exposure allocation?
D. Follow-up
- For longitudinal studies, was the study follow-up sufficiently long for cases of disease to occur?
- Were study subjects lost to follow-up?
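The sample size question above can be made concrete with a minimal sketch. It is not part of the lecture; the event rates, alpha and power values are hypothetical, and the formula is the standard normal-approximation one for comparing two proportions.

```python
# Minimal sketch of an a-priori sample size calculation for comparing
# two proportions (normal-approximation formula).
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Participants needed per arm to detect p1 vs p2.

    z_alpha is the two-sided 5% critical value of the standard normal;
    z_beta corresponds to 80% power.
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. to detect a drop in event rate from 20% to 10%
# with 5% two-sided alpha and 80% power:
print(n_per_arm(0.20, 0.10))  # about 197 participants per arm
```

A study that recruits fewer participants than such a calculation requires is underpowered: a "negative" result may simply reflect an inadequate sample rather than a true absence of effect.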