Summary

This document is an overview of evidence-based practice study notes, covering clinical decision-making, efficacy and effectiveness, research designs (explanatory, exploratory, and qualitative research), measurement concepts such as reliability, validity, and measurement error, and diagnostic test statistics including sensitivity, specificity, predictive values, and likelihood ratios.

Full Transcript

Evidence Based Practice
• provision of quality care depends on the ability to make choices based on the best evidence currently available
• clinical decision making

Evidence based decision making
• consider all information, then make choices for a successful outcome for the pt

Efficacy
• benefit of an intervention in comparison to a control, placebo, or standard program

Effectiveness
• benefit/use of something in real-world conditions (can't control)

Systematic review
• comprehensive analysis of the full range of literature on a particular topic (intervention, etc.)
• looks at the quality of the studies

Meta analysis
• statistically combining findings from several studies into a summary
• has inclusion/exclusion criteria
• like what we did for our project

Explanatory Research
• experimental designs comparing two or more conditions/interventions
• cause/effect
• independent variable controlled: treatment/intervention
• dependent variables: pain level, ROM, disability

Exploratory Research
• observational designs
• examine a phenomenon of interest, its dimensions, and how it relates to other factors
• predictive relationships

Qualitative research
• collection of data through interview and observation
• don't manipulate variables

PICO
• P: population/problem
• I: intervention
• C: comparison/control
• O: outcomes

Independent Variable
• predicts/causes the outcome
• the "I"/"C" of PICO

Dependent variable
• response/effect that varies depending on the independent variable
• the "O" of PICO
• null hypothesis: no difference/relationship

Evidence Based Practice
• incorporates scientific information with other sources of knowledge
• "conscientious, explicit & judicious use of current best evidence in making decisions about the care of individual patients"
• "best evidence with clinical expertise and the patient's unique values and circumstances"

Hierarchy of evidence / clinical research
• translational research
• clinical trials (behavior, epidemiology, therapy) on humans

Pragmatic trials
• hypothesis and study design developed to answer questions faced by decision makers

Barriers to research
• insufficient time provided by management (MAIN ONE)
• lack of generalizability of findings to the patient population
• lack of research skills
• lack of understanding of statistical analyses

Continuous quality improvement model
• looks at quality indicators, pt satisfaction, cost of new care, clinical satisfaction, etc.

Father of EBP
• David Sackett

Numerals
• ex: 1 = strongly disagree

Dichotomous numerals
• number with no quantitative meaning; only 2 values; ex: 0 = no, 1 = yes

Numbers (vs. numerals)
• known quantity
• assigned values, etc.

Continuous variables
• can take on any value within a defined range
• can be an integer or a fraction; ex: strength, distance, weight, chronological time

Discrete variables
• WHOLE NUMBERS; ex: HR, number of children in a family

Parametric tests
• interval/ratio data

Non-parametric tests
• ordinal/nominal data

Accuracy
• closeness to the true value

Precision
• closeness/range of values to each other; ex: standard deviation

Measurement error
• difference between the sample value and the true value

Measurement uncertainty
• interval around the measured value
• quantifies precision

Confidence interval (CI)
• expresses uncertainty/certainty
• 95-99% is good

Categorical data
• ex: male/female

Continuous data
• ex: interval/ratio (25, 26, 27)

Reliability
• consistency of the measured value obtained when the measurement is repeated

Systematic errors
• predictable errors of measurement; ex: instrumentation, etc.

Random errors
• examiner, subject inattention, instrument imprecision, unanticipated changes

Relative reliability (aim for >0.8)
• based on the variance of scores
• coefficient of 1.00 = best; closer to 0 = less reliable
• UNITLESS
• R value / Pearson r ranges from -1 to +1: + = directly correlated, - = inversely correlated
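To make the relative-reliability idea above concrete, here is a minimal Python sketch using scipy.stats.pearsonr on two trials of the same measurement; the strength values are hypothetical and not from the notes. A unitless r close to 1.0 indicates high test-retest consistency.

```python
# Minimal sketch of relative (test-retest) reliability using Pearson's r.
# The trial data below are hypothetical, purely for illustration.
from scipy.stats import pearsonr

# Knee-extension strength (kg) measured by the same rater on two days.
trial_1 = [32.0, 41.5, 28.0, 36.5, 45.0, 30.5]
trial_2 = [33.5, 40.0, 27.5, 38.0, 44.0, 31.0]

r, p_value = pearsonr(trial_1, trial_2)

# r is unitless and ranges from -1 to +1; values above ~0.8 are usually
# taken to indicate acceptable relative reliability.
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
```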
Absolute reliability (kappa, categorical)
• tells how much of a measured value is due to error
• reliability between 2 different outcome measures: Pearson correlation
• SEM: where the true score could lie, expressed in UNITS

ICC/Kappa
• categorical data

Test-retest reliability
• determines the ability of an instrument to measure subject performance consistently

Carryover
• a risk when a test is administered more than twice
• motor learning; measurements can improve

Intra-rater reliability
• stability of data recorded by one tester across 2 or more trials

Inter-rater reliability
• 2 or more raters who measure the same subject
• don't always agree

Minimal detectable change (MDC)
• the smaller it is, the greater the reliability (see the sketch after this block)

Learning effect
• subjects learn the measure over time

Order effect
• subjects memorize the order (or may be warmed up in the first trial)
• can be avoided by randomizing

Validity
• confidence that our measurement tools give us accurate information
• reliable and unbiased

Face validity
• test measures what it is intended to measure

Criterion-related validity
• compared to a gold standard

Predictive validity
• measure is a valid predictor of future criteria or behavior

Diagnostic test
• detects the presence of a condition

Prognostic test
• predicts the outcome of a condition

Outcome measurements
• discriminative and evaluative

Blinding
• single blinding: applied to 1 of the following: pt/participant, assessors, interpreters, or statisticians; takes out selection bias
• double blinding: neither the patients nor the researchers/doctors know which study group the patients are in; removes performance bias
• triple blinding: applied to 3 of the following: pt/participant, assessors, interpreters, or statisticians; takes out both of the above plus detection bias (outcome measures are concealed)

Sampling bias
• falls under selection bias

Publication bias
• only significant and relevant information is presented

Snowball sampling
• recruitment by other participants

Cluster random sampling
• randomized controlled trial in which pre-existing groups of individuals, called clusters, are randomly allocated treatment

Intention-to-treat (ITT) analysis
• considers all randomized participants in the analysis, whether they drop out or not
• per-protocol analysis is the opposite of ITT

Per-protocol (PP) analysis
• researchers only analyze data from those who strictly adhered to the study protocol

Compare and contrast
• parametric: ratio, interval, continuous data
• non-parametric: yes/no, ordering and ranking, classification; parallels the t-test

Single-subject designs
• involve studying the behavior of an individual or a small group over time
• consistent answers are better

Case report
• detailed and specific description of an individual patient's medical condition, treatment, and outcomes
• not designed in advance; reported retrospectively

N-of-1 trials
• experimental design where the focus is on a single individual or case
• goal is to study the individual's response to different treatments or conditions, to find the best treatment
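The notes name SEM and MDC but do not give formulas, so the sketch below uses the standard ones (SEM = SD × √(1 − reliability), MDC95 = 1.96 × √2 × SEM) with hypothetical numbers; treat it as an illustration rather than part of the original lecture.

```python
# Standard error of measurement (SEM) and minimal detectable change (MDC).
# Formulas are the commonly used ones (not stated in the notes):
#   SEM   = SD * sqrt(1 - reliability coefficient)  -> in the measure's UNITS
#   MDC95 = 1.96 * sqrt(2) * SEM                     -> smallest change beyond error
import math

sd_of_scores = 8.0   # hypothetical standard deviation of ROM scores (degrees)
icc = 0.90           # hypothetical test-retest reliability (ICC)

sem = sd_of_scores * math.sqrt(1 - icc)
mdc_95 = 1.96 * math.sqrt(2) * sem

print(f"SEM   = {sem:.2f} degrees")
print(f"MDC95 = {mdc_95:.2f} degrees")
# A smaller MDC means less measurement noise, i.e. greater reliability.
```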
Experimental study design
• change something and observe the effect (observational designs do not change anything)

Quantitative
• numbers (numeric data)

Qualitative
• subject reports

Systematic review
• look back and report findings using inclusion/exclusion criteria

Meta analysis
• statistical analysis; looks at the data the researchers reported (plots, means, CIs on a diagram)

Chi-squared
• multiple groups

Fisher's exact
• only 2 groups (2×2 tables); see the sketch at the end of these notes

Kappa Coefficient (K)
• measurement of reliability for dichotomous outcomes (2 possible values)
• observations of present/absent, positive/negative, yes/no, etc.
• nominal or ordinal data (i.e., mild, moderate, severe)

Intraclass Correlation Coefficient (ICC)
• measurement of reliability for continuous data outcomes
• observations of interval or ratio numerical data values

Sensitivity (SN) = a/(a+c) (contingency table)
• % of those with the condition/disease who test positive
• the true positive rate
• proportion of true-positive patients with the condition who test positive (the population that has the condition)
• a test that can correctly identify every person who has the condition has a sensitivity of 1.0
• TP rate = TP/(TP+FN)

Specificity (SP) = d/(b+d)
• % of those without the condition/disease who test negative
• the true negative rate
• proportion of true-negative patients without the condition who test negative (the population that doesn't have the condition)
• a test that can correctly identify every person who does not have the condition has a specificity of 1.0
• TN rate = TN/(TN+FP)

Positive Predictive Value = a/(a+b)
• probability that someone with a positive test will have the condition
• proportion of patients with a positive test who actually have the condition (not the probability of having the condition if you test positive)

Negative Predictive Value = d/(c+d)
• probability that someone with a negative test will not have the condition
• proportion of patients with a negative test who actually do not have the condition (not the probability of not having the condition if you test negative)

Positive likelihood ratio
• +LR = sensitivity/(1 − specificity)
• true positive rate / false positive rate

Negative likelihood ratio
• −LR = (1 − sensitivity)/specificity
• false negative rate / true negative rate

Number Needed to Diagnose
• NND = 1/[SN − (1 − SP)]
• odds of being able to diagnose correctly

Diagnostic Odds Ratio
• (TP/FP)/(FN/TN) (or true/false)
• how much more likely the injury is present if the test is positive

Overall Accuracy
• (a+d)/(a+b+c+d)
• related to the contingency table (see the worked example after this list)

Index card (Alexis)
• make a note card for this

Case control
• start with the outcome (have the disease)

Cohort
• start with exposure or no exposure and observe the outcome

Content validity
• extent to which a measure covers all dimensions of what it is intended to measure
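As a worked check on the contingency-table formulas above (sensitivity through overall accuracy), here is a short Python sketch; the cell counts a, b, c, d are hypothetical and not taken from the notes.

```python
# Diagnostic test statistics from a 2x2 contingency table.
# Layout follows the notes: a = true positives, b = false positives,
# c = false negatives, d = true negatives. The counts are hypothetical.
a, b, c, d = 45, 10, 5, 90

sensitivity = a / (a + c)                      # true positive rate
specificity = d / (b + d)                      # true negative rate
ppv = a / (a + b)                              # positive predictive value
npv = d / (c + d)                              # negative predictive value
positive_lr = sensitivity / (1 - specificity)  # +LR
negative_lr = (1 - sensitivity) / specificity  # -LR
nnd = 1 / (sensitivity - (1 - specificity))    # number needed to diagnose
dor = (a / b) / (c / d)                        # diagnostic odds ratio: (TP/FP)/(FN/TN)
accuracy = (a + d) / (a + b + c + d)           # overall accuracy

for name, value in [("Sensitivity", sensitivity), ("Specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("+LR", positive_lr),
                    ("-LR", negative_lr), ("NND", nnd), ("DOR", dor),
                    ("Accuracy", accuracy)]:
    print(f"{name}: {value:.2f}")
```

With these counts the script prints, for example, a sensitivity and specificity of 0.90, which is a quick way to confirm that the a/b/c/d formulas are being read off the table correctly.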

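Finally, to connect the test-selection items above (chi-squared for larger contingency tables, Fisher's exact for small 2×2 tables), here is a brief SciPy sketch; the counts and the scenario are hypothetical and only illustrate which call goes with which situation.

```python
# Non-parametric tests on count (nominal) data, with hypothetical counts.
from scipy.stats import chi2_contingency, fisher_exact

# 2x3 table (two treatment groups x three outcome categories) -> chi-squared.
table_2x3 = [[20, 15, 5],
             [10, 18, 12]]
chi2, p_chi, dof, expected = chi2_contingency(table_2x3)
print(f"Chi-squared = {chi2:.2f}, p = {p_chi:.3f}, dof = {dof}")

# Small 2x2 table (improved vs not improved, by group) -> Fisher's exact test.
table_2x2 = [[8, 2],
             [3, 7]]
odds_ratio, p_fisher = fisher_exact(table_2x2)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
```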