Summary

This document provides an overview of evidence-based practice (EBP) and the steps involved in EBP. It also differentiates between background and foreground questions and introduces levels of evidence.

Full Transcript


**[EBM FINAL REVIEW]** **[WEEK 1 (Portney LG (2020): Chapters 1, 3, 4, 5)]**

Learning Objectives

**▪ Define "Evidence-based Practice" (EBP) and how it contributes to clinical decision making**

- Integration of clinical expertise, patient values, and best research evidence into the decision-making process for patient care.

**▪ Describe the five steps in EBP**

- **Ask** - general knowledge (background) questions and foreground questions, often in the form of PICO (population, intervention, comparison, outcome)
- **Acquire** - find relevant literature based on the clinical question; there is a hierarchy of evidence
- **Appraise** - review the literature for its validity, meaningfulness, and the relevance of its results to the current patient
- **Apply** - put it all together to make a clinical decision based on the evidence, clinical expertise, and patient values
- **Assess** - evaluate the effectiveness of the evidence: did the patient improve, is additional evidence needed, do more questions still need to be answered?

**▪ Differentiate between background and foreground questions**

- **Background questions** concern etiology or general knowledge about the patient's condition. Examples: What are the clinical signs of measles infection? How many people are affected by migraine headache? These questions take a wider view of the clinical problem.
- **Foreground questions** are more patient-based. They ask for specific knowledge to inform clinical decisions about the patient's management, often in PICO format (population or problem, intervention, comparison (if relevant), and outcome).

**▪ Identify Level of Evidence and its association with the type of study**

- **Level of evidence** is organized by the quality and amount of information.
- These levels represent the expected rigor of the design and control of bias, thereby indicating the level of confidence that may be placed in the findings.
- **Levels of evidence** are viewed as a hierarchy, with studies at the top representing stronger evidence and those at the bottom weaker evidence.
- This hierarchy can be used by clinicians, researchers, and patients as a guide to find the likely best evidence.

![Levels of evidence pyramid](media/image2.png)

- **Level 1**: Systematic reviews and meta-analyses
  - Critically summarize several studies
  - Best first pass when searching for information on a specific question
- **Level 2**: Individual RCTs or observational studies
  - Strong design
  - Strong outcomes
- **Level 3**: Studies without strong controls of bias
  - Nonrandomized studies
  - Retrospective cohorts
  - Diagnostic studies without consistent reference standards
- **Level 4**: Low level of evidence
  - Descriptive studies:
    - Case studies
    - Historical controls
- **Level 5**: Mechanistic reasoning -- evidence founded on logical connections
  - Pathophysiological rationale
- **Categories of Research**:
  - According to the purpose of the research
    - Descriptive: the research attempts to describe a group of individuals through a set of variables, to document their characteristics.
      - Developmental Research
      - Normative Research
      - Case Reports/Case Studies
      - Historical Research
      - Qualitative Research
      - Mixed Methods Research
    - Exploratory: observational designs used to examine a phenomenon of interest and explore its dimensions, often in a population or community.
      - Cohort Studies
      - Case-Control Studies
      - Correlational/Predictive Studies
      - Methodological Studies
    - Explanatory: compares two or more conditions or interventions.
      - RCT
      - Pragmatic Clinical Trials
      - Quasi-Experimental Designs: utilize similar structures to experimental designs, but lack random assignment, comparison groups, or both.
  - According to the approach of the research
    - Quantitative: measurement of outcomes using numerical data under standardized conditions.
      - Tells us which treatment is better or the degree of correlation between variables.
    - Qualitative: considered descriptive because it involves the collection of data through interviews and observation.
      - Stands apart from other forms of inquiry because of its deep-rooted intent to understand the personal and social experiences of patients, families, and providers.
  - According to the degree of manipulation and control by researchers
    - Experimental: design in which the investigator manipulates the independent variable.
      - Cause-and-effect relationships
    - Non-experimental: the researcher does not control, manipulate, or alter the independent variable or subjects.
      - Relies on interpretation, observation, or interactions
      - Survey research
      - Correlational studies
- The association with the type of study explains why qualitative or descriptive data may be excluded from the hierarchy and sometimes not seen as true forms of evidence. These research approaches do make significant contributions to our knowledge, and identification of mechanisms offers important perspectives about patient concerns.

**▪ Formulate a well-defined clinical question using the PICO acronym**

![PICO acronym](media/image4.png)

- In hospitalized children, how does double-checking pediatric medication with a second nurse, compared to not double-checking, affect medication errors?
  - **P** Patient/Population - hospitalized children
  - **I** Intervention - double-checking pediatric medications
  - **C** Comparison - not double-checking medication
  - **O** Outcome - effect on medication errors

**[WEEK 2 (Portney LG (2020): Chapters 1, 3, 5, 6, 36)]**

Learning Objectives

**▪ List common databases for medical research (PubMed, Cochrane Library, etc.)**

- Common databases for medical research include:
  - MEDLINE Complete
  - PubMed
  - UpToDate
  - Cochrane Library
  - CINAHL Complete
- Google is not a synonym for research

**▪ Apply strategies (e.g., Boolean logic and Medical Subject Headings - MeSH) for using search engines and databases to acquire research literature**

- Use advanced search features in databases (e.g., filters for publication dates, languages, etc.).
- Identify key concepts from the research question.
- Use synonyms and related terms to cover different variations.
- Apply Boolean logic for complex queries; utilize Boolean operators (AND, OR, NOT) to refine searches.
- Utilize subject-specific databases like PubMed for medical literature.

![Search strategy diagram](media/image6.png)

**▪ Distinguish between primary and secondary sources of information**

- A **primary source** is a report provided directly by the investigator.
  - Examples:
    - Research articles
    - Journals
    - Presentations at professional meetings, all from the researcher
- A **secondary source** includes reviews of studies presented by somebody other than the original author.
  - Examples:
    - Review articles
    - Textbooks

**▪ Compare/contrast among descriptive, exploratory and explanatory research**

- **Descriptive research** documents the nature and characteristics of phenomena through systematic collection of data.
- **Exploratory research** investigates relationships between or among variables.
  - Usually involves one group of subjects with measurements taken of different variables; a mathematical relationship is established among the variables.
- **Explanatory research** compares differences in outcomes measured between and among different treatments or conditions.
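The Boolean strategy above can be sketched in a few lines of code. This is a minimal illustration, not a vetted search strategy: the synonyms below are made-up stand-ins for the double-checking PICO example. Synonyms within one concept are combined with OR, and the concept groups are combined with AND.

```python
def build_query(concepts):
    """Build a Boolean search string: OR within a concept, AND across concepts."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts]
    return " AND ".join(groups)

# Illustrative (unvetted) terms for the double-checking PICO question:
query = build_query([
    ["hospitalized children", "pediatric inpatients"],   # P
    ["double checking", "medication verification"],      # I
    ["medication errors"],                               # O
])
print(query)
# (hospitalized children OR pediatric inpatients) AND (double checking OR medication verification) AND (medication errors)
```

In a real database such as PubMed, each term would additionally be tagged with a field qualifier or mapped to a MeSH heading; the OR-within, AND-across structure stays the same.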
**▪ Describe the anatomy (structure/elements) of a research article and the associated function of each element**

- **Introduction**:
  - Defines the main problems underlying the study
  - Identifies the knowledge gap in the literature
  - Provides the rationale for the study (i.e., sets the stage for its need)
  - Presents the research hypothesis to be tested (for intervention studies)
  - States the specific purposes/objectives of the study
- **Methods**: describes the conduct of the study, subdivided into:
  - Subjects/Participants: Who were the subjects? Inclusion/exclusion criteria?
  - Design: Is it appropriate for answering the research question? Was bias sufficiently controlled, such as through use of blinding?
  - Instrument/Evaluation: How have the authors documented the reliability and validity of the instruments for outcome measures?
  - Procedure: Were data collection procedures described clearly and in sufficient detail to allow replication? Operationally defined?
  - Data analyses: How were data analyzed? Analyzed appropriately? What statistical tests were used? Were they appropriate to the research question?
  - Summarized as W&H questions:
    - Who
    - Where: site of performance; Institutional Review Board (IRB) or ethical approval
    - What: procedures/protocol (tests or treatments)
    - How: data analyses
- **Results**:
  - Reports the findings of the study without interpretation or commentary (facts only)
  - Answers questions in the order of the stated purposes in the text (often accompanied by tables/figures)
  - Presents the outcomes of statistical analyses
- **Discussion**:
  - Presents the authors' interpretation of the results
  - Compares results with previous pertinent studies (in agreement or in contrast)
  - Indicates limitations of the study
  - Discusses the relevance to clinical practice
  - Suggests future directions
- **Conclusion**:
  - Restates the findings of the study with respect to the purpose or hypothesis outlined in the introduction

**▪ Discuss key elements about validity and clinical applicability of research evidence**

- **Validity** in research refers to the accuracy and truthfulness of the findings.
  - **Internal validity** is the extent to which a study accurately shows cause-and-effect relationships,
  - while **external validity** refers to the generalizability of findings to real-world settings.
- **Clinical applicability** is the relevance of research findings to practical medical care, considering factors like patient population and context.
  - Were the subjects in the study sufficiently similar to my patient?
  - Is the approach feasible in my setting, and will it be acceptable to my patient?
**[WEEK 3 (Portney LG (2020): Chapters 13-14)]**

**[Learning Objectives for Selection (Chapter 13)]**

**▪ Differentiate among a sample, accessible population & population**

- **Population**: the entire group of interest to whom the findings will be generalized
- **Accessible population**: the subset of the population available for study
- **Sample**: the sub-group of the accessible population who have been selected for the study

**▪ Describe the purpose of selection (or inclusion & exclusion) criteria in sampling for research studies**

- **Inclusion criteria**: the primary traits of the target and accessible populations that make someone eligible to be a participant (characteristics of interest)
- **Exclusion criteria**: factors that would preclude someone from being a subject (ineligible), i.e., undesirable attributes
  - **Confounding/extraneous variables**: variables that may confound the results or interfere with interpretation of the findings

**▪ Describe probability and nonprobability selection methods, and each type of selection method**

- **Probability samples**
  - **Random selection**: generates less bias and better representation of the population
  - **Simple random**: each member of the population has an equal chance of being chosen.
    - Subjects are assigned to groups using a form of randomization.
  - **Systematic**: set interval, e.g., every 10th participant
  - **Stratified random**: random samples drawn from subset groups
  - **Cluster**: the sample is a randomly selected county, block, household, etc.
  - **Disproportional sampling**: a form of stratified sampling used when certain strata are underrepresented in the population.
    - Corrects representation by oversampling certain groups
- **Nonprobability samples**
  - **Nonrandom methods**: more frequently used in clinical studies
  - **Convenience**: based on availability.
    - Recruited as they enter a clinic
    - Volunteering for a study
  - **Purposive**: invited to participate because of known characteristics.
    - Subjects are hand-picked
    - Specific expertise or experience from participants is needed
    - Often used in qualitative studies
  - **Quota**: subjects recruited to represent various strata
    - Lacks randomization
  - **Snowball sampling**: subjects help identify more subjects
    - Word-of-mouth

![Sampling methods overview](media/image8.png)

**[Learning Objectives (Chapter 14)]**

**▪ Describe methods used for assigning subjects to groups within a study, and methods used in random assignment strategy**

- Simple random assignment
- Block random assignment
- Stratified random assignment
- Cluster random assignment
- Random consent design
- Assignment by patient preference
- Run-in period

**▪ Discuss the importance of blinding/masking in research protocols**

- Observational bias is an important concern; it may affect the recording, reporting, and outcomes.
  - Participants' knowledge of their treatment status
  - Investigator's expectations
  - Conscious or unconscious
  - Those involved with the study include:
    - Participants, subjects, patients
    - Physician researchers who administer treatment or interventions
    - Outcome assessors who perform evaluations
- Protection is achieved through blinding, also known as masking.
  - Those involved in the study are unaware of the subject's group assignment.
  - Substantially strengthens the validity of conclusions.
  - Double blinding:
    - Strongest approach
    - Neither subjects nor investigators are aware of treatment groups until after the data are collected.
  - Single blinding:
    - Perhaps the subjects are blinded: a placebo or sham is offered
    - Perhaps the investigators are blinded: treatment is offered in an unobtrusive way
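Two of the assignment methods listed above can be illustrated with a short sketch. This is a minimal Python example (the subject IDs, group labels, and block size are invented for illustration) contrasting simple random assignment, where group sizes can drift apart by chance, with block random assignment, which keeps group sizes balanced within every block:

```python
import random

def simple_random_assignment(subjects, groups=("treatment", "control"), seed=0):
    """Each subject is assigned independently; group sizes may end up unequal."""
    rng = random.Random(seed)
    return {s: rng.choice(groups) for s in subjects}

def block_random_assignment(subjects, block_size=4, seed=0):
    """Shuffle a balanced set of labels within each fixed-size block,
    so the running group sizes never differ by more than half a block."""
    rng = random.Random(seed)
    assignment = {}
    for start in range(0, len(subjects), block_size):
        block = subjects[start:start + block_size]
        labels = ["treatment", "control"] * (block_size // 2)
        rng.shuffle(labels)
        for subject, label in zip(block, labels):
            assignment[subject] = label
    return assignment

subjects = [f"S{i:02d}" for i in range(1, 9)]   # hypothetical subject IDs
blocked = block_random_assignment(subjects)      # 4 treatment, 4 control
```

Stratified random assignment would apply the same idea separately within each stratum (e.g., within each age group), and cluster random assignment would assign whole clusters rather than individuals.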
**▪ Discuss methods for controlling confounding influences related to subject participation in a study**

- Random sampling and random assignment
- Control group
- Blinding/masking

**▪ Describe the role of clinical trials in health care research**

- Therapeutic trials: effect of an intervention
  - 25 years of clinical trials have shown radical mastectomy is not necessary for reducing the risk of recurrence or spread of breast cancer; limited resection can be equally effective in terms of recurrence and mortality.
- Diagnostic trials: accuracy of diagnostic procedures
  - Selective tests for suspected DVT showing that different strategies can be used with equal accuracy.
- Preventive trials: evaluation of whether a procedure or agent reduces the risk of developing a disease or disorder.
  - Field study of poliomyelitis vaccine in 1954: the incidence of poliomyelitis in the vaccinated group was reduced by more than 50% compared to those receiving the placebo, establishing strong evidence of the vaccine's effectiveness.

**▪ Define phases of clinical trials**

- Phase 1: Is the treatment safe?
  - Testing in small groups, no controls
- Phase 2: Does the treatment work?
  - RCT on a larger group with placebo
- Phase 3: How does treatment compare with standard care?
  - Larger RCT against the "gold standard"
- Phase 4: What else do we need to know?
  - Continued evaluation after the drug has gone to market

**[WEEK 4 (Portney LG (2020): Chapters 15, 16-17)]**

**[Learning Objectives (Chapter 15)]**

**▪ Discuss the concepts of internal, external, construct, and statistical conclusion validity related to quantitative research**

- **Internal validity**: Is there evidence of a causal relationship between the independent and dependent variables?

![Internal validity diagram](media/image10.png)

- **External validity**:
  - Can the results be generalized to other persons and settings outside of the specifications of the study?
- **Construct validity**:
  - Related to design concerns: how variables are conceptualized and what they mean within the study
  - IV and DV are well-established and correctly labeled
  - Often (not exclusively) related to measures/tests: comparing the variables with their measures to determine if they truly represent the variable
- **Statistical conclusion validity**:
  - Is there a relationship between the independent and dependent variables?
  - Are appropriate statistical analyses used to assess it?

![Statistical conclusion validity diagram](media/image12.png)

**▪ Recognize potential threats to research validity**

- **Internal**
  - **Attrition**: a reduction in sample size and/or imbalance in baseline characteristics of groups
  - **Instrumentation**: problems with the tool used to measure the variable and/or inappropriate selection of the tool
  - **Assignment or selection**: unequal baseline characteristics of groups that can influence study outcomes; bias in the way subjects were assigned to groups
- **External**
  - **Biased sample selection**: influence of selection; narrow or poor sample selection
  - **Setting differences**: influence of the setting; the environment of the study was different from what would be applied in the clinic
  - **Time**: influence of history; clinical practice significantly changed during the course of the study
- **Construct**
  - **Faulty operational definitions**
  - **Experimental bias of subjects and/or investigators**: blinding/masking
- **Statistical conclusion**
  - **Low statistical power**: small sample size
  - **Violated assumptions**: incorrect/inappropriate selection of statistical test
  - **Failure to use** intention-to-treat (ITT) analysis
- **Hawthorne effect**:
  - Subjects behave differently when they know they are being observed or studied

**▪ Identify safeguards/solutions implemented to prevent threats to research validity**

**Internal validity**

- Problem: **Attrition (loss of subjects)**
  - Solutions/safeguards: replacement of subjects; statistical "intention to treat" analysis
- Problem: **Instrumentation** - are changes due to how the DV was measured?
  - Solutions/safeguards:
    - Select an appropriate technique or instrument
    - Calibrate the instrument
    - Train users of instruments
- Problem: **Assignment or selection** - were the groups unequal in baseline characteristics to start?
  - Solutions/safeguards:
    - Random assignment to groups
    - Adequately described inclusion/exclusion criteria
    - Statistical adjustments (analysis of covariance - ANCOVA)

**▪ Describe the importance of each validity to the conclusion of research studies**

- Each type of validity bears on the truthfulness or accuracy of the study results: the four categories (internal, external, construct, and statistical conclusion) together ensure that the results are valid and accurate by guarding against potential threats, and each comes with ways to check for threats as well as solutions to fix or avoid problems in future studies.
- When validity is threatened, conclusions should be suspect.

**[Learning Objectives (Chapters 16 and 17)]**

**▪ Describe the structure of basic experimental designs for independent groups and repeated measures**

- **Experimental design**: between-subjects design: subjects assigned to independent groups; within-subjects design: subjects act as their own controls
- **Independent designs**
  - **Pretest-posttest control group design** (aka parallel group design): the scientific standard for investigating cause-and-effect relationships
    - Two or more independent groups, one IV, one or more DVs
    - Random assignment
- **Repeated designs** (aka within-subjects designs): subjects are their own controls; the order of test conditions may be randomized
  - Ex. What is best for knee pain? Expose the subject to walking unaided, then walking with a cane, then walking with the cane on the other side.

![Cross-over design](media/image14.png)

- **Cross-over design**: another type of repeated design, commonly used to increase effective sample size because participants are their own controls
  - Participants are randomized to a treatment sequence (controls for order effects of the treatment sequence)
  - Important to leave a long enough washout period between switching

**▪ Differentiate between different types of experimental design**

- **Mixed design**
  - Two IVs
  - One repeated for all subjects (IV = time: pretest, posttest, ...)
  - The other is randomized to groups (IV = treatment)

![Mixed design](media/image16.png)

- **Quasi-experimental**
  - Lacks random assignment and/or a comparison group
  - Likely to have threats to internal validity
  - All subjects receive the same treatment
  - No comparison group, which limits internal and external validity
  - The independent variable is **time** rather than intervention

![Quasi-experimental design notation](media/image18.png)

**▪ Outline advantages and disadvantages of different types of experimental designs**

- **Experimental design**: can show evidence of a cause-and-effect relationship between the IV and the DV.
  - Protects against threats to validity
- **Quasi-experimental**: sometimes a true experiment can't be done; more likely to have threats to internal validity

**[WEEK 5 (Portney LG (2020): Chapters 8-10, and 22-25)]**

**[Learning Objectives: Chapter 8]**

**▪ Define the basic terms of measurement**

- **Measurement**: the process of assigning numerals to variables to represent quantities of characteristics according to certain rules.
- **Variable**: a property that can differentiate individuals or objects; represents an attribute that can have more than one value.
  - **Continuous variables**: any value along a continuum within a defined range.
    - Strength, distance, weight, and chronological time
  - **Discrete variables**: described in whole-integer units only.
    - Heart rate, the number of trials needed to learn a motor task, the number of children in a family
- **Precision**: the exactness of a measure.
  - A function of the sensitivity of the measuring instrument, the data analysis system, and the variable itself.
  - Numbers in clinical research are usually reported to one or two decimal places.
- **Value**: denotes
  - Quantity (age, blood pressure)
  - Attributes (sex, geographical region)
- **Numerals**:
  - Labels with no quantitative meaning
  - Coding opinion data as "1 Strongly Disagree" to "5 Strongly Agree"
- Measurement values represent quantities of characteristics:
  - Most measurement is abstract or conceptualized; it is inferred.
    - Temperature is not directly measurable; we use a column of mercury in a thermometer to observe it.
    - We do not visualize the electrical activity of a heartbeat or muscle contraction, but an EKG or EMG will record the data.
  - Very few variables are measured directly.
    - Range of motion is directly observed and measured in degrees, and length in centimeters.
  - To measure a variable, regardless of how indirect it may be, we must define it.
- **Constructs**:
  - Abstract variables, not observable
  - Measured according to expectations of how a person who possesses the specified trait would behave, look, or feel in a certain situation
  - A value is assumed to represent the underlying variable
  - We must understand the fundamental meaning of a variable and find a way to characterize it
  - Examples: intelligence, health, pain, mobility, and depression
- Assigning values to objects according to certain rules:
  - Values reflect the amount and unit of measurement
  - A yardstick (inches), a scale (pounds), a thermometer (degrees)
- **Relative order**:
  - Numbers denote relative order among variables
  - A is greater than B, B is greater than C; therefore, A is greater than C
  - Pain scales do not fit within this structure, although they use numbers:
    - Patient A states pain is a 6; Patient B states pain is a 6 - there is no way to know if their pain is equal
    - Another patient states pain is a 4 but may have pain comparable to a 6 - we don't know!
    - However, if a patient reports pain of 6, receives treatment, and now reports pain of 2, we can assume the pain is less than before.
  - The rule which defines the system of order is valid within an individual, but not across different individuals.

**▪ Define and provide examples of four scales/levels of measurement: nominal, ordinal, interval, and ratio data**

- Classification of variables based on their characteristics.
- Each scale carries a special set of rules for manipulating and interpreting numerical data.
- **Nominal**:
  - Lowest level of measurement; a classificatory scale
  - Objects or people are assigned to a category based on certain criteria
  - Categories are coded (name, number, letter) - purely labels, with no quantitative value or relative order
  - Categories are mutually exclusive: no object or person can be assigned to more than one category
  - The rules for classification are assumed to be exhaustive: every subject can be accurately assigned to one category
  - No inherent rank
  - The only permissible mathematical operation is counting the number of subjects within each category
  - Examples: blood type, handedness, ethnicity, gender, sex
- **Ordinal**:
  - Categories are rank-ordered based on an operationally defined characteristic: a "greater than-less than" relationship
  - Many clinical measurements are based on this scale
    - Sensation (normal > impaired > absent)
  - Intervals between ranks may not be consistent or known
    - "Some college" can represent any number of years
  - Scores are essentially labels, like nominal: they do not represent quantity, but position within a distribution
  - Cannot be added, subtracted, multiplied, or divided
  - Ordinal scales are distinguished by whether or not they have a true zero point
    - A category labeled "zero" may simply refer to performance below a certain criterion rather than the total absence of function
  - Examples: tests of function, strength, pain, and quality of life; surveys using strongly agree > agree
- **Interval**:
  - Has the rank-order characteristics of the ordinal scale, but with known and equal intervals between consecutive values
  - Relative difference and equivalence within the scale can be determined
  - No true zero point: the value of zero is not indicative of the complete absence of the measured trait
  - Negative values are possible and may represent lesser amounts of the attribute rather than its absence
  - Values can be added and subtracted, but these operations cannot be used to interpret actual quantities
  - Examples: years on the Gregorian calendar (B.C. and A.D.), temperature (Fahrenheit and Celsius) - each has an artificial zero that does not represent the absence of heat, and temperatures can be negative
- **Ratio**:
  - Highest level of measurement
  - An absolute zero has meaning: zero represents the total absence of the property
  - Negative values are not possible
  - All mathematical and statistical operations are permissible
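One way to see why the scale matters is to encode the defining property of each level. A small Python sketch, following the definitions above (the property names and example variables are my own labels, chosen for illustration):

```python
# Defining properties of the four levels of measurement.
SCALES = {
    "nominal":  {"order": False, "equal_intervals": False, "true_zero": False},
    "ordinal":  {"order": True,  "equal_intervals": False, "true_zero": False},
    "interval": {"order": True,  "equal_intervals": True,  "true_zero": False},
    "ratio":    {"order": True,  "equal_intervals": True,  "true_zero": True},
}

# Illustrative clinical variables mapped to their scales.
EXAMPLES = {
    "blood type": "nominal",
    "sensation grade": "ordinal",
    "temperature (Celsius)": "interval",
    "range of motion (degrees)": "ratio",
}

def can_take_ratio(variable):
    """Ratio statements ('twice as much') require an absolute zero."""
    return SCALES[EXAMPLES[variable]]["true_zero"]
```

For instance, `can_take_ratio("range of motion (degrees)")` is true, while `can_take_ratio("temperature (Celsius)")` is false: 20 °C is not "twice as hot" as 10 °C, because the Celsius zero is artificial.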
**▪ Discuss the relevance of identifying measurement scales for statistical analysis**

- It is important to accurately identify the "level" of measurement.
- Selection of a statistical test is based on certain assumptions about the data:
  - Parametric tests:
    - Apply arithmetic manipulations
    - Require interval or ratio data
  - Nonparametric tests:
    - Designed to be used with ordinal and nominal data

**[Learning Objectives: Measurement Reliability (Chapter 9)]**

**▪ Define reliability in terms of measurement error**

- **Reliability**: the extent to which a measured value can be obtained consistently during repeated assessment of unchanged behavior.
  - The nature of reality is that measurements are rarely perfectly reliable.
  - All instruments are fallible.
  - All humans respond with some level of inconsistency.
- **Classical measurement theory**:
  - Any observed score (Xo) consists of two components:
    - A true score (Xt), a fixed value
    - An unknown error component (E), which may be large or small depending on the accuracy and precision of the measurement procedures
  - The difference between the true value and the observed value is **measurement error**: noise.
- Types of measurement error:
  - **Systematic error**:
    - Predictable errors of measurement
    - Occur in one direction: consistent over- or underestimation of the true value
    - May be correctable by recalibrating an instrument or adding or subtracting a constant from each observed value
    - No threat to reliability; threatens only the validity of the measurement
  - **Random error**:
    - A matter of chance; may over- or underestimate the true value
    - Arises from factors such as examiner or subject inattention, instrument imprecision, or unanticipated environmental fluctuation
    - Assuming random errors happen by chance, with over- and underestimates occurring with equal frequency, averaging across several trials should cancel them out
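The observed-score model above can be demonstrated with a quick simulation. This is a hedged sketch with invented numbers (a "true" blood pressure of 120 and a miscalibrated cuff adding a constant +5): averaging many trials cancels the random error but leaves the systematic bias untouched, which is exactly why systematic error threatens validity rather than reliability.

```python
import random
import statistics

# Classical measurement theory: observed = true score + error.
rng = random.Random(42)
true_score = 120.0        # hypothetical "true" systolic blood pressure
systematic_bias = 5.0     # miscalibrated cuff shifts every reading upward

# Each trial adds the same bias plus fresh random noise (SD = 3, invented).
observed = [true_score + systematic_bias + rng.gauss(0, 3) for _ in range(1000)]

mean_obs = statistics.mean(observed)
# The mean settles near 125, not 120: random error averages out,
# systematic error does not. Only recalibration removes the bias.
```

A single reading here could be anywhere from roughly 116 to 134, but the average of many readings is reliably close to 125, i.e., consistently "off the mark."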
**▪ Describe typical sources of measurement error**

- **The rater**: the individual taking the measurement
  - Errors occur because of:
    - Lack of skill
    - Not following the protocol
    - Distractions
    - Inaccurately recording or transcribing values
    - Unconscious bias
- **The instrument**: imprecise instruments due to:
  - Environmental changes
  - Fluctuating performance
- **Variability of the characteristic being measured**:
  - Physiological responses (blood pressure varies moment to moment)
  - Changes in subject performance (motivation, cooperation, and fatigue change)

**▪ Define and provide examples of test-retest & rater reliability**

- **Test-retest reliability**: an assessment to determine the ability of an instrument to measure subject performance consistently.
  - Will the instrument perform the same way trial to trial?
  - To perform a test-retest study:
    - A sample of individuals performs an identical test on two separate occasions,
    - keeping all testing conditions consistent.
  - The coefficient is the **test-retest coefficient**:
    - An estimate of reliability in situations in which raters are minimally involved
    - Self-report questionnaires
    - Instrumented physiological tests that provide automated digital readouts
- **Rater reliability**: the instrument and the response variable are assumed to be stable, so that any differences between scores on repeated tests can be attributed solely to rater error.
  - **Intra-rater reliability**: stability of data recorded by one tester across two or more trials.
    - In test-retest, when a rater's skill is relevant to the accuracy of the test, intra-rater reliability and test-retest reliability are essentially the same.
      - Blood pressure measurements: it is not possible to distinguish between the skill of the examiner versus the consistency of the sphygmomanometer.
    - Bias of the rater: they may be influenced by their memory of the first recording.
  - **Inter-rater reliability**: the variation between two or more raters who measure the same subjects.
    - Best assessed when all raters can measure a response during a single trial.
    - They can observe a subject simultaneously yet independently of one another.
      - Videotapes of a patient performing activities

**▪ Discuss how reliability is related to the concept of minimal detectable change/difference (MDC)**

- In test-retest, some portion of a change may be error, and some portion may be real.
- **Minimal detectable change (MDC)**:
  - The amount of change in a variable that must be achieved (beyond the minimal error in a measurement) before we can be confident that error does not account for the entire measured difference;
  - that is, that some true change must have occurred.
  - The amount of change which goes beyond error.
  - The greater the reliability of an instrument, the smaller the MDC.
  - Based on the standard error of measurement (SEM).
  - Also known as:
    - **Minimal Detectable Difference (MDD)**
    - **Smallest Real Difference (SRD)**
    - **Smallest Detectable Change (SDC)**
    - **Coefficient of Repeatability (CR)**
    - **Reliability Change Index (RCI)**

**[Learning Objectives: Measurement Validity (Chapter 10)]**

**▪ Compare and contrast between reliability and validity**

- **Validity**: the confidence we have that our measurement tools are giving us accurate information about a relevant construct so that we can apply results in a meaningful way.
  - Concerned with the meaning or interpretation given to a measurement.
  - "The extent to which a test measures what it is intended to measure."
  - Also addresses the interpretation and application of measured values.
  - Three types of questions:
    - Is a test capable of discriminating among individuals with and without certain traits, diagnoses, or conditions?
    - Can the test evaluate the magnitude or quality of a variable, or the degree of change from one time to another?
    - Can we make useful and accurate predictions about a patient's future status based on the outcome of a test?
- Valid measures offer sound footing to support the inferences and decisions that are made.
  - The extent to which measurements align with the targeted construct.
- **Reliability**: the extent to which a test is free of random error.
  - With high reliability, a test delivers consistent results over repeated trials.
  - It may still have systematic error; such a test delivers results that are consistently "off the mark."
- Distinction between validity and reliability:
  - ![Target diagram illustrating reliability versus validity](media/image20.png)

**Define and provide examples of content, criterion-related, and construct validity (3 Cs).**

(figure: types of evidence for validation)

- **Content validity**: establishes that the multiple items making up a questionnaire, inventory, or scale adequately sample the universe of content that defines the construct being measured.
  - The content universe cannot be covered in its totality, so the items of the questionnaire, inventory, or scale must be representative of the whole.
  - Examples:
    - Educational tests
    - Attitude scales
    - Clinical measures of function or quality of life
- **Criterion-related validity**: establishes the correspondence between a target test (to be validated) and a reference or "gold standard" (the criterion) to determine that the target test measures the variable of interest.
  - Based on the ability of a test to align with results obtained on an external criterion.
  - If results from the two tests are correlated or in agreement, the target test is considered a valid indicator of the criterion score.
  - Example:
    - Heart rate (target test) has been established as a valid indicator of energy cost during exercise because it correlates with standardized oxygen consumption values (gold standard).
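The relationship between reliability and the MDC described in the earlier objective can be sketched with a short calculation. This is a minimal sketch using hypothetical values for the baseline SD and reliability coefficient; it uses the common formula SEM = SD × √(1 − reliability) and MDC95 = 1.96 × √2 × SEM.

```python
import math

# Hypothetical values: baseline SD of the measure and its test-retest
# reliability coefficient (e.g., an ICC reported in a reliability study)
sd = 8.0           # points, on the scale of the measure
reliability = 0.90

sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
mdc_95 = 1.96 * math.sqrt(2) * sem      # 95% minimal detectable change

print(f"SEM = {sem:.2f}, MDC95 = {mdc_95:.2f}")
# A measured change smaller than MDC95 could be entirely measurement error.
```

Raising the reliability coefficient shrinks the SEM and therefore the MDC, which is exactly the bullet point above: the greater the reliability of an instrument, the smaller the MDC.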
- **Construct validity**: establishes the ability of an instrument to measure the dimensions and theoretical foundation of an abstract construct.
  - Abstract constructs are not directly observable as physical events; inferences are made through observations of behavior, measurable performance, or patient self-report.
  - Examples:
    - Pain (subjective and unobservable)
    - Intellect
    - Depression

**Define minimal clinically important change (MCID).**

- **Minimal Clinically Important Difference (MCID)**: the smallest difference in a measured variable that signifies an important difference in the patient's condition.
  - An indicator of the instrument's ability to reflect impactful changes when they have occurred.
  - Reflects the test's validity and can be helpful when:
    - Choosing an instrument
    - Setting goals for treatment
    - Determining whether treatments are successful
  - A subjective determination, typically by the patient, who indicates whether they feel "better" or "worse."
  - Also known as:
    - Minimal Clinically Important Change (MCIC)
    - Minimal Important Difference (MID)
    - Minimal Clinically Important Improvement (MCII)

**[Learning Objectives: Chapter 22 - Descriptive Statistics]**

- **Descriptive statistics** are used to characterize the [shape], [central tendency], and [variability] within a set of data, called a **[distribution]**.
- Both a measure of central tendency (e.g.,
mean) and a measure of variability (e.g., SD) are needed to describe group data.

**Describe methods for graphic presentation of distributions.**

- Histogram
- ![Examples of frequency distribution graphs](media/image22.png)

**Calculate measures of central tendency and discuss their appropriate applications.**

- **Mean**: the average.
- **Median**: the middle score.
- **Mode**: the score that occurs most frequently in a distribution.
- In a normal distribution, all 3 Ms fall at the same point.
- ![Normal distribution with mean, median, and mode at the center](media/image24.png)

**Define measures of variability including range, variance, and standard deviation.**

- **Variability** is a measure of the spread of scores within a distribution and is expressed in different ways:
  - **Range**: from minimum to maximum (example: CRT test).
  - **Variance**: based on the sum of squares (SS).
    - Small if scores are close together; large if they are spread out.
  - **Standard deviation (SD)**: the square root of the variance; in the same units as the variable.

**Explain the properties of the normal distribution.**

- AKA the bell-shaped or Gaussian distribution.
- Constant and predictable characteristics.

**[Learning Objectives: Chapter 23 - Foundations of Statistical Inference]**

![Diagram of the sampling process](media/image26.png)

**Discuss the concept of probability in relation to observed outcomes.**

- **Alpha level**: the level of significance.
  - The amount of chance the researchers are willing to tolerate.
  - e.g., at the 0.05 level (the default), researchers accept a 5% chance that significant results occurred by chance.
  - **What the researchers are willing to accept.**
- **P-value**: the likelihood that any one event will occur, given all the possible outcomes.
  - Implies uncertainty: what is likely to happen.
  - A product of the data analysis.
  - **What the researchers actually find.**
- **If p \< alpha, there is a statistically significant difference.**

**Define a
confidence interval (CI) for the mean.**

- A range of scores with specified confidence limits; represents a specified probability (95% is the traditional value) that the true population value lies within the range.
- With a 95% CI, if a study were repeated 100 times, 5% of the sample CIs would not contain the true population mean.
  - i.e., out of 20 random samples, on average one confidence interval would not contain the population mean.

**Explain the difference between the null hypothesis and the alternative hypothesis.**

- **Null hypothesis**: there is NO difference between the groups or interventions.
- **Alternative hypothesis**: there IS a difference.
- **Statistical conclusion**: "disproving" the null hypothesis.
  - Either reject the null, or
  - do not reject the null.

**Define Type I and Type II errors.**

- **Type I (alpha)** is a [false positive]: rejecting the null hypothesis when it is true.
  - Mistakenly finding a difference.
- **Type II (beta)** is a [false negative]: failing to reject the null hypothesis when it is false.
  - Mistakenly finding NO difference when there is a difference.

**Explain the conceptual difference between parametric and nonparametric statistics.**

- **Parametric statistics** analyze interval- or ratio-scale data in samples with a normal distribution and roughly equal variances.
- **Nonparametric statistics** are used with data from nominal and ordinal scales.

**[Learning Objectives: The t-Test (Chap. 24)]**

**Discuss the application of the t-test for paired and unpaired research designs.**

The t-test is a statistical method used to determine whether there is a significant difference between the means of two groups. It can be applied in both paired and unpaired research designs, each serving different purposes and scenarios.

### Paired t-Test

- The paired t-test is used when comparing two related samples or measurements, such as the same group of subjects measured at two different times or under two different conditions.
- Common scenarios include pre-test/post-test designs, where the same subjects are tested before and after an intervention, or crossover trials, where subjects receive both treatments in random order.
- **Example**:
  - Measuring the blood pressure of patients before and after administering a new medication.
- **Assumptions**:
  - The differences between paired observations are normally distributed.
  - The pairs are independent of each other.

### Unpaired t-Test (Independent t-Test)

- The unpaired t-test is used to compare the means of two independent groups. This is appropriate when the subjects in one group are not related to the subjects in the other group.
- Common scenarios include comparing the test scores of students from two different schools, or the effectiveness of two different treatments on separate groups of patients.
- **Example**:
  - Comparing the average weight loss between a group following a diet plan and a group following an exercise regimen.
- **Assumptions**:
  - The two groups are independent of each other.
  - The data in each group are normally distributed.
  - The variances of the two groups are equal (homogeneity of variance).

### Key Differences

- **Paired t-Test**: used for related samples; focuses on the differences within pairs.
- **Unpaired t-Test**: used for independent samples; compares the means between two groups.

**Interpret computer output (e.g., SPSS) for the paired and unpaired t-test.**

- Look at the t and p values given.
  - Larger t values: more likely p is significant.
  - p value: the likelihood of getting your result if the null hypothesis is true.
  - Often p \< 0.05 is taken as statistically significant.
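The two designs above can be contrasted with a short sketch. The data are hypothetical (mirroring the blood-pressure and weight-loss examples), and only the t statistics are computed; software such as SPSS would additionally report the degrees of freedom and p value from the t distribution.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic: mean within-pair difference over its standard error."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

def unpaired_t(g1, g2):
    """Independent-samples t statistic with pooled variance (equal-variance assumption)."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * stdev(g1) ** 2 + (n2 - 1) * stdev(g2) ** 2) / (n1 + n2 - 2)
    return (mean(g1) - mean(g2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical systolic BP (mmHg) before/after a medication: paired design,
# the same five patients measured twice
pre  = [150, 142, 160, 155, 148]
post = [141, 138, 152, 149, 143]
print(f"paired t = {paired_t(pre, post):.2f}")

# Hypothetical weight loss (kg) for diet vs. exercise groups: unpaired design,
# two separate groups of subjects
diet     = [4.1, 3.5, 5.0, 4.4, 3.8]
exercise = [2.9, 3.2, 2.5, 3.6, 3.0]
print(f"unpaired t = {unpaired_t(diet, exercise):.2f}")
```

Note how the paired version works on the within-pair differences, while the unpaired version pools the variance of the two independent groups, matching the "Key Differences" summary above.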
