Epi Methods II Final Exam Notes 20XX PDF


Summary

These notes cover hypothesis testing, confidence interval estimation, and crude vs. multivariate analysis in epidemiology, along with p-values, bias, confounding, effect modification, and related methods of analysis.

Epi Methods II - Final exam notes (from lecture on 10/09 to end)

Make sure to add basic measure of association formulas to the formula sheet again, just in case.

10/09: Hypothesis Testing, Confidence Interval Estimation, Crude v. Multivariate Analysis

- Reasons why an unadjusted crude measurement may not be real
  - Random error
  - Bias (systematic error)
- Ways to control for confounding: multivariable analysis, stratification, restriction, matching, randomization, adjustment
- Hypothesis testing
  - We look at the role that random error (chance) may have played in our measure of association/impact using p-values
- Confidence interval
  - We consider how precision and variability affect our measure of association/impact
- Point estimate
  - Gives an idea of the strength of the association
- Elements of statistical analysis
  - Parameter estimation
    - Point estimate
      - Single value computed from study data to represent the extent of the association or magnitude of effect (e.g. CIR, IDR, OR)
      - Determined by many factors, like bias and random error
      - Unlikely to equal the "true" population parameter
    - Interval estimate (confidence interval, CI)
      - Range of possible values around the point estimate having a designated probability of including the true parameter
      - 95%: the interval would include the true value 95% of the time if the study were repeated
      - Strongly influenced by sample size
        - Bigger study → narrower CI → more power
- Confidence intervals (CIs)
  - Z = 1.96 (95% CI), 1.645 (90% CI)
  - CI = e^(ln(RR) ± Z x SE(lnRR))
  - For IDR: SE(lnRR) = √(1/a + 1/c)
  - For CIR: SE(lnRR) = √(b/(a(a+b)) + d/(c(c+d)))
  - For OR: SE(lnOR) = √(1/a + 1/b + 1/c + 1/d)
- Hypothesis (statistical) testing
  - Concerned with measuring the likelihood (probability) that the study results were produced by chance
  - Accomplished by stating the study's null hypothesis and alternative hypothesis
    - H0: no difference in effect b/w groups A and B (any observed difference is completely due to random variation)
    - HA: effect of group A is stronger than group B (one-tailed) or is simply different from group B (two-tailed)
- Components of hypothesis testing
  - P-value
    - Probability, assuming that H0 is true, that the study data will show an association equal to or more extreme than the one observed
    - Judged against the alpha level, usually .05
    - As the p-value decreases, the likelihood that H0 will be rejected increases
    - Measures the "compatibility b/w the data and the H0"
    - Is NOT: the level of significance, the probability that H0 is true, the alpha level of the test, or indicative of the magnitude of association (that's the measure of association)
  - Error
    - Type I error (alpha): probability of rejecting the null when the null is true
    - Type II error (beta): probability of retaining the null when the null is false
    - Power = 1 - beta
  - Power
    - The probability of identifying an association when one exists
    - Probability of rejecting the null when it is false
    - If type II error = 20%, then power of the test is 80% (1 - 20%)
  - Some examples of hypothesis tests: chi-square, logistic regression, generalized estimating equations
- Hypothesis testing - the steps
  - State a null hypothesis (no association; CIR/IDR/OR = 1.0)
  - State an alternative hypothesis
    - One-tailed: states the direction of the association (>1.0 or <1.0); two-tailed: states only that the association differs from 1.0
  - Compute the test statistic; if the test statistic > critical value, reject the null
- Crude vs. Adjusted Analysis
  - ???
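The CI formula above can be sketched in a few lines of code. This is a minimal example with hypothetical 2x2 counts (not from the lecture), computing an odds ratio, its SE on the log scale, and the 95% Wald CI:

```python
import math

def or_ci(a, b, c, d, z=1.96):
    """Odds ratio with a confidence interval on the log scale.
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE(lnOR)
    lo = math.exp(math.log(or_hat) - z * se)    # e^(lnOR - z*SE)
    hi = math.exp(math.log(or_hat) + z * se)    # e^(lnOR + z*SE)
    return or_hat, lo, hi

# Hypothetical counts: 20 exposed cases, 80 unexposed cases,
# 10 exposed controls, 90 unexposed controls
or_hat, lo, hi = or_ci(20, 80, 10, 90)
print(round(or_hat, 2), round(lo, 2), round(hi, 2))
```

With these counts the point estimate is 2.25 but the interval includes 1.0, so at alpha = .05 we fail to reject H0, which ties the CI back to the hypothesis-testing steps above.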
- Prepare data for analysis
  - Clean your data (distribution of study variables, identify outliers, quantify missing data and decide how to handle it)
- Preliminary analysis
  - Crude analysis
    - Chi-square tests
    - T-tests, Wilcoxon-Mann-Whitney, etc.
    - ANOVA, etc.
    - Crude analysis is used to plan more complicated analysis
  - Multivariate analysis
    - Logistic regression, linear regression, Cox regression, etc.
- Presenting study results
  - Step 1: general characteristics of study population (descriptive stats)
  - Step 2: bivariate (crude, unadjusted) analysis
  - Step 3: multivariable (adjusted) analysis

10/18?: Selection Bias

- Bias
  - Systematic error in the design or conduct of a study
  - Error can be due to: systematic error, random variability (sampling error)
  - Study is valid if: design/procedures unbiased
- Validity
  - Internal: groups under study are selected and compared in such a manner that the observed differences between them may be attributed to the hypothesized effect under investigation
  - External (generalizability): whether the study can produce unbiased inferences regarding a target population
  - Hierarchy of populations: study population (sample), actual population (statistical inference), target population (internal validity), external population (external validity)
- Reliability
  - The precision and reproducibility of the data collected
  - Influenced by variability
    - Random error
- Error (random or systematic)
  - Can be introduced by the: study investigator, participant, or instrument
  - During the process of:
    - Selection of study subjects
    - Measurement of disease and/or exposure
    - Analysis or interpretation of findings
  - Random error: affects precision (reliability)
  - Systematic error (bias): affects validity
- Bias results from
  - Flaws in the method of selection of study participants (selection bias)
  - Procedures for gathering relevant exposure and/or disease info (information bias)
- Prevention and control of bias
  - Appropriate study design
  - Procedures for data collection are established and carefully monitored
- How do we prevent threats to validity (systematic error) in our research?
  - Study design: minimize bias
  - Study implementation: quality assurance and quality control
    - Quality assurance activities (before data collection starts)
      - Development or identification of validated data collection instruments
      - Development of a manual of procedures
      - Staff training, etc.
    - Quality control activities (after data collection starts)
      - Field observation
      - Validity studies
      - Data checks, double-data entry, etc.
- Selection bias
  - A systematic error in the way we choose (or retain) study participants
  - Occurs when individuals in the eligible population have different probabilities of inclusion related to:
    - Disease status in a cohort study, or
    - Exposure status in a case-control study
  - Results from:
    - Procedures used to select participants, and/or
    - Factors that influence participation
  - Distortion of the measure of association b/w exposure and outcome
- Selection bias in:
  - Case-control studies
    - Selection of subjects may be influenced by exposure status
    - Choice of sampling frame (e.g. hospital-based: Berkson's bias)
    - Use of prevalent cases resulting in differential survival
    - Separate selection processes for cases and controls
  - Cohort studies
    - Differential losses to follow-up b/w exposed and unexposed
- Selection bias quantitative assessment
  - Requirement for an unbiased OR:
    - Sampling fraction is the same within groups of cases and controls
    - No differential sampling based on exposure status
- Surveillance bias
  - Exposed cases have a higher probability of being selected for the study than other categories of individuals
  - Medical surveillance/detection bias: a medical exposure (OC use, high BP) leads to surveillance for outcomes (cancer, etc.) that may result in a higher probability of detection in exposed individuals
  - Identification of outcome is not independent of knowledge of exposure
    - Selection bias in case-control studies (identification of outcome is influenced by presence of exposure)
    - Information bias in cohort studies (exposed undergo a more thorough examination than unexposed)
  - To prevent surveillance bias
    - Assess the outcome systematically (standardized)
    - Mask exposure status when ascertaining presence of outcome (not possible in case-control)
    - Obtain info on variables that indicate awareness of the health problem
    - Stratify by disease severity at diagnosis
- Berkson's bias
  - Selection bias that leads hospital cases and controls in a case-control study to be systematically different from one another
  - Occurs when the combination of exposure and disease under study increases the risk of admission to hospital (higher exposure rates among cases than controls)
- Self-selection/Referral bias/Participation bias
  - When participants willingly choose to participate in a study rather than being randomly selected
  - Self-referral of subjects is considered a threat to validity
  - Can occur before subjects are identified in the study
  - A threat when using non-probability sampling
- Healthy worker effect
  - Hiring and retention allow relatively healthy people to become or remain workers
  - The unemployed, retired, and disabled are out of the active worker population and are, as a group, less healthy
  - More common in occupational studies (the health of active workers is generally better than that of the general population, which includes both healthy and unhealthy people)
- Incidence-prevalence bias
  - Occurs when a study fails to account for cases that have already resolved or resulted in death before the study began
  - In cross-sectional studies
    - Incidence-prevalence bias from inclusion of prevalent cases (e.g. current smoking and lung cancer: survival of lung cancer patients who smoke is shorter than of those who quit, so the measure of association is underestimated because more severe cases have died)
  - In case-control studies
    - Newly diagnosed cases used as proxies for newly developed cases
    - "Incident" cases actually include incident and prevalent cases (especially in diseases that evolve subclinically for many years prior to diagnosis)
  - Prevention of incidence-prevalence bias
    - Avoid by careful ascertainment and exclusion of prevalent cases and a careful watch for newly developed outcomes
    - Do a cohort study
- Duration/Survival bias
  - Duration of disease (after its onset) differs between exposed and unexposed persons
  - When disease prevalence is low and duration (prognosis) is independent of exposure (duration same in exposed and unexposed), not a problem
  - BUT when exposure increases disease risk and affects prognosis, bias is present
- Temporal bias
  - The proper exposure-disease sequence is not established (i.e. if you study one "exposure" and one "outcome" at the same time without establishing temporality (which came first), you can't know which causes which)
  - In cross-sectional studies, the time sequence is unknown
  - Avoid by using a prospective study where exposure levels can be measured over time
  - Improve by obtaining time-sequence data through questionnaires (cross-sectional)
- Publication bias
  - A reporting bias
  - Accepting published findings assumes:
    - Each published study is unbiased
    - Published studies constitute an unbiased sample of all unbiased studies
    - Not always true
  - Tendency to publish "positive"/statistically significant findings
  - Affects meta-analyses
  - Prevention of publication bias
    - Create study registers
    - Advance publication of research designs
    - Is it possible to avoid?
- Evaluation of screening interventions
  - Selection bias: prevent with an experimental design
  - Incidence-prevalence bias: occurs in pre-post studies (1st exam identifies prevalent cases, "post" exam identifies incident cases)
  - Length bias: occurs when a better prognosis for cases detected by the screening procedure (than for cases diagnosed between screening exams) is taken as evidence of screening effectiveness
    - Reality may be a longer preclinical phase reflecting slower-progressing disease
    - Use an experimental design to compare prognosis
  - Lead time bias: when estimating survival time from diagnosis, overestimation of survival when disease is detected early (as with screening)
    - Occurs only when estimating survival from time of diagnosis
    - Prevented by calculating mortality risk/rate among all screened and control subjects
- Selection bias in cohort studies
  - Usually bias due to attrition via withdrawals and loss to follow-up
- Preventing selection bias
  - Design phase
    - Define the source population and, if possible, enumerate it
    - Conduct a population-based study using random sampling if possible
    - Work to get high response rates
  - Analysis phase
    - Some selection bias can be dealt with here (sensitivity analysis), but not as easily as in the design phase
- Sensitivity analysis of selection bias
  - Observed data:

                Exposed   Unexposed
    Cases          a          b
    Controls       c          d

  - Sampling probabilities:

                Exposed   Unexposed
    Cases          Sa         Sb
    Controls       Sc         Sd

  - Crude OR = ad / bc
  - Corrected OR = Crude OR x (SbSc / SaSd)

10/23?: Bias Continued (Information Bias)

- Systematic error leads to bias; bias results in an incorrect measure of association
- Validity
  - Measuring what we say we will
  - Sensitivity: correctly identifying those who have the disease/characteristic
  - Specificity: correctly identifying those who do not have the disease/characteristic
- Information bias
  - Also called observation, classification, or measurement bias
  - A flaw in measuring exposure or outcome data that results in different quality of information between comparison groups
  - Results in the systematic misclassification of exposure and/or outcome status (or covariates)
  - Sources
    - Interviewer bias: investigators eliciting or interpreting information differently
    - Recall bias: study subjects reporting events in a non-comparable manner
    - Typical sources of information bias: measuring instruments, lab tests, poorly designed survey questionnaires, interviewer bias, outcome information bias, problems recalling info
- Interviewer/observer bias
  - Ascertainment of outcome is not independent of knowledge of exposure status
  - Ascertainment of exposure status is not independent of knowledge of disease status
  - Interviewers may gather info from participants in a systematically different way (e.g. probe cases more thoroughly for exposure)
  - How to avoid
    - Careful design, protocols (manual of operations, trained staff, standardized data collection, masking, multiple observers)
- Recall bias
  - Systematic error due to differences in accuracy or completeness of recall of past events or experiences
  - Subjects might exaggerate exposure (if it is believed to cause illness) or minimize it (to appear more acceptable)
  - How to avoid
    - Use objective markers of exposure/outcome (genetic markers, medical records), stratified criteria rather than Y/N, standardized questionnaires, prospective data
- Outcome identification bias
  - Bias due to an imperfect definition of the outcome or to errors in data collection
  - Includes observer bias and respondent bias
  - Observer bias
    - When an observer's determination of outcome status is related to their knowledge of exposure status (usually cohort)
    - How to avoid: mask exposure status, multiple observers
  - Respondent bias
    - From self-reporting
    - How to avoid: use objective instruments (hospital charts, lab data), obtain info on related symptoms that may be part of the diagnostic constellation, use tools/methods that have been pilot tested
- Misclassification
  - Differential
    - Misclassification differs between the groups compared
    - May dilute or strengthen the true association (bias can be either away from or toward the null)
    - Due to better ascertainment of exposure in cases than controls (case-control) or unequal diagnostic procedures between exposed and unexposed (cohort)
    - In cohort studies, this can be caused by detection bias and diagnostic bias
    - Basically, the sensitivities and specificities of classification will not be consistent across cases and controls
  - Sensitivity/specificity of classification:
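A quick numeric sketch of the misclassification idea, using hypothetical counts (not from the lecture): applying the same exposure-classification sensitivity and specificity to cases and controls (i.e. non-differential misclassification of a binary exposure) pulls the observed OR toward the null:

```python
def observed_counts(exposed, unexposed, se, sp):
    """Expected observed exposed/unexposed counts after classifying
    true exposure with sensitivity `se` and specificity `sp`."""
    obs_exp = se * exposed + (1 - sp) * unexposed    # true + false positives
    obs_unexp = (1 - se) * exposed + sp * unexposed  # false + true negatives
    return obs_exp, obs_unexp

def odds_ratio(a, b, c, d):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical true counts: cases 50 exposed / 50 unexposed,
# controls 20 exposed / 80 unexposed -> true OR = 4.0
true_or = odds_ratio(50, 50, 20, 80)

# Same se/sp in cases and controls = non-differential misclassification
a_obs, b_obs = observed_counts(50, 50, se=0.8, sp=0.9)  # cases
c_obs, d_obs = observed_counts(20, 80, se=0.8, sp=0.9)  # controls
obs_or = odds_ratio(a_obs, b_obs, c_obs, d_obs)

print(true_or, round(obs_or, 2))  # observed OR is pulled toward 1.0
```

If the se/sp values differed between cases and controls (differential misclassification), the observed OR could move either toward or away from the null, matching the notes.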
  - Validation table for classification (by true disease status):

                 Case/diseased   Control/nondiseased
    Exposed            a                 b
    Unexposed          c                 d

    Sensitivity = a/(a+c); Specificity = d/(b+d)

  - Non-differential
    - Misclassification does not depend on group; equal probability of error among disease groups or exposure categories
    - With a binary exposure variable (2 exposure categories), results in bias toward the null
    - If >2 exposure categories (e.g. low, high, unexposed), bias can be away from or toward the null
    - A greater concern in studies indicating no effect

10/25: Misclassification - Review and Selection Bias

- Source of bias and direction
  - What bias would you suspect in a survey of the prevalence of disability in the elderly population of a city, based on an investigation of members of senior citizens' clubs?
    - Selection bias: elderly in clubs are probably less likely to be disabled
  - What bias would you suspect in a survey of the prevalence of various ECG abnormalities after acute MI, conducted by examining all the patients treated for this condition in hospitals in the city?
    - Selection bias, survivorship bias; also selecting from hospitals
  - What bias would you expect in a survey of the prevalence of peptic ulcer, based on questions about the occurrence of typical ulcer pain?
    - Information bias: may overestimate the prevalence of peptic ulcer
  - What bias would you expect in a survey of the prevalence of drug abuse?
    - Information bias: social desirability bias
- Reliability
  - Repeatability of measurement under identical conditions
- Precision
  - The quality of being sharply defined
  - Lack of random error
  - More important for an estimate to be valid than precise
- Making inferences
  - Making a generalization about a larger group of individuals on the basis of a subset or sample
  - Possibility that the inference will be inaccurate or imprecise
    - Decreases as the size of the sample increases
  - Smaller sample → more variability, less likely findings reflect the total population
  - Larger sample → less variability, more reliable the inference
- Sample size
  - The size of the study influences the p-value and the likelihood that an observed difference will be statistically significant
  - Sample size calculations: choose the # of subjects to assure a given probability of detecting a statistically significant effect of a given magnitude if one truly exists
  - Power calculations: determine how likely a statistically significant effect of a given magnitude can be identified among a limited pool of subjects
- Power
  - Probability of rejecting the null hypothesis and concluding there is a statistically significant difference between groups if one truly exists; equals 1 - β (beta)
  - If β is 0.20, then power is 1 - .20 = .80
  - Type I error: rejecting the null when the null is true. Standard is 0.05
  - Power: 80% is standard
  - As desired power increases, required sample size increases
- Sample size case-control & cohort formulas: see formula sheet
- Power calculation case-control & cohort formulas: see formula sheet

N/D: Confounding

- Confounding
  - Situation in which a noncausal association between an exposure and outcome is observed as a result of the influence of a 3rd variable or group of variables
  - Results in a biased measure of effect, upward or downward
- Rules of confounding: a confounding variable is
  - Causally associated with the outcome
    - The confounder is an independent risk factor for the outcome, even among those who lack the exposure under study
  - Non-causally or causally associated with the exposure
    - The confounder is not, though, a consequence of the exposure
  - Not an intermediate variable in the causal pathway between exposure and outcome (not on the causal pathway)
  - Ex. age is a confounder for death rates in certain states: age is causally associated with death, associated (not causally) with state of residence, and is not on the causal pathway of state of residence → death
  - Ex. obesity → smoking → death/disease
    - Smoking is NOT on the causal pathway, so it is a valid confounder
  - Ex. obesity → total cholesterol → death/disease
    - Cholesterol is on the pathway, so probably not a confounder
- How do we assess the presence of confounding?
  - Is the potential confounder related to both exposure and outcome?
  - Does the exposure-outcome association seen in the crude analysis have the same direction and magnitude as the association seen within strata of the confounding variable?
  - Does the exposure-outcome association seen in the crude analysis have the same direction and magnitude as that seen after controlling (adjusting) for the confounder?
- Is it appropriate to conduct a statistical test for confounding?
  - No; confounding is a validity issue, not a matter of chance
- Prevention of confounding, generally: multivariable analysis, adjustment, stratification, randomization, restriction, matching
  - In the design phase: randomization, restriction, matching
  - In the analysis phase: stratification, multivariate analysis
- Randomization
  - Random allocation of study subjects into exposure groups
  - Helps to ensure potential confounders are evenly distributed b/w groups
- Restriction
  - Limit eligibility to those that fall within a specified category or categories of the confounder
  - May limit the subject pool and generalizability
- Matching
  - Subjects are selected in such a way that the potential confounders are distributed in an identical manner among each of the study groups
  - Usually case-control; expensive
- Stratification/stratified analysis
  - Compare exposed and unexposed groups within homogeneous categories of the confounding variable
  - Compute an adjusted measure of association
- Multivariate analysis
  - Linear regression, logistic regression, proportional hazards regression
- Residual confounding
  - Occurs when the categories of confounders controlled for are too broad or when confounding variables remain unaccounted for
- Collinearity
  - High correlation between the confounding variable and the exposure of interest (adjustment is very difficult/impossible)
  - Ex. air pollution and area of residence
- Types of confounding
  - Positive: overestimation of the true strength of association (the unadjusted, crude RR is "away" from 1 relative to the true, adjusted RR)
  - Negative: underestimation of the true strength of association (the unadjusted, crude RR is "toward" 1 relative to the true, adjusted RR)
  - Qualitative: inversion of the direction of the association (one reports harm, the other reports protection)

N/D: Interaction/Effect Modification

- Interaction: a situation in which two or more risk factors modify the effect of each other with regard to the occurrence or level of a given outcome
  - The magnitude of the effect of the exposure on the outcome (RR, OR, AR, etc.) differs based on the presence (or absence) of a specific covariate
  - Synergism, antagonism
- Positive interaction
  - The presence of the effect modifier accentuates the effect of the exposure of interest
- Negative interaction
  - The presence of the effect modifier diminishes or eliminates the effect of the exposure of interest
- Ex. can stratify data to determine the potential effects of a potential interacting covariate. For coffee drinking (E) and heart attack (D), stratify by smoking
  - Two strata: 1) coffee drinking (E) and heart attack (D) among smokers, and 2) coffee drinking (E) and heart attack (D) among nonsmokers
- Interaction (effect modification)
  - Situation where the effect of the exposure (E) on the outcome (D) differs based on the presence of another variable (Z), the effect modifier
- Definitions of interaction
  - Homo- or heterogeneity of effects
    - The effect of a putative risk factor A on the risk of an outcome Y is not homogeneous in strata formed by a third variable Z (the effect modifier)
  - Difference b/w observed and expected joint effects
    - When the observed joint effect of A and Z differs from that expected on the basis of the independent effects of A and Z
  - Effect can be measured by either
    - Difference measures: attributable risk (additive model)
    - Relative measures: e.g. relative risk (multiplicative model)
- Homogeneity of effects - additive model
  - Additive interaction is present when the AR (absolute difference in risk) varies across strata of the effect modifier (Z)
  - When AR increases at the same rate across Z, no additive interaction. Ex. AR goes from 0 to 10 for both men and women (Z, effect modifier)
  - When AR increases at different rates across Z, additive interaction is present. Ex. AR goes from 0 to 5 for men, and 0 to 20 for women
  - Can also be seen on a graph, if absolute differences differ depending on Z
- Homogeneity of effects - multiplicative model
  - Multiplicative interaction is present when the CIR, IDR, or OR varies across strata of the effect modifier (Z)
  - When the IDR (for example) increases at the same rate across Z, no multiplicative interaction. Ex. IDR goes from 1.0 to 2.0 for "no" and 1.0 to 2.0 for "yes" (our effect modifier, Z)
  - When the IDR increases at a different rate across Z, multiplicative interaction is present. Ex. IDR goes from 1.0 to 2.0 for "no" and 1.0 to 5.0 for "yes"
  - Can also be seen on a graph
- Homogeneity of effects - statistical testing
  - Is the observed heterogeneity produced by chance?
  - Test statistic formula: see formula sheet
  - Assumes a multiplicative model
- Comparison of observed and expected joint effects - conceptual framework
  - Interaction is present when the observed joint effect of A and Z differs from the expected joint effect
  - The expected joint effect can be estimated by assuming that the effects of A and Z are independent
    - So, need to estimate both of their independent effects
  - No interaction present
    - Joint effect of risk factor (A) + modifier (Z) equals the combination of their independent effects
  - Positive interaction present
    - Observed joint effect of risk factor (A) and modifier (Z) is greater than expected based on the sum of their independent effects
  - Negative interaction present
    - Observed joint effect of risk factor (A) and modifier (Z) is less than expected based on the sum of their independent effects
- Comparison of observed and expected joint effects - additive interaction
  - Joint effect of exposure (A) and modifier (Z) is estimated as the arithmetic sum of the independent effects measured by AR(exp)
  - Example 1:

    Strata   Observed incidence density   Observed AR(exp)
    A-Z-               10.0                      0
    A-Z+               20.0                     10.0
    A+Z-               30.0                     20.0
    A+Z+               40.0                     30.0

    - Joint expected AR = ObsAR(A+Z-) + ObsAR(A-Z+) = 20.0 + 10.0 = 30.0
    - Joint observed AR = 30.0
    - NO additive interaction
  - Example 2:

    Strata   Observed incidence density   Observed AR(exp)
    A-Z-               10.0                      0
    A-Z+               20.0                     10.0
    A+Z-               30.0                     20.0
    A+Z+               60.0                     50.0

    - Joint expected AR = ObsAR(A+Z-) + ObsAR(A-Z+) = 20.0 + 10.0 = 30.0
    - Joint observed AR = 50.0
    - Additive interaction present
- Comparison of observed and expected joint effects - multiplicative interaction
  - Basically the same as additive, but with ratios: check whether the A-Z+ and A+Z- relative effects multiply to equal the A+Z+ effect; if not, multiplicative interaction is present
- Matched data
  - Additive interaction: CANNOT USE the additive approach for case-control data when one variable is MATCHED
  - Multiplicative interaction: CANNOT USE joint effects for case-control data; only homogeneity of effects
- Multivariate modeling
  - Fit regression models that contain cross-product terms and then analyze the regression coefficients
  - Logistic regression models detect multiplicative interaction, not additive
  - Linear models can be used to assess both additive and multiplicative interactions
- Interaction vs. effect modification
  - Both address the impact of a 3rd variable on the relationship being studied
  - Effect modification: the effect of an exposure on an outcome is assessed in different strata of the third variable
  - Interaction: the joint effect of the exposures is assessed
  - If interaction is found, it is inappropriate to adjust for the effect modifier

11/06: Stratification

- Stratification and multivariate analysis are used to:
  - Assess the presence of confounding and control it, assess the presence of effect modification, and summarize the association of >1 predictor variable with the risk of developing disease
- Parsimony: all things being equal, choose the simplest possible explanation or simplest possible solution
- Assessing interaction
  - Eyeball
  - Chi-square test for homogeneity (tests the null that the degree of variability among strata is consistent with random variation)
- Stratification
  - Simplest method to analyze the presence of confounding
  - Assess the presence of interaction
- Control of confounding
  - Adjustment methods
    - Direct: apply distributions of a standard population to get rid of confounding
    - Indirect: use specific rates from a standard population and apply them to the study population
  - Stratified analysis
    - Useful to calculate a single overall estimate once the effect of confounding has been taken into account; combining stratum-specific estimates into a single unconfounded estimate: the pooled (adjusted) estimate
    - Disadvantage: only feasible when controlling for a few variables
  - Multivariate analysis
- Conditions for stratification
  - Sufficient numbers in all strata
  - Appropriate categorization
  - Meaningful categories
  - No residual confounding
- Basic procedure for stratified analysis
  1. Categorize each potential confounding variable to be controlled
     - All data in the format of a 2x2 table; variables categorical whether by nature (gender) or by categorization (e.g. a 15% cutoff)
  2. Assign study subjects to appropriate strata
  3. Calculate stratum-specific effect estimates for the E-D relationship within levels of the confounder
     - Conduct simple analysis within each stratum using point estimates
     - When there is confounding, associations across strata are usually similar to each other but different from the crude
  4. Assess homogeneity of estimates across strata
     - Assess for interaction through visual comparison of stratum-specific estimates or statistical tests (chi-square)
     - If interaction exists, stop here
     - If stratum-specific estimates are homogeneous, proceed
  5. If appropriate, calculate adjusted point and interval estimates
     - Calculate weighted averages of stratum-specific estimates
     - CIs on the weighted averages
  6. If appropriate, conduct an overall test for association
     - Accumulates info in each stratum while controlling for confounding
     - Test statistic to use is the Mantel-Haenszel statistic extended for stratified data: χ²(MHS)
- Methods to calculate adjusted (pooled) estimates (5)
  - Adjustment: direct and indirect
    - Both typically used for age adjustment when comparing rates b/w geographic areas
    - Direct: limited use due to more sophisticated techniques; achieved by applying the distribution of a standard population to the study population; issue = choice of standard population; requires stratum-specific numbers to be >5
    - Indirect (SMR): no stratum-specific rates needed for the study population; apply rates from a standard population to the stratum-specific distribution to determine observed vs. expected; used often in environmental and occupational studies where small populations are studied
    - SMR (standardized mortality or morbidity ratio) = observed/expected, as a %
  - Woolf's method (inverse variance weights)
    - Not an optimal method?
  - Maximum likelihood
    - Computationally difficult
  - Mantel-Haenszel weighted averages
    - Easy to calculate, almost as accurate as maximum likelihood

Quality Assurance and Control

- Quality assurance
  - BEFORE data collection
  - Protocols/manuals of operation, standardization of procedures, training/certification of staff
  1. Specify the study hypothesis
  2. Specify the general design to test the study hypothesis (study protocol)
  3. Choose and prepare specific instruments (develop operation manuals)
  4. Train staff
  5. Using trained staff, pretest and pilot-study data collection
  6. Modify 2 and 3 and retrain staff on the basis of 5
- Quality control
  - Efforts DURING the study to monitor the quality of the data
  - Data acquisition, processing
  - Observation monitoring (double stethoscope, taping interviews and reviewing them)
  - Quantitative monitoring
    - Should be done blinded/masked
    - Random repeat measurement
    - Quality control pools
      - External
      - Internal
    - Monitoring technicians for deviations from expected values
- Gold standard
  - Method, procedure, or measurement widely accepted as being the best available
- Some quantitative measures of validity
  - Sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratios
- Sensitivity and specificity
  - Quality control measures that apply to the evaluation of exposure and outcome when definitions are categorical
  - Sensitivity
    - Ability of a test to correctly identify those who have the disease or characteristic of interest (true positives)
    - a/(a+c)
  - Specificity
    - Ability of a test to correctly identify those who do not have the disease or characteristic of interest (true negatives)
    - d/(b+d)
- Net sensitivity and net specificity
  - Sequential testing: net sensitivity decreases, net specificity increases
  - Simultaneous testing: net sensitivity increases, net specificity decreases
- Positive predictive value
  - If a person tests positive, the probability they actually have the disease
  - a/(a+b)
- Negative predictive value
  - If a person tests negative, the probability they don't have the disease
  - d/(c+d)
- Likelihood ratios
  - Derived from sensitivities and specificities
  - Positive likelihood ratio: prob. of a positive test result in the presence of disease / prob. of a positive test in the absence of disease
  - Negative likelihood ratio: prob. of a negative test result in the absence of disease / prob. of a negative test result in the presence of disease
- J statistic
  - J = sensitivity + specificity - 1.0
  - When J = 0, the test performs no better than chance alone
- Reliability
  - Percent agreement = (a+d)/(a+b+c+d)
  - Percent positive agreement = a/[((a+c) + (a+b))/2] x 100
  - Kappa
    - Proportion of observed agreement not due to chance, relative to the maximum possible non-chance agreement
    - κ = (Po - Pe)/(1 - Pe)
    - Pe = [(a+c)(a+b) + (b+d)(c+d)]/T²
    - Po = observed percent agreement expressed as a decimal
    - κ > .75 = excellent; κ 0.4-0.75 = good; κ < 0.4 = poor
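The kappa formulas above can be checked numerically. This is a minimal sketch using hypothetical rating counts (not from the lecture), with the 2x2 agreement table laid out as a = both positive, b and c = disagreements, d = both negative:

```python
def kappa(a, b, c, d):
    """Kappa from a 2x2 agreement table: (Po - Pe) / (1 - Pe)."""
    T = a + b + c + d
    po = (a + d) / T                                     # observed agreement
    pe = ((a + c) * (a + b) + (b + d) * (c + d)) / T**2  # chance-expected agreement
    return (po - pe) / (1 - pe)

# Hypothetical counts: 40 both-positive, 10 and 5 disagreements, 45 both-negative
# Raw percent agreement is (40+45)/100 = 0.85, but some of that is chance
k = kappa(40, 10, 5, 45)
print(round(k, 2))
```

Here Pe works out to 0.5, so κ = (0.85 − 0.5)/(1 − 0.5) = 0.7, which falls in the "good" band (0.4-0.75) of the scale above even though raw agreement is 85%.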
