
Bias and confounding Binder2.pdf



Full Transcript


Bias and Confounding
Presentation by: Ann Vuong, DrPH, MPH

"Evidence" may cause "harm"

Hearing screens and SIDS
- Case-control study of 31 infants who died of SIDS compared with 31 newborns who survived the first year of life
- Compared TEOAE hearing screens between cases and controls
- SIDS infants had significantly decreased signal-to-noise ratios on the right side compared with control infants

The aftermath
- Reports claimed that failed newborn screening was related to SIDS and that abnormalities in newborn hearing screening tests were predictive of SIDS ("Simple Hearing Test May Predict Sudden Infant Death Syndrome")

Criticisms voiced by scientific peers
- The analysis focused on signal-to-noise ratios; no infants actually failed the newborn screen, so the findings were not necessarily abnormal
- The authors ran 8 statistical tests and found 3 significant results
- They were unable to control for factors associated with SIDS (e.g., smoking, SES)

MMR vaccine and autism
- Case series of 12 children with pervasive developmental disorder
- Enrolled consecutively, as all were referred to gastroenterology with intestinal abnormalities
- Investigators reported developmental regression associated with gastrointestinal disease and linked the onset of behavioral problems with the MMR vaccine

Criticisms voiced by scientific peers
- Too few cases to make a generalization
- Symptoms appeared before MMR in some children
- The age at MMR vaccination often coincided with the first parental concerns
- Large studies have found no such relationship: the percentage of children with autism who received MMR is the same as in unaffected children, and there is no difference in age at autism diagnosis between vaccinated and unvaccinated children (Campbell-Scherer, 2019; Gentile et al., 2013; Hviid et al., 2019; Jain et al., 2015; Uno et al., 2012; Uno et al., 2015)

The aftermath
- Immunization rates for MMR dropped; 90-95% coverage is needed for herd immunity
- United States, 2001-2004: 14 measles outbreaks, 251 total cases, the majority preventable
- Ireland: the vaccination rate dropped to 72%; in 2000 there were 1,603 infected children and 3 deaths, versus 147 cases the previous year; in 2002, 200 cases were reported over 8 weeks
- England and Wales, 2009: 1,348 confirmed measles cases, versus 56 confirmed cases in 1998

To show a valid association we need to assess bias, confounding, and chance.

Bias
- Any systematic error in the design, conduct, or analysis of a study that results in a mistaken estimate of an exposure's effect on the risk of disease
- Can occur in the design or conduct of the study
- Can either inflate or deflate estimates of association
- Difficult to evaluate or eliminate analytically, so it must be considered during the design
- In epidemiology, bias does not imply prejudice or deliberate deviation from the truth

Types of bias
- Selection bias: survival bias, nonresponse bias, Berkson's bias, healthy worker effect, losses to follow-up
- Information bias: misclassification (differential or non-differential), interviewer bias, surveillance bias, recall bias, reporting bias

Selection bias
- Distortion of the estimate of effect because the study sample is not representative of the underlying population in terms of the distribution of exposure and/or outcomes
- Systematic differences in participant characteristics at the start of the study
- Most likely to occur when there is some pre-existing hypothesis about the relationship (are relatives of women with breast cancer really more likely to carry BRCA1, or are they just more likely to be screened for it?)
- More prone to occur in case-control and retrospective cohort studies; minimal for prospective cohorts

Selection bias occurs when...
- Different surveillance, diagnosis, or referral criteria are used for cases and controls
- Improper procedures are used for selecting the sample population
- Participation rates differ between cases and controls, if associated with exposure

Types of selection bias

Survival bias
- Occurs if cases have to be alive to be included and exposure affects mortality
- Example: survival bias may attenuate the estimated effects of HIV on small-for-gestational-age (SGA) births, because SGA cannot be observed for a pregnancy ending in stillbirth

Nonresponse bias
- If the response rate is higher in people with disease who were exposed than in people with disease who were not exposed, an "unreal" association could be observed
- People who do not respond often differ from those who do: demographically, socioeconomically, culturally, in lifestyle, and medically

Berkson's bias (aka hospital patient bias)
- Hospital controls' conditions are also related to the exposure under study
- Admission preference for the disease of interest
- Biases toward the null hypothesis

Healthy worker effect
- Occurs in occupational exposure studies: working individuals are healthier than individuals who are not working
- Controls or unexposed individuals in occupational studies should also be workers
- Biases toward the null hypothesis

Losses to follow-up
- Prospective cohort studies
- Occurs when individuals leave the study before the end of follow-up and exposure status differs between those who leave and those who stay

Scenario 1
A study is conducted to see if serum cholesterol screening reduces the rate of heart attacks. 1,500 members of an HMO are offered the opportunity to participate in the screening program and 600 volunteer to be screened. Their rates of MI are compared to those of randomly selected members who were not invited to be screened. After 3 years of follow-up, rates of MI are found to be significantly lower in the screened group. Is selection bias present? Yes.

Scenario 2
Researchers are planning to conduct a case-control study of the association between an occupational exposure and a health outcome. The researchers plan to study exposed workers from one factory and compare them with unexposed retirees who have never worked in a factory. What type of selection bias may be present? The healthy worker effect.

Scenario 3
Researchers conducted a prospective cohort study of the association between air pollution exposure and asthma. Some study participants were lost to follow-up over time. The researchers were able to obtain data on the exposure and the health outcome for participants who remained in the study as well as for participants who dropped out. The researchers discovered that the rate of loss to follow-up did not differ when comparing exposed and unexposed groups, nor when comparing people who developed asthma and people who did not. Was there selection bias? No.

Scenario 4
In a study to determine the incidence of a chronic disease, 150 people were examined at the end of a 3-year period. Twelve cases were found, giving an incidence rate of 8%. Fifty other members of the initial cohort could not be examined; 20 of these 50 could not be examined because they died. Does this loss of subjects to follow-up represent a source of selection bias? If so, what type?
Survival bias.

Minimizing selection bias
- Minimize non-response
- Collect data on non-responders; even basic demographic data will help
- Minimize study personnel knowledge of exposure/case status
- Standardize criteria for approaching and enrolling cases and controls
- Do population-based studies (no selection)

Information bias
- Occurs when the means of obtaining information about the subjects in the study are inadequate, so that some of the information gathered regarding exposures and/or disease outcome is incorrect
- Can occur in any type of study design (case-control, prospective, or retrospective)

Misclassification bias
- Misclassifying cases as controls, exposed as non-exposed, and vice versa
- Differential misclassification: the rate of misclassification differs between study groups; can inflate the RR or OR and bias results away from the null (example: cases may be misclassified as exposed more often than controls)
- Non-differential misclassification: the rate of misclassification does not differ between groups and is not related to exposure or disease status; usually dilutes the RR or OR and biases results toward the null

Differential misclassification: examples
- Differences in accurately remembering exposures are unequal between groups (mothers of children with birth defects will remember the drugs they took during pregnancy better than mothers of unaffected children)
- Interviewer or recorder bias: the interviewer has a subconscious belief about the hypothesis
- More accurate information in one group versus the other (a case-control study with cases from one facility and controls from another, with differences in record keeping)

Non-differential misclassification: examples
- Difficulty remembering an exposure that is present in both groups (a case-control study of obesity and past physical activity; difficulty remembering frequency, duration, and intensity of exercise in the past year)
- Recording and coding errors in records and databases (ICD-9 codes in hospital discharge summaries)
- Using surrogate measures of exposure (using prescriptions for antidepressants as an indication of treatment)
- Non-specific or broad definitions of exposure or outcome ("Do you smoke?" to define exposure to tobacco smoke)

Interviewer bias
- An interviewer's knowledge may influence the structure of questions and the manner of presentation, which may influence results
- Example: more in-depth questioning about disease of people known to be exposed

Surveillance bias
- Disease ascertainment may be better in the monitored population than in the general population
- The exposed group may also be followed or monitored more closely, or for longer, than the comparison group

Recall bias
- Those with a particular outcome or exposure may remember events more clearly or amplify their recollections, driven by the need to "figure out why"

Reporting bias
- A subject may be reluctant to report an exposure due to attitudes, beliefs, or perceptions
- Examples: denying exposures related to lifestyle (smoking, drinking, sex, drugs)

Scenario 5
It has been suggested that physicians may examine women who use oral contraceptives more often or more thoroughly than women who do not. If so, and if an association is observed between phlebitis and oral contraceptive use, the association may be due to what type of information bias?
Surveillance bias.

Bias is a result of an error in the design or conduct of the study.

Minimize information bias
- Blind study personnel (and participants, if possible) to study group; hospitalized patients and controls are both sick and may not guess the hypothesis or their study group
- The interviewer should not know the purpose of the study
- Collect information on exposure before the outcome occurred, if possible
- Divide tasks: the person doing the measurement should differ from the person determining study group

Evaluate information bias
- You cannot control for bias, but evaluating it is possible and recommended
- Compare evaluations done by different investigators to determine the extent of agreement or disagreement

Scenario 6
Researchers conduct a case-control study of the association between the diet of young children and diagnosis of childhood cancer by age 5 years. The researchers are worried about the potential for recall bias, since parents are being asked to recall what their children generally ate over a period of 5 years. Which control group would reduce the likelihood of recall bias? Parents of children with other known, diagnosed, serious health problems aside from childhood cancer.

What is the difference between bias and confounding?
- Bias creates an association that is NOT true.
- Confounding describes an association that is true, but potentially misleading.

An epidemiological study may observe an association and try to derive a causal inference when, in fact, the relationship is not causal: the relationship between the exposure and the outcome is really due to a third variable (often unmeasured or unconsidered).

Confounding
- Can either create an association that does not exist or mask a true association
- Refers to two variables whose effects cannot be differentiated
- Exists when the association between two variables is altered after accounting for the effects of a third variable
- A confounder is a variable that distorts an association, wholly or partially, due to its association with both the outcome and the exposure

Confounder criteria
1. Must be associated with the disease (may be a risk factor)
2. Must be associated with the exposure, but not be a result of the exposure
3. Cannot be an intermediate step in the causal pathway between exposure and disease

What is confounding? Consider Factor A, Disease B, and Factor X. Factor X is a confounder if Factor X is a known cause of Disease B and Factor X is associated with Factor A but is not a result of Factor A.

Is smoking a confounder of the association between coffee consumption and lung cancer? Yes.

Is religion a confounder of the association between oral contraceptive use and cardiovascular disease? No.

Are HDL levels a confounder of the association between alcohol consumption and heart disease? HDL levels are a step in the causal chain between alcohol consumption and heart disease: moderate alcohol consumption increases serum HDL, which decreases the risk of heart disease. As such, HDL levels cannot be a confounder in this relationship.
Nuances of confounders
- Confounders may be proxies for other measures
- Age, sex, and race are associated with many diseases and also with many exposures
- They may not be causally related, but they are proxies for many likely confounders, even if you cannot pinpoint the particular relationship

Reducing confounding in the design phase: randomization, restriction, matching

Randomization
- Experimental studies: randomly assign study participants to exposure groups
- If exposure groups are created via randomization, then the distribution of any variable is theoretically the same in the exposed and non-exposed groups
- Issues: does not always work, and randomization may not be possible

Restriction
- Restrict enrollment to only those subjects who have a specific value or range of the confounding variable
- Issues: limits the number of eligible subjects, may still leave residual confounding, limits generalizability, and you cannot evaluate the factor you restricted on

Matching
- Non-exposed or non-case (control) subjects are chosen to match the comparison group on the confounders in question
- Issues: you cannot evaluate the factors you matched on; recruiting is harder if matching on many factors; matching on a non-confounder reduces power; matching does not explicitly control confounding by things not matched on

Reducing confounding in the analysis phase: stratified analysis, regression

Stratified analysis
- Evaluates the relationship between exposure and outcome in homogeneous categories (strata) of the potentially confounding variable
- Within a stratum there is no variability in the confounder, so its effect is removed

Controlling for confounding: stratification example
Scenario 7: case-control study of oral contraceptive use and heart attack, where age is a confounder.

Crude (unadjusted) table:

                             Heart attack   No heart attack
  Oral contraceptive use          46              200
  No oral contraceptive use       60              640

  Crude OR = (46 × 640) / (200 × 60) ≈ 2.5

Stratum: age < 40 (n = 600):

                             Heart attack   No heart attack
  Oral contraceptive use          10               90
  No oral contraceptive use       35              465

  Stratum-specific OR = (10 × 465) / (90 × 35) ≈ 1.48

Stratum: age 40+ (n = 346):

                             Heart attack   No heart attack
  Oral contraceptive use          36              110
  No oral contraceptive use       25              175

  Stratum-specific OR = (36 × 175) / (110 × 25) ≈ 2.29

So the crude OR is 2.5, while the stratum-specific ORs are 1.48 (age < 40) and 2.29 (age 40+).

Pooling the strata
- To create a single unconfounded (adjusted) estimate for the relationship in question, the stratum-specific estimates/effect measures are pooled into an overall estimate (strata = levels of the confounder)
- If there is no interaction, pooling gives an unbiased estimate
- One pooling approach is the Cochran-Mantel-Haenszel method:

  OR_CMH = Σ_i (A_i · D_i / n_i) / Σ_i (B_i · C_i / n_i)
  RR_CMH = Σ_i [A_i · (C_i + D_i) / n_i] / Σ_i [C_i · (A_i + B_i) / n_i]

Applying the OR formula to the two age strata gives the age-adjusted OR:

  OR_CMH = [ (10 × 465)/600 + (36 × 175)/346 ] / [ (90 × 35)/600 + (110 × 25)/346 ] ≈ 1.97

Crude OR = 2.5; stratum-specific ORs = 1.48 and 2.29; age-adjusted OR = 1.97. Both the crude and age-adjusted ORs indicate a positive association between oral contraceptive use and heart attacks, and the conclusions drawn from both point estimates are the same (a worked sketch of these calculations follows below).
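As a concrete illustration of the stratified analysis above, here is a minimal Python sketch, not part of the original slides, that reproduces the crude, stratum-specific, and Cochran-Mantel-Haenszel age-adjusted odds ratios from the example's 2×2 tables; the cell counts come from the slides, and the function names are our own.

```python
# Minimal sketch: crude, stratum-specific, and Mantel-Haenszel-adjusted ORs
# for the oral contraceptive / heart attack example (cell counts from the slides).
# Each 2x2 table is (a, b, c, d) = (exposed cases, exposed controls,
# unexposed cases, unexposed controls).

def odds_ratio(a, b, c, d):
    """Odds ratio for a single 2x2 table: OR = ad / bc."""
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Cochran-Mantel-Haenszel summary OR across strata:
    sum_i(a_i*d_i/n_i) / sum_i(b_i*c_i/n_i)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

crude = (46, 200, 60, 640)
under_40 = (10, 90, 35, 465)
forty_plus = (36, 110, 25, 175)

print(f"Crude OR:        {odds_ratio(*crude):.2f}")        # ~2.45
print(f"OR, age < 40:    {odds_ratio(*under_40):.2f}")     # ~1.48
print(f"OR, age 40+:     {odds_ratio(*forty_plus):.2f}")   # ~2.29
print(f"Age-adjusted OR: {mantel_haenszel_or([under_40, forty_plus]):.2f}")  # ~1.97
```

Running this reproduces the slide's values (2.5, 1.48, 2.29, and 1.97, up to rounding).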
However, the strength of the association is slightly different: the crude OR (2.5) suggests oral contraceptive use plays a bigger role in the pathogenesis of heart attacks than the age-adjusted OR of 1.97 does.

Reducing confounding: multiple regression analysis
- Run the analysis including and excluding the confounder
- If the relationship between exposure and disease changes when the confounder is included, it is likely a confounder (10% rule of thumb)
- A confounder does not have to be statistically significant to be a confounder, and not all significant factors in a model are confounders

Multiple regression analysis example
Scenario 8: case-control study of obesity and diabetes risk. The crude OR is 5.0. Is race a confounder? Yes; action: add race to the regression model, which yields an OR of 2.0. Is sex a confounder? Yes; action: add sex to the regression model, which yields an OR of 2.5.

Final key points on confounders
- Confounders obscure the relationship between the factor of interest and the outcome
- Confounding is not an error in the study and must be understood; failure to account for confounding in the results/interpretation is the error
- We often want to address confounding, but sometimes the confounded relationship is useful
- Always identify and measure potential confounders; data should be sufficiently detailed to ensure later use

Interaction
Describes the situation in which the association between two variables is different at different levels of a third variable.

Interaction example
Scenario 9: investigation of smoking and alcohol consumption on oral cancer development. Is synergistic interaction present?

  Relative risks:     No alcohol   Alcohol
  Non-smoker             1.00        1.23
  Smoker                 1.53        5.71

Expected RR for smoking and alcohol consumption under the multiplicative model:
  Expected RR = RR10 × RR01 = 1.53 × 1.23 = 1.88
If synergistic interaction is present, then RR11 > RR10 × RR01. Observed 5.71 > expected 1.88, so yes: the results suggest synergistic interaction is present in the multiplicative model.

Chance
- Associations can arise simply due to random variation in the study
- More likely in small studies with limited power, with poor or inaccurate study measurements, or when testing many interrelated hypotheses
- Design strategies to reduce chance: increase sample size and increase measurement precision
- Small samples are easily affected by variability in sample selection; large samples can overcome a few outliers, tend to be more representative, and provide more precise measures of actual variability
- Greater measurement precision reduces random variability in measurements, making the true association clearer

Summary
Threats to your associations in epidemiologic studies that NEED to be addressed: bias, confounding, and chance. Their effects can either create or destroy associations. They can be mitigated in the design and/or analysis phase of your study, but not all damage can be fixed in the analysis phase.

Bias and Confounding - Part 1: Intro

We have already talked a little bit about bias and confounding. We touched on this topic in the study design lectures: in the case-control, cohort, and descriptive epidemiology lectures. In today's lecture we will go more in depth, so by the end you will have a deeper understanding of what bias and confounding are, how they can influence the results of a particular paper, and how they can influence what is reported and how it is interpreted. "Evidence" may cause harm.
This statement is especially important when we know the power that research has. In 2007, a paper was published by Rubens and colleagues that examined hearing screening tests and Sudden Infant Death Syndrome (SIDS). They conducted a case-control study by identifying 31 cases (infants who died of SIDS) and 31 control infants who survived the first year of life, matched on sex, gestational age, and whether they were in the NICU or the well-baby nursery. After they gathered their cases and controls, they compared the results of the hearing screening tests and found that the cases had lower signal-to-noise ratios on the right side compared to the controls. The study reported a unilateral difference in cochlear function, and the findings resulted in several messages not only to the scientific community but to the public as well. It was basically translated as: hearing screens can help identify infants who will have SIDS. Reports were published in several places, including Science News and even WebMD. However, from the scientific community there were many criticisms of this study, and as you know, when you search PubMed for anything, there can be letters or follow-ups to a particular study. For this study by Rubens and colleagues, several letters were written to the editor; one in particular, put forth by Hamilton and colleagues, voiced the concern that the difference between the groups with regard to signal-to-noise ratio was actually really small. Although statistically significant, it was only about half a standard deviation, and none of the infants in the study had actually failed a hearing screening test. What this means is that, if you work through the calculations from signal detection theory, to detect 95% of infants who die from SIDS you would end up falsely flagging about 87% of normal infants, which is by no means acceptable. They also criticized the statistical analysis, since Rubens and colleagues performed 8 statistical tests and found three significant results, an approach that inflates the error rate. Another thing described in this letter was the lack of control for risk factors like smoking and SES. Despite all these concerns, the paper was published without describing these limitations in its final conclusions. Because of this, the letter's authors urged that the interpretation be taken with extreme caution. They also reported that, because of this study and the news coverage, their institutions received over 50 calls from parents and grandparents who were terrified about their babies' hearing screening results. The next paper we're going to talk about is one published by Wakefield and colleagues in 1998, and this is one I'm pretty sure everyone has heard about: the paper that discussed the MMR vaccine and autism. It was published in The Lancet, a very prominent journal, and it was a case series. Remember, a case series is just a step above a case report; it is by no means a study design we would deem high quality in terms of its ability to provide validity for a particular association. This case series looked at only 12 children who had pervasive developmental disorders. They were all enrolled consecutively after being referred to the pediatric gastroenterology unit with intestinal problems.
They found that the behavioral problems were linked back to the MMR vaccine, with eight of the 12 children developing behavioral problems after the vaccination. But there were a lot of problems with this study. First, the sample size is far too small to make any generalizations: they had a sample size of 12, which is extremely small. How can you make any conclusions on a sample size of 12? Not only that, this is only one study; even if the study had 100 or 200 people involved, you cannot base an entire conclusion about an exposure-outcome relationship on one study alone. When you conduct the literature review for your research project and assess whether a relationship between your exposure and outcome is present, you may be selecting only three studies, but at the end of the day, when assessing a relationship in the epi literature, you have to look at a large number of studies across diverse populations to see whether that relationship is truly there. So basing it on one study, let alone one significantly flawed study with a sample size of 12, is outrageous. Another very problematic aspect of this study was that many of the behavioral symptoms were present even before the children got the MMR. Temporality is a big thing: exposure needs to precede outcome. There was documentation that parents had voiced concerns about their children's behavior when the vaccine was given or even before it was given, and that was very problematic. In 2004, 10 of the 13 authors of the Wakefield paper signed a statement, a retraction of the interpretation, making it clear that no causal link between the MMR vaccine and autism was established in the paper, as the data were insufficient, and that in view of this they considered it the appropriate time to formally retract the interpretation placed upon those findings. Since then there have been several very large studies that have found no relationship between the MMR vaccine and autism. If you're interested, I have listed some at the bottom of the slide; these are just some of the studies. There have been several very large-scale studies in which no such relationship can be found. Now when you search for the article and download it, you actually see huge text across the paper indicating that it has been retracted, just so you know when reading it that the authors have retracted what they put forth, but we know that the damage was done. For immunizations to work, we need a certain percentage of the population vaccinated; for MMR, herd immunity requires about 90 to 95%. In many instances we have not been able to reach this, and this has resulted in outbreaks of measles in various parts of the world. The paper was published in 1998. In the United States, from 2001 to 2004, we saw 14 measles outbreaks. In Ireland, vaccination rates fell to 72%, and they subsequently saw over 1,600 measles cases and three deaths in 2000, a big increase from only about 150 cases the previous year. In England and Wales there were over 1,300 cases in 2009, a stark increase from 1998, when they had only 56. This illustrates how much damage so-called evidence may do, and this is why we need to critically evaluate findings and think about the study design: how could the study design have influenced the results and conclusions, and what did the investigators not consider in their analysis that they should have?
So we'll spend time talking about bias, confounding, and especially the role of chance. At this point in the course, I hope you realize that when you read a study, you cannot just read its final concluding statement and take it as fact. We need to critically examine how the investigators designed the study. What population did they include? What were the inclusion and exclusion criteria? How did they measure exposure? How did they assess and define the outcome? What did they consider in their model when conducting the statistical analysis? And how did all these decisions influence what they found? At this point, you should be very wary of every single study that you read, critically examine it, and not just read its final conclusion and take that as the last word. You should be skeptical and critical of every single study.

Bias and confounding - Part 2: Selection Bias

Bias is a systematic error in the design, conduct, or analysis of a study that results in a mistaken estimate of the exposure's effect on the risk of disease. A systematic error is a non-random deviation that results in getting the wrong answer most of the time. Bias can occur at various stages, from the design of the study to when the study is being conducted. Depending on what the bias is, it may cause your associations to be inflated, meaning there seems to be an association that does not accurately reflect what is happening in the real world, or it can result in an estimate that is deflated, i.e., closer to the null value of 1, suggesting there is no association. Once bias is present, you can try to eliminate it when you start analyzing the data, but it's actually very difficult to do, so it's important to think about all the potential biases that may arise when designing the study. While bias has a negative connotation in the real world, in epi studies it does not mean that the epidemiologist or the investigators are purposely deviating from the truth. We are not trying to find an association that isn't there; we are trying to get at what the real relationship is. So bias does not have that negative connotation here. Bias falls into two major categories: selection bias and information bias. On the slide, the various sources of selection bias we'll be discussing are on the left-hand side and the specific types of information bias we'll talk about are on the right. Again, this is not a comprehensive list; there are many types of bias, some of them more specific to particular fields of epidemiology, and we are not going to cover all of them. Selection bias happens when the people included in your study are not representative of the population in terms of exposure or disease status. You can also have selection bias between the groups in your study: say you have a case-control study and your cases are significantly more likely to be older and less educated than your controls; then you are said to have selection bias. You want your entire sample to be representative of the population from which it is drawn, and the groups should also be similar to each other. Selection bias can occur because there is a pre-existing hypothesis about the relationship. For example, are relatives of women with breast cancer really more likely to carry BRCA1, or are they just more likely to be screened for it?
Remember that case-control studies and retrospective cohort studies are more prone to selection bias. While prospective cohort studies can have selection bias, it is minimal compared to the other two; in cohort studies the common form of selection bias is selective attrition, which results in selection bias. Selection bias can occur when there are differences in surveillance, diagnosis, or referral criteria for cases and controls, or if researchers use improper procedures for selecting the sample population. For instance, if the criteria for cases differ from the criteria for controls, this may result in one group including people who are more likely to have the exposure. And lastly, selection bias can occur when there are differences in participation rates between cases and controls. For example, if you enroll people in your study and incentivize them with monetary compensation, and this results in more people with lower income enrolling, this may produce selection bias, particularly when low income may affect the exposure distribution. The first type of selection bias is survival bias. This happens in studies where inclusion depends on mortality: if your outcome isn't mortality but something else, and the individual dies before they can be included, this will affect your findings. For example, say you want to examine HIV and small-for-gestational-age (SGA) births. This may be affected by survival bias, because HIV is associated with stillbirth, and if the baby is stillborn you cannot assess SGA; you will therefore end up excluding several potential cases. Nonresponse bias happens when the people who volunteer for a study possess characteristics that differ from the average individual in the target population. Individuals who do not respond to requests to join a study generally have different baseline characteristics than responders. Bias will be introduced if the association between the exposure and the health outcome differs between study volunteers and non-responders. Berkson's bias is sometimes referred to as hospital patient bias. It may occur when hospital controls are used in a case-control study: if the controls are hospitalized for a reason related to the exposure, which is also related to the health outcome under study, then the measure of effect may be weakened, resulting in a finding of no association. The healthy worker effect is another type of selection bias that may bias findings toward the null, again meaning that you're not going to find an association. This is particularly problematic in occupational studies where you include non-working individuals. People who are working tend to be healthier, so those who are not working may be less healthy. In occupational studies it is therefore recommended that, for your controls or your unexposed group, you include people who are also workers. Losses to follow-up are problematic in prospective cohort studies. As you know, attrition is a problem, and you always expect some loss to follow-up in a cohort study no matter how long it runs; even if your cohort study follows people for only one year, you're going to expect some attrition, and you're never going to retain 100%. The concern is that if the people leaving your study differ from the people who stay, especially in terms of exposure status, this will bias the results. Whether the results are inflated or deflated will depend on who is leaving.
Let's look at our first scenario. A study is conducted to see if serum cholesterol screening reduces the rate of heart attacks. 1,500 members of an HMO are offered the opportunity to participate in the screening program, and 600 volunteer to be screened. Their rates of MI are compared to those of randomly selected members who were not invited to be screened. After three years of follow-up, rates of MI are found to be significantly lower in the screened group. Is selection bias present? Yes, selection bias is a concern because people who volunteered to be screened may have a vested interest in doing something about their serum cholesterol levels, and they may differ in underlying risk from people who were not interested in being screened. In the second scenario, researchers are planning to conduct a case-control study of the association between an occupational exposure and a health outcome. The researchers plan to study exposed workers from one factory and compare them with unexposed retirees who have never worked in a factory. What type of selection bias may be present here? The healthy worker effect. This is a glaring problem because retirees should not be compared to factory workers; they are not as likely to be healthy, and this will bias results toward the null. In the third scenario, we have researchers conducting a prospective cohort study of the association between air pollution exposure and asthma, with some study participants lost to follow-up over time. The researchers were able to obtain data on the exposure and health outcome for participants who remained in the study as well as for participants who dropped out. They discovered that the rate of loss to follow-up did not differ between exposed and unexposed groups, and also did not differ between people who developed asthma and people who did not. Was there selection bias? The researchers in this scenario were very lucky because they had data on the people who were lost to follow-up, so they were able to compare exposure and outcomes and found no statistically significant differences between those who stayed and those who were lost. This alleviates concerns about potential bias because attrition was not driven by either exposure or disease; there is no selective attrition in this scenario. For scenario four, we have a study to determine the incidence of a chronic disease. 150 people were examined at the end of a three-year period. Twelve cases were found, giving an incidence rate of 8%. Fifty other members of the initial cohort could not be examined, and 20 of those 50 could not be examined because they had died. Does this loss of subjects to follow-up represent a source of selection bias? If so, what type? Survival bias is a concern here because a substantial fraction of the subjects were lost to follow-up. You don't know whether the 20 people who died did so from the disease in question; if they did, the true incidence rate would probably be higher than 8%. You can minimize selection bias in the study design phase by trying to reduce non-response: try to increase participation, maybe by offering an incentive, or by continuing to follow up with participants so they remember that they are still in your study and feel connected to it. You might also allow participants who missed visit three to remain eligible to come back for visit four, meaning they are not required to attend every single follow-up visit to remain in the study.
Another thing you can do is try to find people who are lost to follow-up and re-enroll them. There are a lot of methods now to identify people; researchers are even using Facebook to try to reconnect with participants who were lost in previous rounds of follow-up. You can also collect information on non-responders, even if it's just basic demographic information; this will really help address concerns that there are differences between those who stayed in the study and those who did not. You can also have study personnel blinded to the status of the participant, and fourthly, you can standardize the criteria for enrolling cases and controls so that there is no difference in the inclusion criteria. Remember, you do not want inclusion or exclusion criteria that differ between one group and another. If your criterion for controls is that none of them should have the exposure, and your criterion for cases is that everyone should be exposed, then of course you're going to find an association. So you need inclusion criteria that are the same for your cases and controls. And another way to avoid selection bias is to conduct a population-based study, because these tend to be more representative.

Bias and confounding - Part 3: Information Bias

Information bias concerns the information collected on exposure or disease. It can happen in any type of study design and is not more prominent in one versus another. The first type of information bias we're going to talk about is misclassification bias. This happens when study participants are misclassified with respect to their exposure or disease status. There are two types of misclassification. The first type is differential misclassification, and this is the one that you don't want. It occurs when the rate of misclassification is different between the groups, meaning that one group is more likely to be misclassified than another. This can result in estimates that are inflated, meaning you end up seeing an association when it isn't really there. For example, if your cases are more likely than controls to be misclassified as exposed, you will end up with an odds ratio that is biased away from the null, meaning you're likely to find something statistically significant that may not be accurate in terms of the true relationship: you'll say that the disease is significantly associated with higher odds of being exposed when the true association is potentially null. The second type of misclassification is called non-differential, and this is the one you would prefer to have if you have misclassification at all. This is when misclassification in your study does not occur more in one group than another; it is roughly equal. If it's equal, the error is essentially the same in both groups. This minimizes any true difference between the groups, so estimates are biased toward the null. It is always better to find no association when there really is one than to report an association when there isn't one; in epidemiology it is always better to be conservative. So first, if there is a difference in the ability of one group to accurately remember their exposure status, you would say there is differential misclassification.
The example we discussed previously is recall bias among mothers of infants with birth defects: these mothers will spend more time recalling medications taken during pregnancy. Interviewer or recorder bias is when the interviewer has knowledge about the status of the participant and may subconsciously influence the participant's answers. The last example of differential misclassification is when one group reports more accurate information than the other. For example, in a case-control study where your cases come from one facility and your controls from another, and there are differences in record keeping between the two facilities, one facility will report more accurate information than the other; this is differential misclassification. Some examples of non-differential misclassification include difficulty remembering an exposure in both groups: if you ask all your participants about the frequency, duration, and intensity of exercise they engaged in over the past year, most people have a hard time remembering, and this will be similar in both groups. Sometimes there are errors in recording and coding in records and databases; this error is not likely to occur more in one group than another, so it will be equally distributed in both groups, making it another example of non-differential misclassification. Say you're using surrogate measures of exposure. This will likely result in some exposure misclassification, but it will be the same in both groups; for example, instead of asking people whether they were treated for depression, you ask whether they were prescribed antidepressants and use this as an indication of being treated for depression. The same goes for using a broad definition of exposure or outcome; the error should be the same in both groups, as when you simply ask people "Do you smoke?" to define exposure to tobacco smoke. Let's talk about some other types of information bias. The first is interviewer bias: the interviewer knows the individual's status as exposed or diseased (or not), and because of this it influences how the interviewer asks the participant the questions; the interviewer may probe more or provide comments that are suggestive, leading participants to answer a certain way. The second type of information bias shown here is surveillance bias. This is when disease ascertainment is better in the monitored population than in the general population. It may also occur when investigators monitor exposed individuals more closely or for longer than non-exposed individuals. Recall bias is another type of bias that we have talked about at length in this course; it is when subjects recall past exposures better, or amplify them, once they have the disease, driven by the need to figure out why they developed the disease or condition. Reporting bias occurs when subjects feel they cannot be truthful because the subject matter is sensitive, for example topics related to smoking, drinking, sex, or drugs. This may result in participants being more inclined to provide inaccurate information. An example could be that when you ask women about alcohol, smoking, or drug use, they may not provide accurate information, perhaps underestimating their exposure because of the stigma attached to women who engage in these behaviors. Let's look at a couple of different scenarios.
In this scenario, it has been suggested that physicians may examine women who use oral contraceptives more often or more thoroughly than women who do not. If so, and an association is observed between phlebitis and oral contraceptive use, the association may be due to what type of information bias? Surveillance bias is the concern in this particular scenario, because women taking oral contraceptives are more likely to be screened than women who are not. Remember that bias arises from how a study is designed or conducted. There are several strategies you can take to reduce information bias. You can take certain steps in the study design phase, such as keeping your staff blinded to study group, and dividing tasks so that the person collecting information on exposure is different from the person determining the study group. Once the study has been designed and conducted, information bias can no longer be controlled; in the study analysis phase you can only evaluate it. You can do this by comparing the people who remained in the study with the people who were lost to follow-up. If there aren't any statistically significant differences, information bias is unlikely to affect your study's results; if they are statistically different, then you have to see what they differ on to determine how that would affect the results. You can also compare the evaluations from different investigators to determine the extent of agreement or disagreement. Other studies can be very helpful too: for example, if you're worried about information bias in a dietary questionnaire, maybe other studies have evaluated the validity and reliability of the questionnaire you used and found that it performed well. In that instance you can report in the discussion section of your study that, while information bias may be a concern, the questionnaire you used was previously found in another study to be reliable and valid, so information bias may not be such a big concern. In scenario 6, researchers are conducting a case-control study of the association between the diet of young children and diagnosis of childhood cancer by age 5. The researchers are worried about the potential for recall bias, since parents are being asked to recall what their children generally ate over a period of five years. Which control group would reduce the likelihood of recall bias? Choosing as your control group the parents of children who have other serious, diagnosed health problems aside from childhood cancer reduces recall bias. These parents are also likely to be quite concerned about any exposures the researchers ask about; therefore, they can be expected to recall exposures in a way that is more comparable with the case parents. In contrast, parents of children with no health problems, or with only minor health problems, are less likely to be concerned with carefully recalling this information.
Bias and Confounding - Part 4: Confounding

Bias creates an association that is not true, but confounding describes an association that is true yet potentially misleading. Often we observe a true association in our epi study and try to derive a causal inference, but the relationship is not causal. In actuality, the relationship between our exposure and outcome is really due to a third variable that was not measured or not even considered. This scenario is called confounding. Confounding can either create an association that really isn't there, or mask or distort a true association. Confounding means the effects of two variables cannot be teased apart. For instance, if women were studied in the morning and men were studied in the afternoon, we could not separate sex effects from time-of-day effects. Confounding exists when the association between your exposure and your outcome changes after we take into account the effects of a third variable. This third variable is referred to as a confounder, defined as a variable that distorts an association, wholly or partially, due to its association with both the exposure and the outcome of interest. There are three requirements for being a confounder. First, it must be associated with the disease. Second, it has to be associated with your exposure of interest but not be a result of the exposure. And lastly, it cannot be an intermediate step in the causal pathway between the exposure you're studying and the outcome; we'll go more into what that means. Suppose investigators are studying the relationship between factor A and disease B, and there is factor X, a potential confounder. Factor X is known to cause disease B, and it is also associated with factor A (the exposure) but is not a result of factor A. This demonstrates that factor X is a confounder. Let's take a look at some more examples. You're studying the relationship between coffee consumption and lung cancer. Is smoking a confounder? Smoking is associated with coffee consumption and with lung cancer, so it meets the first two criteria for being a confounder. The last criterion is that smoking should not be on the causal pathway between coffee consumption and lung cancer; it is not, so smoking is a confounder in this scenario. Investigators would then consider smoking a confounder and would need to take this into account in their study; we'll talk about methods to address confounding in a bit. In the next scenario, investigators would like to examine the relationship between oral contraceptive use and cardiovascular disease. Is religion a confounder? Religion is associated only with oral contraceptive use and not with cardiovascular disease, so it already fails one of the criteria for being a confounder. Because of this, religion is not a confounder in the relationship between oral contraceptive use and cardiovascular disease. In the last example of identifying a potential confounder, investigators are studying moderate alcohol consumption and heart disease. Are HDL levels a potential confounder? HDL is associated with heart disease, but it lies within the causal chain between alcohol consumption and heart disease: moderate alcohol consumption increases serum HDL, which decreases the risk of heart disease. Since it is a step in the causal chain, it cannot be a confounder. There are some nuances to the definition of a confounder.
Sometimes confounders like these may just be proxies for other factors that we don't know yet, but we know that these factors are associated with many diseases. That's why many epi studies almost always consider them as confounders. There are several ways to reduce confounding in an epi study. Methods to reduce confounding can be applied at two time points: in the study design phase or in the study analysis phase. In the study design phase, investigators can perform randomization. Randomization is an option only in experimental studies, where investigators randomly allocate participants to various treatment groups or exposures. The theory is that, since people are randomized into different groups, whatever confounding is present in the relationship between the exposure and outcome is distributed evenly between the study arms, so it is the same across all groups of study participants. Even when investigators randomize, it doesn't always eliminate all concerns about confounding, so investigators usually still apply other strategies to reduce confounding rather than relying solely on randomization. Remember that in most epi studies investigators cannot randomize by exposure, because most studies in epidemiology are observational analytic studies, meaning case-control or cohort studies, where random allocation of exposure is not possible. The second method investigators can use in the study design phase to alleviate concerns about confounding is restriction of study participants. We have already talked about study criteria, i.e., inclusion and exclusion criteria. Some of the criteria for a study exist with the sole purpose of reducing confounding concerns; restrictions should be made based on the confounder that investigators are worried about in the relationship between the exposure and outcome. Whenever a restriction is in place, it limits the number of people who are eligible for the study; depending on how stringent the exclusion criteria are, the impact on the sample size will vary. Researchers need to decide whether a restriction is important enough to have in place, since confounding can be reduced in a number of ways; maybe the investigators don't want to restrict on the factor they are concerned about and would rather handle it another way. Remember, even with a restriction in place, there is always a possibility of residual confounding. You must also be aware that the more restrictions you have in your study, the less generalizable it is, and you cannot evaluate the factor you restricted on. The last method to reduce confounding in the study design phase is matching. Matching was previously discussed in the case-control lecture as a means to address confounding. Investigators can choose to match the unexposed group in a cohort study, or the control group in a case-control study, to the comparison group. The confounders selected for matching should be carefully chosen so that investigators do not limit the sample size for nothing. As you read studies, you'll notice that many studies that choose matching as a method to reduce confounding end up matching on factors like age and gender; however, other confounders can be matched on as well. It really depends on your research question.
There are a number of issues to be aware of if this method is used. First, if investigators match on a factor, they cannot evaluate it. If they match on sex, investigators can no longer examine whether there is a sex interaction in the associations, since the two groups will be the same with regard to sex. As such, if there is a chance that the potential confounder is one you may want to explore further regarding its role in the exposure-outcome relationship you're investigating, I would advise you not to match on it; you can always reduce confounding by that variable in the study analysis phase. Also, match only on the most important confounders, since matching really limits your sample size. Matching itself is difficult, expensive, and time consuming, and if you're not careful and match on something that is not a confounder, it reduces your power. The last thing is that matching on certain confounders won't take care of other confounders, though some matching factors can implicitly control for confounders not directly accounted for: for instance, if you match by age, sex, or race, these factors may help control for other unmeasured confounders, since the groups may then be similar on those other factors as well. Confounding can also be reduced in the second phase of the study, referred to as the study analysis phase. This phase is when the investigators are analyzing the data, as its name implies. Stratification is a technique where the data are stratified by the levels of the confounding factor and the estimates are compared between the different strata. Stratification is easy for variables with a limited number of categories. It allows you to evaluate confounding and interaction, and you can also estimate the association between exposure and outcome within levels of the factor. Let's take a look at stratification as a means to control for confounding, using an example from a case-control study of oral contraceptives and heart attacks. Here the confounder in question is age. Data from the study are displayed in a two-by-two table; the numbers from the four cells are plugged into our standard formula for an odds ratio, which yields an OR of 2.5. This 2.5 is considered the crude or unadjusted odds ratio because it does not take any other factors into account in its calculation; it is calculated from the exposure and outcome alone. This odds ratio represents the OR for the entire study population, but it is not the only odds ratio we can calculate from the data. We can also investigate the relationship between oral contraceptive use and heart attacks in a subset or specific group of people within our study. To do this we need to stratify. Stratification merely means that we break down the entire study population into different groups, or strata. After the study population is broken down into strata, we can calculate stratum-specific odds ratios. Let's explore stratification by age. To do this we first need to consider how to categorize our factor of interest, age; I decided to do this dichotomously, with age broken down into less than 40 years and 40 plus. If we stratify the participants from the previous table into these two age groups, we get two separate two-by-two tables.
The two by two table on the left represents all participants in the study who were less than 40 years old. The two by two table on the right includes all participants who were 40 or older. The cells within these two tables sum up to the four corresponding cells of the original two by two table on the previous slide, which represents the entire study population. This is because these stratification tables are simply a breakdown of the original study population: everyone is still categorized based on their exposure and disease status, so their location in the table with respect to the lettered cells still holds true; they are just separated out into two tables by age.

Using these two tables, we can calculate the odds ratio for participants who are less than 40, which comes out to be 1.48. Then we calculate the odds ratio for those 40 plus, which is 2.29. Both of these are referred to as stratum-specific odds ratios, or age-specific odds ratios. So now we have three estimates: our crude or unadjusted odds ratio from all participants, which was 2.5, and our stratum-specific odds ratios for less than 40 and 40 plus, which were about 1.5 and about 2.3, respectively. Looking at these odds ratios, we can surmise that the overall crude odds ratio of 2.5 is being driven by the individuals who are older.

The second part of stratification is to get an adjusted estimate by pooling the stratum-specific estimates together. Let's take a look back at our example to get an adjusted, pooled estimate. One method of getting a pooled estimate is the Mantel-Haenszel methodology for obtaining adjusted relative risks and odds ratios. Using the Cochran-Mantel-Haenszel method, we will be utilizing the equation for the adjusted odds ratio shown on the right of the slide. Since our example study is a case-control study, our measure of association is the odds ratio; however, if it were a cohort study calculating a relative risk, we would need to use the Cochran-Mantel-Haenszel formula for the relative risk. On the left-hand side of the slide we have our stratum-specific two by two tables; these tables will be used to obtain an odds ratio that is adjusted for age.

Since we are calculating an adjusted odds ratio, we utilize the formula for that. The formula itself is not complicated. The numerator is A times D divided by the total number of participants in that table, computed for each two by two table; in this example there are only two tables. Sometimes investigators will stratify on more levels, so they may have, say, five separate two by two tables; in that case they would have to do this calculation five times, once per two by two table, and sum the results together. For the denominator, we have B times C divided by the sample size, again computed for every two by two table you have when you stratify. Here we only have two tables, so we only need to do it for the less than 40 table and the 40 plus table.

In the numerator, the mini formula of A times D divided by N appears twice, once for the less than 40 table and once for the 40 plus table. The first term is 10 times 465 divided by 600; we have 600 in the denominator because the total for the first (less than 40) two by two table is 600. For the second (40 plus) two by two table, the numerator term is 36 times 175, which corresponds to cells A and D of that table, divided by the total number of people in the 40 plus table, which is 346. The same thing is done for the denominator of the entire equation, B times C divided by N: from the less than 40 table, B is 90 and C is 35, so we have 90 times 35 divided by 600, and from the 40 plus table we have 110 times 25 divided by 346. If we had five tables we would have to do this five times, but for simplicity we only have two tables here, so we only need to do it twice. When we calculate this all out, we get an adjusted odds ratio of 1.97, meaning this estimate is adjusted for age; you could also call it an age-adjusted odds ratio.
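To make the arithmetic concrete, here is a minimal sketch that reproduces these numbers in Python. The helper function and data structure are my own, not something shown on the slides; the cell counts are the ones quoted above, and the pooled estimate uses the standard Mantel-Haenszel form OR_MH = Σ(a_i·d_i/n_i) / Σ(b_i·c_i/n_i).

```python
# Stratum-specific cell counts quoted in the lecture (a, b, c, d per 2x2 table).
strata = [
    {"a": 10, "b": 90,  "c": 35, "d": 465},  # participants under 40 (n = 600)
    {"a": 36, "b": 110, "c": 25, "d": 175},  # participants 40 and older (n = 346)
]

def odds_ratio(table):
    """Odds ratio for one 2x2 table: (a*d) / (b*c)."""
    return (table["a"] * table["d"]) / (table["b"] * table["c"])

for table in strata:
    print(f"stratum-specific OR = {odds_ratio(table):.2f}")   # ~1.48 and ~2.29

# Mantel-Haenszel pooled (age-adjusted) OR: sum(a*d/n) over strata divided by sum(b*c/n).
numerator   = sum(t["a"] * t["d"] / sum(t.values()) for t in strata)
denominator = sum(t["b"] * t["c"] / sum(t.values()) for t in strata)
print(f"age-adjusted OR = {numerator / denominator:.2f}")     # ~1.97

# The crude table is just the cell-wise sum of the strata, which gives the
# unadjusted OR of roughly 2.5 quoted earlier (about 2.45 before rounding).
crude = {cell: sum(t[cell] for t in strata) for cell in "abcd"}
print(f"crude OR = {odds_ratio(crude):.2f}")
```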
Taking a look at all of our calculated odds ratios, we see something different. The crude or unadjusted odds ratio was 2.5; the adjusted odds ratio, which was age-adjusted specifically, is roughly 2. If we relied solely on the crude odds ratio of 2.5 without taking age into account as a confounder, we would have overestimated the association between oral contraceptive use and heart attacks; after adjustment for age, we get an estimate of only about 2. Both estimates would lead to the same overall conclusion, a positive association between oral contraceptive use and heart attacks, but the crude odds ratio of 2.5 would suggest that oral contraceptive use plays a bigger role in the pathogenesis of heart attacks than if the conclusions were based on the adjusted odds ratio. This is why it is always important to base the final conclusions on the adjusted point estimates, whether they are odds ratios or relative risks.

Before deciding to report an age-adjusted odds ratio, we must determine whether age is even a confounder in the relationship between oral contraceptive use and heart attacks. If it isn't, then we should not include this variable in the final model, meaning we don't need to adjust for age. The standard criterion to determine whether age is a confounder is whether, after adjustment, the point estimate changes by 10% or more. The crude odds ratio was 2.5 and the adjusted odds ratio was about 2, and the difference between these estimates does meet the criterion needed to consider age a confounder: a 10% change from the crude estimate of 2.5 would be a difference of about 0.25, while the age-adjusted estimate of about 2 differs from the crude odds ratio by about 20%. So age is a confounder. Because of this, we must include age in the statistical model, since it is a confounder that must be taken into account in the analysis of this relationship; failure to do so will result in an overestimation of the measure of association.

The second method investigators can use to reduce confounding during the analysis phase is multiple regression analysis. You run the analysis without the confounder and see what the estimate is; this is the crude estimate. Then you rerun the analysis with the confounder and see whether the new estimate differs by 10% or more. If it does, you include it as a potential confounder in the model, per the change-in-estimate criterion. Say we are interested in the relationship between obesity and diabetes, and our crude odds ratio is 5, meaning that individuals with diabetes are five times more likely to be obese than individuals who do not have diabetes. Let's say you're concerned about race being a confounder, so you add race into your regression model, and you end up getting an adjusted odds ratio of 2.0. Would you say that race is a confounder? Yes, because there is more than a 10% change from 5 to 2; it's about a 60% change. As a second check, when we add sex to the regression model to see whether the estimate changes, we get an adjusted odds ratio of 2.5, so sex is also a confounder based on the 10% change criterion.
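For the regression version of the change-in-estimate check, here is a hedged sketch using logistic regression in Python with statsmodels. The data are simulated and the variable names (diabetes, obese, race) are only stand-ins echoing the example above, not the lecture's actual data; the point is simply fitting the model with and without the suspected confounder and comparing the two odds ratios for the exposure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, purely illustrative data: the exposure (obese) and the outcome (diabetes)
# are both related to the potential confounder (race group), so confounding exists.
rng = np.random.default_rng(0)
n = 2000
race = rng.integers(0, 2, n)                              # 0/1 group indicator
obese = rng.binomial(1, 0.2 + 0.3 * race)                 # exposure depends on the confounder
p = 1 / (1 + np.exp(-(-2.0 + 0.7 * obese + 1.0 * race)))  # outcome depends on both
diabetes = rng.binomial(1, p)
df = pd.DataFrame({"diabetes": diabetes, "obese": obese, "race": race})

crude_model    = smf.logit("diabetes ~ obese", data=df).fit(disp=0)
adjusted_model = smf.logit("diabetes ~ obese + race", data=df).fit(disp=0)

or_crude    = float(np.exp(crude_model.params["obese"]))
or_adjusted = float(np.exp(adjusted_model.params["obese"]))
pct_change  = abs(or_crude - or_adjusted) / or_crude * 100
print(f"crude OR = {or_crude:.2f}, adjusted OR = {or_adjusted:.2f}, change = {pct_change:.0f}%")
# Following the lecture's rule of thumb, a change of 10% or more would flag the
# variable as a confounder to keep in the final model.
```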
A couple of final key points. Confounders distort the relationship between the exposure in your study and the outcome of interest. Confounding is not an error in the study; it is something that should be understood. If we don't take care of confounders in a study, the results will be erroneous and the interpretation will be either an over- or underestimation of the true association. Remember that a confounder has a real association with the outcome; you just need to understand the relationship. Always plan a priori to identify and measure potential confounders: you cannot account for something you have no data on, so you should always gather sufficient information from your study participants.

Bias and confounding - Part 5: Interaction and Chance

Interaction is present when the association between your exposure and your outcome differs at different levels of a third variable. This is different from confounding, and it is sometimes referred to as effect modification. Let's see whether interaction is present in an investigation of two exposures, smoking and alcohol, and oral cancer. This table displays relative risks that were already calculated, so these are not the counts of individuals we would normally see in a two by two table. In the columns we have smoking (no, yes) and in the rows we have alcohol consumption (no, yes). The value 1.53 is the relative risk for smoking exposure with no alcohol consumption: yes to smoking, no to alcohol. The value 1.23 is the relative risk for exposure only to alcohol, with no exposure to smoking. Lastly, 5.71 is the relative risk for exposure to both alcohol and smoking.

One thing you may not have noticed is the notation after the relative risks. In epidemiology, dichotomous variables in data sets are usually coded so that 1 indicates yes and 0 indicates no. So RR(1,0) means yes to exposure one, which in this instance is smoking, and no to exposure two, alcohol consumption. RR(0,1) means no to exposure one (no smoking) and yes to exposure two (yes to alcohol consumption). RR(1,1) means yes to both exposures, and RR(0,0) indicates no smoking and no alcohol consumption.

If a synergistic interaction is present, then the relative risk for both exposures together should be greater than the product of the relative risks for each exposure in isolation. So let's see whether there is interaction between alcohol and smoking in their relationship with oral cancer. We first need to calculate the expected relative risk for smoking and alcohol, using the equation on the slide, by multiplying the relative risk of each exposure in isolation. When we plug in the values for exposure to only smoking and exposure to only alcohol consumption, we get an expected relative risk of about 1.88. If synergistic interaction is present, the observed relative risk will exceed the expected relative risk. Comparing the expected relative risk of 1.88 with the observed relative risk of 5.71 pulled from the table, we see that the expected relative risk is lower than the observed relative risk, so interaction is indeed present on the multiplicative scale.
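As a quick arithmetic check of the multiplicative criterion just described, here is a short sketch using the relative risks quoted from the table above; the variable names are my own.

```python
# Relative risks quoted from the slide's table.
rr_smoking_only = 1.53   # RR(1,0): smoking yes, alcohol no
rr_alcohol_only = 1.23   # RR(0,1): smoking no, alcohol yes
rr_both         = 5.71   # RR(1,1): exposed to both

# Under a purely multiplicative (no-interaction) model, the expected joint RR
# is the product of the two individual relative risks.
expected = rr_smoking_only * rr_alcohol_only
print(f"expected RR = {expected:.2f}, observed RR = {rr_both:.2f}")

if rr_both > expected:
    print("observed exceeds expected: synergistic interaction on the multiplicative scale")
```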
Chance arises from random sampling variation in how you choose your sample and in how measurements are made. Small studies with limited power are more likely to be affected by chance because of the variability within the population and in who you happen to choose. Chance can also come into play through measurement error (reduced precision), relatively small effect sizes you are trying to detect, and testing several interrelated hypotheses, for instance with multiple testing. To decrease the role of chance, increase your sample size; that way outliers will not play such a big role, and larger sample sizes tend to be more representative. The other thing you can do is increase your measurement precision, which reduces random variability in the measurements.

In summary, there are three major threats to your associations that need to be addressed in any study: bias, confounding, and chance. All of these can be reduced or prevented in either the design or the analysis phase, and you can try to mitigate their effects in both phases of the study, but once damage is done in the study design, you cannot always fix it in the analysis phase. The effects of bias, confounding, and chance can either create or destroy your association.
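As a closing illustration of the point about chance and sample size, here is a tiny simulation (my own, not from the lecture): repeated small studies of a condition with a true prevalence of 20% scatter widely around the truth, while larger studies cluster much more tightly.

```python
import numpy as np

rng = np.random.default_rng(1)
true_prevalence = 0.20

# Simulate 1,000 hypothetical studies at each sample size and look at the spread
# of the estimated prevalence; larger n leaves less room for chance findings.
for n in (30, 300, 3000):
    estimates = rng.binomial(n, true_prevalence, size=1000) / n
    print(f"n = {n:4d}: estimates range from {estimates.min():.2f} to {estimates.max():.2f}")
```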
