Hypothesis testing methods (Parametric tests)


Summary

This document provides an overview of hypothesis testing methods, focusing on parametric tests. It explains Student's t-test and Analysis of Variance (ANOVA), highlighting their applications in comparing means of groups. The document also touches upon the importance of assumptions about data distribution in choosing the correct test.

Full Transcript


Hypothesis testing methods (Parametric tests)

Student's t test (t test) and analysis of variance (ANOVA) are statistical methods used in hypothesis testing to compare means between groups when the test variable (dependent variable) is continuous and normally, or approximately normally, distributed. Both of the above tests are parametric tests. A parametric test relies on the assumption that the data you want to test are (at least approximately) normally distributed; if your data do not have the appropriate properties, you use a non-parametric test instead. The Student's t test is used to compare the means between two groups, whereas ANOVA is used to compare the means among three or more groups. There are many statistical tests within the t test and ANOVA families.

T test

The t test is one of the most popular statistical techniques used to test whether the mean difference between two groups is statistically significant. The null hypothesis states that both means are statistically equal, whereas the alternative hypothesis states that both means are not statistically equal.

One-sample t test: This method is used to determine whether the mean of a sample is statistically the same as, or different from, the mean of the parent population from which the sample was drawn. To apply this test, the sample mean, standard deviation (SD) and sample size (test variable), together with the population mean or a hypothesized mean value (test value), are used. If the population SD is not known, the one-sample t test can be used at any sample size. Note: the one-sample t test is used when the sample size is …

In ANOVA, if the calculated F value is greater than the critical F value, the alternative hypothesis is accepted and there is a significant difference between the groups. A significant P value indicates that there is at least one pair in which the mean difference is statistically significant.

One way ANOVA

A one-way ANOVA uses one independent variable, while a two-way ANOVA uses two independent variables.
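As an illustrative sketch of the one-sample t test described above (not from the original document; the sample values and hypothesized mean below are made-up numbers), SciPy's ttest_1samp can be used:

```python
from scipy import stats

# Hypothetical sample of measured values (test variable)
sample = [5.1, 4.9, 5.3, 5.6, 4.8, 5.2, 5.4, 5.0]
mu0 = 5.0  # hypothesized population mean (test value)

# One-sample t test: H0 says the sample mean equals mu0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# If p < 0.05, H0 would be rejected at the 5% significance level.
```

The same function works for any sample size; only the degrees of freedom (n − 1) change.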
Use a one-way ANOVA when you have collected data about one categorical independent variable and one quantitative dependent variable. The independent variable should have at least three levels (i.e. at least three different groups or categories). A one-way ANOVA uses the following null and alternative hypotheses:
H0 (null hypothesis): μ1 = μ2 = μ3 = … = μk (all the population means are equal)
H1 (alternative hypothesis): at least one population mean is different from the rest.

Example data set for one-way ANOVA:

Fertilizer 1   Fertilizer 2   Fertilizer 3
6              8              13
8              12             9
4              9              11
5              11             8
3              6              7
4              8              12

In the above example, fertilizer is the factor (independent variable) and plant growth is the response (dependent) variable. The one-way ANOVA is an omnibus test statistic: it determines whether the means of the various groups differ significantly overall, but it cannot distinguish the specific groups whose means differ. Thus, to find the specific group with a different mean, a post hoc test needs to be conducted.

Two way ANOVA

The two-way ANOVA is an extension of the one-way ANOVA (a one-way ANOVA uses only one independent variable, whereas a two-way ANOVA uses two independent variables). The primary purpose of a two-way ANOVA is to understand whether there is any interaction between the two independent variables in their effect on a dependent variable. Like the one-way ANOVA, a two-way ANOVA also needs a post hoc test.

Example data set for two-way ANOVA. In this case, we have the following variables:
Response variable (dependent): plant growth
Factors (independent): 1. sunlight exposure, 2. watering frequency
And we would like to answer the following questions:
Does sunlight exposure affect plant growth?
Does watering frequency affect plant growth?
Is there an interaction effect between sunlight exposure and watering frequency? (e.g.
the effect that sunlight exposure has on the plants depends on watering frequency)

General steps for one-way ANOVA

Step 1: Calculate the means of all the groups. Then also calculate the overall (grand) mean with all the data combined as one single group.
Step 2: Set up the null and alternative hypotheses.
Step 3: Calculate the sums of squares:
Sum of Squares Total (SST): the sum of squares based on the entire data set across all the groups; here you treat all the data from all the groups as one single combined set of data.
Sum of Squares Within groups (SSW): the sum of squares within each group; after calculating the sum of squares for each group, add them together over all the groups.
Sum of Squares Between groups (SSB): the sum of squares with the groups taken as single elements, each squared deviation weighted by the group's sample size. Assuming three groups of n observations each:
SSB = n[(group1_mean − total_mean)² + (group2_mean − total_mean)² + (group3_mean − total_mean)²]
Step 4: Calculate the degrees of freedom:
Degrees of Freedom Total (DFT) = total number of observations in all groups combined (n) − 1
Degrees of Freedom Between groups (DFB) = number of groups (k) − 1
Degrees of Freedom Within groups (DFW) = n − k
Step 5: Calculate the mean squares: Mean Square Between (MSB) = SSB/DFB and Mean Square Within (MSW) = SSW/DFW.
Step 6: Calculate the F statistic: F = MSB/MSW.
Step 7: Look up the statistical table and state your conclusion.

The table mentioned below can be used for calculating F values for a one-way ANOVA. For a two-way (two-factor) ANOVA, the calculations involve the sums of squares for both variables and for the interaction; the F value for each variable and for the interaction is then used for p value estimation. The table mentioned below can be used for calculating F values for a two-way ANOVA.
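The numbered steps above can be sketched directly in Python (a sketch using NumPy and SciPy; the data are the fertilizer columns from the earlier one-way example, and variable names like ssb and ssw are ad-hoc):

```python
import numpy as np
from scipy import stats

# Fertilizer example: plant growth for each fertilizer (one group per column).
groups = [np.array([6, 8, 4, 5, 3, 4]),
          np.array([8, 12, 9, 11, 6, 8]),
          np.array([13, 9, 11, 8, 7, 12])]

all_data = np.concatenate(groups)             # Step 1: grand mean over all data
grand_mean = all_data.mean()
n, k = len(all_data), len(groups)

# Step 3: sums of squares
sst = ((all_data - grand_mean) ** 2).sum()                          # SS Total
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)              # SS Within
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)    # SS Between
assert abs(sst - (ssb + ssw)) < 1e-9          # sanity check: SST = SSB + SSW

# Steps 4-6: degrees of freedom, mean squares, F statistic
dfb, dfw = k - 1, n - k
msb, msw = ssb / dfb, ssw / dfw
f_stat = msb / msw

# Cross-check against SciPy's built-in one-way ANOVA.
f_scipy, p_value = stats.f_oneway(*groups)
print(f"SSB={ssb:.0f}, SSW={ssw:.0f}, F={f_stat:.2f}, p={p_value:.4f}")  # F ≈ 9.26
```

Since p < 0.05 here, at least one fertilizer's mean growth differs, and a post hoc test would be needed to identify which pairs.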
Source of variation   SS = Sum of squares   df           MS = Mean Square       F
Factor A              SSA                   a-1          MSA = SSA/dfA          FA = MSA/MSE
Factor B              SSB                   b-1          MSB = SSB/dfB          FB = MSB/MSE
AxB (Interaction)     SSAxB                 (a-1)(b-1)   MSAxB = SSAxB/dfAxB    FAxB = MSAxB/MSE
Error (Within)        SSE                   ab(n-1)      MSE = SSE/dfE
Total                 SST                   N-1

Group 1   Group 2   Group 3
51        23        56
45        43        76
33        23        74
45        43        87
67        45        56

Types of error

Type 1: Leads to a false positive; a type I error rejects the null hypothesis when it is true. The probability of committing a type I error is equal to the level of significance that was set for the hypothesis test. Therefore, if the level of significance is 0.05, there is a 5% chance that a type I error may occur.

Type 2: Results in failing to reject the null hypothesis when it is actually false, which leads to false negative outcomes. The probability of committing a type II error is equal to one minus the power of the test, also known as beta. The power of the test can be increased by increasing the sample size, which decreases the risk of committing a type II error.

Type III errors are rare; one occurs when the null hypothesis of no difference is correctly rejected, but for the wrong reason.

Post-hoc tests

Post hoc tests allow us to test for differences between multiple group means while also controlling the family-wise error rate. Some of the commonly used post hoc tests are:
Tukey's method with ANOVA
Bonferroni with ANOVA
Dunnett with ANOVA

Tukey test (Tukey's honest significant difference)

Also known as Tukey's range test. After you have run an ANOVA and found significant results, you can run Tukey's HSD to find out which specific pairs of groups differ. This test compares all possible pairs. The test statistic used in Tukey's test, denoted q, is a modified t-statistic that corrects for multiple comparisons; q is based on the studentized range distribution. The sample sizes of all the groups are required to be equal for Tukey's HSD.
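As an illustrative sketch, Tukey's HSD can be run with scipy.stats.tukey_hsd (assuming a recent SciPy, roughly 1.8+; verify against your installed version). The data reuse the Group 1-3 table above:

```python
from scipy.stats import tukey_hsd

# Group 1-3 data from the table above.
g1 = [51, 45, 33, 45, 67]
g2 = [23, 43, 23, 43, 45]
g3 = [56, 76, 74, 87, 56]

# tukey_hsd compares all possible pairs, correcting for multiple comparisons.
res = tukey_hsd(g1, g2, g3)

# 3x3 matrix of pairwise p-values; the diagonal compares a group with itself.
print(res.pvalue)
# A pair differs significantly at the 5% level when its p-value is below 0.05.
```

Because the three groups here have equal sample sizes, plain Tukey HSD applies; with unequal sizes the Tukey-Kramer variant would be needed.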
(If the groups have different sample sizes, a Tukey-Kramer test is performed.)

General steps for the Tukey test

From the ANOVA output, note the mean of each group and the Mean Square Within (MSw). Check the critical q value in the table using the number of groups (k), or treatments, and the degrees of freedom within (df = total number of samples − k). Calculate the HSD for the Tukey test:

HSD = q_crit · √(MSw / n)

For each pair of groups, Xmax − Xmin is calculated; if Xmax − Xmin > HSD, the null hypothesis is rejected and the alternative hypothesis is accepted for that pair of groups. Xmax and Xmin are the larger and smaller means of the two groups being compared.

Dunnett's test

Dunnett's test is used when we want to compare one group (usually the control treatment) with the other groups. This test compares the means of multiple experimental groups against one control mean. Dunnett's table is used to get the critical value:

D_Dunnett = t_Dunnett · √(2·MSw / n)

t_Dunnett is the value from Dunnett's table, MSw is the mean square within, and n is the number of observations in a single group. For each pair of groups, Xmax − Xmin is calculated; if Xmax − Xmin > D_Dunnett, the null hypothesis is rejected and the alternative hypothesis is accepted for that pair of groups.

Bonferroni test/correction

The Bonferroni correction adjusts your significance level to control the overall probability of a Type I error (false positive) across multiple hypothesis tests. It is a series of independent t-tests performed after correction of the α value. For correction of α, Bonferroni's formula is used:

α_corrected = α / number of comparisons

So, if you are running 20 comparisons simultaneously at α = 0.05, the Bonferroni-adjusted significance level would be 0.05 / 20 = 0.0025.

Tests for categorical data

Two commonly used tests for categorical data are:
1. Chi-square test
2. Fisher's exact test

Chi-square test

This test was first used by Karl Pearson in 1900.
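A minimal sketch of the Bonferroni procedure (pairwise comparisons via independent t tests; the data reuse the hypothetical Group 1-3 practice table from earlier):

```python
from itertools import combinations
from scipy import stats

# Hypothetical groups, reusing the Group 1-3 practice data.
groups = {"g1": [51, 45, 33, 45, 67],
          "g2": [23, 43, 23, 43, 45],
          "g3": [56, 76, 74, 87, 56]}

alpha = 0.05
pairs = list(combinations(groups, 2))
# Bonferroni: divide alpha by the number of comparisons (3 pairs here).
alpha_corrected = alpha / len(pairs)
print(f"corrected alpha = {alpha_corrected:.4f}")

# Each pairwise t test is judged against the corrected alpha.
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "not significant"
    print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```

With 20 comparisons instead of 3, the same division reproduces the 0.05 / 20 = 0.0025 figure from the text.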
This test is used to compare the distribution of a categorical variable in one sample or group with that in another. It is used for comparing experimentally obtained results with those expected theoretically or on the basis of a hypothesis. A data set should meet the following requirements for performing a chi-square test:
It should have at least 2 categorical variables.
The sample size should be large enough (preferably n > 50).
Observations should be independent.
The chi-square test performs a test of independence under the following null and alternative hypotheses, H0 and H1, respectively:
H0: Independent (no association)
H1: Not independent (association)
The chi-square test is also used for the detection of linkage.

Properties of the chi-square test

The mean of the distribution is equal to the number of degrees of freedom: mean of the chi-square variable (μ) = n.
The variance is twice the number of degrees of freedom: variance of the chi-square distribution = 2n.
Chi-square is denoted by χ². The chi-square formula is:

χ² = Σ (Oi − Ei)² / Ei

where Oi = observed value (actual value) and Ei = expected value.

Steps for the chi-square test

1. State the hypotheses.
2. Calculate the expected frequencies (E). E is calculated under the assumption of an independent relation or, in other words, no association: Expected frequency = (row total × column total) / N.
3. Compute the chi-square statistic using the formula χ² = Σ (Oi − Ei)² / Ei.
4. Determine the degrees of freedom (df): df = (r − 1) × (c − 1).
5. Find the critical value of χ² from the distribution table and compare it with the calculated χ² to decide whether to reject the null hypothesis. If χ²calculated > χ²critical, the alternative hypothesis is accepted.

Contingency table for the chi-square test

For performing a chi-square test, a contingency table is prepared. In a contingency table the degrees of freedom are calculated as df = (r − 1) × (c − 1), where r = number of rows and c = number of columns in the contingency table.
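The five steps above can be collected into a small function (a sketch; the function name and return values are ad-hoc, not from the document):

```python
import numpy as np
from scipy.stats import chi2

def chi_square_test(observed, alpha=0.05):
    """Pearson chi-square test of independence on a contingency table."""
    observed = np.asarray(observed, dtype=float)
    n = observed.sum()
    # Step 2: expected frequency = (row total * column total) / N
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    # Step 3: chi-square statistic
    stat = ((observed - expected) ** 2 / expected).sum()
    # Step 4: degrees of freedom, df = (r - 1)(c - 1)
    df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    # Step 5: compare against the critical value of the chi-square distribution
    critical = chi2.ppf(1 - alpha, df)
    return stat, df, critical, stat > critical
```

For example, chi_square_test([[22, 8], [7, 23]]) gives a statistic of about 15.0 with df = 1 and a critical value of about 3.84, so the null hypothesis of independence would be rejected.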
A contingency table displays the multivariate frequency distribution of the variables, as shown in the following example.

Example: 60 subjects, of which 30 were males and 30 were females, were randomly tested for the presence of periodontal pockets in a dental clinic during a regular dental visit, and the outcomes were analyzed for an association between periodontitis and gender.

Observed frequencies:

          Present   Absent   Total
Male      22        8        30
Female    7         23       30
Total     29        31       60

Expected frequency = (row total × column total) / N:

          Present            Absent             Total
Male      30×29/60 = 14.5    30×31/60 = 15.5    30
Female    30×29/60 = 14.5    30×31/60 = 15.5    30
Total     29                 31                 60

(Oi − Ei)²/Ei:

          Present                   Absent
Male      (22−14.5)²/14.5 = 3.88    (8−15.5)²/15.5 = 3.62
Female    (7−14.5)²/14.5 = 3.88     (23−15.5)²/15.5 = 3.62

χ²calculated = 3.88 + 3.62 + 3.88 + 3.62 = 15. With df = (2−1)(2−1) = 1, the critical value at the 0.05 level is χ²critical = 3.841. Since χ²calculated > χ²critical, gender and periodontitis are linked.

To get the p-value of the chi-square test in Excel, use the function CHITEST(); it takes the cells with the observed values and the cells with the expected values as input.
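The worked example can be cross-checked with SciPy (a sketch; correction=False disables the Yates continuity correction so the statistic matches the hand calculation above):

```python
from scipy.stats import chi2_contingency

observed = [[22, 8],   # males: periodontitis present / absent
            [7, 23]]   # females: periodontitis present / absent

chi2_stat, p_value, df, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2_stat:.2f}, df = {df}, p = {p_value:.5f}")  # chi2 ≈ 15.02
# p is far below 0.05, confirming the association found by hand.
```

The function also returns the expected-frequency table, which reproduces the 14.5 / 15.5 values computed above.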
