T-Test Lecture 6
Document Details
Shaheed Benazir Bhutto Women University
Dr Sonia Shagufta
Summary
This document provides a lecture on t-tests, a statistical method used to compare means of two groups. It explains the types of t-tests, including dependent and independent t-tests, their assumptions, and outputs. The document is focused on understanding and implementing t-tests in SPSS.
Full Transcript
T-test Lecture 6
Dr Sonia Shagufta

T-tests
Rather than looking at relationships between variables, researchers are sometimes interested in looking at differences between groups of people. If we want to see the difference between groups, we use a t-test. T-tests are generally used when we have one IV (with two levels) and one DV. We get the means for the two conditions of our IV and the t-test compares these means: are they significantly different from each other? In other words, it tests whether the difference between our means is significantly different from 0.

There are two types of t-test, based on how we collected our data:
- Dependent t-test, for repeated measures/within-subjects designs (the same group of participants is exposed to both conditions).
- Independent t-test, for between-subjects designs (there are again two conditions, but this time different participants take part in each condition).

Different t-tests: different assumptions
There are two general assumptions:
- Normally distributed data: the sampling distribution of differences in means should be normally distributed. (If we computed the difference between condition means for many samples, that distribution of differences would be normal.)
- Homogeneity of variance between our two conditions. This second assumption is only relevant for the independent t-test, because it uses different groups of people in each condition. It means that the variances should be the same throughout the data: in designs in which you test several groups of participants, each of these samples should come from populations with the same variance; in correlational designs, the variance of one variable should be stable at all levels of the other variable.

The independent t-test also assumes independence of scores: the observations between groups should be independent, which basically means the groups are made up of different people. You don't want one person appearing in two different groups, as this could skew your results; correlations between the groups should be low.

Dependent t-tests
Data file: SpiderRM.sav. Analyze -> Compare Means -> Paired-Samples T Test. Put the pair of conditions we want assessed into SPSS: Picture and Real spider. How much we can conclude depends on how much evidence we have to support the strength of the relationship between the IV and the DV.

Dependent t-tests: the output
Paired Samples Statistics table. For each condition we get the mean, N (the total number of participants in that condition), the std. deviation and the std. error of the mean.

Paired Samples Correlations table. This tells us the correlation between scores in our two conditions. Correlation is not a problem for dependent t-tests; it is expected, because the same people take part in each condition, so we do not have to meet the assumption of independence of scores. The correlation here (.545) is not high (a value above .60 would indicate a strong correlation), and it is not significant (p = .067).

Paired Samples Test table. This tells us whether our two conditions are significantly different. Recall: it tests whether the difference between our group means is significantly different from 0. We get the mean group difference, the std. deviation and std. error of the mean for this difference, and confidence intervals: the interval within which the population mean difference lies.

How the t-test works: t = (difference between the means) / (standard error of the mean difference).
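The lecture carries out these steps entirely through the SPSS menus, but the same calculation can be sketched in code. Below is a minimal illustration using Python's scipy.stats.ttest_rel; the anxiety scores are hypothetical stand-ins (not the actual SpiderRM.sav values), used only to show how the mean difference, its standard error, and the resulting t and p values fit together.

```python
# A minimal sketch of the dependent (paired-samples) t-test described above,
# written in Python rather than SPSS. The anxiety scores are hypothetical
# stand-ins for the SpiderRM.sav data (12 participants, each measured in the
# picture condition and the real-spider condition), not the actual values.
import numpy as np
from scipy import stats

picture = np.array([30, 35, 45, 40, 50, 35, 55, 25, 30, 45, 40, 50], dtype=float)
real    = np.array([40, 35, 50, 55, 65, 55, 50, 35, 30, 50, 60, 39], dtype=float)

# Ingredients of the t formula: the mean of the paired differences and the
# standard error of that mean difference.
diff = picture - real
mean_diff = diff.mean()
se_diff = diff.std(ddof=1) / np.sqrt(len(diff))
t_by_hand = mean_diff / se_diff

# The same test via scipy; its statistic should reproduce t_by_hand.
result = stats.ttest_rel(picture, real)

print(f"mean difference = {mean_diff:.2f}, SE = {se_diff:.2f}")
print(f"t({len(diff) - 1}) = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```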
This t value is compared to standard critical values based on the degrees of freedom (df); we want our t to be larger than the tabled value to obtain a significant result.

The output
Here we see that t is significant, as expected (p = .031). We also see that t is negative. Recall that t was based on the picture minus real conditions, so a negative t means the second condition had a larger mean than the first: the mean of the real condition was higher than the mean of the picture condition, i.e. more anxiety was felt in the real condition. We can therefore interpret our results as follows: exposure to a real spider caused significantly more reported anxiety in spider phobics than exposure to a picture of a spider, t(11) = -2.47, p < .05.

Independent t-tests
Analyze -> Compare Means -> Independent-Samples T Test. Test variable: Anxiety. Grouping variable: spider or picture. We have to define our groups: tell SPSS what values we gave our two conditions during data entry, and note which condition is referred to as Group 1 and which as Group 2. Picture was coded 0 and is Group 1 in this analysis.

Independent t-tests: the output
Group Statistics table. For each condition we get N, the mean, the std. deviation and the std. error of the mean.

Independent Samples Test table. Here we have two rows, based on the assumption of homogeneity of variance: "equal variances assumed" and "equal variances not assumed". To know which row to read, we look at Levene's test (recall we want a non-significant result). Levene's test is non-significant here, so we read the "equal variances assumed" row.

Interpretation: participants experienced greater anxiety towards real spiders (M = 47.00, SE = 3.18) than towards pictures of spiders (M = 40.00, SE = 2.68). This difference was not significant, t(22) = -1.68, ns.
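For comparison, here is a similar sketch of the independent-samples version in Python, again with made-up scores rather than the actual data. scipy.stats.levene plays the role of SPSS's Levene's test, and the equal_var argument of scipy.stats.ttest_ind corresponds to choosing between the "equal variances assumed" and "equal variances not assumed" rows.

```python
# A minimal sketch of the independent-samples t-test described above, again in
# Python rather than SPSS. The two arrays are hypothetical stand-ins for the
# two groups of 12 different participants (picture coded 0, real spider coded
# 1), matching the 22 degrees of freedom reported in the lecture.
import numpy as np
from scipy import stats

picture_group = np.array([32, 38, 44, 41, 47, 36, 52, 28, 33, 46, 39, 48], dtype=float)
real_group    = np.array([41, 37, 51, 56, 62, 54, 49, 36, 33, 52, 58, 40], dtype=float)

# Levene's test for homogeneity of variance: a non-significant result
# (p > .05) means the "equal variances assumed" version of the test is used.
levene_stat, levene_p = stats.levene(picture_group, real_group)

# Independent-samples t-test; equal_var mirrors the choice between the
# "equal variances assumed" and "not assumed" rows of the SPSS output.
result = stats.ttest_ind(picture_group, real_group, equal_var=(levene_p > .05))

print(f"Levene's test: statistic = {levene_stat:.2f}, p = {levene_p:.3f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```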