PSYC2010 Lecture 11 (Week 12) Correlation and Regression PDF

Summary

These lecture notes cover correlation and regression concepts in psychology. After a recap of covariance, Pearson's r, and the point-biserial correlation, the lecture covers Spearman's rho, Cramer's phi as a measure of association for chi-square, comparing correlations across independent groups, factors that influence the size of r, and an introduction to linear regression, with worked examples throughout.

Full Transcript


PSYC2010 Lecture 11 (week 12)

Last lecture
⚫ covariance and correlation
⚫ calculation of r
⚫ testing the significance of r (rho, ρ)
⚫ interpretation of r (variance explained, r²)
⚫ point-biserial correlation

Quiz: Why do we need to go one step further and calculate r – why can't we just calculate covariance?
A. covariance is scale dependent
B. covariance only provides the strength of the relationship, but tells us nothing about the direction of the relationship
C. because I enjoy torturing you

point-biserial correlation
Pearson's r is appropriate for describing linear relationships between two continuous variables. If one variable is genuinely dichotomous (2 levels), score one level of that variable as 0 and the other as 1 (or 1 and 2, or 1 and −1, or any two numbers).
⚫ compute the correlation using Pearson's r formula
⚫ the result is referred to as a point-biserial correlation (rpb)

Your data… Students who don't check their social media during the lecture
⚫ did better in first year stats (rpb = .22, p = .015)
⚫ think statistics are more useful (rpb = .25, p = .006)
⚫ like shopping less (rpb = −.23, p = .011)
⚫ like watching TV less (rpb = −.21, p = .023)
⚫ are more satisfied with their lives (rpb = .29, p = .001)

rpb and t
Alternatively, we could examine the relationship between a dichotomous and a continuous variable using an independent-groups t-test.
⚫ e.g., Is pet ownership associated with happiness?
⚫ we can calculate rpb or do a t-test – the result of the test of significance of rpb and the t-test will be identical

today's lecture
⚫ non-parametric version of r – Spearman's rho
⚫ magnitude of effects
⚫ comparing correlations across independent groups
⚫ factors that influence r
⚫ regression

What you should "get" from this lecture
⚫ What does the Spearman rank-order correlation test?
⚫ How do we assess association/effect size for chi-square?
⚫ Why do we need to convert the value of r to compare correlations for two independent groups?
⚫ What factors affect the magnitude of correlations?
⚫ What is regression? What is the least squares criterion? What is the formula of the regression line? What do a, b, y, and y-hat represent?
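Before moving on to Spearman's rho, here is a minimal sketch (not from the lecture; Python with numpy/scipy assumed, and the pet/happiness numbers are invented purely for illustration) of the recap point that a point-biserial correlation is just Pearson's r with 0/1 coding, and that its significance test matches an independent-groups t-test:

```python
# Minimal sketch: point-biserial r is Pearson's r with the dichotomous variable
# coded 0/1, and its significance test matches an independent-groups t-test.
# The data below are hypothetical, invented for illustration only.
import numpy as np
from scipy import stats

pet = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])        # 0 = no pet, 1 = pet
happiness = np.array([4, 5, 6, 5, 4, 6, 7, 6, 8, 7, 9, 6])   # continuous variable

r_pb, p_r = stats.pearsonr(pet, happiness)                    # point-biserial correlation
t, p_t = stats.ttest_ind(happiness[pet == 1],                 # independent-groups t-test
                         happiness[pet == 0])

print(f"r_pb = {r_pb:.3f}, p = {p_r:.4f}")
print(f"t    = {t:.3f}, p = {p_t:.4f}")   # the two p-values are identical
```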
Spearman rank-order correlation – where does it fit?
[Decision-tree slide for choosing a test: qualitative (categorical) data → chi-squared goodness of fit or chi-squared test of independence (contingency table); quantitative data, questions about differences → z-test or t-tests (single sample, independent groups, repeated measures/matched samples), one-way ANOVA and multiple comparisons (a priori contrasts, post hoc Bonferroni/Scheffé), with non-parametric counterparts Wilcoxon's signed-ranks, Wilcoxon's rank-sum, Friedman, and Kruskal-Wallis; quantitative data, questions about relationships → Pearson correlation, point-biserial correlation and regression for the degree/form of the relationship, with Spearman's rho as the non-parametric counterpart.]

Spearman rank-order correlation coefficient (or Spearman's rho)
Pearson's correlation coefficient r is based on assumptions:
⚫ interval or ratio data,
⚫ X and Y are normally distributed, and
⚫ a linear relationship between X and Y
So we assume a bivariate normal distribution based on interval or ratio scales.

Spearman's rho
Spearman's rho (rs) is calculated using Pearson's r formula – the difference is that the data are ranked. Use Spearman's rho if:
⚫ at least one of the variables is measured on an ordinal scale
⚫ there are extreme scores in your sample
⚫ there is a monotonic relationship between the variables

A monotonic relationship
As X increases, Y increases (or decreases), but not necessarily at a constant rate. [Slides show a scatterplot with a curved but monotonically increasing relationship, and the same data converted to ranks, where the relationship becomes a straight line.]

overview of Spearman's rho
1. if needed, assign ranks to the X and Y scores from lowest to highest
2. calculate SPxy, SSx and SSy
3. using Pearson's r formula, calculate rs = SPxy / √(SSx × SSy)
4. interpret your result

Example of Spearman's rho
We want to test the theory that creative people will be able to create taller tales and thus perform better in the World's Biggest Liar competition. We give the current round of contestants (all 10 of them) a creativity questionnaire (maximum of 60, where higher scores are associated with more creativity). The judges supply us with each contestant's rank at the end of the competition. Because the judges' rankings are ordinal, Spearman's rho should be used to assess the relationship between creativity and performance in the World's Biggest Liar competition.

The raw data (position in the competition and creativity score):

  Position:    3   2   5   9   1  10   8   4   7   6
  Creativity: 49  56  31  22  50  32  48  58  42  30

The relevant formula is r = cov(X,Y) / (sX × sY), or equivalently r = SPxy / √(SSx × SSy).

After converting the creativity scores to ranks (1 = lowest), the worked calculation is:

  Position (X)  Creativity rank (Y)  x − x̄   (x − x̄)²  y − ȳ   (y − ȳ)²  (x − x̄)(y − ȳ)
       3                7            −2.5      6.25      1.5      2.25        −3.75
       2                9            −3.5     12.25      3.5     12.25       −12.25
       5                3            −0.5      0.25     −2.5      6.25         1.25
       9                1             3.5     12.25     −4.5     20.25       −15.75
       1                8            −4.5     20.25      2.5      6.25       −11.25
      10                4             4.5     20.25     −1.5      2.25        −6.75
       8                6             2.5      6.25      0.5      0.25         1.25
       4               10            −1.5      2.25      4.5     20.25        −6.75
       7                5             1.5      2.25     −0.5      0.25        −0.75
       6                2             0.5      0.25     −3.5     12.25        −1.75

  x̄ = 5.5, ȳ = 5.5; SSx = 82.5, SSy = 82.5, SPxy = −56.5

  rs = SPxy / √(SSx × SSy) = −56.5 / √(82.5 × 82.5) = −.685

TRUE OR FALSE: There is a negative correlation between creativity scores and success in the World's Biggest Liar contest, because higher creativity scores are associated with lower values on the judges' rankings.
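While you consider that, here is a minimal sketch (Python with numpy/scipy assumed) that reproduces the worked example above and confirms that Spearman's rho is simply Pearson's r applied to the ranks:

```python
# Minimal sketch reproducing the World's Biggest Liar example.
import numpy as np
from scipy import stats

position   = np.array([3, 2, 5, 9, 1, 10, 8, 4, 7, 6])        # judges' ranking
creativity = np.array([49, 56, 31, 22, 50, 32, 48, 58, 42, 30])

# scipy ranks the data internally
rho, p = stats.spearmanr(position, creativity)
print(f"Spearman's rho = {rho:.3f}")                           # -0.685

# the same thing by hand: rank both variables, then apply Pearson's r formula
x = stats.rankdata(position)                                    # already ranks 1..10
y = stats.rankdata(creativity)
sp_xy = np.sum((x - x.mean()) * (y - y.mean()))
ss_x = np.sum((x - x.mean()) ** 2)
ss_y = np.sum((y - y.mean()) ** 2)
print(f"SPxy = {sp_xy}, SSx = {ss_x}, SSy = {ss_y}")            # -56.5, 82.5, 82.5
print(f"rs = {sp_xy / np.sqrt(ss_x * ss_y):.3f}")               # -0.685
```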
Example of Spearman's rho (continued)
[The worked table from the previous slide is repeated.] There is a relationship between creativity scores and success in the World's Biggest Liar contest, such that increased creativity is associated with better success in the competition (the coefficient is negative only because a lower position number indicates a better placing).

Spearman's rho
Another benefit of Spearman's rho is that it can be used when you have two continuous variables, but one (or both) is badly skewed due to extreme scores.
⚫ rank the scores for each variable (separately) from lowest to highest
⚫ compute Spearman's rho using the ranked scores with Pearson's r formula
⚫ interpret the result
⚫ if N > 30, we can evaluate rs in the same way we do Pearson's r (i.e., with a t-test). However, if N < 30, we do not have the necessary tables in our workbooks to evaluate the significance of the result, so we would only report the direction and strength.

Your data: number of Instagram followers and screen time…
⚫ average number of Instagram followers: 621 (standard deviation 582, range 0 to 3917)
⚫ average screen time each day: 329 mins (standard deviation 175 mins, range 8 to 800 mins) – 800 mins = 13+ hours per day!
Students with more Instagram followers…
⚫ are more likely to think they're skilled drivers (rs = .27, p = .003)
⚫ are more worried about their performance in PSYC2010 (rs = .19, p = .035)
⚫ like shopping more (rs = .21, p = .021)
⚫ like playing sports more (rs = .21, p = .021)
People who spend more time on their phone each day…
⚫ did worse in first year stats (rs = −.19, p = .037)
⚫ like playing sports less (rs = −.28, p = .002)
⚫ are less likely to believe aliens exist (rs = −.18, p = .045)
⚫ are more likely to hate other people giving them advice (rs = .24, p = .007)

measures of association
For most analyses we have calculated the size of the effect:
⚫ t-test: Cohen's d
⚫ ANOVA: omega squared
⚫ correlation: r squared
⚫ chi-square: ?

chi-square test of independence revisited
χ² = Σ (O − E)² / E, where E = (row total × column total) / N
χ²obt = 133.053, df = (r − 1)(c − 1) = 2, χ²crit(2) = 5.99

                     Died         Survived     Total
  1st Class Obs      122          203          325
     (Expected)      (201.767)    (123.233)
  2nd Class Obs      167          118          285
     (Expected)      (176.934)    (108.066)
  3rd Class Obs      528          178          706
     (Expected)      (438.299)    (267.701)
  Total              817          499          1316

What's the direction of the effect?
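Before answering, a minimal sketch (Python with scipy assumed) that reproduces the χ² value and expected frequencies from the table above:

```python
# Minimal sketch reproducing the Titanic chi-square test of independence above.
import numpy as np
from scipy import stats

# rows: 1st, 2nd, 3rd class; columns: died, survived
observed = np.array([[122, 203],
                     [167, 118],
                     [528, 178]])

chi2, p, df, expected = stats.chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, df = {df}, p = {p:.3g}")   # chi2 = 133.053, df = 2
print(np.round(expected, 3))                          # matches the expected counts above
```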
% died by class: 1st class 122/325 = 37.5%; 2nd class 167/285 = 58.5%; 3rd class 528/706 = 74.7%. [The contingency table from the previous slide is repeated.] So the death rate rises steadily from 1st to 3rd class.

measures of association
A significant χ² does not indicate the strength of the relationship. If your sample size is large enough, sometimes very small effects will be detected as significant. Doubling the sample size doubles χ²obt:
⚫ with N = 1316, χ²obt = 133.053
⚫ with N = 2632, χ²obt = 266.106
But we can convert χ² to a measure of association that tells you how big the effect is.

measure of association: Cramer's phi (φc)
φc = √( χ² / (N(k − 1)) ), where k = the smaller of the number of rows or columns
φc is interpreted like a Pearson r between two variables, each of which is categorical. Its value ranges from 0 to 1, where higher values indicate a greater degree of association between the two variables. If chi-square is significant, so is Cramer's phi.
For class of cabin and survival: χ²obt = 133.053, N = 1316, k − 1 = 2 − 1 = 1
φc = √( 133.053 / (1316 × 1) ) = √.101 = .318
"Variance accounted for" = r² = .318 × .318 = .101 (10% of the variance in survival is explained by cabin class).
φc can be interpreted in terms of Cohen's conventions. Although φc is a measure of association between two variables, rather than an effect size, these concepts are related.

What about men vs women/children? How strong is the association? Calculate Cramer's phi (φc):
N = 2201, χ² = 454.520; a 2 × 2 contingency table, so k − 1 = 2 − 1 = 1
φc = √( 454.520 / (2201 × 1) ) = .454
Thus, there is a moderate relationship between being an adult male vs all other people and survival rates; 21% of the variance in survival is explained by whether you're a man vs a woman or child (r² = .21).

testing the difference between independent rs
Sometimes it is useful to be able to compare the size of two correlations.
H0: ρ1 − ρ2 = 0 (or ρ1 = ρ2)
H1: ρ1 − ρ2 ≠ 0 (or ρ1 ≠ ρ2)
The technique we use here is only applicable if the two correlations are independent (i.e., you use different people in each group).

Example: A statistics lecturer wants to compare the association between liking chocolate and class attendance across two semesters. In one semester she found greater liking of chocolate was associated with higher class attendance (r = .40, N = 151). In another semester no association was shown between liking chocolate and class attendance (r = .05, N = 120). Is there a significant difference between the two correlations? H0: ρ1 = ρ2; H1: ρ1 ≠ ρ2.

Testing the difference between two independent rs
When we're testing the H0: ρ = 0, for a large enough sample r will be normally distributed around zero.
But when our H0 is ρ1 = ρ2, or ρ = some number other than 0, the sampling distribution of r is skewed.

Testing the difference between two independent rs
For this reason, we need to convert r to r′ when testing the difference between two independent rs.
Fisher's r to r′ (or r to z) conversion: r′ = 0.5 × loge( (1 + r) / (1 − r) ). You can also look up r′ in a table.

testing the difference between two independent rs
⚫ convert each r to r′ (using Fisher's table)
⚫ calculate the standard error of the difference: s(r′1 − r′2) = √( 1/(n1 − 3) + 1/(n2 − 3) )
⚫ calculate z: z = (r′1 − r′2) / s(r′1 − r′2)
⚫ compare with z(.05) = ±1.96

Example of testing differences between rs
Convert each r to r′ (using Fisher's table): r1 = .40 → r1′ = .424; r2 = .05 → r2′ = .050. n1 = 151, n2 = 120.
Find the standard error of the differences:
s(r′1 − r′2) = √( 1/(151 − 3) + 1/(120 − 3) ) = √( 1/148 + 1/117 ) = √(.007 + .009) = √.016 = .126

How would you interpret the standard error of differences of .126?
A. Even if there is no difference between liking chocolate and class attendance across the two semesters, we would expect a difference of .126 between the two correlation coefficients just by chance alone
B. If the two correlations differ by .126 we will be likely to conclude that the relationship between liking chocolate and class attendance differs across the two semesters

Then we do a z-test comparing the two r′s:
z = (r′1 − r′2) / s(r′1 − r′2) = (.424 − .050) / .126 = .374 / .126 = 2.968

zobt = 2.968, zcrit = 1.960; zobt > zcrit, so reject H0.
A z-test found that there was a significantly stronger relationship between liking chocolate and attending class when chocolate was provided in class (r(148) = .40) compared to the semester when chocolate was not provided (r(117) = .05), z = 2.97, p < .05.

In which semester do students report liking chocolate more?
A. The semester with the higher correlation
B. The semester with the lower correlation
C. We have no way of knowing based on the correlations

Does the relationship between liking babies and baby animals differ across years?
⚫ Previous year: r = .154, p = .077, n = 137
⚫ This year: r = .190, p = .036, n = 122

testing the hypothesis that ρ equals a specific value
z = (r′ − ρ′) / s(r′), where s(r′) = 1 / √(n − 3); compare with z(.05) = ±1.96

Example: Let's say we're interested in the correlation between experiencing a negative emotion and expressing it. We want to know if an obtained r of .30 between experiencing a negative emotion and expressing it in a sample of people who meditate (N = 103) is significantly different from a population correlation of .50.
r = .30, ρ = .50, n = 103. We need to find r′ and ρ′ and calculate s(r′):
r = .30 → r′ = .310; ρ = .50 → ρ′ = .549
s(r′) = 1 / √(103 − 3) = 1 / √100 = .1
z = (.310 − .549) / .1 = −2.390
zcrit = ±1.96; |zobt| > zcrit, therefore reject H0. The sample did not come from a population where ρ = .50, z = −2.39, p < .05.

True or False: Because people who meditate are less likely to express a negative emotion when they experience it, we can conclude that meditation is beneficial.
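Both z-tests above can be reproduced without the conversion table. A minimal sketch (Python, numpy only; the r and n values come from the worked examples above). Note that computing r′ and the standard error exactly, rather than rounding the intermediate terms as the slides do, shifts the first z slightly (about 3.02 rather than 2.97):

```python
# Minimal sketch reproducing the two z-tests above, with Fisher's r-to-r'
# transformation computed directly instead of looked up in a table.
import numpy as np

def fisher_rprime(r):
    """Fisher's r to r' (r to z) conversion."""
    return 0.5 * np.log((1 + r) / (1 - r))   # equivalently np.arctanh(r)

# 1) difference between two independent rs (chocolate example)
r1, n1 = 0.40, 151
r2, n2 = 0.05, 120
se_diff = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
z_diff = (fisher_rprime(r1) - fisher_rprime(r2)) / se_diff
print(f"z = {z_diff:.2f}")   # ~3.02 (the slide gets 2.97 after rounding the SE terms)

# 2) testing whether rho equals a specific value (meditation example)
r, rho, n = 0.30, 0.50, 103
se = 1 / np.sqrt(n - 3)
z_one = (fisher_rprime(r) - fisher_rprime(rho)) / se
print(f"z = {z_one:.2f}")    # ~-2.40 (the slide reports -2.39); beyond +/-1.96, reject H0
```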
Meditation for all – hooray!

factors that influence r
⚫ nonlinear relationship
⚫ restriction of range
⚫ presence of extreme scores
⚫ presence of heterogeneous subsamples

1. nonlinear relationship
Pearson's r is designed to capture linear relationships between variables, but not all relationships are linear – sometimes two variables can have very clear, predictable relationships, but in more complex ways. [Slides show two scatterplots: a curvilinear relationship, e.g., emotional arousal (X) and performance (Y), and a monotonic but non-linear relationship.]

2. restriction of range
[Slide shows a scatterplot of depression (X) against unhappiness (Y) across the full range of X, with r² = .74.] So what if everyone who scored higher than 10 on the X variable wasn't counted? (e.g., measuring depression – what if people who are very depressed don't leave the house, and therefore aren't tested in your sample?) [With X restricted to the lower range, the same data give r² = .003.] Restriction of range can also be caused by ceiling or floor effects, e.g., the relationship between social networking use and depression among adolescents, where the highest response category was "every day or almost every day".

3. presence of extreme scores
[Slides show scatterplots of adorableness of Smudge (X) against usefulness of stats (Y): with no extreme scores r² = .006; adding a single extreme data point inflates this to r² = .25, and a more extreme one to r² = .42.]

4. presence of heterogeneous subsamples
[Slide shows scatterplots containing distinct subgroups, with correlations of r = .22, p < .05; r = .31, p < .05; and r = .08, ns.]

factors that affect r
1. nonlinear relationship
2. restriction of range in one or both variables
3. presence of extreme scores
4. presence of heterogeneous subsamples – data in which the sample of observations could be subdivided into distinct groups on the basis of some other variable
(a short simulation illustrating restriction of range and extreme scores appears after the regression overview below)

regression

Regression and prediction
Let's assume there is an association (or correlation) between variables X and Y, e.g., job satisfaction and intentions to quit. If you know a value on variable X (e.g., job satisfaction), then you can use it to make an educated guess about what the corresponding value on variable Y (e.g., intentions to quit) will be.

regression and prediction
In the language of regression,
⚫ X is our predictor
⚫ Y is our criterion
The predicted value of Y is referred to as Ŷ (called "Y-hat"), although some texts refer to Y′ ("Y-prime").

regression and prediction
If X and Y are perfectly correlated (r = 1 or −1), then prediction is easy – just read the value off the scatterplot. [Slide shows a perfectly linear scatterplot of standard drinks (X) against intoxication score (Y).] In this case, if a person scores 6 on X, they will have a score of 13 on Y.

But correlations are rarely ever perfect! [Slide shows a scatterplot of standard drinks against intoxication score where the points do not fall on a single line.] r for this set of data does NOT equal 1 – so we do not have a perfect line representing the data. We can calculate a line that best represents the general relationship (or trend) of the data points. The "line of best fit" (aka the regression line) allows us to make predictions that are better than if we knew nothing about the relationship.
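As flagged in the list of factors above, here is a small simulation sketch (Python, numpy only; all numbers are simulated, not the lecture's data) of how restriction of range and a single extreme score change the size of Pearson's r:

```python
# Simulation sketch (invented data): restriction of range and extreme scores.
import numpy as np

rng = np.random.default_rng(1)

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

# restriction of range: a genuine X-Y relationship across the full range of X
x = rng.uniform(0, 30, 200)
y = 0.5 * x + rng.normal(0, 3, 200)            # Y rises with X, plus noise
keep = x <= 10                                  # only the low-X cases get sampled
print(f"full range:       r = {pearson_r(x, y):.2f}")
print(f"restricted range: r = {pearson_r(x[keep], y[keep]):.2f}")   # much smaller

# extreme score: near-zero correlation until one outlier is added
x2 = rng.uniform(0, 10, 30)
y2 = rng.normal(8, 2, 30)                       # unrelated to x2
print(f"no outlier:   r = {pearson_r(x2, y2):.2f}")
x3 = np.append(x2, 30)                          # one extreme point, high on both
y3 = np.append(y2, 30)
print(f"with outlier: r = {pearson_r(x3, y3):.2f}")                 # much larger
```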
The line of best fit can be expressed mathematically (the linear regression equation).

regression and prediction
Predictions won't be perfectly accurate – there will be errors. These errors are called residuals and are represented as
eᵢ = Yᵢ − Ŷᵢ
i.e., the error for one person or data point (i) is that person's real score on Y minus that person's predicted score on Y.

regression and prediction
The "line of best fit" (aka regression line) is the line that minimises the squared errors of prediction. This is called using the least squares criterion:
⚫ it ensures that the deviations of scores from the regression line are at a minimum (i.e., errors in prediction are at a minimum)
⚫ the quantity minimised is Σeᵢ² = Σ(Yᵢ − Ŷᵢ)², aka SSresidual
The least squares criterion means the line is placed where the difference between what you predict (the line) and the real scores (the actual data points) is as small as possible across scores. [Slide shows the drinks/intoxication scatterplot with the regression line: points above the line are "more intoxicated than predicted", points below it "less intoxicated than predicted", and the line is drawn so that Σ(Yᵢ − Ŷᵢ)² is as small as possible.]

regression and prediction
We can define the regression line mathematically using a formula – this is the formula that describes a straight line:
Ŷ = bX + a
where:
⚫ Ŷ = predicted value of Y
⚫ b = slope of the regression line – the rate at which Y changes with each 1-unit increase in X
⚫ a = intercept – the predicted value of Y when X = 0

Stretch question: Could this regression line represent the number of drinks and level of intoxication data shown before? HINT: Look at the intercept to decide. A. Yes  B. No

Other versions
Ŷ = bX + a is sometimes written Ŷ = a + bX or Ŷ = c + bX. Some of you may have learned different formulas in school, such as y = mx + b or y = mx + c. These all mean the same thing – just different symbols to represent the intercept and slope.

next (and last!) lecture
Regression
⚫ calculating slopes and intercepts
⚫ standardised regression equation
⚫ variance components
Info about the final exam.
Please complete the Course Evaluations.
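To close, a minimal sketch (Python; the drinks/intoxication numbers are invented for illustration, since the slide's data points are not readable from the transcript) showing the regression equation Ŷ = bX + a, the residuals, and the least squares criterion in action. The slope and intercept are obtained here with numpy's least-squares fit; how to calculate them by hand is next lecture's topic.

```python
# Minimal sketch (invented data): fit Y-hat = bX + a by least squares and
# check that any other line gives a larger sum of squared residuals.
import numpy as np

drinks       = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])    # X (predictor)
intoxication = np.array([1, 3, 2, 4, 6, 5, 7, 8, 8])    # Y (criterion)

b, a = np.polyfit(drinks, intoxication, 1)    # slope and intercept of the best-fit line
y_hat = b * drinks + a                        # predicted scores, Y-hat
residuals = intoxication - y_hat              # e_i = Y_i - Y-hat_i
ss_residual = np.sum(residuals ** 2)          # the quantity the line minimises

print(f"Y-hat = {b:.2f}X + {a:.2f}")
print(f"SS_residual = {ss_residual:.2f}")

# least squares criterion: a different line (here, a slightly steeper slope with
# the same intercept) necessarily produces a larger sum of squared errors
other_ss = np.sum((intoxication - ((b + 0.3) * drinks + a)) ** 2)
print(f"SS for a different line = {other_ss:.2f}  (larger, as expected)")
```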
