Chapter 9: Introduction to the t Statistic
Document Details
Authors: Frederick J. Gravetter and Larry B. Wallnau
Summary
This document is a set of lecture slides on the t-statistic. It covers topics such as when to use a t-statistic instead of a z-score, performing hypothesis tests with t-statistics, effect size calculations (like Cohen's d and r²), and confidence intervals. The presentation clarifies the use of the t-distribution in statistical inference when the population standard deviation is unknown.
Full Transcript
Chapter 9: Introduction to the t Statistic
PowerPoint Lecture Slides
Essentials of Statistics for the Behavioral Sciences, Eighth Edition
by Frederick J. Gravetter and Larry B. Wallnau

Chapter 9 Learning Outcomes
1. Know when to use the t statistic instead of a z-score hypothesis test
2. Perform a hypothesis test with the t statistic
3. Evaluate effect size by computing Cohen's d, the percentage of variance accounted for (r²), and/or a confidence interval

Tools You Will Need
- Sample standard deviation (Chapter 4)
- Standard error (Chapter 7)
- Hypothesis testing (Chapter 8)

9.1 Review: Hypothesis Testing with z-Scores
- The sample mean (M) estimates (and approximates) the population mean (μ).
- The standard error describes how much difference is reasonable to expect between M and μ:
  σM = σ/√n, or equivalently σM = √(σ²/n)

z-Score Statistic
- Use the z-score statistic to quantify inferences about the population:
  z = (M - μ) / σM = (obtained difference between data and hypothesis) / (standard distance between M and μ)
- Use the unit normal table to find the critical region if z-scores form a normal distribution:
  - when n ≥ 30, or
  - when the original distribution is approximately normally distributed

Problem with z-Scores
- The z-score requires more information than researchers typically have available.
- It requires knowledge of the population standard deviation σ.
- Researchers usually have only the sample data available.

Introducing the t Statistic
- The t statistic is an alternative to z; t might be considered an "approximate" z.
- The estimated standard error (sM) is used in place of the real standard error when the value of σM is unknown.

Estimated Standard Error
- Use s² to estimate σ².
- Estimated standard error:
  sM = s/√n, or equivalently sM = √(s²/n)
- The estimated standard error is used as an estimate of the real standard error when the value of σM is unknown.

The t Statistic
- The t statistic uses the estimated standard error in place of σM:
  t = (M - μ) / sM
- The t statistic is used to test hypotheses about an unknown population mean μ when the value of σ is also unknown.

Degrees of Freedom
- Computation of the sample variance requires computation of the sample mean first.
  - Only n - 1 scores in a sample are independent.
  - Researchers call n - 1 the degrees of freedom.
- Degrees of freedom:
  - noted as df
  - df = n - 1

Figure 9.1: Distributions of the t statistic

The t Distribution
- A family of distributions, one for each value of degrees of freedom
- Approximates the shape of the normal distribution:
  - flatter than the normal distribution
  - more spread out than the normal distribution
  - more variability ("fatter tails") in the t distribution
- Use the table of values of t in place of the unit normal table for hypothesis tests.

Figure 9.2: The t distribution for df = 3

9.2 Hypothesis Tests with the t Statistic
The one-sample t test statistic (assuming the null hypothesis is true):
  t = (M - μ) / sM = (sample mean - population mean) / (estimated standard error), which should be near 0 when H0 is true

Figure 9.3: Basic experimental situation for the t statistic

Hypothesis Testing: Four Steps
1. State the null and alternative hypotheses and select an alpha level.
2. Locate the critical region using the t distribution table and df.
3. Calculate the t test statistic.
4. Make a decision regarding H0 (the null hypothesis).

Figure 9.4: Critical region in the t distribution for α = .05 and df = 8

Assumptions of the t Test
- The values in the sample are independent observations.
- The population sampled must be normal.
  - With large samples, this assumption can be violated without affecting the validity of the hypothesis test.
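The slides stop at the formulas, so here is a minimal runnable sketch of the one-sample t test just described. The scores, the hypothesized mean (μ = 12), and the critical value 2.306 (the two-tailed t-table entry for α = .05 and df = 8, the situation shown in Figure 9.4) are illustrative assumptions, not values from the slides.

```python
import math

# Hypothetical sample of n = 9 scores (not from the slides)
sample = [12, 15, 11, 14, 16, 13, 12, 15, 14]
mu_0 = 12                                    # hypothesized population mean under H0

n = len(sample)
df = n - 1                                   # degrees of freedom: df = n - 1

M = sum(sample) / n                          # sample mean
s2 = sum((x - M) ** 2 for x in sample) / df  # sample variance (divides by df, not n)
s_M = math.sqrt(s2 / n)                      # estimated standard error: sM = sqrt(s^2 / n)

t = (M - mu_0) / s_M                         # t = (M - mu) / sM

# Critical region from the t distribution table (two-tailed, alpha = .05, df = 8)
t_critical = 2.306                           # assumed table value for this df

# Decision about H0
decision = "reject H0" if abs(t) > t_critical else "fail to reject H0"
print(f"M = {M:.2f}, sM = {s_M:.3f}, t({df}) = {t:.2f} -> {decision}")
```

With these hypothetical scores the result is t(8) = 2.80, which falls in the critical region, so H0 would be rejected.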
Learning Check
When n is small (less than 30), the t distribution ______.
A. is almost identical in shape to the normal z distribution
B. is flatter and more spread out than the normal z distribution
C. is taller and narrower than the normal z distribution
D. cannot be specified, making hypothesis tests impossible

Learning Check - Answer
B. is flatter and more spread out than the normal z distribution

Learning Check
Decide if each of the following statements is True or False.
- T/F: By chance, two samples selected from the same population have the same size (n = 36) and the same mean (M = 83). That means they will also have the same t statistic.
- T/F: Compared to a z-score, a hypothesis test with a t statistic requires less information about the population.

Learning Check - Answers
- False. The two t values are unlikely to be the same; the variance estimates (s²) differ between samples.
- True. The t statistic does not require the population standard deviation; the z-test does.

9.3 Measuring Effect Size
- A hypothesis test determines whether the treatment effect is greater than chance.
  - No measure of the size of the effect is included.
  - A very small treatment effect can be statistically significant.
- Therefore, results from a hypothesis test should be accompanied by a measure of effect size.

Cohen's d
- The original equation included population parameters.
- Estimated Cohen's d is computed using the sample standard deviation:
  estimated d = (mean difference) / (sample standard deviation) = (M - μ) / s

Figure 9.5: Distribution for Examples 9.1 & 9.2

Percentage of Variance Explained
- Determining the amount of variability in scores explained by the treatment effect is an alternative method for measuring effect size:
  r² = (variability accounted for) / (total variability) = t² / (t² + df)
- r² = 0.01: small effect
- r² = 0.09: medium effect
- r² = 0.25: large effect

Figure 9.6: Deviations with and without the treatment effect

Confidence Intervals for Estimating μ
- An alternative technique for describing effect size
- Estimates μ from the sample mean (M)
- Based on the reasonable assumption that M should be "near" μ
- The interval constructed defines "near" based on the estimated standard error of the mean (sM).
- We can confidently estimate that μ should be located in the interval.

Figure 9.7: t distribution with df = 8

Confidence Intervals for Estimating μ (continued)
- Every sample mean has a corresponding t:
  t = (M - μ) / sM
- Rearrange the equation, solving for μ:
  μ = M ± t(sM)

Confidence Intervals for Estimating μ (continued)
- In any t distribution, values pile up around t = 0.
- For any α, we know that a (1 - α) proportion of t values fall between ±t for the appropriate df.
- E.g., with df = 9, 90% of t values fall between ±1.833 (from the t distribution table, α = .10).
- Therefore we can be 90% confident that a sample mean corresponds to a t in this interval.

Confidence Intervals for Estimating μ (continued)
For any sample mean M with estimated standard error sM:
1. Pick the appropriate degree of confidence (80%? 90%? 95%? 99%?); here, 90%.
2. Use the t distribution table to find the value of t (for df = 9 and α = .10, t = 1.833).
3. Solve the rearranged equation: μ = M ± 1.833(sM).
The resulting interval is centered around M, and we are 90% confident that μ falls within this interval.
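Continuing the same hypothetical data from the t-test sketch earlier in the chapter, this is a minimal sketch of the two effect-size measures (estimated Cohen's d and r²) and of a 90% confidence interval computed from μ = M ± t(sM). The table value 1.860 is the assumed two-tailed entry for α = .10 and df = 8; the slides' own worked numbers use df = 9 and t = 1.833.

```python
import math

# Same hypothetical sample as in the t-test sketch (n = 9)
sample = [12, 15, 11, 14, 16, 13, 12, 15, 14]
mu_0 = 12

n = len(sample)
df = n - 1
M = sum(sample) / n
s2 = sum((x - M) ** 2 for x in sample) / df
s = math.sqrt(s2)                            # sample standard deviation
s_M = math.sqrt(s2 / n)                      # estimated standard error
t = (M - mu_0) / s_M

# Estimated Cohen's d = mean difference / sample standard deviation
d = (M - mu_0) / s

# Percentage of variance accounted for: r^2 = t^2 / (t^2 + df)
r2 = t ** 2 / (t ** 2 + df)

# 90% confidence interval: mu = M +/- t(sM), with t taken from the table
# (assumed two-tailed value 1.860 for alpha = .10 and df = 8)
t_table = 1.860
lower, upper = M - t_table * s_M, M + t_table * s_M

print(f"estimated d = {d:.2f}, r2 = {r2:.2f}")
print(f"90% CI for mu: {lower:.2f} to {upper:.2f}")
```

With these hypothetical numbers, d ≈ 0.93 and r² ≈ 0.49, the latter well above the r² = 0.25 benchmark for a large effect listed above; the interval is centered on M, as the slides note.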
Factors Affecting the Width of a Confidence Interval
- Confidence level desired:
  - More confidence desired increases the interval width.
  - Less confidence acceptable decreases the interval width.
- Sample size:
  - A larger sample gives a smaller standard error and a smaller interval.
  - A smaller sample gives a larger standard error and a larger interval.

In the Literature
- Report whether (or not) the test was "significant":
  - "significant": H0 rejected
  - "not significant": failed to reject H0
- Report the t statistic value including df, e.g., t(12) = 3.65.
- Report the significance level, either:
  - p < alpha, e.g., p < .05, or
  - the exact probability, e.g., p = .023.

9.4 Directional Hypotheses and One-Tailed Tests
- The non-directional (two-tailed) test is most commonly used.
- However, a directional test may be used for particular research situations.
- The four steps of the hypothesis test are carried out as usual.
  - The critical region is defined in just one tail of the t distribution.

Figure 9.8: Example 9.4, one-tailed critical region

Learning Check
The results of a hypothesis test are reported as follows: t(21) = 2.38, p < .05. What was the statistical decision and how big was the sample?
A. The null hypothesis was rejected using a sample of n = 21.
B. The null hypothesis was rejected using a sample of n = 22.
C. The null hypothesis was not rejected using a sample of n = 21.
D. The null hypothesis was not rejected using a sample of n = 22.

Learning Check - Answer
B. The null hypothesis was rejected using a sample of n = 22 (p < .05 indicates rejection, and df = n - 1 = 21 means n = 22).

Learning Check
Decide if each of the following statements is True or False.
- T/F: Sample size has a great influence on measures of effect size.
- T/F: When the value of the t statistic is near 0, the null hypothesis should be rejected.

Learning Check - Answers
- False. Measures of effect size are not influenced to any great extent by sample size.
- False. When the value of t is near 0, the difference between M and μ is also near 0, so there is no basis for rejecting H0.

Equations? Concepts? Any Questions?
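To close, a small sketch of the one-tailed decision rule and the reporting style described above (e.g., t(21) = 2.38, p < .05). The directional hypothesis, the data, and the one-tailed critical value 1.860 (assumed table entry for α = .05 with df = 8) are hypothetical.

```python
import math

# Hypothetical data with a directional hypothesis, H1: mu > 12
sample = [12, 15, 11, 14, 16, 13, 12, 15, 14]
mu_0 = 12

n = len(sample)
df = n - 1
M = sum(sample) / n
s2 = sum((x - M) ** 2 for x in sample) / df
s_M = math.sqrt(s2 / n)
t = (M - mu_0) / s_M

# One-tailed test: the entire alpha = .05 sits in one tail of the t distribution,
# so the assumed critical value for df = 8 is 1.860
t_critical = 1.860

if t > t_critical:
    print(f"Significant: t({df}) = {t:.2f}, p < .05, one-tailed; reject H0.")
else:
    print(f"Not significant: t({df}) = {t:.2f}, p > .05, one-tailed; fail to reject H0.")
```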