Lecture 1: Review 1 [August 29]
1A: What are statistics

Basic terms
- Population: all entities or individuals of interest
- Parameter: value that describes the population; ex: population variance, population mean
- Sample: subset of individuals from the population; N usually represents the sample size
- Estimate: value that describes the sample

Types of statistics
- Descriptive: summarize/describe properties of the sample or population
- Inferential: draw conclusions/inferences about population properties, based on sample data
- The appropriate analysis depends on the study design and research question, the types of variables, and whether the assumptions of the analyses are met

Variables
- Characteristic that varies across observations
- Independent variable (IV): predictor; the factor in an experimental design
- Dependent variable (DV): outcome/response; the variable being predicted

Types of research
- Correlational: IV measured by the researcher; good for ecological validity, bad for inferring causality; the IV and DV may have a relationship due to a confounding variable
- Experimental: IV manipulated by the researcher; good for inferring causality; lab settings may feel detached from the real world
- The statistical methods used to analyze the data may be the same for both
- Between-subjects design: each participant is in only one experimental condition
- Within-subjects design: each participant does more than one experimental condition; the DV is measured multiple times; vulnerable to practice and fatigue effects

Categorical variables
- Low level of measurement
- Nominal: classifies objects, not quantitative; asks whether two observations are the same or different on an attribute; dichotomous/binary: two categories/levels
- Ordinal: rank data; asks whether observation 1 has more or less of an attribute than observation 2; the relative standing of two observations on an attribute

Continuous variables
- High level of measurement
- Interval: rating data in equal distances; assigned numbers have meaningful units with constant sizes
- Ratio: interval with an absolute 0 point (lack of the attribute); ex: price, height, pulse rate

Lecture 2: Review 2 [September 3]
2A: What are statistics

Measures of central tendency
- Mean: average; add together all values, divide by sample size; vulnerable to outliers; used most commonly, unless there are outliers
- Median: value in the middle; with an odd number of values it's the middle value, with an even number average the two middle values; less vulnerable to outliers; used if there are extreme outliers
- Mode: value that occurs most frequently; not affected by extreme values; there may be no mode or many modes; used for numerical or categorical data

Measures of variation
- Give info on the spread or variability of the data values
- Range: difference between the largest and smallest observation; simplest measure; range = x_largest − x_smallest
- Variance: average of squared deviations of values from the mean; s² = SS / (N − 1), where SS = Σ(X − mean)²
- Standard deviation: shows variation about the mean; most commonly used; same units as the original data; s = √s²; interpretation is easier if scores are converted to z-scores

Shape of distribution
- Also describes how the data are distributed
- Left/negative skew: mean < median
- Symmetric: mean = median
- Right/positive skew: median < mean

Normal distribution
- The DV is often assumed to be continuous and normally distributed
- Mean = median = mode
- Density: height of the curve at different values of X: f(X) = 1/√(2π·SD²) · exp(−(X − mean)² / (2·SD²))
- Mean ± SD contains 68% of values; mean ± 2·SD contains 95%; mean ± 3·SD contains 99.7% (a numerical check appears below)
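Both the density formula and the 68/95/99.7 rule are easy to verify numerically. Below is a minimal Python sketch, not part of the lecture; the IQ-style mean and SD are illustrative choices:

```python
# Minimal sketch: check the normal-density formula and the 68/95/99.7 rule.
# The mean/SD values are illustrative, not from the course.
import numpy as np
from scipy import stats

mean, sd = 100, 15

# Density at X = 115 from the formula in the notes:
# f(X) = 1/sqrt(2*pi*SD^2) * exp(-(X - mean)^2 / (2*SD^2))
x = 115
density = 1 / np.sqrt(2 * np.pi * sd**2) * np.exp(-(x - mean)**2 / (2 * sd**2))
print(density, stats.norm.pdf(x, mean, sd))  # the two values agree

# Probability within mean ± k·SD (the 68/95/99.7 rule)
for k in (1, 2, 3):
    p = stats.norm.cdf(mean + k * sd, mean, sd) - stats.norm.cdf(mean - k * sd, mean, sd)
    print(k, round(p, 4))  # ≈ 0.6827, 0.9545, 0.9973
```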
Hypothesis testing
- Theories: systems of ideas used to describe a phenomenon
- Hypothesis: empirically testable statement derived from a theory
- Null Hypothesis Significance Testing (NHST): mostly used in experimental research

Steps of hypothesis testing
- Set up a hypothesis
  Null H0: no effect
  Alternative H1: the research/experimental hypothesis; non-directional H1: some effect; directional H1: specifies the effect's direction
- Choose an α-level (significance level)
  Decide the area of extreme scores that are unlikely if H0 is true: the proportion of times we're willing to accidentally reject H0, even if it's true
  Critical value: the cutoff sample score for α
- Examine the data and compute the appropriate test statistic
  Chi-square: for comparing frequency distributions
  z: comparing one mean if the population SD is known
  t: comparing one mean if the SD is unknown, or comparing two means
  F: comparing more than two means (ANOVA)
- Decide whether to reject or accept H0
  Compare the calculated value of the test statistic to a critical value; if the value is greater than or equal to the critical value, reject H0
  Alternatively, look at the p-value for the test statistic value. p-value: the proportion of data sets that would yield a result as extreme or more extreme than the observed result if H0 is true; if p < α, reject H0
- If H0 is rejected, we can conclude that there is a statistically significant effect in the population
  Statistically significant doesn't mean that we have a precise estimate of the effect, or that it's important/meaningful

Confidence interval
- Gives us info about the precision of our estimate
- Usually of the form (1 − α) × 100%; ex: α = .05 gives a 95% CI
- As the sample size increases, the estimate gets more precise
- As α decreases, CIs become larger/wider

Effect sizes
- Standardized measures of a treatment effect's magnitude: Pearson's r (or the correlation ratio squared), Cohen's d, omega (or omega squared), eta squared
- APA recommends reporting effect size measures

Errors of hypothesis testing
- Type 1: reject H0 when it's true
- Type 2: accept H0 when it's false
- Power: reject H0 when it's false; power = 1 − β
- Trade-off: a higher α results in a lower β (more power); β can't be controlled if we don't know enough about H1

Z-test
- Based on the sample mean, we test whether the population mean is equal to a hypothesized value
- Three assumptions are needed for the z-test: the variable X is normally distributed in the population; the population standard deviation is known; the observations in the sample are independent
- Can convert a score X to a standard score: Z = (X − mean) / SD of the original scores
- If the distribution of scores is normal, z-scoring transforms all scores to a standard normal distribution; the shape of the distribution DOESN'T change, only the units
- Scores on a standard normal distribution are easier to interpret; ex: Z = 1.22 is 1.22 SD above the mean
- z-score for a sample score: Z = (X − mean) / SD
- z-statistic for a sample mean: z = (X̄ − μ0) / standard error, where μ0 is the value of μ under H0 and standard error = SD / √N

Sampling distribution example
- Sample of 25 from the population of PSYC305 students; sample mean IQ of 110 and SD of 15
- H0: μ = 100; standard error = 15 / √25 = 3; z = (110 − 100) / 3 = 3.33
- If α = .05 (two-tailed), the critical value is about 1.96; if H0 is true, α/2 (2.5%) of z tests will be less than −1.96 and the same amount will be greater than 1.96; 2.5% × 2 = 5% (the desired level for α)
- H0 is rejected because the z statistic value (3.33) is greater than the critical value (1.96); it would also be rejected if the value were smaller than −1.96 (a code check follows below)
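The example above can be reproduced in a few lines. This is a minimal sketch with the values taken from the example; scipy supplies the critical value and p-value:

```python
# Minimal sketch reproducing the z-test example above.
import math
from scipy import stats

n, sample_mean, sd, mu0 = 25, 110, 15, 100
se = sd / math.sqrt(n)              # standard error = 15 / 5 = 3
z = (sample_mean - mu0) / se        # (110 - 100) / 3 = 3.33

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)        # two-tailed critical value ≈ 1.96
p_value = 2 * (1 - stats.norm.cdf(abs(z)))    # two-tailed p-value

print(round(z, 2), round(z_crit, 2), round(p_value, 4))
# z = 3.33 > 1.96 (p < .05), so H0: μ = 100 is rejected
```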
Lecture 3: Review 3 [September 5]
3A: T-test for one mean

Basics
- Based on the sample mean, we test whether the population mean is equal to a hypothesized value
- Two prior assumptions for the t-test: the variable X is normally distributed in the population; the observations in the sample are independent
- Obtained by replacing the population SD with the sample SD; because of the replacement, the t-test doesn't follow the standard normal distribution, but the t-distribution

T-distribution
- Discovered by William Gosset in 1908
- Varies in shape according to the degrees of freedom: df = N − 1
- The t-distribution approaches the normal distribution as df becomes large; very close for df > 30

Calculations
- Calculate the value of the t-statistic for your sample mean
- Examine a table of t values and find the critical value for the desired α-level, depending on the df; ex: with df = 24, the critical t value is 2.064 at α = .05
- If the t value has an absolute value greater than or equal to the critical value, reject H0 at level α
- t = (X̄ − μ0) / s_X̄, where s_X̄ = sample SD / √N and μ0 = the value of μ under H0 (the hypothesized mean)

Example
- Sample of 25 from the population of PSYC305 students; sample mean IQ of 110 and sample SD of 14
- H0: μ = 100; the critical value for the t-test at α = .05 with df = 24 is 2.064
- s_X̄ = 14 / √25 = 2.8; t = (110 − 100) / 2.8 = 3.57
- H0 is rejected because the t value (3.57) is greater than the critical t value (2.064); we can conclude that the mean IQ of PSYC305 students is significantly different from the general population's: t(24) = 3.57, p < .05

3B: T-test for two means

Basics
- Tests whether two unknown population means are different from each other, based on their samples; H0: μ1 = μ2
- Independent samples: between-subjects designs; ex: each participant goes through only one experimental condition
- Correlated samples: known as repeated measures; ex: a participant may go through both experimental conditions
- Three prior assumptions: both populations are normally distributed; the SDs of the populations are the same; each subject is independent

Calculation
- t = D / s_D, where D = X̄1 − X̄2 is the difference between the two sample means and s_D is the estimated standard error of the difference D
- s_D = √(s²/N1 + s²/N2), where s² = [s1²(N1 − 1) + s2²(N2 − 1)] / df = (SS1 + SS2) / (df1 + df2), with df = (N1 − 1) + (N2 − 1)
- Here, s² is a pooled estimate of the within-group variance

Cohen's d
- The observed mean difference (D = X̄1 − X̄2) is in the units of the original variable X
- Mean differences are not comparable across studies using different measurement instruments
- d = (X̄1 − X̄2) / s, where s = √s²

Effect sizes
- Can also convert a t value to an r value: r = √(t² / (t² + df))

Confidence intervals
- Single-sample z-test: CI = mean ± [z value for the confidence level × (SD / √sample size)]; the t formulas above are sketched in code below
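As a numerical check on these formulas, here is a minimal sketch. The one-sample numbers come from the IQ example above; the second group in the two-sample call is hypothetical:

```python
# Minimal sketch: one-sample t-test from the example, plus a pooled
# two-sample t-test from summary statistics (group 2 is hypothetical).
import math
from scipy import stats

# One-sample t: mean 110, sample SD 14, N = 25, H0: mu = 100
n, mean, s, mu0 = 25, 110, 14, 100
sx = s / math.sqrt(n)                      # 14 / 5 = 2.8
t = (mean - mu0) / sx                      # 3.57
df = n - 1
t_crit = stats.t.ppf(1 - 0.05 / 2, df)     # 2.064 for df = 24
p = 2 * (1 - stats.t.cdf(abs(t), df))
print(round(t, 2), round(t_crit, 3), round(p, 4))   # reject H0

# Two-sample (independent) t from summary stats; with equal_var=True scipy
# pools the variances, matching the s² formula in the notes.
res = stats.ttest_ind_from_stats(mean1=110, std1=14, nobs1=25,
                                 mean2=102, std2=13, nobs2=25,
                                 equal_var=True)

s2_pooled = (14**2 * 24 + 13**2 * 24) / 48   # pooled variance, s²
d = (110 - 102) / math.sqrt(s2_pooled)       # Cohen's d ≈ 0.59
print(res.statistic, res.pvalue, round(d, 2))
```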
Lecture 4: ANOVA part 1 [September 10]
4A: One-way ANOVA

Terminology
- Independent variable: also called a factor or treatment variable; levels/treatments: the different values/categories of the IV; the variable that's chosen/controlled by the researcher
- Single-factor/one-way design: a single IV with 2 or more levels; includes the one-way independent-groups design and the one-way repeated-measures design
- Factorial designs: more than 1 IV, each with two or more levels; includes the two-way independent-groups design; called a 2x2 when there are two IVs with two levels each

Basics of one-way ANOVA
- Used to test whether the means of k (> 2) populations differ significantly; H0: μ1 = μ2 = …; H1: at least one of the means is different
- Three prior requirements/assumptions: the population distribution of the DV is normal in each group; homogeneity of variance: the variances of the population distributions are equal for each group; independence of observations

One-way ANOVA vs. several t-tests
- Using several t-tests would lead to an inflated familywise Type 1 error rate
- Familywise Type 1 error: the probability of making at least one Type 1 error in the family of tests if H0 is true; the chance of at least one significant difference is > α
  α_fw = 1 − Pr(no error); with 2 tests, α_fw = 1 − (1 − α)²
  ex: 3 tests with an α-level of .05: Pr(no error) = (1 − α)(1 − α)(1 − α) = (1 − α)³, so α_fw = 1 − (1 − .05)³ = 1 − .95³ = 1 − .857 = .14
- ANOVA usually consists of two types of tests: an overall F-test to see whether H0 is false, and post-hoc tests to look at pairs of groups, interpreted only if the F-test is significant

One-way ANOVA calculation
- ANOVA (ANalysis Of VAriance) has three steps: divide the variance observed in the data into parts resulting from different sources; assess the relative magnitude of the different parts of the variance; examine whether a particular part of the variance is greater than expected under H0
- Variance explained by the model (MSm): variance between groups that's due to the IV or its different levels/treatments; M stands for Model; MS = Mean Squares, the "mean" of a sum of squared deviations
- Residual variance (MSr): variance within groups; within each group, there's some random variation in the subjects' scores
- If your F value is greater than or equal to the critical value, you reject H0; alternatively, if the p-value < .05 (for example), you reject H0
- If the ANOVA has only two groups, either a t or an F test can be used to test for a significant difference between the means; when the number of groups is two, F = t²

F-statistic
- F = MSm / MSr; if the F statistic is greater than the critical F value, we reject the H0 that the group means are equal in the population
- If the group means differ from each other, MSm tends to be larger, which makes F large too
- F follows an F-distribution that varies in shape according to dfM and dfR; dfM: between-group/model degrees of freedom; dfR: within-group/residual degrees of freedom
- The F-distribution is a right-skewed distribution used in ANOVA; when referencing it, dfM is given first

SS computations
- Needs sample estimates of MSm and MSr; sample variance s² = SS / df = SS / (N − 1)
- Total SS (total variation): SSt = SSm + SSr
- SSm: variation due to the model, or between-group variation; variation between the sample means across levels of the IV; if there's more variation in the population means, we expect more variability between the sample means; SSm = Σ Ng(X̄g − X̄)² = sum of [group size × (group mean − grand mean)²]
- SSr: residual or within-group variation; variation among the observations within a particular group, not explained by the IV; SSr = ΣΣ(Xig − X̄g)² = sum of (observation i in group g − group mean)²

Calculating variance estimates
- MSm = SSm / dfM = SSm / (k − 1), where k = number of groups (between)
- MSr = SSr / dfR = SSr / (N − k), where N = total number of observations (within)
- MSt = SSt / dfT = SSt / (N − 1)
- F = MSm / MSr; then compare F to the critical value for an F statistic at the chosen α-level

Effect sizes
- Pearson's correlation coefficient: r = √(SSm / SSt)
- Eta squared: η² = SSm / SSt; biased high (tends to overestimate the amount of variance in the DV explained by the IVs)
- Omega squared: ω² = (SSm − dfM·MSr) / (SSt + MSr); needs to be reported even if it's negative; small = .01, medium = .06, large = .14
- Both eta squared and omega squared estimate the proportion of variance in the DV explained by the IVs: explained variation / total variation (the full computation is sketched below)
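To make the SS decomposition concrete, here is a minimal sketch with made-up data for k = 3 groups, cross-checked against scipy's one-way ANOVA:

```python
# Minimal sketch of the SS decomposition above, with hypothetical data.
import numpy as np
from scipy import stats

groups = [np.array([4., 5., 6., 5.]),     # hypothetical scores, group 1
          np.array([7., 8., 6., 7.]),     # group 2
          np.array([9., 8., 10., 9.])]    # group 3

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), len(all_scores)

# SSm = sum of Ng * (group mean - grand mean)^2  (between groups)
ss_m = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSr = sum of (score - group mean)^2 within each group
ss_r = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_t = ((all_scores - grand_mean) ** 2).sum()   # equals ss_m + ss_r

df_m, df_r = k - 1, N - k
ms_m, ms_r = ss_m / df_m, ss_r / df_r
F = ms_m / ms_r

eta_sq = ss_m / ss_t
omega_sq = (ss_m - df_m * ms_r) / (ss_t + ms_r)

# Cross-check against scipy's built-in one-way ANOVA
F_scipy, p = stats.f_oneway(*groups)
print(round(F, 3), round(F_scipy, 3), round(p, 4),
      round(eta_sq, 3), round(omega_sq, 3))
```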
Lecture 5: ANOVA part 3 [September 19]
5A: ANOVA for more than two means

Basics
- One-way ANOVA is used to test whether the means differ significantly; H0: all means are the same
- Rejecting H0 means that at least one pair of means is different

Omnibus test
- The F ratio/test gives the global effect of the IV on the DV, but doesn't tell us which pair of means is different; it's called an omnibus/overall test
- Post-hoc tests/comparisons: used to find out which means are different; decided upon after the experiment; in ANOVA, used if 3 or more means are compared

Scheffé's test
- Can be used if the groups have different sample sizes
- Less sensitive to departures from the assumptions of normality and equal variances
- The most conservative test (very unlikely to reject H0); a good choice to avoid Type 1 errors, but it has lower power to detect differences
- Uses an F ratio to test for a significant difference between any two means
- Scheffé's test critical value: critical value of F × (k − 1)
- Scheffé's test residual SS: the SSr from the main analysis
- Calculate SScomparison, MScomparison and Fcomparison, comparing Fcomparison to the critical value:
  SScomparison = (X̄g1 − X̄g2)² / (1/Ng1 + 1/Ng2)
  MScomparison = SScomparison / 1 = SScomparison
  Fcomparison = MScomparison / MSr
  if Fcomparison is greater, conclude that the pair of means is significantly different

Tukey's HSD test
- Typically used if the groups have equal sample sizes and all comparisons represent simple differences between two means
- Uses the studentized range statistic: Q = (X̄g1 − X̄g2) / √(MSr / Ng)
- If the Q value is greater than or equal to the critical value (Qcrit), reject H0; equivalently, reject H0 if (X̄g1 − X̄g2) > Qcrit × √(MSr / Ng)
- HSD = Qcrit × √(MSr / Ng): the minimum absolute difference between two means required for a significant difference
- Perform the ANOVA, calculate the differences in means, find Qcrit, and calculate the HSD; two means differ significantly if their mean difference is larger than the HSD (see the sketch below)
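A minimal sketch of Tukey's HSD, reusing the hypothetical equal-N groups from the ANOVA sketch; Qcrit comes from scipy's studentized-range distribution, and scipy's own `tukey_hsd` serves as a cross-check:

```python
# Minimal sketch of Tukey's HSD with hypothetical equal-N groups.
import numpy as np
from scipy import stats

groups = [np.array([4., 5., 6., 5.]),
          np.array([7., 8., 6., 7.]),
          np.array([9., 8., 10., 9.])]
k = len(groups)
n_g = len(groups[0])                  # equal group sizes assumed
N = k * n_g

# With equal N, MSr equals the mean of the group variances
ms_r = np.mean([g.var(ddof=1) for g in groups])
df_r = N - k

q_crit = stats.studentized_range.ppf(0.95, k, df_r)   # Qcrit at alpha = .05
hsd = q_crit * np.sqrt(ms_r / n_g)
print("HSD =", round(hsd, 3))

# A pair of means differs significantly if |difference| > HSD
for i in range(k):
    for j in range(i + 1, k):
        diff = abs(groups[i].mean() - groups[j].mean())
        print(i, j, round(diff, 2), "significant" if diff > hsd else "ns")

# scipy provides this test directly (SciPy >= 1.8)
print(stats.tukey_hsd(*groups))
```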
APA style guidelines
- A 1-2 sentence overview of the analyses, including the IV and DV; details of the study design usually show up earlier, but are included here
- A description of the overall F-test results; the F statistic must have its associated df, statistic value, and p-value; include an effect size; ex: F(2, 21) = 5.9, p = .01, ω² = .29
- A description of the pattern of means: the differences among groups, and whether significant differences were found; the reader should be able to understand the description without looking at the numbers; means and SDs should be reported, showing which differences are significant; sometimes effect sizes can be included
- A conceptual conclusion that describes how you understand the results
- Everything written in the active past tense (ex: "we found")
- Concise, with nothing extra
- 2 decimal places for numbers, 3 for p-values

Lecture 6: ANOVA part 4 [September 24]
6A: ANOVA for more than two means

Assumptions
- H0 and the assumptions determine the shape of the sampling distribution
- We want to reject H0 or not based on whether it's true or not; if H0 is true but some other assumption doesn't hold, we may not know the shape of the sampling distribution
- The sampling distribution of the F ratio under H0 may follow a different shape; the p-value/critical F value is then incorrect, giving low power or Type 1 error rates not equal to α
- The assumptions about normality/equal variances are about the population

Assessing normality
- Skewness > 0 and median < mean: positive/right skew; skewness < 0 and mean < median: negative/left skew
- Kolmogorov-Smirnov test: compares the sample scores to a set of scores generated from a normal distribution with the sample mean and SD; less powerful than Shapiro-Wilk
- Shapiro-Wilk test: tests normality in general, with unspecified mean and variance; more powerful, but only for testing normality
- If a test is significant, reject the H0 that the distribution of the variable is normal: the assumption is likely violated; these tests are limited because it's easy to reject H0 with a large sample (these checks are sketched in code at the end of this section)
- Separate histograms for each group: look for obvious visual signs of non-normality and asymmetry
- Evaluate a normal Q-Q plot: compute the percentile rank for each score, sorting from smallest to largest; calculate theoretical/expected z-scores from the percentile ranks; calculate the actual z-scores; plot the observed vs. expected z-scores

Assessing homogeneity of variance
- Hartley's Fmax test: calculate the sample variance for each group, finding the smallest and largest; Fmax = max variance / min variance; easy, but assumes that each group has an equal number of observations (equal N)
- Levene's test: measures how much each score deviates from its group mean, Uij = |Yij − Ȳj| = |score − mean|; if statistically significant (e.g., p < .05), we can conclude that the variances are significantly different; very easy to obtain significant results when the sample size is large
- Brown-Forsythe test: measures how much each score deviates from its group median, Uij = |Yij − Ỹj| = |score − median|; if statistically significant, we can conclude that the variances are significantly different; works better than Levene's with non-normal data or outliers

Without normality or homogeneity
- Can use a data transformation to make the data less skewed or the variances more homogeneous: √Y = weak, log(Y) = mild, 1/Y = strong
- Kruskal-Wallis ANOVA: if data transformation doesn't help meet the assumptions
- Brown-Forsythe ANOVA: if only homogeneity of variance is violated

Assessing independence of observations
- Independent observations: the value of one observation gives no clue about the value of the others; the most important assumption; the F test can't be fixed easily, and the effect on the results can't be predicted
- No test exists to check for non-independence
- Knowing how the data were collected can help evaluate this assumption; ex: pairs of participants may give less independent results
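The assumption checks above map directly onto scipy. Below is a minimal sketch with randomly generated, hypothetical groups; note that in scipy the Brown-Forsythe variant is `levene(..., center='median')`:

```python
# Minimal sketch of the assumption checks above, with hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(10, 2, 30),   # two groups with equal variances...
          rng.normal(12, 2, 30),
          rng.normal(11, 5, 30)]   # ...and one with a larger variance

# Normality: Shapiro-Wilk per group (significant p => reject normality)
for g in groups:
    w, p = stats.shapiro(g)
    print("Shapiro-Wilk p =", round(p, 4))

# Homogeneity of variance: Levene (deviations from group means) vs.
# Brown-Forsythe (deviations from group medians)
stat, p = stats.levene(*groups, center='mean')
print("Levene p =", round(p, 4))
stat, p = stats.levene(*groups, center='median')
print("Brown-Forsythe p =", round(p, 4))

# Fallback when assumptions fail: Kruskal-Wallis ANOVA
stat, p = stats.kruskal(*groups)
print("Kruskal-Wallis p =", round(p, 4))
```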
