Summary

This study guide provides a chapter-by-chapter overview of important concepts that students might find on their test (303-1 test 3). The study guide covers different ways to describe results (comparing percentages, correlating scores, comparing means), explains frequency distributions, and describes measures of central tendency and variability.

Full Transcript

Chapter 12

Contrast three ways of describing results:
○ Comparing group percentages: This method involves analyzing the proportion of subjects in different groups that exhibit a certain characteristic, allowing for straightforward comparisons between groups.
○ Correlating scores: This technique assesses the relationship between two variables, indicating how one variable may predict or relate to another, often quantified using correlation coefficients.
○ Comparing group means: This approach involves calculating the average scores of different groups to determine whether there are significant differences, typically using t tests or ANOVA.

Describe a frequency distribution, including the various ways to display a frequency distribution:
○ A frequency distribution is a summary of how often each score occurs in a dataset, providing insight into the data's shape and spread.
○ Graphical representations: Common ways to display frequency distributions include:
■ Pie charts: show proportions of categories as slices of a pie.
■ Bar graphs: represent categorical data with rectangular bars, where the length of each bar is proportional to the value it represents.
■ Frequency polygons: a line graph that connects points representing the frequency of each score.
■ Histograms: similar to bar graphs but used for continuous data, showing the frequency of scores within specified ranges.

Describe the measures of central tendency and variability:
○ Central tendency: measures that summarize a dataset with a single value representing the center of the data.
■ Mean (M): the average score, calculated by summing all scores and dividing by the number of scores.
■ Median (Mdn): the middle score that divides the dataset in half, particularly useful for ordinal data.
■ Mode: the most frequently occurring score, applicable for nominal data.
○ Variability: measures that describe the spread of scores in a dataset.
■ Standard deviation (SD): indicates the average distance of scores from the mean, providing insight into data spread.
■ Variance (s²): the square of the standard deviation, representing the degree of spread in the dataset.
■ Range: the difference between the highest and lowest scores, offering a simple measure of variability.

Define a correlation coefficient:
○ A correlation coefficient quantifies the strength and direction of a relationship between two variables, ranging from -1.0 (perfect negative correlation) to +1.0 (perfect positive correlation).
○ Interpretation: values close to 0 indicate weak relationships, while values near -1 or +1 indicate strong relationships.

Define effect size:
○ Effect size quantifies the strength of a relationship between variables, providing context beyond p-values, which only indicate statistical significance.
○ Importance: understanding effect size helps researchers assess the practical significance of their findings.

Describe the use of a regression equation and a multiple correlation to predict behavior:
○ Regression equation: used to predict the value of one variable based on another, expressed as Y = a + bX, where:
■ Y: the criterion variable (what you want to predict)
■ X: the predictor variable (known value)
■ a: the Y-intercept, the value of Y when X is zero
■ b: the slope, indicating how much Y changes for a unit change in X
○ Multiple correlation: involves predicting a variable based on multiple predictors, enhancing prediction accuracy (see the worked sketch below).
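The sketch below is an optional, hypothetical illustration of the Chapter 12 formulas above: the measures of central tendency and variability, the correlation coefficient, and the regression equation Y = a + bX. The scores, the predictor values, and the use of Python's statistics module and scipy are assumptions made for this example only, not part of the course material.

import statistics
from scipy import stats

scores = [3, 5, 5, 6, 8, 9, 10]                  # hypothetical scores
print("Mean (M):", statistics.mean(scores))
print("Median (Mdn):", statistics.median(scores))
print("Mode:", statistics.mode(scores))
print("SD:", statistics.stdev(scores))           # sample standard deviation
print("Variance (s^2):", statistics.variance(scores))
print("Range:", max(scores) - min(scores))

x = [1, 2, 3, 4, 5, 6, 7]                        # hypothetical predictor variable
r, p = stats.pearsonr(x, scores)                 # correlation coefficient, -1.0 to +1.0
print("Pearson r:", round(r, 2))

fit = stats.linregress(x, scores)                # regression line Y = a + bX
print("Y =", round(fit.intercept, 2), "+", round(fit.slope, 2), "* X")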
Discuss how a partial correlation addresses the third-variable problem:
○ Partial correlation: tells us what the correlation between two variables is if you control for a third variable (hold it constant).
○ Third-variable problem: an uncontrolled third variable may be responsible for the relationship between the two variables of interest.

Summarize the purpose of structural equation models:
○ Structural equation models describe the expected pattern of relationships among quantitative non-experimental variables.
○ Path diagrams: visual representations of the models being tested. They show theoretical causal paths among the variables and are used to make "models" of relationships among variables. Arrows lead from variable to variable, and the statistics provide path coefficients.

Chapter 13

Explain how researchers use inferential statistics to evaluate sample data:
○ Purpose: inferential statistics allow researchers to make conclusions about a population based on sample data, helping to evaluate hypotheses and draw inferences.

Distinguish between the null hypothesis and the research hypothesis:
○ Null hypothesis (H0): posits that there is no effect or difference in the population, serving as a baseline for testing.
○ Research hypothesis (H1): suggests that there is a significant effect or difference.

Discuss probability in statistical inference, including the meaning of statistical significance:
○ Statistical significance: determined when the probability of observing the data under the null hypothesis is very low, typically set at an alpha level of 0.05.

Describe the t test and explain the difference between one-tailed and two-tailed tests:
○ The t test is used to determine whether there are significant differences between the means of two groups, calculating a t value based on the difference between group means and the variability within the groups (see the sketch below).
○ One-tailed: used when the research hypothesis specifies a direction of difference between the groups.
○ Two-tailed: chosen when the research hypothesis does not specify a predicted direction of difference.

Describe the F test, including systematic variance and error variance:
○ The F test is used when comparing more than two groups, assessing systematic variance (between-group differences) against error variance (within-group differences). A large F ratio indicates a higher likelihood of significant results.
○ Systematic variance: deviation of the group means from each other (between-group differences).
○ Error variance: deviation of the individual scores in each group from their respective group means (within-group differences).
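Below is a minimal, hypothetical sketch of the two tests just described: an independent-groups t test (two-tailed and one-tailed) and a one-way ANOVA F test. The group scores are invented, and the use of scipy's ttest_ind and f_oneway functions is an illustrative assumption, not a course requirement.

from scipy import stats

group_a = [4, 6, 7, 5, 8, 6]    # hypothetical scores, condition A
group_b = [3, 4, 5, 4, 6, 5]    # hypothetical scores, condition B
group_c = [7, 8, 6, 9, 7, 8]    # hypothetical scores, condition C

# t test comparing two group means: two-tailed by default;
# alternative="greater" gives a one-tailed test of A > B
t, p_two_tailed = stats.ttest_ind(group_a, group_b)
_, p_one_tailed = stats.ttest_ind(group_a, group_b, alternative="greater")
print("t =", round(t, 2), "| two-tailed p =", round(p_two_tailed, 3),
      "| one-tailed p =", round(p_one_tailed, 3))

# F test (one-way ANOVA) for three groups: systematic (between-group)
# variance evaluated against error (within-group) variance
f_ratio, p_f = stats.f_oneway(group_a, group_b, group_c)
print("F =", round(f_ratio, 2), "| p =", round(p_f, 3))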
Describe what a confidence interval tells you about your data:
○ Definition: confidence intervals (CIs) provide a range of values within which the true population parameter is likely to fall, enhancing the interpretation of results.
○ Interpretation: a 95% CI suggests that if the same study were repeated multiple times, 95% of the calculated intervals would contain the true population parameter.

Distinguish between Type I and Type II errors:
○ Type I error: occurs when the null hypothesis is incorrectly rejected, a false positive result.
○ Type II error: occurs when the null hypothesis is not rejected despite being false, indicating a missed opportunity to detect a true effect.

Discuss the factors that influence the probability of a Type II error:
○ Significance (alpha) level: higher alpha = more power.
○ Sample size: bigger sample size = more power.
○ Effect size: bigger effect size = more power.

Discuss the reasons a researcher may obtain nonsignificant results:
○ The results of a single study can be nonsignificant even when a relationship between the variables exists in the population (a Type II error).
■ The sample size should be large enough to find a real effect.
■ Evidence that variables are not related should come from multiple studies.
○ Upshot: you only fail to reject the null; you don't say you accept or prove the null.

Define power of a statistical test:
○ Definition: the power of a statistical test is the probability of correctly rejecting the null hypothesis when it is false, desired to be at least 0.80.
○ Influencing factors: sample size, effect size, and significance level (alpha) all influence the power of a test.

Describe the criteria for selecting an appropriate statistical test:
○ 2 groups with interval/ratio data = t test
○ 3 groups with interval/ratio data = one-way analysis of variance
○ Interval/ratio with interval/ratio = Pearson correlation
○ 2 or more variables with interval/ratio data = analysis of variance (factorial design)
○ Interval/ratio (2 or more variables) with interval/ratio = multiple regression

Cohen's d and r:
○ Cohen's d: an effect size estimate used when comparing two means, calculated as (M1 - M2)/SD, providing insight into the magnitude of the difference (see the sketch below).
○ Pearson r: serves as an effect size indicator for correlation, with its squared value (r²) representing the shared variance between the variables.
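As a final illustration, the hypothetical sketch below computes a 95% confidence interval around a sample mean and the two effect-size measures above, Cohen's d = (M1 - M2)/SD and r². The sample values and the use of scipy are assumptions made for this example only.

import statistics
from scipy import stats

scores = [4, 6, 7, 5, 8, 6, 5, 7]                # hypothetical sample
m = statistics.mean(scores)
sem = stats.sem(scores)                          # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(scores) - 1, loc=m, scale=sem)
print("95% CI for M:", round(ci_low, 2), "to", round(ci_high, 2))

def cohens_d(m1, m2, pooled_sd):
    # Cohen's d = (M1 - M2) / SD: standardized difference between two means
    return (m1 - m2) / pooled_sd

print("Cohen's d:", cohens_d(m1=5.5, m2=4.0, pooled_sd=2.0))   # 0.75, medium-to-large

r = 0.40                                         # hypothetical Pearson r
print("Shared variance (r^2):", r ** 2)          # 0.16 -> 16% of variance shared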
