PSY 201: Research Methods in Psychology

Questions and Answers

What is a variable?

A variable is any attribute of objects, people, or events that, within the context of a particular investigation, can take on different values.

What is a constant?

A constant is any attribute of objects, people, or events that, within the context of a particular investigation, has a fixed value.

Which of these are types of variables based on measurement, not design?

  • Dependent
  • Quantitative (correct)
  • Qualitative (correct)
  • Independent

Which measurement levels are typically associated with categorical (qualitative) variables?

Nominal and Ordinal (D)

It is impossible to make conclusions on ratios when using an Interval level of measurement.

True (A)

The ______ variable is manipulated or changed by the experimenter.

independent

The ______ variable is measured under each condition / level of the independent variable.

dependent

What are the key steps involved in organizing data for analysis?

The key steps involved in organizing data for analysis include data entry, data coding, and data cleaning.

Statistics can be used to provide insights into the past and future.

True (A)

What is the difference between descriptive and inferential statistics?

Descriptive statistics summarize and describe data sets, while inferential statistics go beyond the data to make inferences or predictions about a larger population.

What are the three most commonly used measures of central tendency?

The three most commonly used measures of central tendency are the mean, the median, and the mode.

What is the mode?

The mode is the most frequently occurring score in a distribution.

Match the measures of central tendency with their descriptions:

Mean = The sum of all scores divided by the number of scores. Median = The middle score in a distribution when the scores are ranked in order of magnitude. Mode = The most frequent score in a distribution.

What is the interquartile range (IQR)?

The interquartile range is the difference between the third quartile (Q3) and the first quartile (Q1) in a distribution.

What is the mean absolute deviation?

The mean absolute deviation is the average of the absolute deviations of each score from the mean.

What is the sum of squares?

The sum of squares is the sum of the squared deviations of each score from the mean.

What is the standard deviation?

The standard deviation is the positive square root of the variance.

The standard deviation is always greater than or equal to zero.

True (A)

What are the characteristics of a distribution with positive kurtosis?

A distribution with positive kurtosis has many scores in the tails (a so-called heavy-tailed distribution) and is pointy (known as a leptokurtic distribution).

What are the factors that influence the size of the standard error?

The size of the standard error is determined by the variability of the scores and the size of the sample.

What are the characteristics of a normal distribution?

A normal distribution is symmetrical around its mean, has a 'bell curve' shape, and the mean, median, and mode are all equal.

The standard deviation of a sampling distribution is always smaller than the standard deviation of the population.

True (A)

The size of the standard error is influenced by the variability of the scores, but not the sample size.

False (B)

What is the purpose of hypothesis testing?

The purpose of hypothesis testing is to determine whether the sample data provide enough evidence to reject the null hypothesis (i.e., to conclude that it is not supported by the data).

The alpha (α) level defines the percent of the most unlikely outcomes in a hypothesis test.

True (A)

What are the critical z-values when using a 95 percent confidence level?

The critical z-values when using a 95 percent confidence level are -1.96 and +1.96.

A score that falls within the 5% rejection region for a one-sided test will always fall within the rejection region for the corresponding two-sided test.

False (B)

What differentiates a one-sample z-test from other statistical tests?

The one-sample z-test is unique in that the values of the parameters for the null model are known, and only one sample is needed to perform the test.

What are the three approaches to hypothesis testing?

The three approaches to hypothesis testing are the test statistic and p-value approach, the critical z-value approach, and the confidence limits approach.

The confidence interval approach is identical to the critical value approach, but the center of the interval is based on the sample mean rather than the population mean.

True (A)

If a one-sample t-test is conducted and the confidence interval includes 0, then it is safe to conclude that there is no difference between the sample mean and the population mean.

True (A)

What are the four possible outcomes of a hypothesis test?

The four possible outcomes of a hypothesis test are a correct decision (rejecting the null hypothesis when it is false), a correct decision (failing to reject the null hypothesis when it is true), a Type I error (rejecting the null hypothesis when it is true), and a Type II error (failing to reject the null hypothesis when it is false).

What is the difference between a Type I error and a Type II error?

A Type I error occurs when the null hypothesis is rejected when it is actually true, while a Type II error occurs when the null hypothesis is not rejected when it is actually false.

What is the power of a test?

The power of a test is the probability of correctly rejecting the null hypothesis when it is false, which is represented by 1-β.

The power of a test is inversely proportional to the probability of a Type II error.

True (A)

The effect size is independent of the sample size.

False (B)

The power of a test can be increased by increasing the effect size or increasing the sample size.

True (A)

What distinguishes a t-test from a z-test?

The t-test is used when the population standard deviation is unknown and must be estimated from the sample data, while the z-test requires knowledge of the population standard deviation.

The degrees of freedom for a one-sample t-test is calculated as N-1.

True (A)

The shape of the t-distribution is unaffected by the number of degrees of freedom.

False (B)

The z-table can be used to find the critical t-value for a one-sample t-test.

False (B)

What is the difference between an independent samples t-test and a paired samples t-test?

An independent samples t-test compares the means of two independent groups, while a paired samples t-test compares the means of two related groups.

The standard error of the difference for an independent samples t-test is calculated by taking the square root of the sum of the variances of each group.

False (B)

The formula for calculating a t-statistic in an independent samples t-test is the same regardless of whether the population standard deviation is known or unknown.

False (B)

The degrees of freedom for an independent samples t-test is calculated as the sum of the degrees of freedom for each group minus 1.

False (B)

The confidence interval for an independent samples t-test is constructed around the difference between the two sample means, and it is used to determine if the population means are equal.

True (A)

Flashcards

Basic Probability

The probability of any event is a number between 0 and 1, where 0 represents an impossible event and 1 represents a certain event.

Elementary Event

The simplest outcome in a probability experiment that cannot be broken down further. For example, rolling a 3 on a six-sided die is an elementary event.

Sample Space

The set of all possible elementary events or outcomes for an experiment. For example, the sample space for rolling a die is {1, 2, 3, 4, 5, 6}.

Law of Total Probability

If we add up the probabilities of all possible elementary events in the sample space, the total must equal 1 (or 100%). This ensures that we cover all possible outcomes.

Non-Elementary Events

A combination of two or more elementary events. For example, rolling an even number on a die is a non-elementary event because it includes multiple elementary events (2, 4, 6).

Probability Distribution

Assigns probabilities to each event in the sample space. This allows us to understand the likelihood of each possible outcome.

Binomial Distribution

A probability distribution that describes the number of successes in a fixed number of independent trials, each with the same probability of success.

Size Parameter (N)

The total number of trials or events in the experiment.

Success Probability (θ)

The probability of achieving success on any single trial.

Random Variable (X)

The number of successes observed in the experiment.

Probability Mass Function (PMF)

The function that assigns probabilities to the possible values of the random variable X.
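
As a worked illustration of the cards above (the numbers are invented for this example, not taken from the lesson): with N = 4 trials and success probability θ = 0.5, the probability of exactly X = 2 successes is C(4, 2) · 0.5² · 0.5² = 6/16 = 0.375. A minimal check in Python with SciPy, an assumed tool here rather than one named in the lesson:

    from scipy.stats import binom

    # P(X = 2) for N = 4 trials, each with success probability theta = 0.5
    print(binom.pmf(2, n=4, p=0.5))  # ~0.375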

Normal Distribution

Often referred to as the "bell curve" or "Gaussian distribution", this is a continuous probability distribution that describes the distribution of many natural phenomena like height, IQ scores, or reaction times.

Mean (μ)

The mean of the distribution (μ) defines the center or peak of the normal curve, indicating the most likely value of measurements in the distribution.

Standard Deviation (σ)

The standard deviation (σ) determines how spread out values are around the mean. A smaller standard deviation indicates that values are clustered closely around the mean, while a larger standard deviation indicates that values are more dispersed.

Symmetry of the Normal Curve

The normal curve is perfectly symmetrical around the mean, meaning that the left and right sides of the curve are mirror images.

Total Area Under the Normal Curve

The total area under the normal curve represents 1 (or 100%) of the total probability, representing all possible outcomes.

68-95-99.7 Rule

About 68% of values fall within 1 standard deviation of the mean, 95% within 2 standard deviations, and 99.7% within 3 standard deviations.
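
As a concrete illustration (the IQ figures are a common textbook example, not from this lesson): if IQ scores are normal with mean 100 and standard deviation 15, roughly 68% of scores fall between 85 and 115, 95% between 70 and 130, and 99.7% between 55 and 145. A minimal check in Python (an assumed tool for this sketch):

    from scipy.stats import norm

    # Probability of falling within 1, 2, and 3 standard deviations of the mean
    for k in (1, 2, 3):
        print(k, round(norm.cdf(k) - norm.cdf(-k), 4))  # 0.6827, 0.9545, 0.9973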

Mean, Median, and Mode in Normal Distribution

For a normal distribution, the mean, median, and mode all have the same value. This is because the distribution is perfectly symmetrical.

Probability Density

A measure that describes how likely values are to be found within a certain range. For continuous data, it's not possible to calculate the probability of a specific value, so we look at probability density over a range.

Bias

Describes the tendency for measurements (observations) to differ systematically from their true value. It reflects systematic errors in measurement.

Precision

Indicates how close independent test results, obtained under stipulated conditions, are to one another. It reflects the amount of random variability in measurement.

Variability

The extent to which scores in a distribution are alike or different.

Range

The difference between the highest and lowest scores in a distribution.

Interquartile Range (IQR)

The distance between the first and third quartiles, which represent the 25th and 75th percentiles of the data.

Mean Absolute Deviation (MAD)

The average of the absolute deviations of all scores from the mean. It represents the average distance of scores from the center of the distribution.

Sum of Squares (SS)

The sum of squared deviations of all scores from the mean.

Variance (s²)

A measure of variability that considers the total variability of scores in a distribution. Usually, it’s the sum of squares divided by N or N-1.

Standard Deviation (s)

The positive square root of the variance. It represents the average deviation of scores from the mean. The standard deviation is expressed in the same units as the original data.

Sample Variance (s²)

Represents the variance calculated from a sample of data, denoted s².

Population Variance (σ²)

Represents the variance calculated from the entire population, denoted σ² ("sigma squared").

Sample Standard Deviation (s)

The standard deviation calculated from a sample of data, denoted as 's' or 'SD'.

Population Standard Deviation (σ)

The standard deviation calculated from the entire population, denoted as 'σ' or 'sigma'.

Boxplots

A graphical representation of data that captures both the location and spread of scores in a distribution. It includes a box representing the middle 50% of the data, whiskers extending to the most extreme scores that are not outliers, and any outliers plotted individually.

Kurtosis

The degree to which scores cluster at the ends of the distribution (tails) and how pointy a distribution is. A leptokurtic distribution is pointy with a large kurtosis value, while a platykurtic distribution is flat with a small kurtosis value.

Study Notes

Course Introduction

  • Course name: PSY 201, Introduction to Statistics for Psychology I
  • Instructor: Dr. Nihan Albayrak-Aydemir

Research Methods in Psychology

  • Focuses on research methodologies within psychology.

Understanding Data in Psychological Research

  • A lecture on variables, measurement, and data organization in psychological research.
  • Teaching Assistant: Saliha Erman, MA
  • Adapted from Assoc. Prof. Güneş Ünal, Boğaziçi University, 2022

Variable

  • A variable is any attribute of objects, people, or events that can take on different values within a particular investigation.
  • Examples: height, reaction time, test scores, eye color.
  • A constant is the opposite of a variable, having a fixed value within a particular investigation.
  • An attribute can be a variable in one context and a constant in another.

Within-Subject vs. Between-Subject Variance

  • Changes and variations can be seen within and between individuals.
  • Example: Mood

Types of Variables

  • Qualitative (categorical) variables take a value that is one of several possible categories.
  • The values of qualitative variables are categories (they can sometimes be represented by numbers, but those numbers serve only as labels).
  • They express differences in kind, not amount.
  • Examples: College attended, fruit types, nationality, blood type.
  • Quantitative (continuous) variables are numerical variables such as height, decision time, GPA, age, or proportion of items correctly answered.
  • The values of quantitative variables are numbers.
  • They express differences in amount.

Levels of Measurement

  • Nominal scale: a measurement scale used to categorize or label variables.
  • It does not have any quantitative value or order.
  • The categories on a nominal scale represent different groups or types.
  • There is no inherent ranking or hierarchy among them.
  • Examples: Left-handed, right-handed. Favorite ice cream flavour: vanilla, chocolate, strawberry, etc.
  • Ordinal scale: has the properties of a nominal scale, but the observations can be ranked in order of magnitude.
  • Example: Position finished in a race.
  • Interval scale: has all the properties of an ordinal scale, and a given distance between measures has the same meaning anywhere on the scale.
  • Also called an equal-interval scale.
  • Examples: Degrees of temperature, calendar years.
  • Ratio scale: has all the properties of an interval scale plus an absolute zero point.
  • This allows for the comparison of ratios.
  • Examples: Length, weight, reaction time, dollars.

Caution on Ratio Scale

  • On the interval level of measurement, you cannot make conclusions about ratios.
  • Example: 20°C is not twice as hot as 10°C.
  • Likert-type scaling is considered an ordinal scale because the intervals between points are not necessarily equal.

Independent and Dependent Variables

  • Independent variable (IV): The variable that is manipulated or changed by the experimenter. Also called the explanatory or predictor variable; it is expected to explain the changes in the dependent variable.
  • Dependent variable (DV): The variable that is measured under each condition or level of the independent variable. Also called the response or outcome variable.

Research Design: Natural and Manipulated Independent Variables

  • Natural IVs: The experimenter does not have complete control over the variable.
  • Manipulated IVs: The experimenter has complete control over the variable.
  • Quasi-experiments: A natural IV is observed and compared with another variable.
  • True experiments: A manipulated IV is observed and compared with another variable.

Correlations Between Variables

  • In some studies, there is no clear cause-and-effect relationship between the variables.
  • For example, the relationship between measures of depression and anxiety.
  • In correlational designs, neither variable can be considered a predictor or outcome variable.

Data Organization

  • Data entry: Entering raw data into a software tool.
  • Data coding: Assigning numerical or categorical codes to responses.
  • Cleaning data: Removing or correcting errors.
  • Tools: Excel, SPSS, Jamovi
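
The lesson's tools are Excel, SPSS, and Jamovi; purely as an illustration of the same three steps, here is a hypothetical sketch in Python with pandas (the column names and values are made up):

    import pandas as pd

    # Data entry: raw responses typed into a table
    df = pd.DataFrame({"id": [1, 2, 3],
                       "handedness": ["left", "right", "right"],
                       "rt_ms": [512, -1, 498]})  # -1 was used as a missing-data code

    # Data coding: assign numerical codes to a categorical response
    df["handedness_code"] = df["handedness"].map({"left": 0, "right": 1})

    # Data cleaning: replace the impossible reaction time with a proper missing value
    df.loc[df["rt_ms"] < 0, "rt_ms"] = pd.NA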

Data Organization (Properly organized data ensures...)

  • Efficiency: Efficiency in the analysis process, saving time and effort.
  • Accuracy: Reduces the risk of errors that could distort results.
  • Reproducibility: Well-organized data can be more easily verified or reanalyzed by other researchers.

Descriptive Statistics and Inferential Statistics

  • Descriptive statistics: Procedures used to summarise, organise, and make sense of a set of scores or observations.
  • Inferential statistics: Procedures used to allow researchers to infer from or generalise observations made within smaller samples to the larger population.

Measures of Central Tendency

  • Mean: Average of a set of scores
  • Median: The middle score in a sorted set of scores
  • Mode: Most frequent score
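
A minimal sketch of these three measures using Python's standard library (Python is an assumption here, not one of the course tools):

    import statistics

    scores = [2, 3, 3, 5, 7]          # made-up example data
    print(statistics.mean(scores))    # 4   (sum of scores / number of scores)
    print(statistics.median(scores))  # 3   (middle score when ranked)
    print(statistics.mode(scores))    # 3   (most frequent score)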

Measures of Variability

  • Range: Highest - Lowest Score
  • Interquartile range (IQR): Distance between the values of the third and the first quartiles.
  • Mean absolute deviation (MAD): average of the absolute deviations from the mean.
  • Sum of Squares (SS): sum of squared deviation scores.
  • Variance (s²): Sum of Squares (SS) divided by the number of cases (N)
  • Standard Deviation (s): the positive square root of the variance.
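
A minimal sketch of the same measures with NumPy and SciPy (an assumption; the lesson itself points to Excel, SPSS, or Jamovi). The data are made up so that the mean is 5 and the sum of squares is 32:

    import numpy as np
    from scipy import stats

    y = np.array([2, 4, 4, 4, 5, 5, 7, 9])

    data_range = y.max() - y.min()        # Range: 9 - 2 = 7
    iqr = stats.iqr(y)                    # Interquartile range: Q3 - Q1
    mad = np.mean(np.abs(y - y.mean()))   # Mean absolute deviation: 1.5
    ss = np.sum((y - y.mean()) ** 2)      # Sum of squares: 32.0
    var_n = ss / len(y)                   # Variance with N in the denominator: 4.0
    var_n1 = ss / (len(y) - 1)            # Variance with N - 1 (sample estimate)
    sd = np.sqrt(var_n)                   # Standard deviation: 2.0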

Sample vs. Population Notation

  • Sample Variance (s²): Σ(Y - Ȳ)² / N (Ȳ — the sample mean)
  • Population Variance (σ²): Σ(Y - μ)² / N (μ — the population mean)

Boxplots

  • A simple graphical representation of data that captures features of both the location and spread of scores in a distribution
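
A one-line sketch with matplotlib (an assumed plotting library, not a tool named in the lesson):

    import matplotlib.pyplot as plt

    scores = [2, 4, 4, 4, 5, 5, 7, 9, 21]  # made-up data; 21 will appear as an outlier
    plt.boxplot(scores)                    # box = middle 50%, line = median, point = outlier
    plt.show()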

Z-Scores

  • Scores transformed to a common scale.
  • Have a mean of zero (0) and a standard deviation of one (1).
  • Allow the comparison of different distributions.
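
A worked example with invented numbers: an IQ score of 130 in a distribution with mean 100 and standard deviation 15 gives z = (130 - 100) / 15 = 2, i.e. two standard deviations above the mean. The same transformation in Python (an assumption, as above):

    scores = [85, 100, 115, 130]
    mu, sigma = 100, 15
    z_scores = [(x - mu) / sigma for x in scores]  # [-1.0, 0.0, 1.0, 2.0]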

Confidence Interval (CI)

  • A range of values that's used to estimate an unknown population parameter.
  • It gives a certain degree of confidence that the true population parameter lies within the interval.
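
As a hedged illustration: with a known population standard deviation, a 95% confidence interval for the mean is the sample mean plus or minus 1.96 standard errors. With made-up numbers (sample mean 52, σ = 10, n = 25, so the standard error is 2), the interval is 52 ± 3.92:

    import math

    xbar, sigma, n = 52, 10, 25                # made-up sample mean, population SD, sample size
    se = sigma / math.sqrt(n)                  # standard error = 2.0
    ci = (xbar - 1.96 * se, xbar + 1.96 * se)  # (48.08, 55.92)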

Hypothesis Testing

  • A process to test an idea (or a theory) by using data collected from a sample of the population.
  • Begin with a Null and Alternative Hypothesis.
  • Set an alpha level.
  • Perform an appropriate statistical test.
  • Calculate the p-value from the test statistic.
  • Compare the p-value to the alpha level to decide whether to reject the null hypothesis.
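
A minimal end-to-end sketch of these steps using a one-sample t-test in SciPy (the tool, the data, and the hypothesised mean are all assumptions made for this example):

    from scipy import stats

    sample = [52, 49, 55, 51, 58, 47, 54, 53]  # made-up sample scores
    mu_0 = 50                                  # H0: population mean = 50; H1: it differs
    alpha = 0.05                               # alpha level set in advance

    t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)
    print("Reject H0" if p_value < alpha else "Fail to reject H0")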

One Sample z-test

  • A test used to compare the mean of a sample with the mean of a population, when the population's standard deviation (σ) is known.
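
The test statistic is z = (sample mean - μ) / (σ / √N). A worked example with invented numbers: a sample mean of 106 from N = 25 scores, with μ = 100 and σ = 15, gives z = 6 / 3 = 2, which falls beyond the critical value of 1.96 at α = .05 (two-tailed):

    import math
    from scipy.stats import norm

    xbar, mu, sigma, n = 106, 100, 15, 25     # made-up values
    z = (xbar - mu) / (sigma / math.sqrt(n))  # 2.0
    p = 2 * (1 - norm.cdf(abs(z)))            # two-tailed p, roughly 0.0455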

One Sample t-test

  • A test used to compare the mean of a sample with the mean of a population, when the population's standard deviation (σ) is not known and must be estimated from the sample's standard deviation (s).
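
The statistic has the same form as the z-test but uses the sample standard deviation: t = (sample mean - μ) / (s / √N), evaluated against a t-distribution with N - 1 degrees of freedom. A minimal manual sketch with made-up data:

    import math
    from scipy import stats

    sample = [11, 9, 12, 10, 13, 8, 11, 10]                        # made-up data, N = 8
    n = len(sample)
    xbar = sum(sample) / n                                         # 10.5
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))  # sample SD, about 1.60
    t = (xbar - 10) / (s / math.sqrt(n))                           # test against mu = 10, about 0.88
    t_crit = stats.t.ppf(0.975, df=n - 1)                          # two-tailed critical value, about 2.36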

Independent Samples t-Test

  • A test used to determine if there is a statistically significant difference between the means of two independent groups or samples.
  • The groups are sampled from mutually exclusive subgroups (i.e. participants belong only to one group).
  • The dependent variable is always quantitative or measured on an interval or ratio level.
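
A minimal SciPy sketch comparing two made-up, mutually exclusive groups (Python and the scores are assumptions for illustration):

    from scipy import stats

    group_a = [12, 15, 11, 14, 13, 16]  # made-up scores, group A
    group_b = [10, 9, 12, 11, 8, 10]    # made-up scores, group B

    t_stat, p_value = stats.ttest_ind(group_a, group_b)  # df = n1 + n2 - 2 = 10
    print(t_stat, p_value)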
