PSYB70 Unit 4 Study Guide PDF

Summary

This is a study guide for a psychology course, covering evaluating association claims and statistical validity. The guide examines margin of error, significance, effect size, power, and replication in research methods. It also discusses descriptive and inferential statistics, and how to interpret results.

Full Transcript


Unit 4. Evaluating association claims

Overview

Readings and preparation
◻ This Unit 4 Study Guide
◻ From the free online textbook: Research Methods in Psychology. Focus on:
◻ Section 57. Understanding null hypothesis significance testing
◻ Section 59. Additional considerations
◻ Section 28. Overview of non-experimental research
◻ Section 29. Correlational research
◻ Section 30. Complex correlations – focus on how to read a correlation matrix

Infographic: Evaluating statistical validity

In this unit, we will introduce a framework for evaluating statistical validity. This framework focuses on asking questions about:

Margin of error. Acknowledging that data from samples can only provide a rough point estimate of the true population parameter, what is the estimated margin of error / confidence interval around the estimate?

Statistical significance. Can an effect still be detected, even once this margin of error has been accounted for?

Effect size. If the effects are “statistically significant”, what is the general size of the effect? How does the size of the effect impact interpretation?

Statistical power. Is the sample size large enough to trust the indicators of statistical significance / effect size? What are the risks of Type 1 and Type 2 error?

Direct replication. Have these results been directly replicated with independent samples? Do the results replicate? Can we trust the overall pattern of results?

Lesson 4A. Understanding statistical inference

Understanding margin of error

Describing data versus making inferences
Descriptive statistics help organize and summarize the data that come from samples. Inferential statistics use sample data to make inferences about the population of interest.

Moving from descriptive statistics to inferential statistics
1. Data from samples can only provide a rough point estimate of the true population parameter.
2. Point estimates are associated with a certain margin of error, and that margin of error must be taken into consideration when making inferences.
3. Researchers must decide in advance how confident they want to be in their estimates (i.e., how much error they are willing to accept).

1. Data from samples provide point estimates
Data from samples can only provide a rough point estimate of the true population parameter.
Point estimate – The estimate of the effect calculated from the data.
Population parameter – The “true” value of what you are trying to estimate.

Samples versus populations
Population: All the people / cases that you would like to understand and make generalizations about.
Sample: The people / cases from whom the data are actually collected.
Each sample of data is pulled from a larger population of interest. Every sample will produce a slightly different estimate of the effect. Most will be close to the “true” population parameter, but a few might be way off in their estimates.

2. Point estimates are associated with a certain margin of error
Sampling error – Recognizes that any given sample is unlikely to produce an estimate of the population parameter that is exactly the same as the population value.
Margin of error – A statistical estimate of the amount of error that is likely to exist around the estimate of a given parameter.
Confidence interval – The range of values in which the researcher can feel “relatively confident” that the true population parameter falls. A confidence interval = point estimate +/- the margin of error.
▪ Lower limit: Point estimate - the margin of error.
▪ Upper limit: Point estimate + the margin of error.
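To make the formula concrete, here is a minimal sketch (not from the guide) of computing the margin of error and 95% confidence interval for a sample mean in Python. The word counts are invented for illustration; the t distribution supplies the critical value.

    import numpy as np
    from scipy import stats

    # Hypothetical daily word counts for 10 participants (invented data)
    sample = np.array([15200, 17650, 14980, 16890, 15500,
                       18020, 16100, 15750, 17300, 16400])

    n = len(sample)
    point_estimate = sample.mean()             # point estimate of the population mean
    sem = sample.std(ddof=1) / np.sqrt(n)      # standard error of the mean

    # 95% confidence level -> t critical value with n - 1 degrees of freedom
    t_crit = stats.t.ppf(0.975, df=n - 1)
    margin_of_error = t_crit * sem

    lower = point_estimate - margin_of_error   # lower limit
    upper = point_estimate + margin_of_error   # upper limit
    print(f"Point estimate: {point_estimate:.0f}")
    print(f"95% CI: [{lower:.0f}, {upper:.0f}] (margin of error = {margin_of_error:.0f})")

Swapping 0.975 for 0.995 gives a 99% confidence level and, as described next, a wider interval.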
Margin of error and sample size. Larger samples decrease the margin of error (leading to a more precise estimate). Smaller samples lead to larger margins of error (and a greater chance of statistical error!).

3. Researchers must decide in advance the accepted levels of confidence / risk
A confidence interval is the range of values in which the researcher can feel “relatively confident” that the true population parameter falls. The confidence level defines what is meant by “relatively confident”.
▪ With a 95% confidence level, the margin of error is calculated so that in 95 out of 100 samples, the confidence interval will capture the “true” population parameter. Of course, the flip side of this is that in 5% of cases, the confidence interval will not include the “true” population parameter (i.e., the sample will be in error up to 5% of the time).
▪ With a 99% confidence level, the margin of error is calculated so that in 99 out of 100 samples, the confidence interval will capture the “true” population parameter. But in 1% of cases, the confidence interval will not include the “true” population parameter (i.e., the sample will be in error up to 1% of the time).

Trade-off between confidence and precision. Greater confidence levels result in larger margins of error (i.e., less precision). Greater precision (i.e., smaller margins of error) comes with a greater risk of error (i.e., a lower confidence level).

Statistical validity: The extent to which statistical conclusions derived from a study are accurate and reasonable.
What is the sample size? (Bigger is better)
What is the margin of error? (Smaller is better)
What is the confidence level? (Usually 95%)
Increasing the sample size (N) reduces the margin of error. The simulation below illustrates the “95 out of 100 samples” idea directly.
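This illustrative sketch (not from the guide) repeatedly draws samples from a population whose mean we know and counts how often each sample's 95% confidence interval captures that true mean; the population values are assumed for the demonstration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_mean, sd, n, reps = 100.0, 15.0, 30, 10_000

    t_crit = stats.t.ppf(0.975, df=n - 1)
    captured = 0
    for _ in range(reps):
        sample = rng.normal(true_mean, sd, size=n)
        moe = t_crit * sample.std(ddof=1) / np.sqrt(n)
        # Does this sample's 95% CI contain the true population mean?
        if sample.mean() - moe <= true_mean <= sample.mean() + moe:
            captured += 1

    print(f"CIs capturing the true mean: {captured / reps:.1%}")  # close to 95%

Re-running with a larger n keeps the capture rate near 95% but shrinks each interval: larger samples buy precision, not a different confidence level.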
4A.1. Understanding statistical significance

Illustrative article: Mehl et al. (2007)
To help us learn about statistical interpretation, we will be focusing on the statistical results presented in Mehl et al.'s (2007) article, which tested the research question, "Are women really more talkative than men?"

“Significance testing”
Prior to analyzing their data, researchers must decide what decision rules they will use to test their hypotheses. Null hypothesis significance testing (NHST) includes a set of decision rules that can help a researcher use the margin of error to determine if an observed effect is extreme enough to “reject the null hypothesis” and conclude that the researcher’s alternative hypothesis is supported.

Is it “statistically significant”?
An effect is considered “statistically significant” if an effect can be detected (effect ≠ 0), even when the margin of error is factored in (the effect shines through the “noise”). An effect is considered “not statistically significant” if the margin of error is so great that it calls into question whether an effect exists or not (the “noise” drowns out any effects).

What do we mean by an “effect”?
An effect is the specific outcome that you are testing.
Group comparison: A type of effect that compares two or more groups.
Correlation: A type of effect that examines the association between variables.

It all starts with the assumption that there is “no effect”
Significance testing starts with an assumption that there is no effect. This is called the null hypothesis (or null effect). It is the opposite of the researcher’s hypothesis that there is an effect.
The null hypothesis is the starting assumption that there is no effect (i.e., there are no group differences, there is no association), i.e., effect = 0.
o Women and men do not differ in the total words spoken per day.
o Small talk is not associated with well-being.
The researcher’s hypothesis is that there is an effect (i.e., there are group differences or an association), i.e., effect ≠ 0.
o Women and men do differ in the total words spoken per day.
o Small talk is associated with well-being.

Approaches
In our course, we are going to use two general approaches for assessing statistical significance:
◻ the confidence interval approach
◻ the p-values approach
Follow along in class or with the video to see explanations and examples of both approaches.

The confidence interval approach
The researcher starts by using hand calculations or a computer program to construct a confidence interval around an effect (e.g., the mean difference or a correlation coefficient). The researcher then assesses whether the confidence interval around the effect includes zero.
◻ If no, the results are statistically significant (effect ≠ 0).
◻ If yes, the results are not statistically significant (effect = 0).
If the confidence interval of the effect does not include “0”:
o The results are “statistically significant”
o Reject the null hypothesis
o The researcher can take the next steps to explore their hypothesis.
If the confidence interval of the effect does include “0”:
o The results are “not statistically significant”
o Fail to reject the null hypothesis
o The researcher’s hypothesis is not supported

The p-values approach

Step 1. Set the significance level
Similar to the confidence interval approach, the researcher starts by identifying the level of confidence (e.g., 95% confidence level; 99% confidence level – see the lesson on margin of error for a discussion of confidence levels). This confidence level is used to identify the “significance level” (expressed using alpha, α). If a researcher adopts a 95% confidence level, this produces a 5% significance level (α = .05). If a researcher adopts a 99% confidence level, this produces a 1% significance level (α = .01), as outlined below.

95% confidence level (α = .05)
A 95% confidence interval defines a range of values likely to capture the “true” value in 95 out of 100 samples. 5% risk of error. Significance level = 5%. α = .05.

99% confidence level (α = .01)
A 99% confidence interval allows us to feel more confident in our sample, but at the cost of widening the confidence interval around each point estimate. 1% risk of error. Significance level = 1%. α = .01.

Most psychologists adopt a 95% confidence level (α = .05). A 95% confidence level strikes a balance between precision and likelihood of error.

Step 2. Calculate the effect and p-value
The researcher can then use hand calculations or a computer software program to calculate the effect (e.g., the mean difference or a correlation coefficient) and the probability that an effect this large would emerge if the effect were actually zero. This probability value is called the p-value.
P-value: The calculated probability that an effect this large would emerge if the effect were actually zero.

Step 3. Compare the p-value to the significance level
If the probability that an effect this large would emerge if the effect were actually zero (i.e., the p-value) is lower than the significance level, the results are “statistically significant”.
If the p-value is less than alpha (e.g., α = .05 and p < .05):
o The results are “statistically significant”
o Reject the null hypothesis
o The researcher can take the next steps to explore their hypothesis.
If the p-value is greater than alpha (e.g., α = .05 and p > .05):
o The results are “not statistically significant”
o Fail to reject the null hypothesis
o The researcher’s hypothesis is not supported
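Both decision rules can be seen side by side with a two-group comparison. A minimal sketch with invented data (not Mehl et al.'s): scipy supplies the p-value, and the 95% confidence interval around the mean difference is computed by hand from the pooled standard error and checked against zero.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Hypothetical daily word counts for two groups (invented numbers)
    group_a = rng.normal(16200, 7300, size=50)
    group_b = rng.normal(15700, 9000, size=50)

    # p-values approach (alpha = .05)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"p = {p_value:.3f} ->",
          "statistically significant" if p_value < 0.05 else "not statistically significant")

    # Confidence interval approach: 95% CI around the mean difference
    n_a, n_b = len(group_a), len(group_b)
    diff = group_a.mean() - group_b.mean()
    pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
                  (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    se_diff = np.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
    lower, upper = diff - t_crit * se_diff, diff + t_crit * se_diff
    print(f"95% CI for the difference: [{lower:.0f}, {upper:.0f}]")
    print("CI includes 0 -> fail to reject the null" if lower <= 0 <= upper
          else "CI excludes 0 -> reject the null")

Because both approaches rest on the same foundation, the two printed conclusions will always agree, which is the point of the next section.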
Two approaches; same foundation
Although we have presented two approaches for assessing statistical significance, they are both rooted in the same foundation. As such, we can use one (i.e., p-values) to make inferences about the other (confidence intervals) and vice versa. If p < α, then the confidence interval of the effect does not include “0”. If the confidence interval of the effect does not include “0”, then p < α.
▪ The 95% confidence interval of the effect does not include “0”.
▪ The probability of observing an effect this large if the effect were “0” is < α.
▪ The results are “statistically significant”.
▪ The researcher can take the next steps to explore their hypothesis.
Likewise, if p > α, then the confidence interval of the effect does include “0”. If the confidence interval of the effect does include “0”, then p > α.
▪ The 95% confidence interval of the effect does include “0”.
▪ The probability of observing an effect this large if the effect were “0” is > α.
▪ The results are “not statistically significant”.
▪ Fail to reject the null hypothesis; the researcher’s hypothesis is not supported.

4A.2. Try it! Interpreting statistical significance
Follow along in class or online to test your understanding of the article published by Mehl et al. (2007). (See Quercus to access the article.)
Mehl, M. R., Vazire, S., Ramírez-Esparza, N., Slatcher, R. B., & Pennebaker, J. W. (2007). Are women really more talkative than men? Science, 317(5834), 82.

Understanding effect size
Statistical significance can tell you if an effect is likely to be different from zero. However, it does not tell a researcher the size of the effect. As such, researchers also need to consider the size of an effect. An effect size considers the size of the group difference and/or the strength of the association. Cohen’s d is an effect size used to compare the means across two groups.

Cohen’s d benchmarks:
Small  | 0.20
Medium | 0.50
Large  | 0.80

When people hear that the difference between two groups is “statistically significant”, they likely picture two barely overlapping distributions (something like Cohen’s d = 3.0). But in reality, most effects in psychology are small (Cohen’s d = .20) to medium (Cohen’s d = .50), and very rarely large (Cohen’s d = .80). Therefore, for most findings in psychology, the differences between groups are subtle, not huge.
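A sketch of computing Cohen’s d for two groups, with invented numbers: d is simply the mean difference expressed in pooled-standard-deviation units, which is what makes the small / medium / large benchmarks comparable across studies.

    import numpy as np

    def cohens_d(a, b):
        """Cohen's d: mean difference divided by the pooled standard deviation."""
        n_a, n_b = len(a), len(b)
        pooled_var = ((n_a - 1) * np.var(a, ddof=1) +
                      (n_b - 1) * np.var(b, ddof=1)) / (n_a + n_b - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(2)
    a = rng.normal(105, 15, size=200)  # hypothetical group means of 105 vs. 100
    b = rng.normal(100, 15, size=200)  # with SD 15, the true d is 5/15, about 0.33
    print(f"Cohen's d = {cohens_d(a, b):.2f}")

With 200 participants per group, an effect of this size will usually come out statistically significant even though the two distributions overlap heavily, which is exactly why significance and effect size must be reported together.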
4A.3. Statistical significance in context

Cautions about statistical significance
Statistical significance only tells you if the effect is likely to be different from “0” in the population that the sample represents. Statistical significance tells us nothing about the size of the effect. (With large enough sample sizes, even very small effects might be statistically significant.) Tests of statistical significance are not at all reliable when sample sizes are low. Because the data from samples are merely estimates of the true population parameters, we are always at risk of making an error.

Two types of statistical error
                                      | In real life: the null is TRUE | In real life: the null is FALSE
                                      | (there is no effect)           | (there is an effect)
The data suggest there is no effect   | Correct conclusion             | Type 2 error
The data suggest there is an effect   | Type 1 error                   | Correct conclusion

Type 1 error (false positive): Occurs when the researcher concludes that there is an effect, when in reality there is not one.
Type 2 error (false negative): Occurs when the researcher concludes that there is not an effect, when in reality there is one.

Statistical significance testing controls for Type 1 error
Statistical significance testing is designed to prevent researchers from making Type 1 errors. The stricter you are in trying to avoid Type 1 error, the greater your chance of making a Type 2 error (and vice versa).
▪ Adopting a 99% confidence level decreases the risk of Type 1 error (false positive), but increases the risk of Type 2 error (false negative).
▪ Adopting a 90% confidence level decreases the risk of Type 2 error (false negative), but increases the risk of Type 1 error (false positive).
▪ A 95% confidence level strikes a balance between Type 1 and Type 2 error.
Increasing sample size also helps to control for both Type 1 and Type 2 error.

Interpreting significant effects
An effect different from the null hypothesis can be detected. BUT there is always a possibility of Type 1 error due to:
o Sampling error: The sampling method could be biased.
o Low power: A low sample size could result in an inflated estimate.
o Measurement error: Unreliable measurements could inflate effects.
o p-hacking: Exploratory or biased analysis could inflate the effects.

Interpreting null effects
An effect different from the null hypothesis cannot be detected. BUT there is always a possibility of Type 2 error due to:
o Sampling error: The sampling method could be biased.
o Low power: A low sample size could mask real effects.
o Measurement error: Unreliable measurements could mask real effects.
o High variability: A lot of variability could mask real effects.

Statistical power
What about Type 2 error?
Statistical power: The probability that a study will detect an effect if that effect actually exists.
Power analysis: An analysis that estimates the sample size needed to statistically detect an effect at a given significance level and desired level of power.

Understanding statistical power
Statistical power is affected by sample size and effect size. Larger effects are easier to detect than smaller ones. Larger sample sizes make it easier to detect an effect. Smaller sample sizes increase the risk of both Type 1 and Type 2 errors.

The importance of sample size
Increasing sample size is a key way to reduce statistical errors! The simulation below shows how power climbs with sample size.
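Statistical power can be estimated by brute-force simulation. A rough sketch with assumed numbers (not from the guide): simulate many two-group studies in which a true effect of d = 0.3 exists, and count how often a t-test detects it at α = .05.

    import numpy as np
    from scipy import stats

    def estimated_power(d, n_per_group, alpha=0.05, reps=5000, seed=3):
        """Fraction of simulated studies that detect a true effect of size d."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(reps):
            a = rng.normal(d, 1.0, size=n_per_group)    # group with a true effect
            b = rng.normal(0.0, 1.0, size=n_per_group)  # comparison group
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / reps

    for n in (20, 50, 100, 200):
        print(f"n = {n:3d} per group -> power ~ {estimated_power(0.3, n):.2f}")

For a small-to-medium effect like d = 0.3, even 100 participants per group detects the effect only a bit more than half the time, which is why underpowered studies so often end in Type 2 errors.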
NHST cautions
An over-reliance on null hypothesis significance testing led to:
o Questionable data mining and data exploration practices.
o An increase in false positive results being published.
o A failure to replicate some core findings in the literature.

Psychology's replication crisis
The replication crisis in psychology (and other fields of study) arose when key scientific findings could not be independently replicated by other research teams.
o Researchers ignored important rules and assumptions around null hypothesis significance testing (NHST).
o Researchers failed to interpret their findings within the context of important caveats and limitations to NHST.

Limits to NHST
NHST is not reliable at all when sample sizes are low, and it tells one nothing about the size of the effect.

Lesson 4B. Interpreting correlations

4B.1. Understanding correlation coefficients
Correlation: A type of effect that examines the association between two variables.
Illustrative article: Mehl, M. R., Vazire, S., Holleran, S. E., & Clark, C. S. (2010). Eavesdropping on happiness: Well-being is related to having less small talk and more substantive conversations. Psychological Science, 21(4), 539-541.

Describing and visualizing association
Bar graph – A visualization of the differences between groups (often expressed as mean differences).
Scatterplot – A visualization of the correlation between variables.
Positive correlation – the two variables co-vary in the same direction:
o as one variable increases, the other variable increases
o as one variable decreases, the other variable decreases
Negative correlation – the two variables co-vary in opposite directions:
o as one variable increases, the other variable decreases
o as one variable decreases, the other variable increases

Interpreting correlation coefficients
Correlation coefficient – A numerical representation of the correlation that varies between -1 and +1.
Direction of the correlation – the sign of the correlation indicates its direction:
o A positive sign (0 to +1) indicates a positive correlation.
o A negative sign (-1 to 0) indicates a negative correlation.
Strength of the correlation – the size of the correlation indicates its strength:
o Correlations closer to |0| are weaker correlations.
o Correlations closer to |1| are stronger correlations.

Reading correlation tables
Line up the row and the column of the correlation table to find the correlation between a pair of variables.

What can we infer about these results?
Inferential statistics help researchers determine if an effect is strong enough to be detected above and beyond the assumed amount of sampling error.

The asterisk as a shorthand for significance
The asterisk (*) is commonly used as a shorthand for presenting p-values. Look at the table notes for details, but often the following shorthand is used:
o * p < .05
o ** p < .01
o *** p < .001
o No asterisk = not statistically significant
The p-values for correlation coefficients are interpreted the same way:
If the p-value is less than alpha (e.g., α = .05 and p < .05):
o The 95% confidence interval of the effect does not include “0”.
o The researcher can reject the null hypothesis.
o The results are “statistically significant”.
o The researcher can take the next steps to explore their hypothesis.
If the p-value is greater than alpha (e.g., α = .05 and p > .05):
o The 95% confidence interval of the effect does include “0”.
o The researcher cannot reject the null hypothesis.
o The results are not “statistically significant”.
o The researcher must conclude their hypothesis is not supported.

4B.2. Try it! Interpreting correlation coefficients
Use this practice article critique to assess your understanding of the illustrative article by Mehl et al. (2010).

4B.3. Effect sizes and interpretation
How big is the effect?
Effect size: considers the strength of the relationship or effect between two or more variables.

       | Cohen’s d | Correlation r
Small  | 0.20      | .10
Medium | 0.50      | .30
Large  | 0.80      | .50
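A sketch of computing and testing a correlation coefficient, and of building the kind of correlation matrix these tables report. The variable names and values are invented to loosely mirror the Mehl et al. (2010) measures.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n = 80
    substantive_talk = rng.normal(0, 1, size=n)  # invented standardized scores
    well_being = 0.3 * substantive_talk + rng.normal(0, 1, size=n)
    small_talk = -0.3 * substantive_talk + rng.normal(0, 1, size=n)

    # A single correlation with its p-value: the sign gives the direction,
    # |r| gives the strength
    r, p = stats.pearsonr(well_being, substantive_talk)
    print(f"r = {r:.2f}, p = {p:.3f}")

    # A correlation matrix: each row/column intersection is one pairwise r
    data = np.column_stack([well_being, substantive_talk, small_talk])
    print(np.round(np.corrcoef(data, rowvar=False), 2))

In a published table, each off-diagonal entry of this matrix would carry asterisks flagging its p-value.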
Are the conclusions valid?
Do women talk more than men?
o Mehl et al. (2007) found a mean difference of 546 words spoken between the 186 female and 210 male students, but concluded that “the data fail to reveal a reliable sex difference in daily word use” (p. 82).
Is talking linked to greater well-being?
o Mehl et al. (2010) found that the correlation between well-being and substantive talk was r = .28, and the correlation with small talk was r = -.33. They concluded that “higher well-being was associated with having less small talk and having more substantive conversation” (pp. 539-540).

Do the effects replicate?
Within any given study it is impossible to know if the results are “true” or due to Type 1 or Type 2 error. Too many factors influence the results:
o The size of the “true” effect, BUT ALSO:
o Sampling error (the sampling method may be biased).
o Low power (a low sample size may exaggerate the estimate).
o Measurement error (unreliable or insensitive measurements).
o p-hacking (exploratory or biased analysis of the data).
Replication. When an effect is independently replicated across multiple studies, one can feel more confident that it is a “true” effect.

Meta-analysis
A meta-analysis averages the effect size from each study (plotted as a square, ▪) to calculate an overall effect size (plotted as a diamond, ⧫).

Calculating the overall effect
▪ Sometimes different studies will be given different weights in the calculation based on sample size, quality of the design, etc.
o Smaller squares = lower weight
o Larger squares = higher weight
▪ The distribution of the effect sizes, along with the overall effect size, can be compared to a vertical line of “no effect”. This line defines the null hypothesis.
o Values to the right of the line of no effect represent studies with a positive effect size (+r, +d, etc.)
o Values to the left of the line of no effect represent studies with a negative effect size (-r, -d, etc.)

Interpreting forest plots
Example: Milek, A., Butler, E. A., Tackman, A. M., Kaplan, D. M., Raison, C. L., Sbarra, D. A., … Mehl, M. R. (2018). “Eavesdropping on Happiness” Revisited: A Pooled, Multisample Replication of the Association Between Life Satisfaction and Observed Daily Conversation Quantity and Quality. Psychological Science, 29(9), 1451–1462.
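At its core, the overall effect in a fixed-effect meta-analysis is just a weighted average. A simplified sketch with invented effect sizes; here studies are weighted by sample size, a crude stand-in for the inverse-variance weights real meta-analyses use.

    import numpy as np

    # Invented correlations (r) and sample sizes for five hypothetical studies
    effect_sizes = np.array([0.28, -0.05, 0.31, 0.18, 0.22])
    sample_sizes = np.array([79, 50, 300, 120, 184])

    weights = sample_sizes / sample_sizes.sum()   # bigger studies count more
    overall = np.average(effect_sizes, weights=weights)
    print(f"Overall (weighted) effect size: r = {overall:.2f}")

    # On a forest plot, values > 0 sit to the right of the line of no effect,
    # values < 0 to the left; the diamond marks this overall estimate.

(Strictly, correlations are usually Fisher-z transformed before averaging; that step is omitted here for simplicity.)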
Lesson 4C. Reading and understanding research articles

Library Lab (due Dec. 3) and Midterm Test 1 (Oct. 5)
Question: Do we need to know the components of the library lab for Midterm Test 1?
Answer: Midterm Test 1 will assess your knowledge of the content that overlaps with Unit 2 (i.e., scientific integrity and the plagiarism prevention tutorial) and the content that overlaps with Lesson 4C (scientific integrity, types of articles, parts of an article).

4C.1. Scientific integrity
Integrity: Strive to be accurate, truthful, and honest in one's role as researcher, teacher, or practitioner.
Transparency: A defining feature of science that ensures that researchers are held accountable for reporting their research methods accurately, truthfully, and honestly.

Forms of research misconduct
Data fabrication – A form of research misconduct in which a researcher invents data that fit the hypothesis.
Data falsification – A form of research misconduct in which a researcher influences a study's results, perhaps by deleting observations from a data set or by influencing participants to act in the hypothesized way.
Plagiarism – A form of research misconduct in which a researcher represents the ideas or words of others as one's own. (The library lab in Week 5 includes a tutorial focused on Plagiarism Prevention.)

Principles of research transparency

Pre-registration
A hypothesis is a specific, testable statement of the result that the researcher expects to observe from the data if the proposed theory / premise is true. HARKing is a type of data falsification that occurs when a researcher “Hypothesizes After the Results are Known” by claiming that an unexpected effect had been predicted all along. To prevent HARKing, researchers should publicly register their hypothesis, research design, and data analysis plan prior to collecting and analyzing any data.

APA publication standards
Manuscript preparation. Manuscripts are prepared according to the Publication Manual of the American Psychological Association (called “APA Style” as a shorthand). APA Style ensures that research articles in psychology are formatted consistently, adhere to high quality standards, and are complete in reporting all aspects of the research process.
See Section 48 and Section 49 in Chapter XI. Presenting Your Research to gain a broad overview of the different parts of an APA-style paper, including the title page, abstract, introduction, methods, results, and reference section. See the Plagiarism Prevention Tutorial to learn how to quote, paraphrase, cite, and reference sources in APA style.
Citing means to indicate the source of information when that information is used within a paper (i.e., “in-text”). You should cite the work of any individual whose ideas, theories, or research have directly influenced your work. Psychology uses the citing style of the American Psychological Association (APA).

Authors | In-text as part of the narrative | In parentheses at the end
1       | Wilson (2005)…                   | …(Wilson, 2005).
2       | Wilson and Brekke (1994)…        | …(Wilson & Brekke, 1994).
3+      | Wilson et al. (2002, p. 90) “…”  | “…” (Wilson et al., 2002, p. 90).

A reference list appears at the end of a paper. It provides information about each source cited in the paper, including the authors, the publication date, the title of the work, and where to access the source.
Author1, F. M., Author2, F. M., & Author3, F. M. (DATE). Title of the work [Type of work, if applicable]. Publication outlet. DOI, URL, or other access information.

The peer review process
Refereed journals (aka peer-reviewed journals): Journals that publish articles only after they have undergone a rigorous peer review process designed to uphold the values of science.
Peer review. Prior to publication, the manuscript is reviewed by 3 or 4 experts in the field.
o Reviewers are kept anonymous to increase honesty.
o Double-masked reviews: The reviewers do not know the identity of the author, and the author does not know the identity of the reviewers.
Publication. The end result of the rigorous peer review process is a peer-reviewed article that has gone through several checks to ensure it meets the high standards of the scientific process.
Three common types of peer-reviewed articles in psychology include:
o Empirical article. An article that publishes the purpose, methods, and results of one or more original research studies.
o Review article. An article that summarizes key theoretical trends across multiple studies on the same topic.
o Meta-analysis. An article that averages the statistical results of multiple studies on a topic to get an overall effect.

Open science principles
Table. Open science principles and their corresponding badges.
Preregistration: The preregistration badge is awarded to articles that have publicly preregistered their study rationale, hypothesis, research design, and data analysis plan prior to collecting and analyzing any data (Hong & Moran, 2019). Preregistrations are date- and time-stamped and cannot be changed once they are locked in.
Open materials: The open materials badge is awarded to articles that publish their study materials and make them freely available. This includes software programs, survey materials, questionnaires, blueprints for recreating apparatus, and experimental stimulus materials such as videos, audio clips, text descriptions, or images (Association for Psychological Science, n.d.).
Open data: The open data badge is given to articles that make one's data and the code used to analyze those data freely available to others. This allows others to verify one's findings, catch data-analytic errors, conduct additional analyses, and/or compile findings when conducting meta-analyses (Hong & Moran, 2019).
Replications: The replications badge is awarded to articles that have conducted a direct replication of another research study for the purpose of contributing to the verifiability, accuracy, and reproducibility of that original research finding (Psi Chi, n.d.). Because this badge is new, you can also look for the words “registered replication” in the title if a badge is not present.
Open access: Open access allows the content of scientific journals to be made available to the general public for free, rather than requiring a paid subscription, institutional account, or other form of preferential access (Hong & Moran, 2019).

Open science badges in practice
View examples of articles that have been awarded various open science badges:
o The Psi Chi Journal [badges explained, with links to examples]
o Psychological Science [badges explained] [view articles awarded badges]
o List of other journals that award open science badges.

4C.2. Types of peer reviewed articles
Three types of peer reviewed articles
Follow along with the lecture to identify the term that goes with each description. Then use the 'try it' exercise to test your understanding:
__________________ An article that publishes the purpose, methods, and results of one or more original research studies.
__________________ An article that summarizes key theoretical trends across multiple studies on the same topic.
__________________ An article that averages the statistical results of multiple studies on a topic to get an overall effect.

4C.3. Reading empirical journal articles

Parts of a research article
Overview. Within the field of psychology, empirical research reports are typically prepared in “APA style”. Having a consistent manuscript style for empirical reports makes it easier for journal editors to quickly adapt a manuscript into an empirical article that fits the style of that particular journal. In this video, I discuss the key components of a typical research article within psychology. See Chapter XI in your textbook for more information about APA style:
▪ Section 48. APA style
▪ Section 49. Writing a research report in APA style

Indexing information
Authors: Who wrote the article? Which organizations are they affiliated with?
▪ The authors of a paper are listed in a specific order, usually with the principal investigator (or lead researcher) listed first and collaborators and other authors/members of the team listed afterwards.
Publication year: What year was the article published?
Article title: The title of the paper communicates the main topic area of the research.
Journal: In what journal is the article published?
▪ Volume number: Most journals publish multiple editions of the journal each year. The volume number keeps track of each edition.
▪ Page numbers: Lists the pages of the journal on which the article appears.
▪ Digital object identifier (DOI): Each article is assigned a unique alphanumeric identifier, which makes it easier to track.

Abstract
Abstract: A concise summary of an article, about 120-150 words long, covering:
▪ Topic and focus
▪ Key research methods
▪ Major results
Useful for making decisions about which articles to read.

Introduction
Discusses the theoretical foundation for the research. Describes what is currently known about a topic. Offers an explanation for the existing evidence. Identifies “gaps” in the evidence and makes predictions. Discusses how the researcher will test the theory. The introduction typically ends with a clear statement of the research question and/or research hypotheses.
▪ A hypothesis is a specific, testable statement of the result that the researcher expects to observe from the data if the proposed theory is true.

Method
Describes the procedures, participants, design, and variables of a study. Discusses what special materials or apparatus were used to conduct the study.

Results
Reports the results of the study.

Discussion
Focuses on providing an interpretation of the results, a critical evaluation of the strengths and weaknesses of the method, and ideas for future research. The critical evaluation of the study often discusses the choices and trade-offs that had to be made between the different types of validity (construct validity, external validity, internal validity, and statistical validity).

References
Citing means to indicate the source of information when that information is used within a paper (i.e., “in-text”). All of the sources cited in the text must appear in a list of references at the end of the paper.

When pre-registering research proposals, researchers have to provide:
▪ Introduction
▪ Methods
▪ Predicted results
