PSY350 Test 1 Review Sheet (2024)
**Test 1 Review Sheet (9-6-2024)**

**This review sheet is a tentative list of topics that will be covered on test 1. It may be revised as we get closer to the actual test date.**

**Chapter 1**

Importance of research methods (both direct and indirect reflections of importance); include specific examples (e.g., Clark and Clark's research with children)

Importance:
- Provides a systematic way to understand behaviour
- Generates knowledge
- Improves interventions
- Direct (e.g., controlled research) and indirect reflections (societal impact)

Research examples:

**DIRECT:**
1. Clark & Clark's doll study (1940s)
   - Black children were presented with dolls of different races; most chose the white dolls
   - Evidence that segregation caused psychological harm to Black children
2. Milgram's obedience experiment (1960s)
   - Tested to what extent people would obey an authority figure by inflicting pain with a shock machine
   - Well-controlled experimental method
   - A high percentage of people were willing to administer dangerous electric shocks
3. Bandura's Bobo doll experiment (1961)
   - Aggressive behaviour was demonstrated to children to see if they would act the same
   - Children who observed aggressive behaviour were more likely to act aggressively themselves

**INDIRECT:**
1. Terman's longitudinal study of gifted children (1921)
   - Longitudinal study of intellectually gifted children, tracking them over decades to study their development
   - Children with higher IQs grew into adults with greater socioeconomic success and advanced educational achievements
   - Indirect reflections included new perspectives on how intelligence interacts with life experiences
   - Debunked stereotypes (e.g., that gifted children would necessarily face social and emotional difficulties)
2. Zimbardo's Stanford prison experiment (1971)
   - Direct reflection: immediate observation of how situational factors could lead to abusive behaviour
   - Indirect reflection: prompted a broader societal and academic discussion about ethical standards in research, which led to more stringent ethical guidelines in human subjects research
3. Harlow's attachment studies (1950s)
   - Rhesus monkeys; showed the importance of emotional bonds in infant development
   - Soft, comforting surrogate mothers were preferred over wire ones that provided food
   - Direct reflection: importance of comfort and security for attachment
   - Indirect reflection: influenced theories of human attachment (e.g., Bowlby's attachment theory) and practices surrounding infant care in both homes and hospitals

Ways of knowing: intuition, authority figures, observation
- Behavioural science relies on empirical methods to ensure accuracy
1. Intuition
   - Relying on personal feelings or gut instinct
   - Guides decisions but is prone to bias and error
2. Authority
   - Accepting information from authority figures (e.g., teachers, experts, parents); helpful but doesn't guarantee accuracy
3. Observation
   - Direct observation of phenomena allows for objective data collection, critical for scientific inquiry

Important properties of the scientific approach (e.g., objective rules, falsifiability)
1. Objective rules
   - Research follows a set of rules and methodologies that ensure consistent, unbiased data collection and interpretation
2. Falsifiability
   - Theories must be testable and capable of being proven wrong
   - A hypothesis that cannot be proven wrong is not considered scientific
3. Replicability
   - Other researchers should be able to repeat studies and get similar results, ensuring reliability

Pseudoscience: definition and examples

Pseudoscience: claims or beliefs that masquerade as scientific but lack empirical support and do not follow the scientific method
- Examples: astrology, homeopathy, some fringe psychological therapies that aren't based on evidence.
These areas may appear scientific but lack the rigor, falsifiability, and replicability required in true science.

Goals of behavioural science: describe, predict, identify causes, explain (includes the three factors involved with identifying causes, e.g., covariation of factors)

Goals of behavioural science:
1. Describe -- accurately record and categorize behaviour
2. Predict -- determine when and under what circumstances certain behaviours are likely to occur
3. Identify causes -- establish causal relationships between variables
4. Explain -- provide a comprehensive understanding of why behaviours occur

Three factors involved in identifying causes:
1. Temporal precedence -- the cause must precede the effect
2. Covariation of cause and effect -- when the cause is present, the effect should occur; when the cause is absent, the effect should not occur
3. Elimination of alternative explanations -- other potential causes should be ruled out to confirm the causal relationship

Basic and applied research (includes their qualitative difference and the ability to recognize examples of each type)
- Basic research is focused on expanding fundamental knowledge; applied research aims to solve practical problems
- Qualitative difference:
  - Basic research = theoretical concepts, such as memory processes
  - Applied research = focus on improving educational techniques based on those memory processes

**Chapter 2**

Research questions, hypotheses, and predictions (appreciate the differences among the categories and recognize examples of each)
1. Research questions
   - Definition -- a research question is the broad question a study seeks to answer. It is typically open-ended and helps guide the direction of the research
   - Example: "Does social media usage affect levels of anxiety in teenagers?"
   - Difference: a research question doesn't assume an answer but rather seeks to explore a relationship or phenomenon
2. Hypotheses
   - Definition -- a hypothesis is a more specific statement that offers a tentative answer to the research question. It reflects what the researcher expects based on previous theory or research
   - Example: "Teenagers who spend more time on social media will report higher levels of anxiety compared to those who spend less time on social media."
   - Difference: a hypothesis is a testable statement that includes an assumed relationship between variables
3. Predictions
   - Definition -- a prediction is a more precise expectation of what the outcome of a specific study will be, often based on the hypothesis
   - Example: "Teenagers who use social media for more than 3 hours per day will score 10 points higher on the anxiety scale compared to those who use it for less than 1 hour per day."
   - Difference: a prediction specifies a quantifiable outcome and provides more detail than a hypothesis

Sources of ideas: observations, practical problems, theory, and previous research
1. Observations
   - Everyday observations of behaviour or phenomena can spark research ideas
   - E.g., a researcher notices that people seem more distracted when multitasking on their phones and develops a research question around this observation
2. Practical problems
   - Real-world problems often drive research, particularly applied research
   - E.g., a school psychologist may observe a rise in anxiety among students and develop a study to explore effective interventions
3. Theory
   - Existing theories can guide new research by generating hypotheses that test specific aspects of the theory
   - E.g., a researcher may use *Bandura's social learning theory* to hypothesize how children imitate aggressive behaviour after watching violent media
4. Previous research
   - Past studies often highlight gaps or open questions, providing a basis for further investigation
   - E.g., after reading studies on cognitive decline, a researcher might explore whether certain cognitive exercises can mitigate memory loss in older adults

Types of research reports: literature reviews, theory articles, and empirical articles
1. Literature reviews
   - Summarise past research on a specific topic and identify trends, gaps, and future directions
   - Purpose: help researchers understand what is already known and what needs further exploration
   - Example: a literature review on the impact of mindfulness on anxiety might compile studies showing both positive and mixed results
2. Theory articles
   - Focus on developing, expanding, or refining theories. They do not present new empirical data but instead offer a conceptual framework
   - Purpose: to propose or clarify theoretical ideas
   - Example: an article might propose a new model of attention based on existing cognitive theories
3. Empirical articles
   - Report original research findings, including the study's methodology, data analysis, and results
   - Purpose: to present new research data collected through observation, experimentation, or surveys
   - Example: an article reporting the results of an experiment on how sleep deprivation affects memory performance

Exploring past research: PsycInfo and Google Scholar, including advanced search functions
1. PsycInfo
   - Comprehensive database of psychological literature, including journal articles, books, and dissertations
   - Advanced search functions:
     - Boolean operators (AND, OR, NOT) to refine searches
     - Filters to narrow results by publication type, year, population studied, etc.
     - Thesaurus feature to find subject terms related to specific topics
   - Example: a researcher studying social anxiety could use PsycInfo to find peer-reviewed articles by entering keywords like "social anxiety" and "adolescents"
2. Google Scholar
   - Freely accessible search engine for scholarly articles across disciplines
   - Advanced search functions:
     - Use quotation marks to search for exact phrases (e.g., "social anxiety disorder")
     - Filter results by publication year, or sort by relevance or citation count
     - Use the "Cited by" feature to find articles that cite a specific study, which helps track how ideas evolve over time
   - Example: a researcher might search for articles related to the impact of "mindfulness" on "stress reduction" and filter results to the last 5 years to get the most recent studies

**Chapter 3**

Historical context of current ethical standards

Nuremberg medical trials and the resulting Nuremberg Code

Nuremberg Medical Trials (1946-1947)
- **Held after WWII** to hold Nazi physicians accountable for atrocities committed in the name of research
- Evidence of gross human rights violations in medical experimentation led to the development of the Nuremberg Code in 1947

Nuremberg Code
- Ethical principles for conducting research, centered on voluntary participation and informed consent
- Stressed the importance of:
  - Voluntary consent of the human subject
  - The right to withdraw from the study
  - The requirement that experiments have a meaningful scientific basis and minimize risk
- Problems with the Nuremberg Code:
  - Covered only medical research
  - No oversight or review board
  - Too vague:
    - What constitutes medical research?
    - When is informed consent of the subject necessary, and what information must be provided?
    - What makes a scientist qualified?
  - Unclear how to gauge the value of research
  - No guide for weighing risks against benefits

Declaration of Helsinki (benefits over the Nuremberg Code)
- Developed by the World Medical Association
- An in-depth expansion of the Nuremberg Code, covering medical research
- **Why Helsinki > Nuremberg:**
  - Emphasizes the well-being of research participants over the interests of science
  - Introduced the concept of independent review through ethics committees
  - Addressed vulnerable populations, highlighting the need for additional safeguards when working with groups like children or prisoners

Belmont Report (1979)
- After the unethical **Tuskegee Syphilis Study**, the US government commissioned the Belmont Report to establish a framework for ethical conduct in research
- **3 key principles:**
  - Respect for persons -- recognizing individuals' autonomy and protecting those with diminished autonomy (e.g., by obtaining informed consent)
  - Beneficence -- minimizing potential harm and maximizing possible benefits
  - Justice -- ensuring fair distribution of the benefits and risks of research

APA ethics code (recognize examples of each of the principles)
- Developed by the American Psychological Association (APA) to guide psychologists in research and practice
- 5 general principles:
  - Beneficence and nonmaleficence -- prioritise the welfare of participants and avoid harm
  - Fidelity and responsibility -- establish trust with participants and adhere to professional conduct
  - Integrity -- promote honesty and transparency in research
  - Justice -- ensure all individuals have access to and benefit from research
  - Respect for people's rights and dignity -- respect the dignity and rights of individuals, including privacy and autonomy
- Example: in a study involving deception, participants must be debriefed afterward to uphold the principle of integrity and ensure that participants understand the true nature of the research

Milgram's obedience experiment: unethical components of this work (and of other unethical research that we covered)
Milgram's obedience experiment
- Unethical components:
  - Lack of fully informed consent: participants were deceived about the true nature of the study
  - Psychological harm: many participants experienced emotional distress, believing they were causing real harm to another person
  - Right to withdraw: although participants technically could leave, they were verbally pressured to continue

1. Tuskegee Syphilis Study (1932-1972)
   - Overview: the US Public Health Service conducted a study on African American men with syphilis without treating them or informing them of their condition, even after penicillin became available as a treatment
   - Unethical components:
     - Deception and lack of informed consent
     - Failure to provide proper medical care
     - Exploitation of a vulnerable population (poor African American men)
2. Radioactive nutrition experiments (1940s-1950s)
   - Overview: Harvard University and MIT fed radioactive isotopes to disabled children at the Fernald State School. The experiments aimed to study how the body absorbs nutrients, specifically calcium and iron
   - Unethical components:
     - Lack of informed consent
     - Exploitation of a vulnerable population
     - Use of deception (children were told they were part of a "science club" and given special privileges, such as extra food or trips, to encourage participation; this violated principles of autonomy)
     - Exposure to risk without clear benefit (no clear medical or therapeutic benefits to the children)
     - Lack of oversight

- Potential benefits of psychological research:
  - Educational (new skill acquisition, or treatment for a psychological or medical condition)
  - Material benefits (e.g., extra credit, money)
  - Personal satisfaction
  - Application of the research findings
    - E.g., desegregation (Clark & Clark study)
    - Modified teaching styles
    - Improved management styles
- If the potential benefits of the study outweigh the risks involved with the procedure, the research may be carried out; otherwise, alternative procedures must be found
- Identification of risk:
  - Physical harm -- research that could cause injury or discomfort (e.g., exposure to hazardous substances)
  - Psychological harm -- stress, anxiety, or emotional distress that might arise during an experiment
  - Loss of privacy or confidentiality -- sensitive data is exposed or misused
- Benefits:
  - Direct benefits -- participants might experience improved well-being (e.g., from a new therapy)
  - Societal benefits -- research findings might lead to societal improvements (e.g., educational interventions, healthcare advancements)
  - Knowledge gain -- even if participants don't directly benefit, the study may provide valuable insights
- Function (of the review board): review and approve research proposals to ensure they comply with ethical standards and protect participants
- Research needs:
  - Informed consent -- participants must be provided with information about the study, including risks and benefits, and must voluntarily agree to participate
  - Confidentiality -- researchers must take steps to protect participants' data
  - Debriefing -- if deception is used, participants must be informed of the true nature of the research afterward
- Types of IRB review:
  - Exempt review: research with minimal risk (e.g., anonymous surveys) may be exempt from full review
  - Expedited review: for research that involves only minimal risk but is not exempt (e.g., non-invasive procedures)
  - Full review: for studies that pose more than minimal risk (e.g., invasive medical procedures, studies involving vulnerable populations)
- Example: a study using deception (e.g., a confederate in a social psychology experiment) would require a **full review**, while a simple anonymous survey on voting behavior might qualify for an **exempt review**

Types of unethical behavior (e.g., fraud)
1. Fraud
   - Fabricating or falsifying data, or misrepresenting research findings
   - Example: the case of Diederik Stapel (Dutch psychologist), who fabricated data in several high-profile social psychology studies
2. Plagiarism
   - Using someone else's work or ideas without proper acknowledgement
   - Example: copying sections of another researcher's paper without citation
3. Deception without proper safeguards
   - Deception must be used judiciously and should always be followed by debriefing
   - Example: failing to debrief participants after using deception would violate ethical principles of transparency and respect

**Chapter 4**

Construct validity, internal validity, external validity, including the ability to identify strong and weak examples of each

**Construct validity**
- Definition: the extent to which the operational definitions of variables accurately capture the concepts they are intended to measure
- Strong example: a well-constructed intelligence test that accurately measures cognitive ability (IQ), and not unrelated factors like motivation or cultural bias, has high construct validity
- Weak example: a personality test that claims to measure extroversion but actually captures unrelated traits, such as assertiveness or anxiety, lacks construct validity
- Ensures that the variables accurately reflect the concepts they are meant to measure

**Internal validity**
- Definition: the extent to which a study can establish a cause-and-effect relationship between the independent and dependent variables by ruling out confounding variables
- Strong example: in a tightly controlled experiment where random assignment is used and other factors are held constant, it can be confidently stated that changes in the independent variable (e.g., a treatment or intervention) caused the changes in the dependent variable (e.g., behaviour or outcomes)
- Weak example: in an observational study where multiple variables (e.g., socioeconomic status, education) could influence the outcome, it's difficult to conclude that one specific variable caused the observed effect
- Allows researchers to determine cause-and-effect relationships by controlling for confounding variables

**External validity**
- Definition: the extent to which the results of a study can be generalized to other settings, populations, or time periods
- Strong example: a study conducted on a large, diverse sample that produces similar results in different contexts and across different age groups has high external validity
- Weak example: a study conducted on a small, homogeneous group (e.g., only college students from one university) may not generalize well to broader populations, leading to low external validity

Operational definitions of variables, including the ability to recognize good examples and the purpose of these definitions
- Definition: operational definitions specify how variables are measured or manipulated in a study.
This clarity is essential for replicating studies and ensuring that abstract concepts are properly measured.

Purpose:
- Provides precision and clarity about how variables are defined
- Allows for replication of research by other scientists
- Facilitates communication between researchers by standardizing terms

Good examples:
- Operational definition of **stress**: stress can be defined as the participant's self-reported score on a validated stress questionnaire, such as the Perceived Stress Scale (PSS)
- Operational definition of **aggression**: aggression might be measured by counting the number of times a participant engages in hostile behaviours (e.g., verbal insults, physical altercations) during an observation period

Weak example:
- Defining "intelligence" simply as "how smart someone is," without specifying how it will be measured (e.g., through IQ tests or academic achievement), would be a poor operational definition, as it lacks clarity and precision

Relationships between variables: four possibilities
1. Positive linear -- as one variable increases, the other increases
2. Negative linear -- as one variable increases, the other decreases
3. Curvilinear -- not a straight line; the relationship changes direction at some point
   - E.g., the relationship between stress and performance: moderate stress can improve performance, but too much or too little stress leads to poorer performance
4. No relationship

Experimental methods: importance of the IV, DV, and controls
1. Independent variable (IV)
   - The variable that is **manipulated**
   - E.g., the amount of sleep in a study of sleep's effect on memory performance
2. Dependent variable (DV)
   - The variable that is **measured**
   - E.g., memory test scores would be the dependent variable, as they are expected to change based on the amount of sleep
3. Importance of controls
   - Control variables: those that are kept constant across experimental groups to ensure that the effect on the dependent variable is due to the independent variable and not to other factors
   - Example: in a study investigating the effect of caffeine on attention, participants' diet, exercise, and sleep should be controlled to prevent these factors from influencing the results
   - Control group: a group that does not receive the experimental treatment, used as a baseline to compare the effects of the independent variable

**Chapter 5**

Reliability of measures: be able to contrast it with validity; includes types (e.g., test-retest) and the relationship with measurement accuracy (you do not need to know the statistical tests mentioned in the text)

Reliability refers to the **consistency** or **stability** of a measure. If the same measurement is repeated under identical conditions, a reliable measure will yield the same results.

**Types of reliability:**
1. **Test-retest reliability**
   - **Definition**: consistency of a measure **over time**. If you give the same test to the same people on two different occasions, a high correlation between the two sets of scores indicates high test-retest reliability.
   - **Example**: administering an intelligence test to a group of students in January and then again in June to see if their scores remain consistent.
2. **Internal consistency reliability**
   - **Definition**: consistency of results across items **within a single test**. This type of reliability is relevant when multiple items are used to measure the same construct.
   - **Example**: a personality test where several questions assess "extroversion" should show consistency across those questions (i.e., people who agree with one extroversion-related statement should generally agree with the others).
3. **Inter-rater reliability**
   - **Definition**: the level of agreement between **two or more observers** (raters) who independently assess or rate the same phenomenon.
   - **Example**: two judges rating a gymnastics performance. High inter-rater reliability is achieved when both judges give similar scores to the same performance.

**Relationship with measurement accuracy:**
- **Reliability** is necessary but **not sufficient** for accuracy (or validity). A measure can be reliable (consistent) but still be **invalid** (not accurately measuring what it's supposed to). For example, a bathroom scale may reliably give you the same weight every time (reliable), but if it's incorrectly calibrated, it won't show your actual weight (not valid).

Construct validity types (e.g., face validity) (again, the actual tests used to compute these will not be tested; understand the conceptual ideas behind each type)

Validity refers to the extent to which a test measures what it claims to measure. While **reliability is about consistency**, **validity** is about **accuracy**.

**Types of construct validity:**
1. **Face validity**
   - **Definition**: the extent to which a measure **appears** on the surface to measure what it's supposed to. This is a subjective judgment and not based on empirical testing.
   - **Example**: a math test that includes questions about solving equations seems to have face validity for measuring mathematical ability.
2. **Content validity**
   - **Definition**: the extent to which the measure covers the **full range** of the construct it is intended to assess. It looks at whether the measure includes all the relevant components of the concept.
   - **Example**: a job knowledge test for an accountant should cover all relevant areas of accounting (e.g., tax law, auditing, and financial statements) to ensure content validity.
3. **Criterion validity**
   - **Definition**: the extent to which a measure is related to an **outcome**. There are two types of criterion validity:
     - **Concurrent validity**: how well a new measure correlates with an established measure taken at the same time.
     - **Predictive validity**: how well a measure predicts future outcomes.
   - **Example**: a college entrance exam (like the SAT) has predictive validity if it can accurately predict a student's success in college.
4. **Convergent and discriminant validity**
   - **Convergent validity**: the extent to which a measure is related to other measures that it theoretically should be related to.
   - **Discriminant validity**: the extent to which a measure is **not related** to other variables it should not be related to.
   - **Example**: a depression scale should correlate with other measures of depression (convergent) and not correlate with measures of unrelated constructs like physical health (discriminant).

Reactivity of measures: what it is, how it's problematic, and ways to avoid it
- **Definition**: reactivity refers to the phenomenon in which participants **alter their behavior** simply because they know they are being observed or measured.

**Why it's problematic:**
- Reactivity can **bias** results, leading to invalid conclusions. Participants may perform better or worse than they would naturally, or they might alter their responses to appear more socially desirable.

**Ways to avoid reactivity:**
1. **Use of unobtrusive measures**: measures that don't make it obvious that participants are being observed.
   - **Example**: observing people's behavior through hidden cameras in public spaces where they wouldn't expect to be measured.
2. **Disguised observations**: participants are not aware that their behavior is being observed for research purposes.
   - **Example**: using a "one-way mirror," or the researcher pretending to be part of the setting (e.g., a teacher observing students' natural behavior).

Variables and measurement scales; types of data (e.g., nominal, ratio) and examples of each

**Types of data:**
1. **Nominal scale**
   - **Definition**: categorizes data into discrete, non-ordered categories or groups.
   - **Example**: gender (male/female), types of pets (dogs, cats, birds).
2. **Ordinal scale**
   - **Definition**: data is categorized into **ordered** categories, but the differences between the categories are not meaningful or consistent.
   - **Example**: rankings in a race (1st, 2nd, 3rd), socioeconomic status (low, middle, high).
3. **Interval scale**
   - **Definition**: data is ordered, and the intervals between values are consistent, but there is **no true zero point**.
   - **Example**: temperature in Celsius or Fahrenheit. The difference between 10°C and 20°C is the same as between 20°C and 30°C, but 0°C does not mean "no temperature."
4. **Ratio scale**
   - **Definition**: like the interval scale, but with a **true zero point**, meaning zero represents the absence of the variable.
   - **Example**: height, weight, reaction time, or number of correct answers on a test.
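A practical way to remember the four scales is by which summary statistics are meaningful at each level. The following is a minimal sketch (not from the course materials, using made-up data) illustrating this in Python:

```python
# Which statistics make sense at each measurement scale (hypothetical data).
from statistics import mean, median, mode

# Nominal: unordered categories -- only counts and the mode are meaningful.
pets = ["dog", "cat", "dog", "bird", "dog"]
print(mode(pets))  # most frequent category: "dog"

# Ordinal: ordered categories -- the median is meaningful, the mean is not,
# because the gaps between ranks aren't consistent.
ses_codes = {"low": 1, "middle": 2, "high": 3}
sample = [ses_codes[s] for s in ["low", "high", "middle", "middle", "low"]]
print(median(sample))  # middle value of the ordered codes

# Interval: equal intervals but no true zero -- differences are meaningful,
# ratios are not (20 degrees C is NOT "twice as hot" as 10 degrees C).
temps_c = [10.0, 20.0, 30.0]
print(temps_c[1] - temps_c[0])  # a 10-degree difference is interpretable

# Ratio: true zero point -- ratios are meaningful.
reaction_times_ms = [250.0, 500.0]
print(reaction_times_ms[1] / reaction_times_ms[0])  # twice as slow
```

The design point: each scale supports the operations of the scales below it plus one more (nominal → counting, ordinal → ordering, interval → differences, ratio → ratios), which is why computing a mean of ordinal codes or a ratio of Celsius temperatures can mislead.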