PSY2041 Summarised Notes PDF
Summary
This document summarises key concepts from twelve weeks of an undergraduate course in psychological testing: the definition, characteristics, and history of psychological tests; reliability and validity; test development; quantitative methods; and intelligence, educational, organisational, clinical, and neuropsychological testing.
WEEK 1 - Introduction to Psychological Testing

Lecture Learning Outcomes:
1. Define 'psychological test' and explain the defining characteristics of a psychological test.
2. Explain the history of psychological testing and how psychological tests have developed over time.
3. Identify the key differences between psychological testing and psychological assessment.
4. Discuss the ways in which culture, ethnicity, and differences in ability impact upon psychological testing and assessment.

1. Definition of 'Psychological Test' and Its Defining Characteristics
Definition: A psychological test is an objective, standardised procedure used to sample and quantify human behaviour, allowing inferences about psychological constructs (e.g., intelligence, personality) through standardised stimuli, administration, and scoring.
Key Characteristics:
Behaviour sampling: Tests measure observable actions that are indicative of underlying psychological traits or states. For example, a personality test might ask individuals to respond to scenarios that reveal their levels of introversion or extraversion.
Objective procedures: Standardisation ensures consistent administration across individuals (e.g., test questions are read in exactly the same way to every participant).
Quantitative results: Test results are expressed as numerical scores. For example, standardised exams like the VCE or ATAR provide quantitative measurements of performance.
Objective reference points: Scores are compared against established norms or criteria (e.g., IQ tests use norms to compare an individual's score to a population average).
Psychometric properties: Two essential properties of a good test are:
○ Reliability: The test must produce consistent results over time.
○ Validity: The test must measure what it claims to measure.

2. History of Psychological Testing
Ancient China (Han Dynasty): Early test batteries were developed to assess individuals' suitability for public service roles.
19th-century Britain: Class distinctions were associated with inherited intelligence, with the belief that individuals from higher social classes were more intelligent.
France (Alfred Binet, early 20th century): Developed the first modern intelligence test (the 1905 Binet-Simon scale) to identify children needing special education. This work introduced the concept of "mental age", on which the IQ ratio was later built.
World Wars I & II: Group tests such as the Army Alpha and Beta were developed for military purposes, marking the rise of large-scale psychological testing.
Post-WWII developments:
○ Wechsler's scales expanded intelligence testing beyond verbal abilities to include non-verbal cognitive skills such as problem-solving and spatial reasoning.
○ Personality testing also became prominent (e.g., the Minnesota Multiphasic Personality Inventory, which is widely used to assess psychological conditions and personality traits).

3. Key Differences Between Psychological Testing and Psychological Assessment
Testing: Focuses on quantifying ability or behaviour through numerical scores or categorisation.
Assessment: A broader process that uses multiple tools (e.g., tests, interviews, behavioural observations) to answer specific questions or solve problems.
Processes:
○ Testing emphasises scoring based on rules and standardised administration.
○ Assessment explores the "why" behind scores, using various methods to gain a deeper understanding.
Evaluator roles: In testing, the evaluator's role is to administer the test and ensure standardisation.
In assessment, the evaluator interprets the results and provides conclusions or recommendations.
Outcomes: Testing provides scores that quantify performance. Assessment offers insights, recommendations, and a more comprehensive understanding of the individual.

4. Impact of Culture, Ethnicity, and Ability on Testing and Assessment
Biases in Testing: Psychological tests have historically been linked with eugenics and discriminatory practices. For example, intelligence tests in the US have been culturally biased, disadvantaging minority groups.
Equity Considerations: Tests must be administered fairly, which sometimes requires accommodations for individuals with different abilities (e.g., using Braille for visually impaired participants).
Cultural Sensitivity: Most psychological tests are developed in WEIRD (Western, Educated, Industrialised, Rich, Democratic) societies, which may limit their validity across diverse cultural contexts. It is important to assess whether a test is appropriate for individuals from different cultural or ethnic groups; cultural relevance involves more than mere translation.

WEEK 2 - Reliability and Validity

Lecture Learning Outcomes:
1. Define 'reliability' as a psychometric property and articulate the importance of test reliability in psychological testing and assessment.
2. List, describe, and differentiate the different ways that a test's reliability can be assessed.
3. Define 'validity' as a psychometric property, describe its importance in psychological testing and assessment, and explain how it is related to test reliability.
4. List, describe, and differentiate the different kinds of test validity.

1. Reliability as a Psychometric Property
Definition: Reliability refers to how consistently a test measures something. A reliable test will give similar results across different times, raters, or versions of the test.
Importance: Without reliability, test results cannot be trusted to measure the intended construct consistently. Reliability ensures that the test measures attributes like intelligence, personality traits, or cognitive abilities in a stable and reproducible way.

2. Ways to Assess Test Reliability
Test-Retest Reliability: Measures consistency over time by administering the same test to the same group at different points in time. If the scores are similar, the test has high test-retest reliability.
Interrater Reliability: Assesses whether different observers or raters provide consistent scores for the same test. This is essential for tests involving subjective judgement, such as essay grading or behavioural observations.
Internal Consistency: Evaluates whether different items within a test that are supposed to assess the same construct produce similar results. Split-half reliability and Cronbach's alpha are common methods (a worked sketch of Cronbach's alpha follows below).
Equivalent Forms Reliability: Determines whether different versions of the same test (e.g., different question sets) yield consistent results. This is important for ensuring test equivalence, such as in alternate versions of standardised tests.
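To make internal consistency concrete, here is a minimal Python sketch of Cronbach's alpha. The response matrix is invented purely for illustration; only numpy is assumed.

```python
import numpy as np

# Hypothetical responses: 6 test-takers x 4 items, each scored 1-5.
# The data are invented purely to illustrate the calculation.
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
])

k = items.shape[1]                         # number of items
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 indicate high internal consistency
```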
3. Validity as a Psychometric Property
Definition: Validity assesses whether a test measures what it is supposed to measure. It ensures that test results are meaningful and can be used to make accurate predictions or evaluations.
Relation to Reliability: A test must be reliable to be valid, because consistent measurement is a prerequisite for validity. However, a reliable test is not necessarily valid: it could consistently measure the wrong construct.

4. Kinds of Test Validity
Face Validity: The test appears, at face value, to measure what it claims to. Though it is the weakest form of validity, it helps in gaining the confidence of test-takers.
Content Validity: Assesses whether the test covers all aspects of the construct it is supposed to measure. For example, a test measuring generalised anxiety must address the range of symptoms associated with the disorder.
Predictive Validity: Evaluates whether the test's scores can predict future performance or behaviour. For instance, IQ tests or driving tests should correlate with future success in related areas.
Construct Validity: Refers to whether the test aligns with theoretical assumptions about the construct being measured. It is typically evaluated through:
○ Convergent evidence: Correlation with related constructs.
○ Discriminant evidence: Lack of correlation with unrelated constructs.

WEEK 3 - Test Development

Lecture Objectives:
1. Define and understand test conceptualisation, construction, tryout, item analysis, and test revision.
2. Compare and contrast the different types of test item formats.
3. Articulate the criteria that assess whether an item is a 'good item'.
4. Understand different types of quantitative and qualitative item analysis techniques.
5. Recognise how the goals of a test will determine the ideal item analysis criteria.

1. Test Conceptualisation, Construction, Tryout, Item Analysis, and Test Revision
Test Conceptualisation: Identifying the need for the test, defining the psychological construct, reviewing existing literature and tests, and articulating the purpose, population, and format of the test.
Test Construction: Developing a pool of items that represent all relevant aspects of the construct. Items undergo expert review for content validity.
Test Tryout: The test is administered to a representative sample (typically 100+ participants). Responses are analysed to identify the most effective items and improve the test.
Item Analysis: Evaluates item difficulty, dimensionality, reliability, validity, and discrimination. The goal is to refine the test by keeping good items and discarding poor ones.
Test Revision: Revisits the earlier steps to refine the test based on item analysis, ensuring reliability, validity, and relevance. Existing tests also require periodic revision to account for changing norms and interpretations.

2. Test Item Formats
Likert Scale: Provides ordinal-level data (e.g., from "strongly disagree" to "strongly agree"). The number of response options (e.g., 5 vs. 7) and their balance (odd vs. even) need careful consideration.
Binary Choice Scale: Involves true/false or yes/no responses. It is quick to administer but limits the richness of responses and is prone to guessing.
Paired Comparisons: Respondents choose between two options, each of which has a pre-assigned value based on judges' assessments. It requires careful construction to ensure fairness and clarity.
Multiple-Choice Questions (MCQs): Efficient for covering large amounts of content and easy to score. However, they may restrict creativity and require careful item design to avoid ambiguity.
Essay: Allows for a deeper assessment of complex knowledge, but is time-consuming to score and can suffer from interrater reliability issues if not standardised.

3. Criteria for Assessing a 'Good Item'
Item Difficulty: An ideal item has a moderate difficulty level, typically with a p-value (proportion answering correctly) between 0.3 and 0.8.
Item Discrimination: A good item effectively differentiates between high and low scorers, often measured using the point-biserial correlation (see the sketch below).
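Here is a minimal sketch of the two core item statistics, difficulty and discrimination, using invented 0/1 item responses and total scores; scipy's point-biserial helper is assumed to be available.

```python
import numpy as np
from scipy.stats import pointbiserialr

# Hypothetical 0/1 (incorrect/correct) responses for one item across
# 8 test-takers, plus each test-taker's total test score. Invented data.
item = np.array([1, 1, 0, 1, 0, 1, 0, 1])
total_score = np.array([38, 35, 20, 31, 18, 29, 22, 33])

# Item difficulty: proportion of test-takers answering correctly.
# A "good" item typically falls between roughly 0.3 and 0.8.
p = item.mean()

# Item discrimination: point-biserial correlation between the item and
# the total score; higher values mean the item separates high scorers
# from low scorers.
r_pb, _ = pointbiserialr(item, total_score)

print(f"difficulty p = {p:.2f}, discrimination r_pb = {r_pb:.2f}")
```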
Reliability and Validity: Items must consistently measure the construct and be valid indicators of the test's purpose.
Item Distribution: Items with skewed distributions should be avoided, as they provide little variability and correlate weakly with other items.

4. Item Analysis Techniques
Quantitative Techniques: Measuring item difficulty, reliability (e.g., Cronbach's alpha for internal consistency), item discrimination (e.g., point-biserial correlation), and dimensionality (e.g., factor analysis).
Qualitative Techniques: Gathering feedback from experts or the target population on the clarity, relevance, and appropriateness of the items.

5. Impact of Test Goals on Item Analysis Criteria
The criteria used to analyse items should align with the test's goals:
If the test is broad (e.g., measures a general construct like intelligence), factor analysis may be crucial to ensure all dimensions are represented.
If the test aims to predict specific outcomes, criterion-related validity becomes more important.

WEEK 4 - Quantitative Methods I

Lecture Learning Outcomes:
1. Define what is meant by 'sample' and 'population' in the context of statistics, and explain how the two concepts relate to each other.
2. Explain what is meant by a probability distribution, and interpret percentile norms in a normal distribution.
3. Define the different descriptive statistics that measure a distribution's central tendency: the mean, the median, and the mode.
4. Define the different descriptive statistics that measure the spread of a distribution: variance, standard deviation, range, and interquartile range.
5. Define the four different types of measurement scales (nominal, ordinal, interval, ratio) and identify the types of data that belong to each.

1. Sample and Population
Population: The entire group you are interested in studying (e.g., all humans, all PSY2041 students).
Sample: A subset drawn from the population (e.g., 30 randomly selected PSY2041 students).
Relation: Samples are used to make inferences about the larger population, as studying the entire population is usually impractical.

2. Probability Distribution and Percentiles
Probability Distribution: Describes how likely different outcomes are. Probabilities range between 0 and 1, where 0 means no chance and 1 means certainty.
Normal Distribution: A bell-shaped curve where most values cluster around the mean. It is fully described by its mean and standard deviation.
Percentiles: In a normal distribution, approximately 68% of values fall within one standard deviation of the mean, and 95% fall within two standard deviations (see the worked example below).
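As a worked example of percentile norms, the sketch below uses the conventional IQ scaling (mean 100, SD 15) with scipy's normal distribution object.

```python
from scipy.stats import norm

# Worked example under the usual IQ convention (mean 100, SD 15).
iq = norm(loc=100, scale=15)

# Percentile of a score of 130 (two SDs above the mean):
print(f"IQ 130 is at the {iq.cdf(130):.1%} percentile")  # ~97.7%

# Proportion of scores within one and two SDs of the mean:
print(f"within 1 SD: {iq.cdf(115) - iq.cdf(85):.1%}")    # ~68.3%
print(f"within 2 SD: {iq.cdf(130) - iq.cdf(70):.1%}")    # ~95.4%
```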
3. Measures of Central Tendency
Mean: The arithmetic average of a set of numbers.
Median: The middle value, with 50% of observations above and 50% below.
Mode: The value that occurs most frequently.

4. Measures of Spread
Variance: The average of the squared differences from the mean.
Standard Deviation: The square root of the variance, representing the average distance of observations from the mean.
Range: The difference between the highest and lowest values.
Interquartile Range (IQR): The difference between the first and third quartiles (the 25th and 75th percentiles).

5. Measurement Scales
Nominal: Categories without a specific order (e.g., types of transport: car, bus, bike).
Ordinal: Categories with a meaningful order but no equal intervals (e.g., survey responses: bad, good, excellent).
Interval: Numerical values with equal intervals but no true zero (e.g., temperature in Celsius).
Ratio: Like interval scales, but with a meaningful zero point, allowing for meaningful ratios (e.g., height, weight).

WEEK 5 - Quantitative Methods II

Lecture Learning Outcomes:
1. Explain Null Hypothesis Significance Testing as a conceptual framework for inferential statistics.
2. Describe the one-sample z-test at a conceptual level and identify the kinds of research questions that call for a one-sample z-test.
3. Describe the one-sample t-test at a conceptual level and identify the kinds of research questions that call for a one-sample t-test.
4. Describe the paired-samples t-test at a conceptual level and identify the kinds of research questions that call for a paired-samples t-test.
5. Describe the independent-samples t-test at a conceptual level and identify the kinds of research questions that call for an independent-samples t-test.
6. Describe the Pearson correlation analysis at a conceptual level and identify the kinds of research questions that call for a Pearson correlation analysis.

1. Null Hypothesis Significance Testing (NHST)
Concept: NHST starts with the assumption of no effect (the null hypothesis) and tests how surprising the observed data are under this assumption. If the data are very surprising (low p-value), we reject the null hypothesis and conclude there is evidence for the alternative hypothesis.
The p-value quantifies the probability of seeing data as extreme as, or more extreme than, those observed, assuming the null hypothesis is true. A p-value < 0.05 is typically considered statistically significant.

2. One-Sample Z-Test
Purpose: Used when comparing the mean of a sample to a known population mean, and the population standard deviation is known.
Conditions:
○ You have data on an interval or ratio scale.
○ The data are normally distributed.
○ You know the population's standard deviation.
Example question: "Is the mean IQ of PSY2041 students higher than the general population's IQ?"

3. One-Sample T-Test
Purpose: Used when comparing the mean of a sample to a known population mean, but the population standard deviation is unknown.
Conditions:
○ You have data on an interval or ratio scale.
○ The data are normally distributed.
○ You don't know the population's standard deviation.
Example question: "Do students score above chance on a multiple-choice quiz?" (a minimal sketch of this test follows at the end of this week's section).

4. Paired-Samples T-Test
Purpose: Used when you measure the same sample twice (e.g., before and after treatment) and want to compare the means of the two measurements.
Conditions:
○ You have paired data (two measurements on the same subjects).
○ The differences between the paired measurements are normally distributed.
Example question: "Is there a difference in depression symptoms before and after CBT treatment?"

5. Independent-Samples T-Test
Purpose: Used when comparing the means of two different samples (e.g., treatment vs. control groups).
Conditions:
○ You have data from two independent samples.
○ The data are measured on an interval or ratio scale.
○ Both samples are normally distributed.
Example question: "Do students who receive a new drug to increase extraversion score higher than those who receive a placebo?"

6. Pearson Correlation Analysis
Purpose: Measures the strength and direction of the relationship between two continuous variables.
Conditions:
○ You have two variables within the same sample.
○ The data are measured on an interval or ratio scale.
○ Both variables are normally distributed.
Example question: "Is there a correlation between anxiety and depression scores?"
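To illustrate, here is a minimal one-sample t-test in the spirit of the example question above. The scores and the chance level of 25 (percentage correct expected by guessing on a four-option quiz) are invented; scipy's one-sample t-test with a one-sided alternative is assumed (available in recent scipy versions).

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical quiz scores (percent correct) for 10 students; guessing on a
# four-option multiple-choice quiz would yield 25 on average. Invented data.
scores = np.array([31, 28, 35, 24, 30, 33, 27, 29, 36, 26])

# One-sample t-test: is the sample mean above the chance level of 25?
# A t-test (rather than a z-test) is used because the population SD is unknown.
t_stat, p_value = ttest_1samp(scores, popmean=25, alternative='greater')
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p < .05 would lead us to reject the null hypothesis of chance-level performance.
```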
WEEK 6 - Intelligence Testing

Lecture Learning Objectives:
1. Understand the various theories of intelligence over the history of intelligence research.
2. Compare and contrast the various theories and approaches of intelligence and intelligence assessment.
3. Understand the evidence for the genetic and environmental basis of intelligence.
4. Know the various types of intelligence tests (Stanford-Binet vs. WAIS) and their characteristics and features.
5. Describe the WAIS indices, subtests, and interpretation.

1. Theories of Intelligence
Spearman (1904): Proposed a single general factor (g) underlying all cognitive abilities, with specific factors (s) for different tasks.
Binet (1916): Defined intelligence as the ability to direct, adapt, and self-criticise, emphasising reasoning, judgement, memory, and abstraction.
Thurstone (1921): Proposed seven primary abilities (e.g., verbal comprehension, numerical ability), rejecting a single g factor.
Cattell (1963): Differentiated between fluid intelligence (problem-solving and adaptability) and crystallised intelligence (knowledge from experience).
Sternberg (1985): Developed the Triarchic Theory, which includes analytical, creative, and practical intelligence.

2. Comparing Theories of Intelligence
Spearman's g suggests a single underlying mental ability, while Thurstone emphasises multiple distinct abilities. Cattell distinguishes between fluid intelligence (innate problem-solving) and crystallised intelligence (acquired knowledge), while Sternberg includes practical and creative skills in addition to analytical abilities.

3. Genetic and Environmental Influences on Intelligence
Genetic Factors: The heritability of intelligence increases with age, from about 20% in infancy to about 80% in adulthood (Plomin & Deary, 2015).
Environmental Factors: Diet, education, socioeconomic status, and environmental stimulation all influence intelligence. Environmental factors can moderate genetic influences.

4. Types of Intelligence Tests
Stanford-Binet (SB):
○ Founded on Binet's early work, updated in the Stanford-Binet 5 (2003), and based on the Cattell-Horn-Carroll model.
○ Uses a mean IQ of 100 and SD of 15, with both verbal and non-verbal components.
Wechsler Adult Intelligence Scale (WAIS):
○ Introduced in 1939 and frequently revised; the WAIS-IV (2008) includes 10 core and 5 supplemental subtests.
○ Designed to measure full-scale IQ with indices for verbal comprehension, perceptual reasoning, working memory, and processing speed.

5. WAIS Indices, Subtests, and Interpretation
Verbal Comprehension Index (VCI): Measures verbal reasoning and acquired knowledge.
Perceptual Reasoning Index (PRI): Assesses visual-motor coordination, non-verbal reasoning, and fluid reasoning.
Working Memory Index (WMI): Evaluates the ability to temporarily retain and manipulate information.
Processing Speed Index (PSI): Assesses visual perception, organisation, and the speed of visual processing.

WEEK 7 - Educational Testing

Lecture Learning Objectives:
1. Understand the role of testing and assessment in education.
2. Be familiar with the different types of educational tests and the purposes they serve.
3. Be familiar with other tools of assessment in education and vocation.
4. Be familiar with aspects of a psychoeducational assessment report.

1. Role of Testing and Assessment in Education
Testing in Education: Assesses how much learning has occurred and the extent of mastery. It compares a student's knowledge to peers or benchmarks to identify learning difficulties and suggest potential interventions.
Key Roles:
○ Identifying prerequisites for learning.
○ Diagnosing learning issues.
○ Evaluating the effectiveness of interventions.

2. Types of Educational Tests and Their Purposes
Achievement Tests: Measure how much a student has learned in a defined setting (e.g., specific school subjects).
○ Formative: Provide feedback for learning improvements.
○ Summative: Assess overall learning at the end of a period.
Aptitude Tests: Measure potential to learn or adapt, often predicting future success. These tests assess general or specific aptitudes like problem-solving, language, or mechanical skills.
Group vs. Individual Tests:
○ Group Tests: Cost-effective, easy to score, and suitable for assessing many students simultaneously.
○ Individual Tests: Provide deeper insights, allowing flexibility in responses and personalised interpretation.

3. Other Tools of Assessment in Education and Vocation
Performance Assessments: Assess practical knowledge and skills relevant to real-world tasks (e.g., portfolios).
Authentic Assessments: Evaluate tasks with real-world relevance, such as writing samples or role-play.
Checklists and Rating Scales: Used to observe specific behaviours or attributes. Examples include the Achenbach Child Behaviour Checklist (CBCL) and the Vineland Adaptive Behavior Scales.

4. Aspects of a Psychoeducational Assessment Report
Referral Questions: Identify specific concerns, such as difficulties concentrating or learning.
Background Information: Includes developmental, educational, and family history, and previous interventions.
Assessment Results: Summarise test scores and observations, followed by interpretations based on the student's context.
Recommendations: Provide individualised suggestions, such as further assessments or classroom accommodations.

WEEK 8 - Quantitative Methods III

Lecture Learning Outcomes:
1. Explain the concept of an effect size in inferential statistics, and identify the appropriate effect size to report for t-test and correlation analyses.
2. Define what is meant by 'degrees of freedom' in inferential statistics, and identify the difference between a t distribution and a normal distribution.
3. Define covariance and explain how covariance is related to the calculation of the Pearson correlation.
4. Explain what is meant by the 'assumptions' of a statistical test, and list the assumptions of t-tests and correlation analyses.

1. Effect Size in Inferential Statistics
Concept: Effect size quantifies the magnitude of a result in the population, beyond mere statistical significance. Even a small effect can be statistically significant with a large enough sample size.
Reporting Effect Size:
○ For t-tests, the appropriate effect size is Cohen's d, which expresses the difference between two means in standard deviation units.
○ For correlation analyses, the Pearson correlation coefficient (r) itself serves as the effect size.

2. Degrees of Freedom and the t Distribution
Degrees of Freedom (df): The number of independent data points that are free to vary. It is calculated differently depending on the type of test (e.g., df = N - 1 for one-sample t-tests, df = N - 2 for independent-samples t-tests).
t Distribution vs. Normal Distribution: Both are bell-shaped, but the t distribution has "fatter tails", meaning it accounts for more variability, especially with smaller sample sizes. The shape of the t distribution depends on the degrees of freedom. (The sketch below illustrates both points.)
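The sketch below ties the two ideas together under invented one-sample data: it computes Cohen's d and the degrees of freedom, then contrasts the p-value from the t distribution with the one a normal distribution would give, showing the effect of the fatter tails at small N.

```python
import numpy as np
from scipy.stats import t, norm

# Hypothetical one-sample data (invented) compared against a fixed value.
x = np.array([104, 110, 98, 115, 107, 101, 112, 95])
mu0 = 100                                  # comparison value

d = (x.mean() - mu0) / x.std(ddof=1)       # Cohen's d for a one-sample design
df = len(x) - 1                            # degrees of freedom: N - 1
t_stat = d * np.sqrt(len(x))               # equivalent to (mean - mu0) / SE

# Two-tailed p-values from the t distribution vs. the normal distribution:
print(f"d = {d:.2f}, df = {df}, t = {t_stat:.2f}")
print(f"p (t, df={df}) = {2 * t.sf(abs(t_stat), df):.4f}")
print(f"p (normal)    = {2 * norm.sf(abs(t_stat)):.4f}")  # smaller: thinner tails
```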
3. Covariance and Pearson Correlation
Covariance: Measures how two variables change together. If both variables tend to increase together, the covariance is positive; if one increases while the other decreases, it is negative.
Pearson Correlation: A standardised measure derived from covariance by dividing it by the product of the standard deviations of the two variables. It ranges from -1 to 1, where 1 represents a perfect positive correlation and -1 represents a perfect negative correlation (a worked sketch follows at the end of this week's notes).

4. Assumptions of Statistical Tests
Assumptions of t-tests:
○ Data should be on an interval or ratio scale.
○ The data should follow a normal distribution.
○ For independent-samples t-tests, homogeneity of variance (equal variance across groups) is assumed.
Assumptions of Pearson Correlation:
○ Data for both variables should be measured on an interval or ratio scale.
○ Both variables should be normally distributed.
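A minimal sketch of the covariance-to-correlation relationship, using invented anxiety and depression scores: dividing the covariance by the product of the two standard deviations reproduces scipy's Pearson r.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented anxiety and depression scores for the same 8 people.
anxiety    = np.array([12, 18, 9, 22, 15, 11, 20, 16])
depression = np.array([10, 17, 8, 25, 14, 9, 21, 15])

# Covariance: average product of deviations from each variable's mean.
cov = np.cov(anxiety, depression, ddof=1)[0, 1]

# Pearson r standardises the covariance by the two standard deviations,
# constraining it to the range [-1, 1].
r_manual = cov / (anxiety.std(ddof=1) * depression.std(ddof=1))
r_scipy, _ = pearsonr(anxiety, depression)

print(f"cov = {cov:.2f}, r (manual) = {r_manual:.3f}, r (scipy) = {r_scipy:.3f}")
```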
WEEK 9 - Quantitative Methods IV

Lecture Learning Outcomes:
1. Describe the difference between parametric statistical tests and non-parametric statistical tests, and explain the circumstances under which non-parametric tests are suitable.
2. Describe the Spearman correlation analysis at a conceptual level and identify the kinds of research questions that call for this test.
3. Describe the Wilcoxon signed-rank test at a conceptual level and identify the kinds of research questions that call for this test.
4. Describe the Wilcoxon rank-sum test at a conceptual level and identify the kinds of research questions that call for this test.

1. Difference Between Parametric and Non-Parametric Tests
Parametric Tests (e.g., Pearson correlation, t-tests) assume that the data are normally distributed and measured on an interval or ratio scale. These tests tend to have more statistical power when their assumptions are met, meaning they are more likely to detect an effect when one exists.
Non-Parametric Tests (e.g., Spearman correlation, Wilcoxon tests) do not assume a normal distribution and are useful when data violate parametric test assumptions, such as non-normal distributions or ordinal-scale data. These tests are generally less powerful but more flexible.
When are Non-Parametric Tests Suitable?
○ When data are non-normally distributed.
○ When data are measured on ordinal scales.
○ When there are outliers that might skew the results of parametric tests.

2. Spearman Correlation Analysis
Purpose: The Spearman correlation is a non-parametric alternative to the Pearson correlation. It assesses the monotonic relationship between two variables, meaning that as one variable increases, the other consistently increases or decreases (not necessarily linearly).
When is it Used?
○ Data are ordinal, not normally distributed, or contain outliers.
○ The research question involves ranked data, or the linearity assumption of Pearson's correlation is violated.

3. Wilcoxon Signed-Rank Test
Purpose: The Wilcoxon signed-rank test is a non-parametric equivalent of the paired-samples t-test and the one-sample t-test. It compares two related samples or repeated measurements, testing the null hypothesis that their population mean ranks do not differ.
When is it Used?
○ The research involves comparing medians from two related groups.
○ Data are non-normally distributed, or the sample size is small.

4. Wilcoxon Rank-Sum Test (Mann-Whitney U Test)
Purpose: The Wilcoxon rank-sum test is a non-parametric alternative to the independent-samples t-test. It is used to determine whether there is a significant difference between the ranks of two independent groups.
When is it Used?
○ The research question involves comparing medians between two independent samples.
○ Data do not meet the normality assumption required for a parametric t-test.
A minimal sketch running all three tests follows below.
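For reference, the three non-parametric tests map directly onto scipy.stats functions; the sketch below runs each on invented data.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon, mannwhitneyu

rng = np.random.default_rng(0)  # invented data, seeded for reproducibility

# Spearman correlation: monotonic relationship between two variables.
x = rng.normal(size=20)
y = x ** 3 + rng.normal(scale=0.5, size=20)  # monotonic but non-linear
rho, p_rho = spearmanr(x, y)

# Wilcoxon signed-rank: two related measurements (e.g., pre/post scores).
pre, post = rng.normal(10, 2, 15), rng.normal(9, 2, 15)
w_stat, p_w = wilcoxon(pre, post)

# Wilcoxon rank-sum / Mann-Whitney U: two independent groups.
group_a, group_b = rng.normal(5, 1, 12), rng.normal(6, 1, 12)
u_stat, p_u = mannwhitneyu(group_a, group_b)

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"signed-rank W = {w_stat:.1f} (p = {p_w:.3f})")
print(f"rank-sum U = {u_stat:.1f} (p = {p_u:.3f})")
```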
WEEK 10 - Organisational Testing

Lecture Learning Outcomes:
1. Understand the concept of organisational (I-O) testing, specifically noting the areas that organisational psychology is attempting to improve.
2. Understand the nature and measurement of career choice, and specifically how these measures examine career options.
3. Look at the various selection techniques and the value and disadvantages of each technique.
4. Examine the other areas that are measured in organisational testing, including work sample tests and personality.
5. Explore the various future trends in testing, specifically noting developments in technology and ethics.

1. Concept of Organisational (I-O) Testing
Definition: I-O testing is part of industrial-organisational psychology, focusing on human behaviour and performance in the workplace.
Main Areas of Improvement:
○ Selection and Placement: Predicting job performance through tests to ensure the best fit for roles.
○ Performance Appraisal: Objectively assessing employee performance.
○ Training and Development: Identifying gaps in skills and evaluating the effectiveness of training.
○ Job Analysis: Understanding the requirements for specific roles.
○ Employee Engagement and Satisfaction: Enhancing job satisfaction and reducing turnover.
○ Leadership Development: Identifying and improving leadership skills.

2. Nature and Measurement of Career Choice
Career Choice: Evaluated through assessments of personality traits, cognitive abilities, skills, and other relevant factors.
Common Tools: Tests like the Hogan Personnel Selection Series and the 16 Personality Factors (16PF) are used to explore career options based on an individual's strengths and preferences.

3. Selection Techniques: Advantages and Disadvantages
Application Forms: Useful for initial screening but may not provide a comprehensive view of a candidate's skills or personality.
Resume and CV Review: Provides an overview but can be embellished.
Interviews: Good for assessing communication and personality but prone to interviewer bias.
Panel Interviews: Reduce bias by involving multiple perspectives but can be intimidating for candidates.
Cognitive Ability Tests: Strong predictors of job performance but can be biased by socioeconomic and cultural factors.
Aptitude Tests: Measure specific skills but are job-specific and resource-intensive to develop.
Personality Assessments: Provide insight into personality traits but can be misused or biased if not properly administered.
Work Sample Tests: Offer practical demonstrations of skills but are costly and time-consuming to create.

4. Other Areas Measured in Organisational Testing
Skill Tests: Evaluate specific job-related abilities.
Integrity Tests: Assess honesty and ethical behaviour, though they may encourage socially desirable responses.
Job Knowledge Tests: Measure technical skills and industry-specific knowledge but may not predict future performance.
Situational Judgement Tests (SJTs): Test decision-making skills in realistic scenarios but are resource-intensive to develop.
Work Sample Tests: Candidates perform actual job tasks to demonstrate competence.

5. Future Trends in Organisational Testing
Technology Integration: AI-driven assessments and online platforms are becoming more prevalent.
Continuous Feedback Systems: Real-time evaluation and feedback.
Diversity and Inclusion Assessments: A focus on eliminating bias in hiring and ensuring equity.
Personalised Assessments: Adaptive testing tailored to individual profiles and job requirements.
Virtual Reality (VR) and Augmented Reality (AR): Used to simulate job environments for immersive skill evaluations.
Ethics: Increasing concerns around privacy, data security, and fairness.

WEEK 11 - Clinical Testing

Lecture Objectives:
1. Understand the role of a clinical psychologist in clinical assessment, diagnosis, and case formulation.
2. Understand the nature, purpose, and steps of a clinical interview.
3. Become familiar with and describe a mental status examination and list the areas covered by such an examination.
4. Describe the purpose, structure, and main components of a psychological report.

1. Role of a Clinical Psychologist in Clinical Assessment, Diagnosis, and Case Formulation
Clinical Psychologists: Focus on assessing, diagnosing, and treating mental health disorders.
Specialisations: May specialise in complex mental health conditions such as personality disorders or psychotic disorders.
Role:
○ Conduct clinical assessments (e.g., interviews, behavioural observations, and psychological tests).
○ Use diagnostic tools such as the DSM-5.
○ Develop case formulations integrating clinical data to inform treatment plans.
Case Formulation: Uses the 4P framework (Predisposing, Precipitating, Perpetuating, and Protective factors) to understand a client's condition.

2. Nature, Purpose, and Steps of a Clinical Interview
Purpose: Gather comprehensive information about a client's issues, mental health history, and psychological state.
Steps:
○ Referral: Begins with a referral for assessment.
○ Open-Ended Questions: Start with open-ended questions to explore broad topics, then move to more focused inquiries.
○ Gathering Information: Collect demographic data, medical and psychological history, and family background.
○ Mutual Interaction: Both the psychologist and client influence the interaction through verbal and non-verbal communication (e.g., paraphrasing, empathy).

3. Mental Status Examination (MSE)
Purpose: Evaluates how the client presents during the session, focusing on appearance, behaviour, thoughts, and cognitive abilities.
Areas Covered:
○ Appearance: Grooming, clothing, general presentation.
○ Speech: Rate, quantity, and quality.
○ Thought Processes: Delusions, flight of ideas, disturbances.
○ Mood and Affect: Emotional state, including depression and anxiety.
○ Cognition and Behaviour: Orientation to time and place, attention, memory.
○ Insight and Judgement: Understanding of one's condition and decision-making abilities.

4. Psychological Report: Purpose, Structure, and Components
Purpose: Answers referral questions and documents assessment results to inform treatment plans.
Structure:
○ Demographic Information: Basic details such as name, date of birth, and testing dates.
○ Reason for Referral: A paragraph detailing the referral question.
○ Tests Administered: List of tests used (e.g., BDI-II, SCID-5).
○ Findings: Results of assessments, including observations and variables affecting test results.
○ Recommendations: Suggestions for treatment or further assessments (e.g., psychotherapy, accommodations).
○ Summary: A concise overview of key findings and next steps.
Key Aspects: Clear, prompt writing without jargon, respectful presentation of findings, and relevant information only.
WEEK 12 - Neuropsychological Testing

Lecture Learning Outcomes:
1. Understand the role of a clinical neuropsychologist.
2. Evaluate the relevance of the referral question to inform the neuropsychological approach to assessment.
3. Develop familiarity with the types of neuropsychological assessment tools and the cognitive constructs they assess.
4. Compare and contrast the advantages and disadvantages of quantitative, qualitative, and mixed approaches to the interpretation of neuropsychological tests.

1. Role of a Clinical Neuropsychologist
Definition: Clinical neuropsychologists assess and treat individuals with brain disorders that affect memory, learning, attention, language, problem-solving, and decision-making.
Key Roles:
○ Cognitive Characterisation: Identifying cognitive strengths and weaknesses.
○ Diagnostic Opinion: Contributing to diagnosis and prognosis.
○ Rehabilitation: Providing interventions such as cognitive strategies.
○ Behaviour Management: Assisting with behaviour management plans for acquired brain injuries (ABI).
○ Psychoeducation: Educating clients, caregivers, and professionals about brain-related conditions.
○ Monitoring Change: Tracking illness progression or treatment outcomes.

2. Relevance of the Referral Question to the Neuropsychological Approach to Assessment
Referral Questions: Guide the assessment process, determining which cognitive functions to focus on.
○ Common referral reasons include brain injury (e.g., trauma, stroke), neurological conditions (e.g., epilepsy, multiple sclerosis), dementia (e.g., Alzheimer's disease), psychiatric disorders (e.g., schizophrenia, major depression), and developmental disorders (e.g., ADHD, autism).
○ The referral question tailors the assessment to focus on areas such as memory, attention, or executive function.

3. Types of Neuropsychological Assessment Tools and the Cognitive Constructs They Assess
Intellectual Ability: Measures general cognitive functioning.
Executive Function: Assesses planning, problem-solving, and cognitive flexibility (e.g., Wisconsin Card Sorting Test, Stroop Test).
Memory: Verbal memory (e.g., word lists) and visual memory (e.g., Rey Complex Figure).
Attention and Processing Speed: Measures sustained and divided attention (e.g., Trail Making Test, Coding Test).
Language: Tests perception, comprehension, and production (e.g., naming tests).
Perceptual and Motor Function: Measures visual perception and motor dexterity (e.g., Clock Drawing, Grooved Pegboard).

4. Quantitative, Qualitative, and Mixed Approaches to the Interpretation of Neuropsychological Tests
Quantitative Approach:
○ Advantages: Provides objective, numerical data for comparison with normative scores, facilitating standardised interpretation.
○ Disadvantages: May miss qualitative aspects such as patient behaviour, effort, or anxiety during testing.
Qualitative Approach:
○ Advantages: Offers rich, descriptive data about how tasks are approached and how errors are made, providing context often missed by quantitative measures.
○ Disadvantages: Less standardised and more subjective; results are difficult to compare with normative data.
Mixed Approach:
○ Advantages: Combines the objectivity of quantitative data with the depth of qualitative insights, offering a more comprehensive view of cognitive strengths and weaknesses.
○ Disadvantages: Requires more time and expertise to integrate both data types effectively.
List, describe, and differentiate the different ways that a test's reliability can be assessed. 3. Define 'validity' as a psychometric property, describe its importance in psychological testing and assessment, and explain how it is related to test reliability. 4. List, describe, and differentiate the different kinds of test validity. 1. Reliability as a Psychometric Property Definition: Reliability refers to how consistently a test measures something. A reliable test will give the same results across different times, raters, or versions of the test. Importance: Without reliability, the test results cannot be trusted to measure the intended construct consistently. In psychological testing, reliability is crucial because it ensures that the test measures attributes like intelligence or personality traits in a stable and reproducible way. 2. Ways to Assess Test Reliability Test-Retest Reliability: Measures consistency over time by administering the same test to the same group at different points. If scores are similar, the test has high test-retest reliability. Interrater Reliability: Checks if different observers or raters provide consistent scores for the same test. It's crucial for tests where judgment is subjective. Internal Consistency: Measures whether the different items in a test that are supposed to assess the same construct produce similar results. Split-half reliability and Cronbach’s alpha are ways to assess internal consistency. Equivalent Forms Reliability: Determines if different versions of the same test (e.g., different question sets) provide consistent results. 3. Validity as a Psychometric Property Definition: Validity assesses whether a test measures what it is supposed to measure. It ensures the test results are meaningful and can be used to make accurate predictions or evaluations. Relation to Reliability: A test must be reliable to be valid, as inconsistent measurements cannot accurately reflect the intended construct. However, a reliable test is not necessarily valid if it consistently measures the wrong construct. 4. Kinds of Test Validity Face Validity: The test appears, at face value, to measure what it claims to. It’s the weakest form of validity but helps in gaining test-taker confidence. Content Validity: The test covers all aspects of the construct it is supposed to measure. For instance, a test measuring generalised anxiety must address all symptoms associated with the disorder. Predictive Validity: The test’s scores can predict future performance or behaviour. For example, a good driving test should predict safe driving behaviour. Construct Validity: The test aligns with theoretical assumptions about the construct. It is evaluated by checking convergent evidence (correlation with related constructs) and discriminant evidence (lack of correlation with unrelated constructs). WEEK 3 - Test Development Lecture Objectives 1. Define and understand test conceptualisation, construction, tryout, item analysis and test revision 2. Compare and contrast the different types of test item formats 3. Articulate the criteria that assess whether an item is a ‘good item’ 4. Understand different types of quantitative and qualitative item analysis techniques 5. Recognise how the goals of a test will determine the ideal item analysis criteria 1. 
Define and understand test conceptualisation, construction, tryout, item analysis, and test revision: Test conceptualisation: This stage involves identifying the need for the test, defining the psychological construct, reviewing existing literature and tests, and articulating the purpose, population, and format of the test. Test construction: Focuses on developing a pool of items relevant to the construct. Items should represent all aspects of the construct and undergo review by experts for content validity. Test tryout: The test is administered to a representative sample (typically 100+ participants). Responses are analysed to identify the best items and improve the test. Item analysis: This includes evaluating item difficulty, dimensionality, reliability, validity, and discrimination. The goal is to refine the test by selecting good items and discarding poor ones. Test revision: Involves revisiting the earlier steps to refine the test based on item analysis, ensuring reliability, validity, and relevance. For existing tests, revision is also needed to account for changing norms and interpretations. 2. Compare and contrast the different types of test item formats: Likert scale: Easy to construct and widely used in psychology, providing ordinal-level data that approximates interval-level data for analysis. However, the number of response options and their balance (odd vs. even) must be considered. Binary choice scale: Consists of true/false or yes/no responses. It’s quick to administer but limited in content richness and susceptible to guessing. Paired comparisons: Requires the respondent to choose between two options and each option is pre-assigned a value based on judges' assessments. Multiple-choice questions (MCQs): Efficient for covering a lot of content and easy to score but may limit creativity and require careful development. Essay: Allows for the assessment of complex knowledge but is time-consuming to score and prone to inter-rater reliability issues. 3. Articulate the criteria that assess whether an item is a ‘good item’: Item difficulty: An item is considered "good" if it has an average difficulty level, typically between 0.3 and 0.8. Item discrimination: A good item discriminates between high and low scorers. This is often measured using a point-biserial correlation. Reliability and validity: Items should consistently measure the construct and be valid indicators of the test’s goal. Item distributions: Avoid items with skewed distributions, which provide little variability and weak correlation with other items. 4. Understand different types of quantitative and qualitative item analysis techniques: Quantitative techniques: These include measuring item difficulty, reliability (e.g., Cronbach’s alpha for internal consistency), item discrimination (e.g., point-biserial correlation), and dimensionality (e.g., factor analysis) Qualitative techniques: Involves obtaining feedback from experts or the target sample on the clarity, conciseness, and relevance of the items. 5. Recognize how the goals of a test will determine the ideal item analysis criteria: The choice of item analysis depends on the test's purpose. For example, if the goal is to assess a broad construct, items need to reflect all aspects of the construct, and the analysis may emphasise factor analysis to ensure unidimensionality. In contrast, for tests with a specific predictive goal, criterion-related validity becomes more important. WEEK 4 - Quantitative Methods I Lecture learning outcomes 1. 
Define what is meant by 'sample' and 'population' in the context of statistics, and explain how the two concepts relate to each other. 2. Explain what is meant by a probability distribution, and interpret percentile norms in a normal distribution 3. Define the different descriptive statistics that measure a distribution's central tendency: the mean, the median, and the mode. 4. Define the different descriptive statistics that measure the spread of a distribution: variance, standard deviation, range, and interquartile range. 5. Define the four different types of measurement scales (nominal, ordinal, interval, ratio) and identify the types of data that belong to each. 1. Sample and Population Population: The entire group you're interested in studying (e.g., all humans, all PSY2041 students). Sample: A subset drawn from the population (e.g., 30 randomly selected PSY2041 students). Relation: Samples are used to make inferences about the larger population, as we usually can't study the entire population 2. Probability Distribution and Percentiles Probability Distribution: Shows how likely different outcomes are. Probabilities range between 0 and 1, where 0 means no chance, and 1 means certainty. Normal Distribution: A bell-shaped curve where most values cluster around the mean. It is fully described by its mean and standard deviation. Percentiles: In a normal distribution, approximately 68% of values fall within one standard deviation of the mean, and 95% fall within two standard deviations 3. Measures of Central Tendency Mean: The arithmetic average of a set of numbers. Median: The middle value, with 50% of observations above and 50% below. Mode: The value that occurs most frequently 4. Measures of Spread Variance: The average of the squared differences from the mean. Standard Deviation: The square root of variance, representing the average distance from the mean. Range: The difference between the highest and lowest values. Interquartile Range (IQR): The difference between the first and third quartiles (the 25th and 75th percentiles) 5. Measurement Scales Nominal: Categories without a specific order (e.g., types of transport: car, bus, bike). Ordinal: Categories with a meaningful order but no equal intervals (e.g., survey responses: bad, good, excellent). Interval: Numerical values with equal intervals but no true zero (e.g., temperature in Celsius). Ratio: Like interval scales, but with a meaningful zero point, allowing for meaningful ratios (e.g., height, weight) WEEK 5 - Quantitative methods II Lecture learning outcomes 1. Explain Null Hypothesis Significance Testing as a conceptual framework for inferential statistics 2. Describe the one-sample z-test at a conceptual level and identify the kinds of research questions that call for a one-sample z-test 3. Describe the one-sample t-test at a conceptual level and identify the kinds of research questions that call for a one-sample t-test 4. Describe the paired-sample t-test at a conceptual level and identify the kinds of research questions that call for a paired-samples t-test 5. Describe the independent-samples t-test at a conceptual level and identify the kinds of research questions that call for an independent-samples t-test 6. Describe the Pearson correlation analysis at a conceptual level and identify the kinds of research questions that call for a Pearson correlation analysis 1. 
Null Hypothesis Significance Testing (NHST) Concept: NHST starts with assuming no effect (the null hypothesis) and tests how surprising the observed data are under this assumption. If the data are very surprising (low p-value), we reject the null hypothesis and suggest there is evidence for the alternative hypothesis. The p-value quantifies the probability of seeing data as extreme or more extreme than observed, assuming the null hypothesis is true. A p-value < 0.05 is typically considered statistically significant 2. One-Sample Z-Test Purpose: Used when comparing the mean of a sample to a known population mean, and the population standard deviation is known. Conditions: ○ You have data on an interval or ratio scale. ○ The data are normally distributed. ○ You know the population’s standard deviation. Example question: "Is the mean IQ of PSY2041 students higher than the general population's IQ?" 3. One-Sample T-Test Purpose: Used when comparing the mean of a sample to a known population mean, but the population standard deviation is unknown. Conditions: ○ You have data on an interval or ratio scale. ○ The data are normally distributed. ○ You don’t know the population’s standard deviation. Example question: "Do students score above chance on a multiple-choice quiz? 4. Paired-Samples T-Test Purpose: Used when you measure the same sample twice (e.g., before and after treatment) and want to compare the means of the two measurements. Conditions: ○ You have paired data (two measurements on the same subjects). ○ The differences between the paired measurements are normally distributed. Example question: "Is there a difference in depression symptoms before and after CBT treatment? 5. Independent-Samples T-Test Purpose: Used when comparing the means of two different samples (e.g., treatment vs. control groups). Conditions: ○ You have data from two independent samples. ○ The data are measured on an interval or ratio scale. ○ Both samples are normally distributed. Example question: "Do students who receive a new drug to increase extraversion score higher than those who receive a placebo?" 6. Pearson Correlation Analysis Purpose: Measures the strength and direction of the relationship between two continuous variables. Conditions: ○ You have two variables within the same sample. ○ The data are measured on an interval or ratio scale. ○ Both variables are normally distributed. Example question: "Is there a correlation between anxiety and depression scores?" WEEK 6 - Intelligence Testing Learning Objectives: ▪ Understand the various theories of intelligence over the history of intelligence research ▪ Compare and contrast the various theories and approaches of intelligence and intelligence assessment ▪ Understand the evidence for the genetic and environmental basis of intelligence ▪ Know the various types of intelligence tests (Standford Binet vs WAIS) and their characteristics and features ▪ Describe the WAIS indices, subtests, interpretation 1. Theories of Intelligence Spearman (1904): Proposed a single general factor (g) underlying all cognitive abilities, with specific factors (s) for different tasks. Binet (1916): Defined intelligence as the ability to direct, adapt, and self-criticize, emphasizing reasoning, judgment, memory, and abstraction. Thurstone (1921): Proposed seven primary abilities (e.g., verbal comprehension, numerical ability), rejecting a single g factor. 
Cattell (1963): Differentiated between fluid intelligence (problem-solving and adaptability) and crystallized intelligence (knowledge from experience). Sternberg (1985): Developed the Triarchic Theory, which includes analytical, creative, and practical intelligence 2. Comparing Theories of Intelligence Spearman’s g suggests a single, underlying mental ability, while Thurstone emphasizes multiple distinct abilities. Cattell distinguishes between fluid (innate problem-solving) and crystallized (acquired knowledge) intelligence, while Sternberg includes practical and creative skills in addition to analytical abilities 3. Genetic and Environmental Influences on Intelligence Genetic Factors: Intelligence heritability increases with age, from 20% in infancy to about 80% in adulthood (Plomin & Deary, 2015). Environmental Factors: Diet, education, socioeconomic status, and environmental stimulation all influence intelligence. Environmental factors can moderate genetic influences 4. Types of Intelligence Tests Stanford-Binet (SB): ○ Founded on Binet’s early work, updated in the Stanford-Binet 5 (2003), and based on the Cattell-Horn-Carroll model. ○ Uses a mean IQ of 100 and SD of 15, with both verbal and non-verbal components. Wechsler Adult Intelligence Scale (WAIS): ○ Introduced in 1939 and frequently revised, the WAIS-IV (2008) includes 10 core and 5 supplemental subtests. ○ Designed to measure full-scale IQ with indices for verbal comprehension, perceptual reasoning, working memory, and processing speed 5. WAIS Indices, Subtests, and Interpretation Verbal Comprehension Index (VCI): Measures verbal reasoning and acquired knowledge. Perceptual Reasoning Index (PRI): Assesses visual-motor coordination, non-verbal reasoning, and fluid reasoning. Working Memory Index (WMI): Evaluates the ability to temporarily retain and manipulate information. Processing Speed Index (PSI): Assesses visual perception, organization, and the speed of visual processing WEEK 7 - Educational Testing Learning Objectives: - Understand the role of testing and assessment in education - Be familiar with the different types of educational tests and the purposes they serve - Be familiar with other tools of assessment in education and Vocation - Be familiar with aspects of a psychoeducational assessment report 1. Role of Testing and Assessment in Education Testing in education assesses how much learning has occurred and the extent of mastery. It compares a student's knowledge to peers or benchmarks to identify difficulties in learning and suggest potential interventions. Key roles include identifying prerequisites for learning, diagnosing learning issues, and evaluating the effectiveness of interventions 2. Types of Educational Tests and Their Purposes Achievement Tests: Measure how much a student has learned in a defined setting (e.g., specific school subjects). ○ Formative: Provide feedback for learning improvements. ○ Summative: Assess overall learning at the end of a period. Aptitude Tests: Measure potential to learn or adapt, often predicting future success. These tests assess general or specific aptitudes like problem-solving, language, or mechanical skills Group vs. Individual Tests: ○ Group Tests: Cost-effective, easy to score, and suitable for assessing many students simultaneously. ○ Individual Tests: Provide deeper insights, allowing flexibility in responses and personalized interpretation 3. 
Other Tools of Assessment in Education and Vocation Performance Assessments: Assess practical knowledge and skills relevant to real-world tasks (e.g., portfolios). Authentic Assessments: Evaluate tasks with real-world relevance, such as writing samples or role-play. Checklists and Rating Scales: Used to observe specific behaviors or attributes. Examples include the Achenbach Child Behaviour Checklist (CBCL) and the Vineland Adaptive Behavior Scales. 4. Aspects of a Psychoeducational Assessment Report Referral Questions: Identify specific concerns, such as difficulties concentrating or learning. Background Information: Include developmental, educational, family history, and previous interventions. Assessment Results: Summarize test scores and observations, followed by interpretations based on the student's context. Recommendations: Provide individualized suggestions, such as further assessments or classroom accommodations. WEEK 8 - Quantitative methods III Lecture learning outcomes 1. Explain the concept of an effect size in inferential statistics, and identify the appropriate effect size to report for t-test and correlation analyses 2. Define what is meant by ʻdegrees of freedomʼ in inferential statistics, and identify the difference between a t distribution and a normal distribution 3. Define covariance and explain how covariance is related to the calculation of the Pearson correlation 4. Explain what is meant by the ʻassumptionsʼ of a statistical test, and list the assumptions of t-tests and correlation analyses 1. Effect Size in Inferential Statistics Concept of Effect Size: It quantifies the magnitude of a result in the population, beyond just statistical significance. Even a small effect can be statistically significant with a large enough sample size. Reporting Effect Size: ○ For t-tests, the appropriate effect size is Cohen’s d, which measures the difference between two means in terms of standard deviations. ○ For correlation analyses, the Pearson correlation coefficient (r) itself serves as the effect size(Week 8 - Quantitative m…). 2. Degrees of Freedom and t Distribution Degrees of Freedom (df): Refers to the number of independent data points that are free to vary. It is calculated differently depending on the type of test (e.g., df = N-1 for one-sample t-tests, df = N-2 for independent-samples t-tests). t Distribution vs. Normal Distribution: ○ Both are bell-shaped, but the t distribution has "fatter tails," meaning it accounts for more variability, especially with smaller sample sizes. The shape of the t distribution depends on the degrees of freedom(Week 8 - Quantitative m…). 3. Covariance and Pearson Correlation Covariance: It measures how two variables change together. If both variables tend to increase together, the covariance is positive; if one increases while the other decreases, it is negative. Pearson Correlation: This is a standardised measure derived from covariance by dividing it by the product of the standard deviations of the two variables. It ranges from -1 to 1, where 1 represents a perfect positive correlation and -1 represents a perfect negative correlation(Week 8 - Quantitative m…). 4. Assumptions of Statistical Tests Assumptions of t-tests: ○ Data should be on an interval or ratio scale. ○ The data should follow a normal distribution. ○ For independent t-tests, homogeneity of variance (equal variance across groups) is assumed. Assumptions of Pearson Correlation: ○ Data for both variables should be measured on an interval or ratio scale. 
4. Assumptions of Statistical Tests
Assumptions of t-tests:
○ Data should be on an interval or ratio scale.
○ The data should follow a normal distribution.
○ For independent-samples t-tests, homogeneity of variance (equal variance across groups) is assumed.
Assumptions of Pearson Correlation:
○ Data for both variables should be measured on an interval or ratio scale.
○ Both variables should be normally distributed.
WEEK 9 - Quantitative methods IV
Lecture learning outcomes
1. Describe the difference between a parametric statistical test and a non-parametric statistical test, and explain the circumstances under which non-parametric tests are suitable
2. Describe the Spearman correlation analysis at a conceptual level and identify the kinds of research questions that call for this test
3. Describe the Wilcoxon signed-rank test at a conceptual level and identify the kinds of research questions that call for this test
4. Describe the Wilcoxon rank-sum test at a conceptual level and identify the kinds of research questions that call for this test
1. Difference Between Parametric and Non-Parametric Tests
Parametric tests (e.g., Pearson correlation, t-tests) assume the data are normally distributed and measured on an interval or ratio scale. They tend to have more statistical power when their assumptions are met, meaning they are more likely to detect an effect when one exists.
Non-parametric tests (e.g., Spearman correlation, Wilcoxon tests) do not assume a normal distribution and are useful when data violate parametric test assumptions, such as non-normal distributions or ordinal-scale data. These tests are generally less powerful but more flexible.
When are non-parametric tests suitable?
When data are non-normally distributed.
When data are measured on ordinal scales.
When there are outliers that might skew the results of parametric tests.
(A short code sketch at the end of this week's notes shows how each of these tests is run in practice.)
2. Spearman Correlation Analysis
The Spearman correlation is a non-parametric alternative to the Pearson correlation. It assesses the monotonic relationship between two variables, meaning that as one variable increases, the other consistently increases or decreases (not necessarily linearly).
This test is appropriate when:
○ Data are ordinal, not normally distributed, or contain outliers.
○ The research question involves ranked data, or the linearity assumption of Pearson's correlation is violated.
3. Wilcoxon Signed-Rank Test
The Wilcoxon signed-rank test is the non-parametric equivalent of the paired-samples t-test and the one-sample t-test. It compares two related samples or repeated measurements to test the null hypothesis that their population mean ranks do not differ (i.e., that the median difference is zero).
This test is appropriate when:
○ The research involves comparing medians from two related groups.
○ Data are non-normally distributed, or the sample size is small.
4. Wilcoxon Rank-Sum Test (Mann-Whitney U Test)
The Wilcoxon rank-sum test is a non-parametric alternative to the independent-samples t-test. It is used to determine whether there is a significant difference between the ranks of two independent groups.
This test is appropriate when:
○ The research question involves comparing medians between two independent samples.
○ Data do not meet the normality assumption required for a parametric t-test.
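As a quick illustration of how this week's three non-parametric tests are commonly run in practice, here is a minimal sketch using scipy.stats; the data are hypothetical and chosen only to show the calling pattern of each test.

```python
from scipy import stats

# Ordinal-style ratings from the same participants on two occasions (hypothetical).
before = [3, 5, 4, 6, 2, 5, 7, 4]
after  = [4, 6, 5, 7, 3, 6, 8, 6]

# Spearman correlation: monotonic association between two variables.
rho, p = stats.spearmanr(before, after)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Wilcoxon signed-rank test: two related samples (paired design).
w, p = stats.wilcoxon(before, after)
print(f"Wilcoxon signed-rank W = {w:.1f}, p = {p:.3f}")

# Wilcoxon rank-sum / Mann-Whitney U test: two independent groups.
group_a = [12, 15, 11, 14, 13]
group_b = [18, 16, 19, 17, 20]
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```

All three work on ranks rather than raw values, which is why they tolerate ordinal data and outliers that would distort their parametric counterparts.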
WEEK 10 - Organisational Testing
Learning outcomes
- Understand the concept of organisational (I-O) testing, specifically noting the areas that organisational psychology is attempting to improve.
- Understand the nature and measurement of career choice, and specifically how these measures examine career options.
- Look at the various selection techniques and the value and disadvantages of each technique.
- Examine the other areas that are measured in organisational testing, including work sample tests and personality.
- Explore the various future trends in testing, specifically noting developments in technology and ethics.
1. Understand the concept of organisational (I-O) testing
I-O testing is part of industrial-organisational psychology, focusing on human behavior and performance in the workplace. The main areas of improvement in organisational psychology include:
○ Selection and placement: Predicting job performance through tests to ensure the best fit for roles.
○ Performance appraisal: Objectively assessing employee performance.
○ Training and development: Identifying gaps in skills and evaluating the effectiveness of training.
○ Job analysis: Understanding the requirements for specific roles.
○ Employee engagement and satisfaction: Enhancing job satisfaction and reducing turnover.
○ Leadership development: Identifying and improving leadership skills.
2. Understand the nature and measurement of career choice
Career choice is evaluated through assessments of personality traits, cognitive abilities, skills, and other relevant factors.
Measures such as personality assessments and aptitude tests help individuals explore suitable career paths based on their inherent strengths and preferences. Tests like the Hogan Personnel Selection Series and the 16 Personality Factor questionnaire (16PF) are common tools.
3. Selection techniques: advantages and disadvantages
Application forms: Useful for initial screening but may not provide a comprehensive view of a candidate's skills or personality.
Resume and CV review: Provides an overview but can be embellished.
Interviews: Good for assessing communication and personality but prone to interviewer bias.
Panel interviews: Reduce bias by involving multiple perspectives but can be intimidating for candidates.
Cognitive ability tests: Strong predictors of job performance but can be biased by socioeconomic and cultural factors.
Aptitude tests: Measure specific skills but are job-specific and resource-intensive to develop.
Personality assessments: Provide insight into personality traits but can be misused or biased if not properly administered.
Work sample tests: Offer practical demonstrations of skills but are costly and time-consuming to create.
4. Other areas measured in organisational testing
Skill tests: Evaluate specific job-related abilities.
Integrity tests: Assess honesty and ethical behavior, though they may encourage socially desirable responding.
Job knowledge tests: Measure technical skills and industry-specific knowledge but may not predict future performance.
Situational Judgment Tests (SJT): Test decision-making skills in realistic scenarios but are resource-intensive to develop.
Work sample tests: Candidates perform actual job tasks to demonstrate competence.
5. Future trends in organisational testing
Technology integration: AI-driven assessments and online platforms are becoming more prevalent.
Continuous feedback systems: Real-time evaluation and feedback.
Diversity and inclusion assessments: A focus on eliminating bias in hiring and ensuring equity.
Personalized assessments: Adaptive testing tailored to individual profiles and job requirements.
Virtual reality (VR) and augmented reality (AR): Used to simulate job environments for immersive skill evaluations.
Ethics: Increasing concerns around privacy, data security, and fairness.
WEEK 11 - Clinical Testing
Lecture objectives
1. Understand the role of a clinical psychologist in clinical assessment, diagnosis and case formulation
2. Understand the nature, purpose and steps of a clinical interview
3. Become familiar with and describe a mental status examination, and list the areas covered by such an examination
4. Describe the purpose, structure and main components of a psychological report
1. Role of a Clinical Psychologist in Clinical Assessment, Diagnosis, and Case Formulation
Clinical psychologists focus on assessing, diagnosing, and treating mental health disorders. They may specialize in complex mental health conditions such as personality or psychotic disorders.
Their work involves clinical assessment (e.g., clinical interviews, behavioral observations, and tests), diagnosis (using tools like the DSM-5), and case formulation (integrating clinical data to develop treatment plans).
Case formulation uses the 4P framework (Predisposing, Precipitating, Perpetuating, and Protective factors) to understand the client's condition and guide treatment.
2. Nature, Purpose, and Steps of a Clinical Interview
The clinical interview aims to gather comprehensive information about the client's issues, mental health history, and current psychological state. The interview is used to:
1. Identify the presenting problems.
2. Determine a possible diagnosis.
3. Assess whether further assessments or services are necessary.
4. Establish a therapeutic contract with agreed goals and mutual obligations.
Steps include:
1. Referral: The process starts with a referral for assessment.
2. Questions: The interviewer asks open-ended and then closed questions to gather specific data.
3. Gathering Information: Collecting demographic data, medical and psychological history, and family history.
4. Mutual Interaction: Both the client and psychologist influence each other through verbal and non-verbal communication (e.g., transitional phrases, paraphrasing, empathy).
3. Mental Status Examination (MSE)
The Mental Status Examination evaluates how the client presents, focusing on their appearance, behaviour, thoughts, and cognitive abilities. Areas covered include:
○ Appearance: Grooming, clothing, general presentation.
○ Speech: Rate, quantity, and quality.
○ Thought Processes: Delusions, flight of ideas, thought disturbances.
○ Mood and Affect: Emotional state, including depression, anxiety, etc.
○ Cognition and Behavior: Orientation to time and place, attention, memory.
○ Insight and Judgement: The client's understanding of their condition and decision-making ability.
4. Psychological Report: Purpose, Structure, and Components
The purpose of a psychological report is to answer the referral questions and document assessment results to inform treatment planning.
Structure of the report:
○ Demographic Information: Basic details such as the client's name, date of birth, and testing dates.
○ Reason for Referral: A sentence or paragraph detailing the referral question.
○ Tests Administered: A list of tests and dates of administration (e.g., BDI-II, SCID-5).
○ Findings: Results of the assessments, including observations and variables affecting test results.
○ Recommendations: Suggestions for treatment or further assessment (e.g., psychotherapy, specialized education).
○ Summary: A concise statement summarizing key findings and next steps.
Key aspects of a good report:
○ Written promptly and clearly.
○ Free from jargon or technical language.
○ Presents findings respectfully and includes only relevant information.
WEEK 12 - Neuropsychological Testing
Learning Outcomes
- Understand the role of a clinical neuropsychologist
- Evaluate the relevance of the referral question to inform the neuropsychological approach to assessment
- Develop familiarity with the types of neuropsychological assessment tools and the cognitive constructs they assess
- Compare and contrast the advantages and disadvantages of quantitative, qualitative and mixed approaches to interpretation of neuropsychological tests
1. Understand the Role of a Clinical Neuropsychologist
Definition: A clinical neuropsychologist assesses and treats individuals with brain disorders affecting memory, learning, attention, language, problem-solving, and decision-making.
Key Roles:
○ Cognitive Characterization: Identifying strengths and weaknesses.
○ Diagnostic Opinion: Contributing to diagnosis and prognosis.
○ Rehabilitation: Providing interventions such as cognitive strategies.
○ Behavior Management: Assisting with behavior management plans for acquired brain injuries (ABI).
○ Psychoeducation: Educating clients, caregivers, and professionals about brain-related conditions.
○ Monitoring Change: Tracking illness progression or treatment outcomes.
2. Evaluate the Relevance of the Referral Question to Inform the Neuropsychological Approach to Assessment
Referral questions guide the assessment process, determining which cognitive functions to focus on. Common referral reasons include:
○ Brain injury (e.g., trauma, stroke).
○ Neurological conditions (e.g., epilepsy, multiple sclerosis).
○ Dementia (e.g., Alzheimer's disease).
○ Psychiatric disorders (e.g., schizophrenia, major depression).
○ Developmental disorders (e.g., ADHD, autism).
The referral question helps tailor the assessment to focus on areas such as memory, attention, or executive function. Neuropsychological assessments help diagnose cognitive impairments, inform treatment plans, and monitor change over time.
3. Develop Familiarity with the Types of Neuropsychological Assessment Tools and the Cognitive Constructs They Assess
Types of Assessment Tools:
○ Intellectual Ability: Measures general cognitive functioning.
○ Executive Function: Assesses planning, problem-solving, and cognitive flexibility (e.g., Wisconsin Card Sorting Test, Stroop Test).
○ Memory: Verbal memory (e.g., word lists) and visual memory (e.g., Rey Complex Figure).
○ Attention and Processing Speed: Measures sustained and divided attention (e.g., Trail Making Test, Coding Test).
○ Language: Tests perception, comprehension, and production (e.g., naming tests).
○ Perceptual and Motor Function: Measures visual perception and motor dexterity (e.g., Clock Drawing, Grooved Pegboard).
4. Compare and Contrast the Advantages and Disadvantages of Quantitative, Qualitative, and Mixed Approaches to Interpretation of Neuropsychological Tests
Quantitative Approach (a short scoring sketch follows this list):
○ Advantages: Provides objective, numerical data for comparison with normative scores. Facilitates standardised interpretation of results.
○ Disadvantages: May miss qualitative aspects such as patient behaviour, effort, or anxiety during testing. Limited in capturing individual variability or context.
Qualitative Approach:
○ Advantages: Offers rich, descriptive data about how tasks are approached and how errors are made. Provides context that may be missed by purely quantitative measures.
○ Disadvantages: Less standardised and more subjective. Difficult to compare results with normative data.
Mixed Approach:
○ Advantages: Combines the objectivity of quantitative data with the depth of qualitative insights. Offers a more comprehensive view of the patient's cognitive strengths and weaknesses.
○ Disadvantages: Requires more time and expertise to integrate both data types effectively.
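To make the quantitative approach concrete, here is a minimal Python sketch of a normative comparison; the norms, the raw score, and the z ≤ -1.5 flagging cutoff are all hypothetical placeholders for illustration, not values from the lecture material or any published test manual.

```python
from statistics import NormalDist

def interpret_against_norms(raw_score: float, norm_mean: float, norm_sd: float,
                            higher_is_better: bool = True) -> str:
    """Quantitative-style interpretation: locate a raw score within a normative
    distribution and apply a conventional cutoff (z <= -1.5 flagged here)."""
    z = (raw_score - norm_mean) / norm_sd
    if not higher_is_better:  # e.g., a completion time, where lower is better
        z = -z
    percentile = NormalDist().cdf(z) * 100
    label = "below expectations" if z <= -1.5 else "within normal limits"
    return f"z = {z:+.2f}, percentile = {percentile:.1f} -> {label}"

# Entirely hypothetical norms and score for a timed test (seconds; lower is better).
print(interpret_against_norms(raw_score=95, norm_mean=70, norm_sd=15,
                              higher_is_better=False))
```

This is exactly what the quantitative approach buys (an objective, comparable number) and what it misses: nothing in the output captures how the client approached the task, which is the gap the qualitative and mixed approaches address.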