Psychological Assessment PDF
Summary
These notes cover psychological assessment, including key concepts, various assessment types, and the historical context. They also include ethical considerations and resources for further research.
Full Transcript
PSYCHOLOGICAL ASSESSMENT

Chapter 1: Psychological Testing and Assessment

Key Concepts

Historical Context: The roots of psychological testing trace back to early 20th-century France, notably Alfred Binet's work in 1905 to place Paris schoolchildren in appropriate classes. His test was adapted for use in the U.S. for military screening during WWI.

Definitions

Testing: Encompasses the administration and interpretation of test scores.
Psychological Testing: Measures psychology-related variables using devices or procedures to obtain behavioral samples.
Psychological Assessment: Involves gathering and integrating data for evaluations using tests, interviews, case studies, and observations.

Parties Involved

Test Developer: Creates tests following standards for educational and psychological testing.
Test User: Professionals across various fields who utilize testing methodologies.
Test Taker: Individuals undergoing assessment.
Society at Large: Influences the development and application of tests based on evolving needs.

Settings for Assessment

Educational: Early identification of special needs.
Clinical: Diagnosing behavior problems.
Counseling: Aimed at improving adjustment and productivity.
Geriatric: Assessing cognitive and psychological functions in older adults.
Business/Military: Using various tests for hiring and promotion decisions.
Governmental: Credentialing and licensing professionals.

Varieties of Assessment

1. Therapeutic Psychological Assessment: Has a therapeutic element.
2. Educational Assessment: Evaluates skills and abilities relevant to academic success.
3. Retrospective Assessment: Analyzes psychological aspects at a previous point in time.
4. Remote Assessment: Collects data from a distance.
5. Ecological Momentary Assessment: Evaluates cognitive and behavioral variables in real time.

Assessment Process

Starts with a referral from various professionals (e.g., teachers, psychologists). The assessor selects tools and conducts the assessment. A report is generated to address the referral question.
Collaborative Psychological Assessment: Assessor and assessee collaborate throughout the process.
Dynamic Assessment: Involves a cycle of evaluation, intervention, and reevaluation.

Testing

Test: A measuring device or procedure.
Psychological Test: Specifically measures psychology-related variables.
Key elements include content, format, administrative procedures, scoring, and interpretation.

Interview Techniques

Interviewer Considerations: Note both verbal and nonverbal cues (e.g., body language, facial expressions).
Types of Interviews:
○ Panel Interview: Multiple interviewers assess together.
○ Motivational Interview: Combines listening skills with techniques to enhance motivation.

Assessment Tools

Portfolio: A collection of works used for evaluation.
Case History Data: Archives that preserve relevant information about the individual.
Behavioral Observation: Monitoring actions to gather quantitative or qualitative data.
Role Play Test: Assesses responses in simulated situations.
Computer-Assisted Assessment: Involves test administration and scoring using computers.

Ethical Considerations

Ethical guidelines mandate appropriate test selection and proper administration.
Importance of maintaining rapport with test takers and safeguarding test results.

Sources for Authoritative Information

Test Catalogues and Manuals: Offer descriptions and technical details.
Professional Books: Supplement and enhance information on tests.
Journal Articles: Review tests and studies on their reliability.
Online Databases: Various databases provide test-related information.

Chapter 2: A Historical Perspective

1. Antiquity to the 19th Century
○ China (2200 B.C.E.): Testing began as a means for selecting government officials, evolving in content with cultural values and dynasties (e.g., the Song Dynasty emphasized classical literature).
○ Greco-Roman Era: Early attempts to categorize personality types, influenced by bodily fluids.
○ Renaissance Onset: Psychological assessment started to take shape.
2. Key Figures in Testing Development
○ Christian von Wolff: Anticipated psychology as a science in the 18th century.
○ Charles Darwin (1859): Introduced natural selection principles affecting psychological measurement.
○ Francis Galton: Developed tools for psychological assessment; pioneered statistical concepts.
○ Wilhelm Wundt (1879): Established the first experimental psychology lab, focusing on human abilities.
○ James Cattell: Coined the term "mental test" and promoted applied psychology.
○ Charles Spearman: Originated test reliability concepts.
○ Others: Emil Kraepelin, Lightner Witmer, and Victor Henri contributed to the foundation of psychological testing.

The 20th Century

Shift toward measuring intelligence and personality.
Alfred Binet: Collaborated with Theodore Simon to create the first formal intelligence test.
David Wechsler (1939): Introduced adult intelligence measures (WAIS).
The demand for various psychological tests surged, particularly during WWI for screening recruits.

Measurement of Personality

Public interest in intelligence testing led to a boom in psychological assessments, including personality tests.
Robert Woodworth: Developed the personal data sheet and the Woodworth Psychoneurotic Inventory.

Culture and Assessment

Culture Defined: Socially transmitted behavior patterns and beliefs.
Henry Goddard's Work: Highlighted issues with testing across cultural and language backgrounds, leading to concerns over biased interpretations.
Communication in Assessment: Language barriers and nonverbal behaviors complicate the assessment process.

Standards of Evaluation

Distinctions between individualist and collectivist cultures affect evaluation criteria and outcomes.

Legal and Ethical Considerations

Philippine Laws: RA 10029 (Psychology Act), RA 11036 (Mental Health Act), RA 10173 (Data Privacy Act).
Ethics Defined: Principles guiding right conduct in testing.
Public Concerns: Legislation like No Child Left Behind and the Common Core reflect society's expectations for testing standards.

Testing and Discrimination

Concepts of quota systems, reverse discrimination, and disparate treatment highlight the complexities of fair assessment practices.

Test-User Qualifications

Levels of Testing:
○ Level A: Basic tests, minimal training required.
○ Level B: Intermediate knowledge of test construction needed.
○ Level C: Advanced understanding required for complex tests.

Testing People with Disabilities

Adjustments necessary for fair testing and meaningful interpretation of data.

Computer-Assisted Psychological Assessment (CAPA)

Addressing access issues, comparability of formats, and the regulation of online testing.

Rights of Test Takers

1. Informed Consent: Test-takers must understand testing purposes and processes.
2. Right to Findings: Clarity about test results and recommendations.
3. Privacy and Confidentiality: Protecting personal information from disclosure.
4. Least Stigmatizing Labels: Using appropriate terminology in reporting results.
Chapter 3: A Statistics Refresher

Test Scores

Test scores are numerical representations of performance.

Statistical Tools

Used for description, inference, and conclusion from numerical data.

Scales of Measurement

Measurement: The process of assigning numbers or symbols to characteristics according to specific rules.
Scale: A set of numbers modeling the empirical properties of the objects to which they are assigned.

Types of Scales

1. Continuous Scale
○ Measures continuous variables (e.g., height, weight).
2. Discrete Scale
○ Measures discrete variables (e.g., number of children).

Nominal Scale

The simplest measurement form, involving classification based on distinguishing characteristics (e.g., yes/no responses).

Ordinal Scale

Allows for classification and rank ordering but does not imply equal intervals between ranks (e.g., the Rokeach Value Survey). It lacks an absolute zero.

Interval Scale

Features equal intervals between numbers but no absolute zero (e.g., temperature).

Ratio Scale

Contains equal intervals and a true zero point (e.g., weight).

Measurement Scales in Psychology

Ordinal scales are frequently used to measure psychological traits like intelligence and personality.

Describing Data

Distribution

An arrangement of test scores for study.

Raw Score

An unmodified numerical account of performance.

Frequency Distributions

Frequency Distribution: Lists scores with their frequencies.
Grouped Frequency Distribution: Summarizes data using class intervals based on convenience.

Graph Types

Histogram: Series of contiguous rectangles.
Bar Graph: Rectangles that are not contiguous.
Frequency Polygon: Continuous lines connecting points of scores and frequencies.

Measures of Central Tendency

Indicate the average score within a distribution.
Arithmetic Mean: The sum of observations divided by the number of observations.
Median: The middle score in a distribution.
Mode: The most frequently occurring score.

Bimodal Distribution

Contains two scores with the highest frequency.

Measures of Variability

Variability indicates how scores are dispersed within a distribution.
Range: Difference between the highest and lowest scores.
Interquartile Range: Difference between the third (Q3) and first (Q1) quartiles.
Semi-Interquartile Range: Half of the interquartile range.
Average Deviation: The average of the absolute deviations of scores from the mean.
Standard Deviation: Square root of the average squared deviations about the mean.
Variance: Mean of the squares of the differences from the mean.

Skewness

Describes the asymmetry of a distribution.
○ Positive Skew: Few scores are high.
○ Negative Skew: Few scores are low.
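A minimal sketch (not part of the original notes) of how the central tendency and variability measures above could be computed with Python's standard library; the score distribution is invented for illustration.

```python
from statistics import mean, median, mode, pvariance, pstdev

scores = [7, 8, 8, 9, 10, 12, 12, 12, 14, 18]  # hypothetical raw scores

print(mean(scores))               # arithmetic mean: sum / n = 11.0
print(median(scores))             # middle score (average of 5th and 6th here) = 11.0
print(mode(scores))               # most frequently occurring score = 12
print(max(scores) - min(scores))  # range = 11
print(pvariance(scores))          # variance: mean squared deviation from the mean = 10.0
print(pstdev(scores))             # standard deviation: square root of the variance, about 3.16
```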
Kurtosis

Refers to the steepness (peakedness) of a distribution.
○ Platykurtic: Relatively flat.
○ Leptokurtic: Relatively peaked.
○ Mesokurtic: Intermediate steepness.

The Normal Curve

A bell-shaped curve highest at its center and tapering off toward the tails, representing a normal distribution.

Area Under the Normal Curve

The normal curve has two tails; the area between 2 and 3 standard deviations above the mean is considered a tail.

Standard Scores

Standard Score: A converted raw score that is more interpretable than a raw score.
Z-Scores: Indicate how many standard deviations a score is from the mean.
T-Scores: A standardized score system with a mean of 50 and a standard deviation of 10.

Other Standard Scores

Stanine: A contraction of "standard" and "nine."

Transformations:
Linear Transformations: Retain direct numerical relationships.
Nonlinear Transformations: Used when data aren't normally distributed.

Correlation and Inference

Coefficient of Correlation

Indicates the strength of the relationship between two variables.

The Concept of Correlation

Correlation: Degree and direction of correspondence between two variables.
Coefficient of Correlation: Expresses how two variables are related.
○ Positively Correlated: Both variables increase or decrease together.
○ Negatively Correlated: One variable increases while the other decreases.

The Pearson R

The preferred method for linear relationships between continuous variables.

The Spearman Rho

Used for small samples and ordinal measurements.

Graphic Representations of Correlation

Scatterplot: A graphical representation of the relationship between two variables.
Curvilinearity: The degree to which the relationship between two variables is curved rather than linear.
Outlier: An atypical point far from the others in a scatterplot.

Meta-Analysis

A statistical technique for combining data across studies for more reliable conclusions.

Effect Size

Estimates derived from meta-analyses.

Advantages of Meta-Analyses:
Replicable, reliable conclusions; focus on effect size; promotes evidence-based practice.

Evidence-Based Practice

Professional practice grounded in clinical and research findings.
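The z- and T-score conversions and the two correlation coefficients above lend themselves to a short worked example. This is a minimal sketch with invented scores and invented norm-group values (mean 66, SD 10), not a reproduction of any test's actual norms.

```python
import math

def z_score(x, mean, sd):
    # how many standard deviations x lies from the mean
    return (x - mean) / sd

def t_score(z):
    # T-score system: mean 50, standard deviation 10
    return 50 + 10 * z

def pearson_r(xs, ys):
    # Pearson's r for a linear relationship between continuous variables
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman_rho(xs, ys):
    # Spearman's rho: Pearson's r applied to ranks (assumes no tied scores here)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson_r(ranks(xs), ranks(ys))

test = [55, 60, 65, 70, 80]        # invented test scores
criterion = [50, 58, 66, 69, 82]   # invented criterion scores

print(z_score(70, mean=66, sd=10))              # 0.4
print(t_score(0.4))                             # 54.0
print(round(pearson_r(test, criterion), 3))     # close to +1: strong positive correlation
print(round(spearman_rho(test, criterion), 3))  # 1.0: the rank orders agree perfectly
```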
Chapter 4: Of Tests and Testing

Some Assumptions About Psychological Testing and Assessment

1. Assumption 1: Psychological Traits and States Exist
○ Trait: A distinguishable, relatively enduring way in which one individual varies from another (e.g., personality traits).
○ State: Temporary distinctions between individuals (e.g., mood).
○ Psychological Trait: Covers a broad range of characteristics (e.g., intelligence, interests).
○ Construct: A scientific concept developed to describe behavior.
○ Overt Behavior: Observable actions or products of actions.
2. Assumption 2: Psychological Traits and States Can Be Quantified and Measured
○ Cumulative Scoring: Higher scores indicate a stronger presence of the targeted trait.
3. Assumption 3: Test-Related Behavior Predicts Non-Test-Related Behavior
○ Tests indicate potential future behavior and can postdict past behavior.
4. Assumption 4: Tests Have Strengths and Weaknesses
○ Competent test users understand the tests' development, administration, and interpretation, along with their limitations.
5. Assumption 5: Various Sources of Error Are Part of the Assessment Process
○ Error: The component of a test score attributable to factors other than the trait being measured.
○ Error Variance: Test score components due to non-targeted traits.
○ Classical Test Theory: Each test-taker has a true score that would be measured without error.
6. Assumption 6: Testing and Assessment Can Be Conducted in a Fair and Unbiased Manner
○ Major test publishers aim for fairness, following guidelines to ensure proper use of tests.
7. Assumption 7: Testing and Assessment Benefit Society
○ Highlights the necessity of good tests in critical decision-making.

What's a Good Test?

Reliability: Consistency and precision of the measurement tool; reliable tests yield the same scores under the same conditions.
Validity: A test is valid if it measures what it purports to measure, including item relevance and score interpretation.

Other Considerations

Trained Examiners: Able to administer, score, and interpret tests effectively.
Useful Tests: Yield actionable results that benefit individuals or society.
Norms: Test performance data used for reference in evaluating individual scores.

Norms in Testing

Norm-Referenced Testing: Evaluation by comparing individual scores to a group.
Norm: Performance data from a group used as a reference.
Normative Sample: Group analyzed for reference in evaluating performance.
Norming: Deriving norms from test results.

Sampling to Develop Norms

Standardization: Administering tests to a representative sample to establish norms.
Sampling: Selecting a representative portion of the population.
○ Stratified Sampling: Including diverse subgroups.
○ Stratified-Random Sampling: Equal chances for all members.
○ Purposive Sample: Selected based on perceived representativeness.
○ Incidental Sample: Convenience-based selection.

Developing Norms for a Standardized Test

1. Administer the test under standard instructions.
2. Analyze the data using descriptive statistics.
3. Summarize the standardization sample.
4. Provide evidence supporting the interpretations of results.

Types of Norms

Percentiles: Percentage of individuals scoring below a raw score.
Age Norms: Average performance based on age.
Grade Norms: Average performance per school grade.
National Norms: Derived from a nationally representative sample.
National Anchor Norms: Compare scores across different tests.
Subgroup Norms: Normative samples segmented by selection criteria.
Local Norms: Performance information specific to local populations.

Norm-Referenced vs. Criterion-Referenced Evaluations

Norm-Referenced: Compare scores to others on the same test.
Criterion-Referenced: Evaluate individual scores against a set standard.

Culture and Inference

Responsible test users should consider cultural factors in administration and interpretation. Understanding the test-taker's culture and background is crucial for accurate assessment.
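Since a percentile is defined above as the percentage of individuals scoring below a raw score, the computation is short enough to sketch. A minimal illustration with an invented normative sample (some conventions also credit half of any tied scores; that variant is omitted here):

```python
norm_sample = [12, 15, 15, 18, 20, 21, 21, 23, 25, 30]  # invented norm group

def percentile_rank(raw, sample):
    # percentage of the norm group scoring strictly below the raw score
    below = sum(1 for s in sample if s < raw)
    return 100 * below / len(sample)

print(percentile_rank(21, norm_sample))  # 50.0: half the norm group scored below 21
```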
Chapter 5: Reliability

Overview of Reliability

Definition: Reliability is synonymous with dependability and consistency. In psychometrics, it refers to the consistency of measurement.
Reliability Coefficient: An index of reliability, representing the ratio of true score variance to total variance.

Understanding Reliability

Variance (σ²): A statistical measure of variability; the square of the standard deviation.
○ True Variance: Variance due to actual differences.
○ Error Variance: Variance due to irrelevant or random factors.
Measurement Error: Refers to all factors affecting the measurement process, excluding the variable being measured. It can be categorized as:
○ Random Error: Caused by unpredictable fluctuations.
○ Systematic Error: Constant errors that are proportionate to the true value being measured.

Sources of Error Variance

1. Test Construction:
○ Item/Content Sampling: Variations among test items affect scores.
○ Test creators aim to maximize true variance and minimize error variance.
2. Test Administration:
○ Factors influencing attention and motivation can create error variance:
   Test Environment: Temperature, lighting, noise.
   Testtaker Variables: Emotional state, physical health.
   Examiner Variables: Examiner's demeanor and presence.
3. Test Scoring and Interpretation:
○ Scorers and scoring systems can introduce error variance. Objective tests can also be affected by technical issues.
4. Other Sources of Error:
○ Nonsystematic errors such as forgetting or misunderstanding instructions can occur. Studies suggest underreporting or overreporting of behaviors contributes to systematic errors.

Reliability Estimates

1. Test-Retest Reliability:
○ Correlates scores from the same individuals across two test administrations.
○ Important for stable traits; longer intervals may reduce reliability.
2. Parallel-Forms and Alternate-Forms Reliability:
○ Assesses the relationship between different forms of a test, ensuring they measure the same construct equivalently.
3. Split-Half Reliability:
○ Correlates scores from two halves of a test administered once.
○ Can be split randomly or by odd/even items.
4. Spearman-Brown Formula:
○ Estimates reliability when a test's length is altered by any number of items.
5. Other Internal Consistency Estimates:
○ Inter-item Consistency: Measures correlation among all test items.
○ Kuder-Richardson Formula 20 (KR-20): Used for dichotomous items; lower reliability for heterogeneous items.
○ Coefficient Alpha: Average of all possible split-half correlations; suitable for nondichotomous items.
○ Average Proportional Distance (APD): Evaluates internal consistency by examining item score differences.
6. Inter-Scorer Reliability:
○ Consistency among multiple scorers, measured through correlation coefficients.

Using and Interpreting Reliability Coefficients

Reliability coefficients reflect different sources of error. The nature of the test affects reliability:
○ Homogeneity vs. heterogeneity
○ Dynamic vs. static characteristics
○ Range of test scores
○ Speed vs. power tests
○ Criterion-referenced vs. norm-referenced tests

Measurement Models

1. Classical Test Theory (CTT):
○ Widely used, positing that a true score reflects an individual's ability level.
2. Domain Sampling Theory:
○ Estimates reliability based on specific sources of variation affecting test scores.
3. Generalizability Theory:
○ Emphasizes variations in test scores due to different testing situations, using coefficients of generalizability.
4. Item Response Theory (IRT):
○ Examines how items differentiate among varying trait levels; includes models for dichotomous and polytomous items.

Reliability and Individual Scores

Standard Error of Measurement (SEM): Estimates the precision of an observed score. Higher reliability correlates with lower SEM.
Confidence Interval: A range likely to contain the true score.
Standard Error of the Difference: Determines the significance of differences between two scores.
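Two of the quantities above have simple closed forms: the Spearman-Brown estimate for a lengthened (or shortened) test, and the standard error of measurement. A minimal sketch with invented reliability and SD values:

```python
import math

def spearman_brown(r, n):
    # estimated reliability when test length is changed by a factor of n
    # (n = 2 projects a half-test correlation onto the full-length test)
    return (n * r) / (1 + (n - 1) * r)

def standard_error_of_measurement(sd, reliability):
    # SEM = SD * sqrt(1 - reliability); higher reliability -> lower SEM
    return sd * math.sqrt(1 - reliability)

print(round(spearman_brown(0.70, 2), 3))  # 0.824: half-test r of .70, doubled length
sem = standard_error_of_measurement(sd=15, reliability=0.91)
print(sem)  # 4.5: an observed score plus/minus 1 SEM spans roughly a 68% confidence band
```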
Chapter 6: Validity

1. Definition of Validity:
○ A measure of how well a test measures what it claims to measure, based on evidence about the appropriateness of inferences drawn from test scores.
2. Types of Validity:
○ Content Validity: Assesses whether a test adequately samples the behavior it is intended to measure.
○ Criterion-Related Validity: Evaluates how well test scores correlate with other relevant measures.
   Concurrent Validity: Relationship of test scores with criterion measures taken at the same time.
   Predictive Validity: Ability of test scores to predict future performance on a criterion measure.
○ Construct Validity: Evaluates the appropriateness of inferences drawn from test scores concerning a theoretical construct.
3. Additional Concepts:
○ Face Validity: Relates to how relevant test items appear to the test-taker.
○ Ecological Validity: Measures how well a test reflects real-world scenarios.
○ Incremental Validity: The added value of including additional predictors.
4. Evidence for Construct Validity:
○ Homogeneity of the test.
○ Changes in test scores aligned with age or other theoretical predictions.
○ Variability in scores among distinct groups.
○ Correlation with established tests (convergent evidence) and lack of correlation with unrelated variables (discriminant evidence).
5. Validity Coefficient:
○ A correlation coefficient that measures the relationship between test scores and the criterion measure.

Errors and Bias in Assessment

1. Test Bias: Systematic factors in a test that lead to inaccurate measurements.
2. Rating Errors:
○ Leniency Error: Overly generous scoring.
○ Severity Error: Harsh scoring.
○ Central Tendency Error: Ratings clustered around the middle of the scale.
○ Halo Effect: Rater biases affecting scores based on unrelated attributes of the ratee.
3. Fairness: The extent to which a test is administered and interpreted impartially.

Practical Applications

Validation Studies: Users should conduct validation studies when adapting tests for specific populations.
Use of Ratings: To mitigate rating errors, consider using rankings or structured formats.
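Incremental validity, mentioned above, is often expressed as the gain in explained criterion variance (R²) when a new predictor is added. A minimal sketch with simulated data; the variable names and effect sizes are invented, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
test_a = rng.normal(size=n)   # existing predictor
test_b = rng.normal(size=n)   # candidate additional predictor
criterion = 0.5 * test_a + 0.3 * test_b + rng.normal(scale=0.8, size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y)), predictors])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

r2_a = r_squared(test_a[:, None], criterion)
r2_ab = r_squared(np.column_stack([test_a, test_b]), criterion)
print(round(r2_ab - r2_a, 3))  # positive gap: test B adds predictive value beyond test A
```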
Chapter 7: Utility

1. Definition of Utility:
○ Refers to the usefulness or practical value of a test or assessment in aiding decision-making, improving efficiency, and the value of training programs or interventions.
2. Factors Affecting Utility:
○ Psychometric Soundness: Includes reliability (consistency of measurements) and validity (accuracy of measurements). A test is considered useful if it has high reliability and validity coefficients.
○ Costs: Disadvantages, losses, or expenses (both economic and non-economic) associated with using a test.
○ Benefits: Profits, gains, or advantages derived from using a test.

Utility Analysis

1. Utility Analysis Definition:
○ A set of techniques for conducting a cost-benefit analysis to determine the practical value of an assessment tool.
2. Conducting a Utility Analysis:
○ Expectancy Data: Tables (like Taylor-Russell and Naylor-Shine) used to estimate the likelihood that test-takers scoring within certain ranges will succeed on a criterion measure.
○ Brogden-Cronbach-Gleser Formula: Used to calculate the dollar amount of utility gain from a selection instrument; it estimates the benefits of using a test or selection method (see the sketch after this chapter).

Decision Theory and Practical Considerations

1. Decision Theory:
○ Highlights the application of statistical decision theory in psychological testing (e.g., Cronbach and Gleser's work).
2. Practical Considerations:
○ Job Applicant Pool: Many utility models assume all selected applicants will accept a job offer, which may not be true for top performers.
○ Job Complexity: More complex jobs lead to greater variability in performance.
○ Cut Scores: Numerical reference points used to categorize test-takers based on their scores.

Types of Cut Scores

1. Relative Cut Score:
○ Based on the performance of a group (norm-referenced).
2. Fixed Cut Score:
○ Determined by a judgment regarding minimum proficiency (absolute).
3. Multiple Cut Scores:
○ Use of multiple cut scores for categorization.
4. Compensatory Model:
○ Assumes that high scores in one area can offset low scores in another.

Methods for Setting Cut Scores

1. Angoff Method:
○ Used for establishing fixed cut scores based on expert judgment.
2. Known Groups Method:
○ Compares data from groups known to possess or lack specific traits.
3. IRT-Based Method:
○ Involves arranging items in a histogram to determine cut scores, as in the bookmark method.
4. Predictive Yield Method:
○ Takes into account the number of positions to fill and score distributions.
5. Discriminant Analysis:
○ Analyzes the relationship between variables and success in specific groups.
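One common rendering of the Brogden-Cronbach-Gleser estimate multiplies the number selected, average tenure, the validity coefficient, the dollar value of one SD of job performance, and the mean standard score of those selected, then subtracts total testing costs. All parameter values below are invented, and the exact form (especially how costs are counted) is an assumption to check against a utility-analysis source.

```python
def bcg_utility_gain(n_selected, tenure_years, validity, sd_performance_dollars,
                     mean_z_of_selected, cost_per_applicant, n_applicants_tested):
    # benefit: N * T * r_xy * SD_y * mean z-score of the selected group
    benefit = (n_selected * tenure_years * validity
               * sd_performance_dollars * mean_z_of_selected)
    # cost: assumed here to cover everyone tested, not only those hired
    cost = cost_per_applicant * n_applicants_tested
    return benefit - cost

print(bcg_utility_gain(10, 2, 0.40, 12_000, 1.0, 50, 200))
# 10 * 2 * 0.40 * 12000 * 1.0 - 50 * 200 = 96000 - 10000 = 86000 (dollars)
```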
Chapter 8: Test Development

Stages of Test Development:

1. Test Conceptualization: Generating the idea for the test.
2. Test Construction: Writing, formatting, and designing the test.
3. Test Tryout: Administering the test to a representative sample.
4. Item Analysis: Evaluating each item for reliability, validity, discrimination, and difficulty.
5. Test Revision: Modifying the test based on analysis.

I. Test Conceptualization

Questions to consider during conceptualization include:
1. Purpose of the test.
2. Need for the test.
3. Target audience.
4. Content coverage and administration methods.

Norm-Referenced vs. Criterion-Referenced Tests:
Criterion-referenced tests focus on mastery of specific skills, while norm-referenced tests compare individuals against each other.

Pilot Work:
Preliminary research to evaluate test items before finalizing the test.

II. Test Construction

Scaling:
Assignment of numbers to measure traits; scaling methods determine how these numbers reflect the characteristics being measured.

Types of Scales:
Rating Scales: Measure strength of traits or attitudes.
Likert Scale: Assesses attitudes on a unidimensional or multidimensional basis.
Guttman Scale: Ranges items from weaker to stronger expressions of a trait.

Item Writing:
Consider the coverage of content, item formats, and the size of the item pool.

Item Formats:
1. Selected-Response Format: Includes multiple-choice, matching, and true-false items.
2. Constructed-Response Format: Includes completion, short-answer, and essay items.

Item Banking and Computer Administration:
Item Bank: A collection of test items organized by various categories.
Computerized Adaptive Testing (CAT): Adapts item difficulty based on test-taker performance.

Scoring Items:
Common models include cumulative scoring, class scoring, and ipsative scoring, each with different methods of interpreting test scores.

III. Test Tryout

Target Population: The test should be tried out on individuals similar to the intended test-takers.
Sample Size:
○ A minimum of 5 subjects per item, preferably 10.
○ Larger samples enhance reliability.
Testing Conditions:
○ Mimic the conditions of the standardized test closely.
○ Control for extraneous factors affecting responses.

What Makes a Good Item?

Pseudobulbar Affect: Example of a neurological disorder relevant to test items.
Item Analysis: Involves evaluating test scores and individual item responses.

IV. Item Analysis

1. Difficulty Index: Measures how easy or difficult an item is.
○ Items can be too easy (everyone gets them right) or too difficult (everyone gets them wrong).
2. Reliability Index: Indicates the internal consistency of the test.
○ Calculated using the item-score standard deviation and the item's correlation with the total test score.
3. Validity Index: Assesses whether a test measures what it claims to measure.
4. Discrimination Index: Evaluates how well an item differentiates between high and low scorers (see the sketch after this chapter).
○ Higher values indicate better discrimination.

Analysis of Item Alternatives

Item Characteristic Curves (IRT): Graphic representations used to assess item performance in terms of difficulty and discrimination.
Guessing: Addressing the impact of guessing on test scores.
Item Fairness: Evaluates whether items are biased against specific groups.

Speed Tests

Analysis can be misleading if items near the end are not reached due to time constraints.
Recommendations against limiting analysis only to completed items due to reliability issues.

Qualitative Item Analysis

Qualitative Methods: Emphasize verbal data over statistical measures.
Think Aloud Technique: Involves test-takers verbalizing their thought processes during the test.
Expert Panels: Used for sensitivity reviews and qualitative item analysis.

V. Test Revision

Purpose of Revision: To enhance existing tests based on feedback and changing contexts.
Cross-Validation: Validates test performance on different sample groups.

Use of IRT in Test Development

Item Response Theory (IRT): Evaluates tests and guides revisions through item information curves.
Differential Item Functioning (DIF): Analyzes whether items function differently across groups.
Item Banks: Collections of items from existing measures, evaluated for validity and reliability before use.
Quality Assurance: Emphasizes the role of trained examiners in maintaining standardized test administration.

Instructor-Made Tests

Concerns about Test Administration: Professors focus on clarity, relevance, and representative questions.
Content Validity: Typically assessed informally during the development process.
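The difficulty and discrimination indices described above reduce to proportions over a response matrix. A minimal sketch with an invented matrix (1 = correct), rows pre-sorted from highest to lowest total score:

```python
responses = [
    [1, 1, 1, 0],  # highest total score
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],  # lowest total score
]

def difficulty(item):
    # proportion of all test-takers answering the item correctly
    col = [row[item] for row in responses]
    return sum(col) / len(col)

def discrimination(item, group_size=2):
    # proportion correct in the upper group minus the lower group
    upper = [row[item] for row in responses[:group_size]]
    lower = [row[item] for row in responses[-group_size:]]
    return (sum(upper) - sum(lower)) / group_size

for item in range(4):
    print(item, difficulty(item), discrimination(item))
```

Higher discrimination values mean high scorers pass the item more often than low scorers; a value near zero (or negative) flags an item for revision.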
Chapter 9: Intelligence and Its Measures

WHAT IS INTELLIGENCE

Intelligence is portrayed as a multifaceted capacity that manifests differently throughout life. It encompasses:

Knowledge Acquisition and Application: The ability to learn and utilize knowledge.
Logical Reasoning: The capacity to reason and make sound judgments.
Effective Planning and Problem-Solving: The ability to strategize and resolve issues.
Perceptual Inference: Inferring concepts and ideas perceptively.
Attention and Intuition: The ability to focus and intuitively understand situations.
Adaptation: Adjusting and thriving in new contexts.

Younger children often associate intelligence with social skills, while older children emphasize academic abilities like reading (Yussen & Kane, 1980).

NOTABLE THEORISTS

Francis Galton: Suggested intelligence is linked to sensory abilities and initiated the debate on the heritability of intelligence.
Alfred Binet: Advocated for complex measures of intellectual ability, emphasizing the interplay of reasoning, judgment, memory, and abstraction.
David Wechsler: Defined intelligence as an aggregate capacity, acknowledging non-intellective factors such as personality traits that influence performance.
Jean Piaget: Viewed intelligence as evolving biological adaptation, where cognitive skills develop over time through mental processes.

PERSPECTIVES IN INTELLIGENCE

Interactionism: Highlights the interplay of heredity and environment in developing intelligence.
Factor-Analytic Theories: Focus on identifying underlying abilities constituting intelligence through statistical correlations.
Information-Processing Theories: Concentrate on the mental processes involved in intelligence.

FACTOR-ANALYTIC THEORIES

Spearman's Two-Factor Theory: Introduced "g" (general intelligence) and "s" (specific abilities) to explain correlations among different intelligence measures.
Cattell-Horn Theory: Distinguished between crystallized intelligence (Gc) and fluid intelligence (Gf), later expanded by Horn to include various cognitive abilities (e.g., visual and auditory processing).
Cattell-Horn-Carroll Model: A hierarchical model incorporating multiple cognitive abilities.

INFORMATION-PROCESSING VIEW

Aleksandr Luria's approach focuses on how information is processed, distinguishing between simultaneous (integrated) and successive (sequential) processing. The PASS model includes:

Planning: Strategy development for problem-solving.
Attention: Receptivity to information.
Simultaneous and Successive Processing: Different styles of information processing.

MEASURING INTELLIGENCE

Considerations for intelligence tests include theoretical foundations, ease of administration, scoring simplicity, interpretative clarity, normative adequacy, and reliability and validity indices.

The Stanford-Binet Intelligence Scales: Fifth Edition (SB5)

Traces back to the Binet-Simon test and introduced concepts like IQ and alternate items.
Shifted from ratio IQ to deviation IQ, allowing for performance comparison within age groups.
Introduced routing tests to tailor item difficulty and behavior observation during assessments.

The Wechsler Tests

Developed by David Wechsler to evaluate diverse clients; include core and supplemental subtests.
Short forms of intelligence tests are endorsed for screening purposes.

Group Tests of Intelligence

The Army Alpha and Beta tests were early examples of group testing, adapting Binet's test for various recruits.

Other Measures of Intellectual Abilities

Cognitive style reflects consistency in information processing.
Convergent thinking narrows down to a single solution; divergent thinking generates multiple possibilities.

ISSUES IN THE ASSESSMENT OF INTELLIGENCE

Culture and Measured Intelligence

The quest for culture-free intelligence tests is ongoing, focusing on minimizing cultural bias in assessments.

The Flynn Effect

Refers to the observed rise in intelligence test scores over time, influenced by cultural and item-specific factors.

Construct Validity of Intelligence Tests

The definition of intelligence adopted by test developers significantly affects expected outcomes in factor analysis, whether reflecting a single factor (like Spearman's g) or multiple diverse factors (like Guilford's theory).
Self-report assessments can yield uncertain truths about personal traits, highlighting the importance of reliable measures.
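The SB5 notes above mention the shift from ratio IQ to deviation IQ; the two computations are simple enough to sketch. The mental/chronological ages and the norm-group values below are invented, and the 15-point SD is a common but test-specific convention.

```python
def ratio_iq(mental_age, chronological_age):
    # early formulation: IQ = mental age / chronological age * 100
    return mental_age / chronological_age * 100

def deviation_iq(raw, age_group_mean, age_group_sd, scale_mean=100, scale_sd=15):
    # modern formulation: locate the score within the person's own age group
    z = (raw - age_group_mean) / age_group_sd
    return scale_mean + scale_sd * z

print(ratio_iq(10, 8))          # 125.0
print(deviation_iq(34, 28, 4))  # 122.5: raw score is 1.5 SD above the age-group mean
```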
Chapter 10: Assessment for Education

Key Themes:

1. Standardized Testing
○ Criticism: Viewed by many as the "root of all evil" in education, with educational failures attributed to these tests.
○ Importance: Essential for screening, diagnostics, and comparisons among students across various demographics.
2. Common Core State Standards (CCSS)
○ A framework setting educational standards in English and math, with plans for broader subject areas.
3. Response to Intervention (RtI)
○ Background: Mandate since the 1970s to identify and support children with learning disabilities.
○ Definition (2007): A specific learning disability is characterized by a significant gap between achievement and intellectual ability.
○ Model: A multilevel framework to enhance student achievement, using data to identify at-risk students and implementing evidence-based interventions.
4. Implementation of RtI
○ Involves tailored interventions decided by a multidisciplinary team of school professionals.
○ Integrative Assessment: Combines inputs from multiple sources for a comprehensive evaluation.
5. Dynamic Assessment
○ Based on Vygotsky's concept of the Zone of Proximal Development (ZPD), measuring potential growth through guided problem-solving.
6. Types of Tests
○ Achievement Tests: Measure what a student has learned.
○ Aptitude Tests: Predict future performance and abilities; used for readiness assessments, especially in preschool.
7. Assessment Tools
○ Checklists and Rating Scales: Used for behavioral assessments, including the Apgar score for newborns.
○ Informal Evaluation: Non-systematic assessments to gauge various attributes.
8. Psychological Assessment
○ Focuses on cognitive, emotional, and social development through various evaluation methods, including observational and interview techniques.
9. Elementary to Secondary School Assessments
○ Metropolitan Readiness Tests (MRT6): Measure readiness in reading and math for early education.
○ SAT: Commonly used aptitude test for high school students, aiding in college admissions.
10. College and Graduate Level Assessments
○ Graduate Record Examinations (GRE): Includes general and subject-specific tests for graduate school admissions.
○ Miller Analogies Test (MAT): Assesses general intelligence and academic learning through analogies.
11. Diagnostic Tests
○ Used to identify specific learning deficits for targeted interventions.
12. Psychoeducational Test Batteries
○ Kits that assess abilities related to academic success and educational achievement.
13. Performance and Authentic Assessment
○ Performance assessment involves tasks requiring more than multiple-choice responses, focusing on skills and real-world applicability.
14. Peer Appraisal Techniques
○ Involves evaluations from peers, often illustrated through sociograms to show interactions.
15. Measuring Study Habits, Interests, and Attitudes
○ Instruments developed to evaluate factors affecting academic performance beyond just ability, emphasizing the importance of motivation.

Proponents/People Mentioned:

Vygotsky: Introduced the concept of the Zone of Proximal Development, influencing dynamic assessment strategies.
Chapter 11: Personality Assessment: An Overview

1. Definition of Personality

Personality: A unique constellation of psychological traits that remains relatively stable over time.
Includes variables such as values, interests, attitudes, worldview, acculturation, sense of humor, cognitive styles, and behavioral styles.

2. Personality Assessment

Personality Assessment: Measurement and evaluation of psychological traits, states, values, interests, and individual characteristics.

3. Traits, Types, and States

Personality Traits: Distinguishable, enduring variations among individuals (Guilford, 1959).
Personality Types: A constellation of traits resembling a specific category within a personality taxonomy (e.g., Type A vs. Type B).
Personality States: Transitory expressions of personality traits.

4. Basic Questions in Personality Assessment

Who?
○ Assessees may self-report or rely on third-party informants (e.g., parents, teachers).
○ Potential biases include leniency error, severity error, central tendency error, and the halo effect.
What?
○ Assessment focuses on thoughts, feelings, and behaviors associated with the human experience.
○ Response Styles: Tendencies to respond in characteristic ways, which may affect validity (e.g., acquiescent response style, impression management).
Where?
○ Conducted in various settings: schools, clinics, research labs, and counseling centers.
How?
○ Methods include interviews, observations, paper-and-pencil tests, and physiological recordings.
○ Different assessment methods can be structured (e.g., standardized interviews) or unstructured.

5. Scoring and Interpretation

Approaches to scoring may be nomothetic (generalizable traits across individuals) or idiographic (focused on unique individual traits).

6. Issues in Personality Test Development and Use

Self-reporting can introduce bias; respondents may lack insight or manipulate responses to present a certain image (e.g., "faking good" or "faking bad").

7. Developing Instruments to Assess Personality

Logic and Reason: Guide content development, often through a content-oriented approach.
Theory: Reliance on personality theories for test development and interpretation.
Data Reduction Methods: Techniques like factor analysis help identify key personality traits (see the sketch after this chapter).

8. The Big Five Personality Model

The Revised NEO Personality Inventory (NEO PI-R) assesses five major personality dimensions:
○ Neuroticism/Emotional Stability: Adjustment and emotional coping.
○ Extraversion: Sociability and assertiveness.
○ Openness/Intellect: Openness to experiences and intellectual curiosity.
○ Agreeableness: Altruism and sympathy.
○ Conscientiousness: Organization and planning abilities.

9. Criterion Groups

A criterion group serves as a standard for developing and refining personality tests through empirical criterion keying.

10. MMPI and Its Revisions

MMPI: A tool for psychiatric diagnosis and evaluation.
○ MMPI-2: Revised version with updated language and norms.
○ MMPI-2-RF: Restructured scales for better distinction.
○ MMPI-A: Adapted for adolescent assessments.

11. Personality Assessment and Culture

Cultural considerations are crucial for interpreting assessment data.
Acculturation: The process of adapting to a new culture, influencing personality traits and behaviors.
Identity: Cognitive and behavioral characteristics defining individual group membership.
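Where the notes mention factor analysis as a data reduction method (item 7 above), the core idea can be sketched with an eigendecomposition of an inter-item correlation matrix. The toy matrix below is invented so that items 1-3 and items 4-6 cluster; a real analysis would also rotate and interpret the loadings.

```python
import numpy as np

R = np.array([  # invented inter-item correlations for six items
    [1.0, 0.6, 0.5, 0.1, 0.1, 0.0],
    [0.6, 1.0, 0.6, 0.1, 0.0, 0.1],
    [0.5, 0.6, 1.0, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 1.0, 0.6, 0.5],
    [0.1, 0.0, 0.1, 0.6, 1.0, 0.6],
    [0.0, 0.1, 0.1, 0.5, 0.6, 1.0],
])

eigenvalues = np.linalg.eigh(R)[0]  # ascending order for symmetric matrices
print(eigenvalues[::-1].round(2))   # two eigenvalues well above 1 suggest two factors
```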
Chapter 12: Personality Assessment

Methods of Personality Assessment

1. Objective Methods:

Definition: Administered via paper-and-pencil or computer; consist of short-answer items where respondents select from provided options.
Scoring: Involves minimal judgment; responses are scored based on personality characteristics or the validity of the response pattern.
Advantages:
○ Quick response times allow extensive coverage of traits.
○ Well-structured items require little explanation, facilitating group or computerized administration.
○ Scoring can be done swiftly and reliably using templates or computers.
○ Interpretation can also be rapid, especially when aided by custom software.

2. Limitations of Objectivity:

Unlike objective ability tests, personality tests often lack a single correct answer; instead, they reveal personality variables.

3. Projective Methods:

Concept: Based on the projective hypothesis, where individuals project their unconscious needs, fears, and desires onto unstructured stimuli.
Nature: Indirect assessment; respondents do not explicitly disclose personal information. Faking is minimized as examinees interact with ambiguous stimuli.
Clinical Use: Historically favored for clinical insights into unique personal perspectives.

4. Rorschach Inkblot Test:

Developed by: Hermann Rorschach in 1921.
Structure: Comprises 10 bilaterally symmetrical inkblots (5 achromatic, 2 black/white/red, 3 multicolored).
Administration: Testtakers interpret the inkblots, responding to prompts like "What might this be?"
Components:
○ Location: Which part of the inkblot was used for the perception.
○ Determinants: Qualities influencing the perception.
○ Content: The subject matter of the response.
○ Popularity: Frequency of specific responses.
○ Form: Accuracy of the perception against the inkblot's design.

5. Thematic Apperception Test (TAT):

Developed by: Morgan & Murray in 1935.
Structure: 31 cards (30 with pictures, 1 blank) depicting various human situations.
Purpose: Testtakers create narratives about the scenes, revealing underlying motivations and conflicts related to their own lives.
Theoretical Basis: Emphasizes the interplay of individual needs (determined by past experiences) and environmental pressures.

6. Other Projective Techniques:

Hand Test: Uses cards with hand illustrations to gauge testtaker responses.
Rosenzweig Picture-Frustration Study: Involves cartoon images depicting frustrations, asking testtakers to respond verbally.
Apperceptive Personality Test (APT): Focuses on everyday scenes, features diverse individuals, and utilizes more objective scoring.

7. Verbal Projective Techniques:

Word Association Tests: Respondents say the first word that comes to mind after a stimulus word; analyzed based on various factors (e.g., content, reaction time).
Sentence Completion Tests: Incomplete sentences are presented, and respondents finish them, revealing personal insights and attitudes.

8. Auditory Projective Techniques:

Involve responses to auditory stimuli (e.g., sounds or spoken paragraphs) to elicit underlying thoughts or emotions.

9. Figure Drawing Tests:

Definition: Involves creating drawings that are analyzed for content and related characteristics.
Example: Draw A Person (DAP) Test: Participants draw a person on paper, with interpretations based on the drawing's details.

Key Terms & Proponents

Objective Methods: Short-answer personality assessments with quick, standardized scoring.
Projective Methods: Indirect assessments relying on ambiguous stimuli to reveal personality traits.
Rorschach: Inkblot test revealing subconscious perceptions.
TAT: A storytelling test reflecting personal motivations through narrative construction.
Sentence Completion: Gathers insights by completing prompts.
Figure Drawing: Analyzes drawings for psychological insights.

1. Projective Methods

Criticism: Projective methods face critiques regarding:
○ Assumptions: Critics argue that ambiguity in stimuli does not necessarily yield more personality insights. Murstein emphasizes the total stimulus situation, which includes environmental factors and examiner influence.
○ Situational Variables: The examiner's presence and age, the instructions given, and subtle cues can significantly impact responses. For example, TAT stories written in private often differ from those written in the examiner's presence.
○ Psychometric Issues: Concerns include uncontrolled variations, inappropriate samples, and challenges in conducting validity studies, complicating the assessment of reliability and validity.

2. Objective vs. Projective Tests

Dichotomy: Objective tests can also be influenced by response styles and lack of insight from test-takers. Weiner suggests using "structured" for objective tests and "unstructured" for projective tests, highlighting that structured tests access conscious personality aspects, while unstructured ones delve into unconscious material.

3. Behavioral Assessment Methods

Focus: Behavioral assessment emphasizes observable actions rather than inferred global traits. It aims to predict behavior based on situational understanding.

3.1 The Essentials of Behavioral Assessment

Who: Identifies both the assessee (e.g., patient, client, subject) and the assessor (e.g., professional, technician).
What: Targets measurable behaviors based on assessment objectives.
When: Assessed when problem behaviors are likely to occur, often using methods like the Timeline Followback (TLFB).
Where: Ideally conducted in natural environments where behaviors occur.
Why: To provide baseline data, identify triggers, and target behaviors for modification.
How: Can be conducted with minimal equipment or sophisticated technology; data analysis methods remain debated.
4. Varieties of Behavioral Assessment

Behavioral Observation and Rating Scales: Involve observing behaviors directly or through preprinted scales to measure targeted actions.
Self-Monitoring: Individuals record their own behaviors or thoughts; distinct from self-reporting and influenced by the individual's diligence and motivation.
Analogue Studies: Investigate similar variables in controlled settings.
Situational Performance Measures: Assess behaviors under standard conditions, such as in leaderless group techniques.
Role Play: Used for teaching, therapy, and assessment by simulating scenarios.

5. Psychophysiological Methods

Overview: Study physiological indices, like heart rate and blood pressure, that are influenced by psychological factors.
Biofeedback: Monitors biological processes, providing real-time feedback on physiological functions.
Key Instruments:
○ Plethysmograph: Measures blood volume changes.
○ Penile Plethysmograph: Specifically measures blood flow to the penis.
○ Polygraph: Commonly known as a lie detector; crucial in applied psychology contexts.

6. Unobtrusive Measures

Definition: Indicators or records that do not interfere with the observed behavior.

7. Issues in Behavioral Assessment

Psychometric Evaluation: The reliability and validity of behavioral assessment tools remain controversial.
Contrast Effect: Previous ratings may unduly influence current evaluations.
Composite Judgment: Averaging multiple assessments can reduce error and enhance reliability.
Reactivity: Individuals may change their behavior in response to being observed, affecting the accuracy of assessments.
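The psychometric-evaluation concern above is often addressed by checking agreement between independent observers. Below is a minimal sketch of Cohen's kappa, which corrects raw agreement for chance; this particular statistic is one common choice, not one prescribed by the notes, and the behavior codes are invented.

```python
from collections import Counter

obs_a = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "off"]
obs_b = ["on", "off", "off", "on", "off", "on", "on", "on", "on", "off"]

def cohens_kappa(a, b):
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n   # raw agreement
    ca, cb = Counter(a), Counter(b)
    p_chance = sum(ca[k] * cb[k] for k in ca) / (n * n)  # agreement expected by chance
    return (p_observed - p_chance) / (1 - p_chance)

print(round(cohens_kappa(obs_a, obs_b), 2))  # 0.57: moderate agreement beyond chance
```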
Chapter 13: Clinical and Counseling Assessment

Overview of Clinical and Counseling Psychology

Clinical Psychology: Focuses on the prevention, diagnosis, and treatment of abnormal behavior.
Counseling Psychology: Similar focus as clinical psychology, with an emphasis on more typical adjustment problems.

The Interview in Clinical Assessment

Therapeutic Contract: Agreement between client and therapist outlining goals and expectations.

Types of Interviews

1. Stress Interview: Places the interviewee under pressure to elicit specific personality traits.
2. Hypnotic Interview: Conducted under hypnosis for therapeutic assessment or eyewitness accounts.
3. Cognitive Interview: Encourages the use of imagery to recall information.
4. Collaborative Interview: Diminishes the boundary between assessor and assessee, promoting joint discovery.

Standard Questions in Intake Interviews

Demographic data
Reasons for referral
Medical history (past, present, familial)
Psychological history
Current psychological conditions

Mental Status Examination

A clinical interview paralleling the physical exam, assessing areas like:
○ Appearance
○ Behavior
○ Orientation
○ Memory
○ Affect
○ Thought Processes
○ Judgment

Psychometric Aspects

Conclusions drawn from interviews are evaluated for reliability and validity.

Psychological Assessment

Case History Data: Obtained from interviews, records, and other sources to understand behavior patterns.

Psychological Test Battery

Refers to a group of tests administered together, typically including an intelligence test, a personality test, and a neurological deficit screening.

Culturally Informed Psychological Assessment

Accounts for cultural variables impacting the evaluation process.

Special Applications of Clinical Assessment

1. Assessment of Addiction: Tools like the MacAndrew Alcoholism Scale and role-play tests.
2. Forensic Psychological Assessment: Evaluates psychological factors in legal contexts (e.g., competence to stand trial, criminal responsibility).
3. Custody Evaluations: Assess parental capacity and children's needs/preferences.

Child Abuse and Neglect

Definitions of abuse and neglect include physical/emotional harm and failure to provide adequate care.
Physical and Emotional Signs: Important for assessment, often identified through interviews and observations.

Risk Assessment Tools

Child Abuse Potential Inventory (CAP): Validated for abuser identification.
Parenting Stress Index (PSI): Measures parental stress.

Suicide Assessment

Signs include discussing suicide, having a plan, and previous attempts.

Psychological Reports

Key Elements:
○ Demographic Data
○ Reason for Referral
○ Tests Administered
○ Findings: Integrates behavioral observations and test data.
○ Recommendations: Addresses amelioration of the presenting problem.
○ Summary: Concise recap of the referral reason, findings, and recommendations.

The Barnum Effect

Describes the tendency to accept vague personality descriptions as accurate, highlighting the need for careful interpretation in assessments.

Diagnosis of Mental Disorders

Incidence: Rate of new occurrences of a disorder in a population.
Prevalence: Proportion of individuals in a population diagnosed with a disorder.
Wakefield's Evolutionary View: Defines mental disorders as evolutionary failures that harm individuals.

Biopsychosocial Assessment

A multidisciplinary approach that evaluates biological, psychological, social, cultural, and environmental factors contributing to a presenting problem.
Important psychological factors include:
○ Fatalism: Belief that control over life events is limited.
○ Self-efficacy: Confidence in one's ability to accomplish tasks.
○ Social Support: Emotional and practical support from social networks.
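Worked numbers (invented) for the incidence/prevalence distinction above:

```python
population = 10_000
new_cases_this_year = 50  # incidence counts only new occurrences
existing_cases = 400      # prevalence counts everyone currently diagnosed

incidence_rate = new_cases_this_year / population                 # 0.005 -> 5 per 1,000 per year
prevalence = (existing_cases + new_cases_this_year) / population  # 0.045 -> 4.5%
print(incidence_rate, prevalence)
```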
Chapter 14: Neurological Assessment

Overview of Neurology and Neuropsychology

Neurology: Focuses on the nervous system and its disorders.
Neuropsychology: Examines the relationship between brain functioning and behavior.
Neuropsychological Assessment: Evaluates brain and nervous system functioning as it relates to behavior.
Behavioral Neurology: A subspecialty of neurology that emphasizes brain-behavior relationships, primarily through biochemical perspectives.
Neurotology: Concentrates on hearing, balance, and facial nerve issues.

The Nervous System and Behavior

The nervous system consists of neurons and is divided into:
○ Central Nervous System (CNS): Brain and spinal cord.
○ Peripheral Nervous System (PNS): Neurons conveying messages to and from the rest of the body.
Each cerebral hemisphere receives sensory information from, and controls motor responses on, the opposite side of the body, known as contralateral control.

Neurological Damage and Organicity

Neurological Damage: Can involve lesions in the brain or nervous system, leading to impairments.
Lesion: A pathological alteration of tissue due to injury or infection.
Brain Damage: Refers to any physical or functional impairment in the CNS resulting in deficits.

Neuropsychological Evaluation

Hard Signs: Definite indicators of neurological deficit (e.g., abnormal reflexes).
Soft Signs: Suggestive indicators of neurological issues.

Components of a Neuropsychological Evaluation

1. History Taking: Emphasis on patient narratives and case studies.
2. The Interview: Use of structured interviews and screening devices for varied populations at risk.
3. Mental Status Examination: Evaluates consciousness, emotional state, thought clarity, memory, language, etc.
4. Physical Examination: Involves noninvasive procedures, such as assessing reflexes.

Parkinson's Disease

Parkinson's Disease (PD): A progressive neurological illness with motor and nonmotor symptoms, including depression and dementia.
Caused by cell loss in the substantia nigra, impacting dopamine production. Diagnoses are often idiopathic (of unknown origin).
Related conditions include:
○ Rapid Eye Movement Sleep Behavior Disorder: Acting out dreams.
○ Dyskinesias: Involuntary jerking movements.
○ Deep Brain Stimulation (DBS): A treatment method.
Lewy Bodies: Protein clusters that deplete dopamine and contribute to Lewy body dementia, which exhibits symptoms similar to both Parkinson's and Alzheimer's.

Neuropsychological Tests

Utilized to assess changes in mental status due to medication or disorders, and in forensic evaluations.

Types of Tests

1. Tests of General Intellectual Ability:
○ Pattern analysis and the Deterioration Quotient (DQ).
2. Tests for Abstract Thinking:
○ Wechsler Similarities subtest and proverb interpretation.
3. Tests of Executive Function:
○ Sorting Tests, Clock-Drawing Test (CDT), Trail Making Test, Field of Search, Identification Task, Picture Absurdity.
4. Tests of Perceptual, Motor, and Perceptual-Motor Function:
○ Evaluate sensory functioning and mobility; include the Bender Visual-Motor Gestalt Test.
5. Tests of Verbal Functioning:
○ Aphasia: Loss of the ability to express or understand language.
6. Tests of Memory:
○ Procedural Memory: Skills and tasks (e.g., riding a bicycle).
○ Declarative Memory: Facts and events (semantic and episodic memory).

Neuropsychological Test Batteries

Fixed Battery: A standard set of tests.
Flexible Battery: A customized assortment of tests tailored to the patient's needs.
A notable battery is the Halstead-Reitan Neuropsychological Battery.

Other Tools of Neuropsychological Assessment

Functional Magnetic Resonance Imaging (fMRI): Real-time brain activity imaging.
Cerebral Angiogram: X-ray of the blood vessels in the brain.
CAT Scan: Detailed three-dimensional brain imaging.
PET Scan: Diagnoses biochemical lesions.
SPECT Scan: Produces clear images of organs and tissues.
EEG (Electroencephalograph): Records the brain's electrical activity.
EMG (Electromyograph): Records muscle electrical activity.
Echoencephalograph: Converts electric energy into sound.
Lumbar Puncture (Spinal Tap): Assesses the chemical normality of spinal fluid and intracranial pressure.

DEVELOPMENTAL PSYCHOLOGY

Human Development

Definition of Human Development

Human Development: The scientific study of processes of change and stability throughout the human lifespan.

Domains of Development

○ Social Learning Theory (Albert Bandura): Children learn by observing and imitating models.
   Reciprocal Determinism: The individual and environment influence each other.
   Observational Learning: Learning by watching others' behavior (modeling).
   Self-Efficacy: Belief in one's ability to master challenges and achieve