Psychological Assessment Notes
This document provides notes on psychological assessment, including the APS position statement on psychological testing and an apology to ATSI peoples. It also discusses various contexts for psychological assessments, such as clinical, educational, and forensic settings.
Week 1: APS Position Statement on Psychological Testing

- Psychological testing is an integral part of the assessment process.
- Administration and scoring of a psychological test must be done by a psychologist who is fully familiar with the test, or may be undertaken by a person trained in test administration and scoring who is working under the direct supervision of a psychologist.
- Interpretation and integration of results must be undertaken by a psychologist with sound knowledge of the test and the psychological theory surrounding it.
- Tests must be appropriate to age, gender, ethnicity, language ability, and mental state.
- Psychologists who conduct testing must maintain up-to-date knowledge and skills in delivering and interpreting tests.
- Psychological tests and materials must be stored securely in accordance with legislative requirements.
- Access to psychological test data is restricted to psychologists, or to individuals trained in test administration working under the direct supervision of psychologists.

APS Apology to ATSI Peoples

- Disparities exist between ATSI Australians and other Australians in psychological distress, chronic disease, and incarceration rates.
- ATSI people face many daily stressors and have high rates of suicide, a phenomenon not present in their cultures before colonization. The persistence of these disparities in a first-world nation is unacceptable.
- It is important to acknowledge the strengths and resilience of ATSI peoples and communities. ATSI people have the oldest surviving cultures on Earth; their resilience and resourcefulness could positively impact Australian society if they had more opportunities to contribute in their areas of expertise.
- Psychologists have not always respected the skills, expertise, worldviews, and wisdom of ATSI people.
- Apologies are made for:
  o Using diagnostic systems that do not honour cultural beliefs.
  o Inappropriately using assessment techniques that misrepresented the abilities of ATSI people.
  o Conducting research benefiting researchers over participants.
  o Ignoring ATSI healing approaches in treatments.
  o Remaining silent on important policy issues, such as the Stolen Generations.
- A new commitment is made to:
  o Listen more and talk less.
  o Follow more and steer less.
  o Advocate more and comply less.
  o Include more and ignore less.
  o Collaborate more and command less.
- The goal is a future where ATSI people control what is important to them, with more ATSI psychologists and decision-makers. Ultimately, the aim is for ATSI people to enjoy the same social and emotional wellbeing as other Australians.

What is Psychological Assessment?

- Psychological assessment is a key skill across psychology fields: clinical/counselling, neuropsychology, education, organizational, forensic, and research.
- It is a formal process that may include structured or unstructured interviews, questionnaires, behavioural observation, and puzzle-like activities to evaluate specific skills.
- Psychological assessments aim to reliably and validly explore individual characteristics such as cognitive strengths, personality, and mental health.
- However, these assessments often reflect the values and priorities of the dominant culture: in Australia's case, an individualistic, capitalist, Eurocentric society. Current assessment tools focus on cognitive skills valued in this context, assess personality traits deemed desirable, and evaluate mental health using the DSM.
- When using psychological assessments to gather information, it is important to consider how the data will be used to inform recommendations.
- The primary purpose of psychological assessment in a clinical setting is to answer questions about an individual and inform recommendations. Assessments can address questions about clinical diagnosis, strengths and weaknesses, and traits and preferences. The specific questions and focus of an assessment depend on the referral reason and the assessment setting.
Starting with the Referral

- It is essential to start with the referral to establish the reasons for the assessment and the specific questions to address.
- Initial referral questions are often vague; it is the psychologist's responsibility to clarify and refine them.

Understanding the Assessment Context

- Psychiatric Settings: Assessments assist in characterizing conditions and planning treatment. Communication with psychiatrists requires understandable language and shared conceptual models.
- General Medical Settings: Assessments investigate psychosocial factors potentially causing or maintaining medical issues. Neuropsychological assessments can be particularly important. Psychosocial treatments may be beneficial even for biologically driven issues.
- Legal/Forensic Settings: Assessment reports can serve as evidence in various legal contexts, such as custody decisions, capacity assessments, mitigating factors, sentencing, remand decisions, victim impact, and re-offending risk.
- Academic/Educational Settings: Assessments address cognitive functioning, suitable careers, behavioural issues, streaming, and the need for additional support. They involve multiple clients (child, parents, school) and require a strengths-focused approach. Consideration of the child's environment (school, family) and physical attributes (vision, hearing, endocrine function) is important.
- Private Practice/Community Psychological Clinics: Clinical and counselling psychologists assess clients to determine issues, conceptualize the presentation, and plan treatment/management. The nature of the assessment varies with the presenting issue, the psychologist's philosophical stance, setting resources, and client preferences. Be mindful of dual roles (assessor and therapist) and take a client-centred, therapeutic approach.

Ethics in Psychological Assessment

- Governed by General Principle B: Propriety (section B.13) in the APS Code of Ethics.
- Psychologists must be competent in administering and interpreting tests.
- Informed consent from clients is necessary.
- Test integrity must be preserved.
- The APS provides guidelines on ethical management of assessments and potential risks to clients.

Cultural Competence & Safety

- Psychologists must be culturally aware to select appropriate assessment approaches.
- Key considerations when selecting tests for clients from Culturally and Linguistically Diverse (CALD) backgrounds:
  o Linguistic Equivalence: ensuring accurate translation of tests.
  o Conceptual Equivalence: ensuring concepts have the same meaning across cultures.
  o Metric Equivalence: ensuring the test has consistent psychometric properties across different groups.
- Awareness of cultural discrepancies that could affect assessment results is crucial.
- These issues are particularly significant in 'intelligence' and cognitive testing, as tests may reflect Western cultural learning and may not be culturally neutral. It is challenging to create culture-free tests unaffected by Western cultural experiences, which raises concerns about the validity and universality of psychological tests developed in Western contexts.

Culturally Safe Assessment

- Culturally Relevant Evaluations and Treatments: counsellors and psychologists should understand and incorporate relevant cultural factors into evaluations and treatments.
- Consideration of cultural differences should also explore whether perceived differences are due to socioeconomic factors or differences in educational opportunity.

Guidelines for Culturally Appropriate Assessment (Acevedo-Polakovich et al., 2007)

- Based on research with U.S. Latinas/os; the four-stage process can be generalized to other cultural groups.
- Stage 1: Proactive Steps. Practitioners should receive and maintain formal training in culturally appropriate assessment before conducting assessments.
- Stage 2: Assessment Outset. Conduct a comprehensive interview with the client to explore cultural history, contact with other cultural groups, acculturation status and stress, and language skills. Use of interpreters and translation of materials may be necessary. It is crucial to explain and document the limitations of any testing protocol used.
- Stage 3: Assessment Process. Practitioners should recognize and document the impact of language and non-verbal communication. Proactive training should alert assessors to the potential impact of culturally relevant interactional variables.
- Stage 4: Results Reporting. Practitioners should incorporate cultural explanations when interpreting and reporting results. Avoid labelling in the final interpretation of results.

Week 2: Guidelines for Psychological Report Writing

- Psychological reports are essential in the assessment process, presenting findings clearly and integrating them with other sources of information, such as client background.
- Typical report length is between 5 and 7 single-spaced pages.
- Reports communicate recommendations and address the referral question, tailored to the specific setting and audience (e.g., client, workplace, legal, parents, school).

Groth-Marnat and Wright (2016) outline five key themes for assessing report-writing proficiency:
1. Comprehensiveness: Includes the length, referral source, referral question, client history, behavioural observations, and a summary of impressions that address the referral question.
2. Integration: Data should be integrated to provide a holistic view of the person, avoiding a 'test by test' presentation. Conflicting or ambiguous findings should be addressed or explained.
3. Validity: Emphasize evidence-based interpretations, avoid sweeping generalizations, and use precise terms to describe assessment results with an appropriate degree of certainty.
4.
Client-centredness: Focus on the specific concerns of the client, providing clear and actionable recommendations. Reports should be written with an awareness of the intended reader.
5. Overall Writing: Four writing styles are outlined:
  o Literary: everyday language; creative but can be imprecise.
  o Clinical: focuses on pathological aspects; may omit strengths.
  o Scientific: emphasizes normative comparisons and data; may lack a personal touch.
  o Professional: the recommended style; uses precise language, varied sentence structure, and short, focused paragraphs.
Terminology should balance precision with accessibility, avoiding raw data and value-laden terms that may be misinterpreted.

Feedback as Part of the Report-Writing Process

- Consider how clients will be informed of their assessment results: Will they receive a copy of the report? An abridged copy? Will a specific feedback session be organised?
- Providing feedback respects a person's autonomy and their right to be informed of issues relating to their health. It must be given sensitively and in a way that is clearly understood.
- Feedback can clear up misconceptions about the purpose and power of testing, such as whether it proves their "sanity" (or lack thereof) or their "intelligence".
- Feedback is a clinical intervention that can help or harm the client; the power of assessment feedback needs to be respected.

Report Format

Demographic/Referral Information: Includes name, age, gender, ethnicity, date of report, examiner's name, and referral source.
I. Referral Question: Briefly describes the reason for the assessment and the nature of the problem, providing context and orienting the reader to the report's purpose. Numbering multiple referral questions can be helpful.
II. Evaluation Procedures: Lists and briefly describes the tests used, without including actual test results. Detail on testing dates and time taken varies by assessment setting.
III.
Background Information (Relevant History): Concise information on the client's history relevant to the referral question and assessment. Clarifies the source of the information (e.g., self-report or other-report).
IV. Behavioural Observations: Describes the client's appearance, general behaviour, and examiner-client interaction. Ties observations to specific examples, avoiding inferences. Includes only observations that provide insight or context for the assessment results.
V. Results, Interpretations, and Impressions: The main body of the report, integrating test data, behavioural observations, relevant history, and other data. Discusses prognosis and client strengths, ensuring a balanced view.
VI. Summary and Recommendations: Summarizes the primary findings and conclusions, restating the referral question as needed. Recommendations are clear, specific, practical, obtainable, and directly related to the report's purpose.

Essentials of assessment report writing

Week 3: Introduction to the WAIS

What is intelligence testing?

- Intelligence tests are designed to measure various cognitive abilities, including performance in unfamiliar situations, learning capacity, and information retention.
- Result formats: commonly reported as IQ scores, intelligence scales, or mental age, reflecting both current capabilities and potential future prospects.
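The IQ-score format mentioned above is a standard-score scale: Wechsler-style IQs are scaled to a mean of 100 and a standard deviation of 15. A minimal sketch of the conversion from a z-score to an IQ and percentile rank (the function names are illustrative, not from any test manual):

```python
from statistics import NormalDist

# Wechsler-style IQ scores are conventionally scaled to mean 100, SD 15.
IQ_MEAN, IQ_SD = 100, 15

def z_to_iq(z: float) -> int:
    """Convert a z-score (standard-normal units) to an IQ-style standard score."""
    return round(IQ_MEAN + z * IQ_SD)

def iq_percentile(iq: float) -> float:
    """Percentile rank of an IQ score under the normal model."""
    return 100 * NormalDist(IQ_MEAN, IQ_SD).cdf(iq)

print(z_to_iq(1.0))               # 115: one SD above the mean
print(round(iq_percentile(115)))  # 84: higher than ~84% of age peers
```

The same scale underlies the WAIS index scores discussed later in these notes; percentile ranks are often easier for report readers to grasp than the scores themselves.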
Strengths:
- Predicts academic achievement and job performance effectively.
- Identifies cognitive strengths and weaknesses.
- Provides comparisons with age-related norms.
- Facilitates behavioural observations during testing.
- Establishes a baseline for monitoring changes over time and evaluating the impact of interventions.
- Structured and consistent, minimizing assessor variability.

Limitations:
- Does not assess attributes like creativity, imagination, divergent thinking, social skills, or kinesthetic abilities.
- Can be misinterpreted as measuring innate intelligence rather than current ability.
- Focuses on outcomes rather than the process of problem-solving.
- May exhibit cultural and socio-economic biases.

What is the WAIS?

- The WAIS is a comprehensive battery of intelligence tests measuring various intellectual functions, widely used in clinical practice for its psychometric reliability and clinical relevance.

History and Development

- Initial development: The WAIS was first published in 1955, improving on previous measures' reliability and establishing a normative sample.
- Revisions:
  o WAIS-R (1981): updated norms, expanded age range, and improved psychometric properties.
  o WAIS-III (1997): further updates to norms and test structure.
WAIS-IV Changes

Subtests:
- Added: Visual Puzzles, Figure Weights, and Cancellation.
- Removed: Object Assembly and Picture Arrangement.
- Total: 15 subtests (10 core + 5 supplemental).

Index structure:
- Replaced Verbal and Performance IQs with four indices: Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Processing Speed Index (PSI), and Working Memory Index (WMI).
- Full Scale IQ (FSIQ) and indices are derived from the 10 core subtests, allowing shorter administration times.
- Introduced the General Ability Index (GAI), combining VCI and PRI.
- New process scoring for Block Design, Digit Span, and Letter-Number Sequencing.

Clinical utility:
- Updated norms and better handling of floor and ceiling effects.
- Normed linkages with the Wechsler Memory Scale–IV.
- Enhanced kit for neuropsychologists and geropsychologists.

Versions for specific age groups:
- WAIS-IV: ages 16–90 years.
- WISC-V: ages 6–16 years.
- WPPSI: ages 2.5–7.5 years.

Psychometric Properties

Reliability:
- High split-half reliability for Full Scale IQ; acceptable for the Cancellation subtest.
- Test-retest reliability ranges from 0.7 to 0.9, indicating high temporal stability with some practice effects.

Validity:
- High correlation between WAIS-III and WAIS-IV scores; strong correlations with similar cognitive measures.
- Effective for identifying Alzheimer's Disease and traumatic brain injuries, but less effective for detecting ADHD because the testing environment minimises distractions.

WAIS-IV Subtest and Index Structure

The WAIS-IV assesses abilities across four indexes, each with core and supplemental subtests.

Verbal Comprehension Index (VCI):
- Core subtests:
  o Similarities: identifying common characteristics between pairs of objects (assesses abstract thinking and verbal reasoning), e.g., "In what way are an apple and a banana alike?"
  o Vocabulary: defining words (measures language development, expressive language skills, and long-term memory retrieval).
  o Information: recalling factual information (evaluates intellectual curiosity, alertness to the environment, and educational experience).
- Supplemental subtest:
  o Comprehension: understanding norms and practical knowledge (tests understanding of complex questions and verbal reasoning).

Perceptual Reasoning Index (PRI):
- Core subtests:
  o Block Design: constructing patterns using blocks (assesses visual-motor skills and part-whole recognition).
  o Visual Puzzles: solving puzzles mentally (measures visual processing, fluid reasoning, attention to detail, and abstract visual thinking).
  o Matrix Reasoning: completing visual patterns (evaluates non-verbal reasoning, visuospatial processing, and visual intelligence).
- Supplemental subtests:
  o Picture Completion: identifying missing parts of pictures (assesses visual organization and non-verbal reasoning).
  o Figure Weights: balancing scales with objects (measures non-verbal mathematical reasoning and quantitative understanding).

Working Memory Index (WMI):
- Core subtests:
  o Digit Span: repeating sequences of numbers (evaluates auditory recall, attention, and concentration). Sensitive to test anxiety, TBI, and concentration problems.
  o Arithmetic: solving mental arithmetic problems (assesses computational skills and auditory short-term memory).
- Supplemental subtest:
  o Letter-Number Sequencing: sequencing letters and numbers (measures mental manipulation of verbal information and attention).

Processing Speed Index (PSI):
- Core subtests:
  o Coding: matching symbols to numbers within a time limit (assesses psychomotor speed, visual short-term memory, ability to follow directions, motivation and persistence, and accuracy).
  o Symbol Search: identifying target symbols (measures speed of visual search and processing).
- Supplemental subtest:
  o Cancellation: marking target pictures from an array (evaluates attention and processing speed).

Key points:
- Verbal Comprehension Index (VCI): sensitive to cultural influences; measures abstract thinking and verbal abilities.
- Perceptual Reasoning Index (PRI): assesses engagement with non-verbal stimuli and visuospatial processing.
- Working Memory Index (WMI): measures short-term auditory memory and attention; sensitive to test anxiety.
- Processing Speed Index (PSI): evaluates speed in solving non-verbal problems and planning; influenced by motivation and motor control.

Chapter 5 – Handbook of Psychological Assessment: Cognitive Assessment

Purpose and Scope
- Objective: to assess and understand an individual's cognitive functioning, including general intelligence and specific cognitive abilities.
- Importance: cognitive assessment is critical for diagnosing cognitive disorders, planning interventions, and providing insight into an individual's cognitive strengths and weaknesses.

Types of Cognitive Tests
- General intelligence tests:
  o Wechsler Adult Intelligence Scale (WAIS): measures broad intellectual abilities through multiple subtests.
  o Wechsler Intelligence Scale for Children (WISC): similar to the WAIS but designed for children.
  o Stanford-Binet Intelligence Scales: another major test of general intelligence, often used in educational settings.
- Specific cognitive abilities tests:
  o Memory tests: evaluate various types of memory, including short-term, working, and long-term memory.
  o Attention tests: measure an individual's capacity to focus, sustain, and shift attention (e.g., Continuous Performance Test).
  o Executive function tests: assess higher-level cognitive processes such as planning, problem-solving, and cognitive flexibility (e.g., Wisconsin Card Sorting Test).

Theories of Intelligence
- Spearman's g: proposes a general intelligence factor (g) that underlies specific cognitive abilities.
- Thurstone's Primary Mental Abilities: identifies several distinct cognitive abilities, such as verbal comprehension, numerical ability, and spatial ability.
- Gardner's Multiple Intelligences: suggests that intelligence is multi-faceted, spanning linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal, and naturalistic types.

Test Development and Standardization
- Development: involves creating test items, piloting the test, and refining it based on feedback; ensures that items are relevant and appropriate for measuring the targeted cognitive abilities.
- Standardization: establishes norms based on a representative sample to interpret individual scores; ensures that the test is administered and scored consistently.

Reliability and Validity
- Reliability:
  o Test-retest reliability: consistency of test scores over time.
  o Inter-rater reliability: consistency of scores across different evaluators.
  o Internal consistency: consistency of scores within the test, often measured by Cronbach's alpha.
- Validity:
  o Content validity: the extent to which the test covers the entire domain of cognitive abilities it aims to measure.
  o Criterion-related validity: how well test scores predict performance on related measures (e.g., academic achievement).
  o Construct validity: the degree to which the test measures the theoretical construct it is intended to measure.

Administration and Scoring
- Administration: ensure the testing environment is free from distractions; follow standardized procedures to avoid bias and ensure consistency.
- Scoring: convert raw scores into standard scores based on normative data; use norm-referenced scores to compare an individual's performance with that of a representative sample.

Interpreting Cognitive Test Results
- Contextual interpretation: considers the individual's background, including educational, cultural, and socio-economic factors; integrates test results with other assessment data to provide a comprehensive understanding.
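Cronbach's alpha, named above as a common internal-consistency statistic, can be computed directly from item scores. A small sketch using invented data (the 4-respondent by 3-item matrix is made up purely for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a set of item-score columns.

    items[i] holds every respondent's score on item i.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical Likert responses: 4 respondents x 3 items (invented data).
items = [
    [3, 4, 3, 5],  # item 1
    [2, 4, 3, 4],  # item 2
    [3, 5, 4, 5],  # item 3
]
print(round(cronbach_alpha(items), 2))  # 0.96: items co-vary strongly here
```

Values above roughly .7 are conventionally read as acceptable internal consistency, which is the benchmark the NEO-PI-R facet scales are held to later in these notes.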
Applications of Cognitive Assessment
- Clinical: diagnoses cognitive impairments, such as dementia or traumatic brain injury; plans and monitors interventions and treatments.
- Educational: identifies learning disabilities and informs educational planning.
- Forensic: evaluates cognitive capacities in legal contexts, such as competency evaluations.

Challenges and Considerations
- Cultural and linguistic factors: cultural and language differences can impact test performance and interpretation; culturally appropriate tests and interpretation methods are needed.
- Test bias: potential biases in cognitive testing must be examined, and fair assessment practices advocated.

Future Directions
- Technological innovations: advancements such as computer-based testing and adaptive testing methods.
- Research and development: ongoing efforts to improve cognitive assessment tools and methodologies.

Week 4: Administering the WAIS-IV

Standardization:
- The WAIS-IV is highly standardized, with detailed instructions for administration and scoring provided in the manuals.

Procedure:
- Administered one-on-one, usually in one sitting (can be split if necessary).
- Average administration time: 70-100 minutes (longer with supplemental tests or more breaks).
- Thorough review of the manual and multiple practice sessions are required to ensure precise administration; errors in administration can compromise score validity.

Common errors:
- Misapplying reverse rules.
- Failing to query verbal responses properly.
- Leniency or mistakes in scoring.
- Neglecting follow-up prompts as directed in the manual.
- Incorrect calculations.
- Forgetting to keep time during timed tests.

Use with Diverse Populations

- Cultural bias: the WAIS-IV has been criticized for cultural bias, particularly in:
  o subtests requiring proficiency in English;
  o performance linked to educational opportunities;
  o measuring cognitive skills valued in specific cultural contexts.
- Publisher's claims: efforts have been made to eliminate cultural bias, but assessors must still consider cultural factors when interpreting results.
- Historical considerations: be mindful of how historically victimized groups might perceive cognitive assessments like the WAIS-IV.

Online (Remote) Administration

- Virtual administration: the WAIS-IV is increasingly administered online through video calls and Pearson's Q-Global platform.
- Validity and reliability: questions remain about the equivalence of online and face-to-face administration. Pearson's studies show equivalence, but caution is advised.
- Challenges: difficulty in making behavioural observations; test security concerns (exposure to unauthorized individuals); the assessor's responsibility to protect test security.

Meaning of Index Scores

- Interpretation: index scores should be interpreted with caution; the WAIS-IV is not strictly aligned with theories of intelligence.
- Non-fixed capacities: the WAIS-IV does not measure fixed, innate capacities; scores can change due to environmental influences.
- Cultural considerations: the WAIS-IV tests abilities prioritized by Eurocentric countries, which may have less validity for other cultures.

Scoring the WAIS-IV

- Importance of correct scoring: accurate scoring is crucial given the significant implications of test results. Familiarity with the test process and strict adherence to scoring procedures are necessary; clinical judgment may be required, so consult a colleague if in doubt.
- Performance classifications: the WAIS-IV uses seven classifications, ranging from "Very Superior" (130+) to "Extremely Low" (69 and below). Always include the confidence interval, percentile rank, and age-based comparison in reports.

Interpreting the WAIS-IV

- Complexity: interpretation integrates test results with behavioural observations and referral information. Interpretations are hypotheses; avoid stating them with too much certainty.
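The scoring elements above (classification band, percentile rank, confidence interval) can be sketched in code. The two endpoint bands come from these notes; the intermediate cutoffs follow the commonly published Wechsler bands, and the confidence interval uses the standard psychometric formula SEM = SD * sqrt(1 - reliability). The reliability value in the example is invented for illustration:

```python
from math import sqrt

# Wechsler-style classification bands. Endpoints (130+, 69 and below) are
# from the notes; intermediate cutoffs are the commonly published values.
BANDS = [
    (130, "Very Superior"),
    (120, "Superior"),
    (110, "High Average"),
    (90,  "Average"),
    (80,  "Low Average"),
    (70,  "Borderline"),
]

def classify(score: int) -> str:
    """Map a standard score to its descriptive classification."""
    for cutoff, label in BANDS:
        if score >= cutoff:
            return label
    return "Extremely Low"

def confidence_interval(score: float, reliability: float, sd: float = 15,
                        z: float = 1.96) -> tuple[float, float]:
    """95% CI around an observed score: score +/- z * SEM."""
    sem = sd * sqrt(1 - reliability)
    return (round(score - z * sem, 1), round(score + z * sem, 1))

print(classify(108))                   # Average
print(confidence_interval(108, 0.97))  # reliability 0.97 is an invented example
```

In practice the confidence intervals reported for the WAIS-IV come from the manual's tables rather than being hand-computed, but the formula shows why highly reliable composites like the FSIQ have tighter intervals than individual subtests.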
- Be cautious with general measures like FSIQ, especially when there are significant differences between index scores.

Levels of Interpretation

1. Full Scale IQ (FSIQ):
- Provides a global estimate of overall mental abilities relative to age-related peers.
- FSIQ can be reported as percentile ranks for better comprehension.
- Consider the General Ability Index (GAI) if clinical issues might affect WMI and PSI scores.

2. Index scores:
- Offer detailed insights into intellectual functioning.
- Only interpret an index if it represents a unitary ability (subtests within the index differ by less than five points).
- Consider ipsative (within-individual) and normative weaknesses for functional implications.

3. Subtest variability:
- Focus primarily on index scores; use subtest variability analysis if there is significant "subtest scatter."
- Develop and integrate hypotheses about low/high scores with additional information about the examinee.
- Caution: inferences based on subtests should be tentative due to lower reliability/validity compared with index scores.

4. Qualitative/process analysis:
- Analyse why scores might be high or low.
- Look at the content of responses and behavioural observations for psychological indicators.
- Consider test-specific factors, such as the impact of timing on performance (e.g., Block Design without a time bonus).

5. Intra-subtest variability:
- Investigates response patterns within subtests.
- Look for inconsistent patterns that might indicate issues like concentration problems or malingering.
- This level is highly subjective and lacks consistent research support for specific clinical interpretations.

Extra Issues

- What if only two subtests are available for an index? Two (core) subtests can be prorated for the VCI and PRI; this must be noted in the report.
- When should supplementary tests be used? If one of the core tests is corrupted. Only one supplemental subtest can be substituted per index, and only two supplemental tests can be substituted when calculating FSIQ.
- What is GAI?
Provides a cross-domain summary score that is less sensitive to working memory and processing speed than the FSIQ. Consider reporting it if there is a VCI/WMI discrepancy and/or a PRI/PSI discrepancy, and/or you believe subtests within WMI and PSI have pulled down the FSIQ.

- Why is the order prescribed? Block Design is administered first because it is engaging and provides an opportunity to build rapport (so say the publishers; in my experience, people find it very anxiety-provoking because it is timed and unfamiliar). The prescribed order also avoids interference effects between Digit Span and Letter-Number Sequencing (if needed), as both subtests require examinees to manipulate letter/number information in short-term memory.

Week 5: Personality Assessment (NEO)

History & Development of the NEO Personality Inventory (NEO-PI)

- Developed to assess five dimensions of personality; not a diagnostic tool.
- Originally created for a "normal" population, specifically adult males in the US.
- The first version, published in 1978 by Paul Costa Jr. and Robert R. McCrae, was called the NEO-I and assessed Neuroticism, Extraversion, and Openness.
- In 1985, two additional factors (Agreeableness and Conscientiousness) were added, and the measure was renamed the NEO Personality Inventory (NEO-PI).
- The measure has undergone two further updates: NEO PI-R (1992) and NEO PI-3 (2005).

Description of the Five-Factor Model of Personality as Measured by the NEO-PI

1. Neuroticism: emotional instability; tendency toward anxiety, anger, and sadness.
2. Extraversion: sociability, assertiveness, warmth; preference for social interaction.
3. Openness to Experience: imagination, curiosity, creativity; openness to emotions and abstract thinking.
4. Agreeableness: trust in others, empathy, compliance, and modesty; high scores may indicate passivity.
5. Conscientiousness: goal-oriented, organized, disciplined, and achievement-driven.

Factors & Facets of Each Personality Domain

- Neuroticism: Anxiety, Angry Hostility, Depression, Self-Consciousness, Impulsivity, Vulnerability.
- Extraversion: Warmth, Gregariousness, Assertiveness, Activity, Excitement-Seeking, Positive Emotions.
- Openness to Experience: Fantasy, Aesthetics, Feelings, Actions, Ideas, Values.
- Agreeableness: Trust, Straightforwardness, Altruism, Compliance, Modesty, Tender-Mindedness.
- Conscientiousness: Competence, Order, Dutifulness, Achievement-Striving, Self-Discipline, Deliberation.

Additional note: NEO-PI scores can be analysed further to provide insights into personality styles, but this is an advanced analysis not required for all assessments.

Description of the NEO-PI-R Test

- Two versions: self-report and other-report; can be completed with pen and paper or online.
- Contains 240 questions rated on a five-point Likert scale (1 = Strongly Disagree to 5 = Strongly Agree); there is no time limit.
- Items assess typical behaviour, feelings, opinions, and attitudes relating to self, others, and situations.
- Results are grouped into five personality domains and 30 more specific facet subscales.

Psychometric Properties of the NEO-PI-R

- The five-factor model (FFM) of personality is a well-supported theory, and the NEO-PI-R is considered a suitable measure of the FFM.
- Factor analysis supports the structure of the NEO-PI-R across varied populations.
- Cross-observer agreement is good, indicating strong validity.
- Five personality domains:
  o High internal consistency.
  o Very high short-term test-retest reliability (correlations > .9).
  o Excellent long-term stability (correlations ranging from .78 to .85).
- Facet scales:
  o Most have internal consistency above .7, except for the Openness to Actions facet.
  o Test-retest reliability and long-term stability are adequate to good.
- Concurrent validity:
  o The NEO-PI-R shows significant relationships with other personality measures.
  o Can measure both normal and abnormal aspects of personality functioning.

Interpreting the NEO-PI

- The NEO-PI is a well-established tool for assessing personality in non-clinical populations.
Provides valuable information but has both benefits and limitations. Benefits and Limitations of the NEO-PI Benefits: o Versatile use in various settings (occupational, clinical, educational, research). o Easy to score, uses accessible language, and is non-threatening. o Can identify links between extreme scores and psychopathology. Limitations: o Not suitable as a standalone clinical assessment tool as it does not directly assess psychopathology. o Lacks validity scales to detect random responding or impression management ("faking good" or "faking bad"). o High face validity may lead to socially desirable responses. Use with Diverse Populations Translated into over 50 languages and dialects for global use. Potential issues with cross-cultural applicability: o Five-factor model is generally supported across cultures but shows structural deviations in certain groups (e.g., Chinese, sub-Saharan African, Filipino). o Cultural differences in response biases can affect scores. Clinicians must be aware of limitations when applying the NEO-PI across different cultures. Interpretation of Subscales Scores can be analysed at both domain and facet levels, providing detailed insights. Examining inconsistencies in domain and facet scores can reveal unique individual characteristics. Interpretation involves: o Individual scales. o Facet/domain comparisons. o Paired domain styles (optional, not required for all assessments). Assessing Validity of the Profile Lacks direct validity scales, making it challenging to detect dishonest responses. Guidelines for determining validity of responses: o More than 40 missing items indicate an invalid profile. o Caution needed if extreme counts of "agree" or "disagree" responses are observed. o Patterns of consecutive answers may require further scrutiny. Internal consistency scales and measures for positive and negative presentation management exist, but their support in clinical settings is limited. 
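The validity guidelines above amount to a rule checklist, which can be sketched as a short screening routine. This is an illustrative sketch only, not part of the published NEO-PI-R materials: the function name is invented, and the thresholds for extreme acquiescence and for runs of identical answers are placeholder assumptions; only the "more than 40 missing items" rule comes directly from the guidelines above.

```python
# Illustrative screen for NEO-PI-R protocol validity (hypothetical helper).
# responses: list of 240 entries, each 1-5 on the Likert scale, or None for
# a missing item. Only the ">40 missing items" rule is from the guidelines;
# the acquiescence count and run-length thresholds are placeholders.

def screen_protocol(responses, run_threshold=6, extreme_count=150):
    flags = []
    missing = sum(1 for r in responses if r is None)
    if missing > 40:
        flags.append("invalid: more than 40 missing items")
    answered = [r for r in responses if r is not None]
    agree = sum(1 for r in answered if r >= 4)      # Agree / Strongly Agree
    disagree = sum(1 for r in answered if r <= 2)   # Disagree / Strongly Disagree
    if agree >= extreme_count or disagree >= extreme_count:
        flags.append("caution: extreme acquiescence/nay-saying count")
    # Flag long runs of identical consecutive answers for further scrutiny.
    run = 1
    for prev, cur in zip(answered, answered[1:]):
        run = run + 1 if cur == prev else 1
        if run >= run_threshold:
            flags.append("caution: long run of identical responses")
            break
    return flags or ["no validity flags"]
```

A flagged protocol would then be scrutinized rather than automatically discarded, consistent with the caution below about not dismissing profiles on validity indicators alone.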
Important Considerations Clinicians should use the NEO-PI validity guidelines to scrutinize profiles but should not dismiss them solely based on questionable validity indicators. Week 6: Psychological Assessment in Organisations Assumptions and Realities: Psychologically sound assessment tools can predict job performance. Effective job matching is complex; it rarely relies on a single test or process. Types of Assessments Commonly Used in Organizations 1. Autobiographical Data (Biodata): o Predictive of job performance. o Includes application forms, resumes, and CVs with details like extracurricular activities, work history, or specific attitudes. o Can include both expected and unconventional questions. 2. Employment Interview: o Widely used but time-consuming and prone to biases (e.g., halo effect). o Validity improves with structured questions, interviewer training, and panel interviews. o Research suggests combining interviews with objective tests yields better predictive validity. 3. Tests of Cognitive Ability: o Assess general intelligence or specific skills (e.g., numerical, verbal). o Generally valid predictors of job performance. o Concerns exist regarding potential adverse impacts on minority applicants. o General cognitive tests are better predictors than specific tests due to job complexity. 4. Tests of Personality, Temperament, and Motivation: o Historically unreliable in predicting job performance, but improved with the introduction of the Big Five personality model. o Conscientiousness is the strongest predictor of job performance. o Also used to identify potential leaders. 5. Integrity Tests: o Evaluate honesty, pro-social behavior, and dependability. o Prone to social desirability bias and easy to fake. o Effective in predicting counterproductive behaviours but less so for general performance. 6. Work Sample Tests & Situational Exercises: o Involve tasks replicating job-related activities to assess performance. 
o Focus on specific important job domains. o Situational exercises simulate the work environment more comprehensively, often used for managerial roles. These various assessment methods each have their strengths and weaknesses, and their effectiveness often depends on the specific job context and careful implementation. Psychological Assessment for Career Development Work is a significant part of life, fulfilling various needs such as: Survival and Power: Provides income for life's necessities and the power to participate in broader opportunities. Social Connection: Workplaces often serve as venues for forming friendships and creating a sense of belonging. Self-Determination: Work can offer personal fulfillment and a sense of purpose. Theory of Person-Environment Fit The Theory of Person-Environment Fit, developed by John Holland in the 1950s, proposes that individuals are more satisfied and successful in their careers when their work environment aligns with their personality type. According to this theory: Holland identified six personality types, each corresponding to a work environment that suits them best: o Realistic (R): Prefers physical activities that require skill, strength, and coordination. o Investigative (I): Enjoys working with ideas and thinking rather than physical activity. o Artistic (A): Values creative and open environments that foster self-expression. o Social (S): Prefers activities that involve helping and developing others. o Enterprising (E): Thrives in environments that encourage persuasion and leadership. o Conventional (C): Likes structured tasks and values practical, orderly environments. The Self-Directed Search (SDS) The Self-Directed Search (SDS) is a self-administered assessment based on Holland’s RIASEC model (Realistic, Investigative, Artistic, Social, Enterprising, Conventional). It helps individuals explore their career options by identifying environments that align with their personality traits, interests, and skills. 
Key points about the SDS: Easy and Efficient to Administer: It is straightforward, making it accessible to a wide audience. Good Psychometric Properties: The SDS has strong reliability, construct validity, and predictive validity. Career Guidance Tool: It is valuable for helping people think about career paths that would best fit their personality and preferences. Interpreting the Three-Letter Code in the SDS When interpreting the results from the SDS, two important concepts come into play: 1. Consistency: o Refers to how similar the three domains (personality types) are in the individual's code. o Higher consistency indicates compatibility among the personality types, suggesting a more straightforward path to job satisfaction. 2. Differentiation: o Refers to the relative strength of each of the three letters in the code. o Individuals with a high level of differentiation (clear dominant types) tend to have a more defined sense of career direction. o Those with low differentiation may experience difficulty finding job satisfaction because their interests are more evenly spread across different types. The Self-Directed Search is thus a helpful tool for aligning one's career with their personality, making it a valuable part of career development and planning processes. Week 7: Projective Assessment Projective Testing Overview: Based on psychoanalytic theory, requiring clients to respond to ambiguous stimuli. Responses are believed to reflect the client's internal state, motivations, and thought processes. The interpretation of the stimulus is seen as a projection of the individual's inner mind. Controversy: Projective tests are considered subjective and 'unscientific.' Concerns about their reliability and validity have led to a decline in academic use. Despite this, they remain popular in clinical practice, especially in the USA, but less so in Australia. 
Types of Projective Tests: Rorschach Inkblot Test (Rorschach, 1927): o Participants describe what they see in ambiguous inkblot shapes. Holtzman Inkblot Test (Holtzman, 1968): o Similar to Rorschach, but with one answer per image and a standardized scoring system. o Considered to have poor validity and relies on the assessor’s experience. Thematic Apperception Test (TAT) (Morgan & Murray, 1935): o Participants create narratives based on ambiguous images of scenes and characters. o Complex and time-consuming with poor standardization, often deemed psychometrically unsound. Graphology: o Analyses personality traits based on handwriting. Draw-a-Person (DAP) Test (Goodenough, 1926): o Participants draw a human figure, assessed by set criteria. o Adapted into the human figure drawing test by Koppitz (1984). Word Association Tests (Jung, 1910): o Participants respond with the first word that comes to mind in relation to a given word. Draw-a-Person Test Overview: Participants (usually children) draw a whole person, avoiding stick figures, on a piece of paper with a pencil. No time limit; drawings typically completed in under 10 minutes. Participants may erase and redraw, and use as much paper as needed. After drawing, participants answer three questions: o Is the person someone you know or made up? o How old is this person? o What is this person doing or thinking, and how do they feel? 
Interpretation of Results: Interpreted based on six key elements: o Behavioral Observation o Overall Impression o Mental Maturity o Emotional Indicators o Content Analysis o Organic Signs Emotional Indicators and Their Categories: Impulsivity: o Poor integration of parts (e.g., disjointed limbs) o Gross asymmetry of limbs o Transparencies (e.g., major body parts not filled in) o Big figure (9 inches or more in height) o Omission of neck Insecurity/Feelings of Inadequacy: o Slanting figure o Tiny head o Hands cut off o Monster/grotesque figure o Omission of arms, legs, or feet Anxiety: o Shading of face or body o Legs pressed together o Omission of eyes o Depictions of clouds, rain, or flying birds Shyness/Timidity: o Tiny figure (2 inches or less) o Short arms o Arms clinging to the body o Omission of nose or mouth Anger/Aggressiveness: o Crossed eyes o Representation of teeth o Long arms o Big hands o Nude figures/genitals Overview of the Rorschach Test: The test involves showing a client 10 ambiguous inkblots and asking what they see, with the aim of revealing their inner thoughts and feelings through their projections. There are two phases: o Phase 1: Free association (client describes what they see). o Phase 2: Inquiry (clinician probes the responses). Controversy and Opinions: Opinions are divided: some view it as a valuable psychometric tool, while others see it as unreliable or akin to a party game. The test was popular between the 1930s and 1960s, during which various scoring systems were developed. Scoring and Interpretation: Scoring is straightforward, but interpretation is challenging due to: o Difficulty in determining if responses reflect direct experiences, memories, desires, or fears. o The influence of the Barnum effect, where general statements seem accurate for everyone. o Confirmation bias, where interpreters may reinforce their beliefs based on perceived correct interpretations. 
o Participants can give multiple responses per inkblot, complicating accuracy. Psychometric Issues: No universally accepted norms for interpreting results. Exner's revised norms (2007) are an improvement but still may overidentify psychological disorders. Evaluation: Against: o Lacks universally accepted standards for administration, scoring, and interpretation. o Evaluations are subjective. o Results are unstable over time. o Considered unscientific by some. o Evidence may be biased and poorly controlled. In Favor: o Lack of standardization is a historical issue that can be addressed. o Test interpretation involves a subjective component, which is inherent in all psychological testing. o Meta-analyses suggest Rorschach results are more stable than previously thought. o Has a large empirical base despite criticism. o Some view it as a valuable tool despite modern standards. Week 8: Memory and Memory Assessment Overview of Memory Memory complaints are common in clinical settings and may co-occur with: Anxiety, depression, substance use disorders Head injuries, learning disabilities, neurotoxic exposure Stroke, neurocognitive disorders Memory complaints can cause significant distress and may result from disruptions to various cognitive systems, not just the memory system. Memory and learning: Learning is acquiring new information, and memory is the persistence of that learning (Squire, 1987). Memory process: Stimulus passes through sensory registers into short-term memory (STM), which has limited capacity and requires active rehearsal. Information from STM can be transferred to long-term memory (LTM), which is more stable but requires retrieval into STM for conscious awareness. Information can decay at any stage. Types of memory: Explicit (declarative) memory: Involves conscious awareness; the storage of facts and specific information. Includes: o Semantic memory: General knowledge (facts and concepts). o Episodic memory: Memory for specific situations or events. 
Implicit memory: Unconscious, often procedural (behavior changes based on experience). Memory processes: Encoding: Transforming external information into memory. Consolidation: Biological solidification of memory. Retrieval: Accessing stored information for conscious awareness. Working memory (Baddeley, 2000): A cognitive system for temporarily storing and manipulating information. Phonological loop (PL): Stores auditory information. Visuospatial sketchpad (VS): Stores visual and spatial information. Central executive: Oversees PL and VS, controls attention, and engages long-term memory when needed. Working memory is critical to attention, consolidation, and retrieval of information. Wechsler Memory Scale (WMS) WMS Overview: A battery approach to assess learning, memory, and working memory, now in its 4th revision (WMS-IV). WMS-I (1945): o Non-specific conceptualization of memory; used "MQ" (memory quotient). o Brief assessment of immediate memory, no delayed recall. o Norms were poor and based on a small sample, but it remained popular for 42 years. WMS-R (1987): o Expanded age range (16-74), included delayed recall, and improved visual memory assessment. o Shifted from "MQ" to five indices: Verbal, Visual, General Memory, Attention/Concentration, and Delayed Recall. WMS-III (1997): o Content reviewed for cultural/gender bias; added recognition measures. o Lengthy administration, subtest issues, and limited use with older populations. WMS-IV (2009): o Goals: improved reliability, shorter administration, better scaling, and improved content for older adults. o Excluded clinical cases from norms and removed overlapping content with the WAIS. o Introduced 7 subtests distributed across 5 memory domains (Auditory, Visual, Working, Immediate, Delayed). o Two batteries: one for adults (16-69 years), another for older adults (65-90 years) with reduced subtests for bedside assessments. Memory Domains in WMS-IV: o Auditory Memory: Recall of orally presented information. 
o Visual Memory: Recall of visually presented information. o Working Memory: Manipulation of visually presented information in short-term storage. o Immediate Memory: Recall of information immediately after presentation. o Delayed Memory: Recall of information after a 20–30-minute delay. Psychometrics: o Generally well-researched with strong validity and reliability. o No clinical benchmarks, limiting its diagnostic predictive power. Strengths of WMS-IV: o Improvements based on past shortcomings, ease of administration, and strong psychometric properties. o Shortened version for older patients. Weaknesses of WMS-IV: o Still lacks a reliable core of subtests. o Overemphasis on attention, subtest removal not empirically justified, and long duration. o Limited clinical benefit of new subtests. o Australian adaptation uses American norms with small clinical samples. o Battery approach may demoralize memory-impaired individuals. Administering the WMS-IV Subtests General Testing Considerations Materials Required: o Administration & Scoring Manual, Technical & Interpretative Manual o Record form (separate for adult and older adult batteries) o Response & Stimulus booklets (1 for immediate recall, 2 for delayed recall) o Memory grid, Designs & Spatial Addition cards, Scoring template o Stopwatch, Pencil, Clipboard Physical Environment: o Ensure a comfortable and suitable environment for a lengthy assessment. o Follow standardized procedures, establish rapport with the client. Administration Guidelines: o Administer tests in the recommended order but adjust for the 20–30-minute delay required for delayed recall. o Avoid placing the Symbol Span between Visual Reproduction I and II. o Immediate and delayed recall trials of a subtest must be in the same session. Duration: o Adult battery averages 82 minutes; Older adult battery averages 50 minutes. 
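The administration rules above (the 20–30-minute window for delayed recall and the restriction on placing Symbol Span between Visual Reproduction I and II) can be expressed as a simple plan check. This is a hypothetical sketch, not part of the WMS-IV materials: the plan format, subtest abbreviations, and the simplification of measuring the delay from each subtest's start time are all assumptions.

```python
# Hypothetical check of a WMS-IV administration plan against the rules above.
# plan: list of (subtest_name, start_minute) in administration order.
# Simplification: the 20-30 minute delay is measured between start times.

def check_plan(plan):
    start = dict(plan)
    order = [name for name, _ in plan]
    problems = []
    # Delayed-recall trials must fall 20-30 minutes after the immediate trial.
    for immediate, delayed in [("LM I", "LM II"), ("VPA I", "VPA II"),
                               ("VR I", "VR II"), ("Designs I", "Designs II")]:
        if immediate in start and delayed in start:
            gap = start[delayed] - start[immediate]
            if not 20 <= gap <= 30:
                problems.append(f"{delayed}: delay of {gap} min outside 20-30")
    # Symbol Span should not sit between Visual Reproduction I and II.
    if {"VR I", "VR II", "Symbol Span"} <= set(order):
        if order.index("VR I") < order.index("Symbol Span") < order.index("VR II"):
            problems.append("Symbol Span placed between VR I and VR II")
    return problems
```

In practice the delay window is filled with non-memory or unrelated subtests; a checker like this only flags gross violations of the published sequencing rules.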
Feedback & Rapport Encouragement: o GOOD: “You’re working hard” o BAD: “Good”, “Right” Poor performance: o GOOD: “That was a hard one, but this next one may be easier” o BAD: “You did not do well on this one” If Examinee Says "You do it": o GOOD: “I want to see how well you can do it yourself” o BAD: “No, you do it!” Acceptable Prompts: “Just try once more”, “Explain what you mean”, “Tell me more about that”, etc. Abbreviations for Notations on Record Form P: Prompt used to elicit response. R: Repeated directions/item. SC: Self-corrected response. DK: Examinee indicated "don’t know". NR: No response given. Administering Each Subtest 1. Brief Cognitive Status Exam (BCSE) (Optional) Screening across various cognitive functions (similar to MMSE). Assesses orientation, mental control, clock drawing, verbal fluency, and recall. Produces a classification from Average to Very Low. 2. Visual Reproduction I & II Memory for non-verbal, visual material. VR-I: Draw designs from memory immediately after seeing them. VR-II: Draw the designs after 20-30 minutes (free recall), identify them (recognition), and draw them while viewing (optional). 3. Logical Memory I & II Memory for narratives under free recall conditions. Two short stories are orally presented; the examinee retells them immediately (LM-I) and after a delay (LM-II). Recognition task follows LM-II, with yes/no questions about each story. 4. Spatial Addition (Adult battery only) Visual-spatial working memory using an addition task. Examinee adds or subtracts locations of circles on grids based on rules. 5. Verbal Paired Associates I & II Memory for word pairs. o VPA-I: Recall the second word of a word pair after being presented the first. o VPA-II: After a delay, recall the second word of each pair. o Recognition trial: Identify which word pairs were presented earlier. 6. Designs I & II (Adult battery only) Spatial memory for unfamiliar designs. Designs I: Recreate a design grid after viewing it for 10 seconds. 
Designs II: Recreate the grid after a 20–30-minute delay. 7. Symbol Span Visual working memory using abstract symbols. Examinee selects symbols from an array in the same order they were presented. Special Considerations for Older Adults: Only five subtests are administered in the Older Adult battery due to "portability" for bedside assessment. Key Rules: Maintain the sequence of subtests. Ensure feedback maintains rapport and doesn't influence performance. Record responses accurately using the appropriate abbreviations. Interpretation of the WMS-IV Requirements for Interpretation Knowledge Base: o Familiarity with memory research and literature. o Understanding cognitive development changes and specific diagnostic conditions. o Awareness of the relationships between memory, cognitive processes, brain, and behavior. Broader Context: o Recognize that the WMS-IV is one of many tools for assessing cognitive function. Reporting and Describing Scores Types of Scores: o Five Index Scores (standard scores) o Subtest Scaled Scores o Process Scores: Optional scores providing extra performance details. o Contrast Scaled Scores: Useful for hypothesis testing, comparing performance across indices/subtests. o BCSE Performance Classification: Differentiates between impaired vs. not- impaired cognitive function. Basic Steps to Interpretation 1. Index Scores: Identify strengths and weaknesses across the five index scores. 2. Subtest Variability: Look for patterns and variability across subtest scores. 3. Qualitative Data: Examine raw responses for insights into performance. 4. Comparisons: Compare memory performance to general intellectual abilities and other cognitive domains. Key Questions to Answer in Interpretation What are the individual's memory strengths and weaknesses? How severe are the memory impairments? Are impairments specific to memory or generalized across cognitive domains? Do scores align with premorbid expectations based on education, occupation, and behavior? 
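Step 1 above, surveying the five index scores for strengths and weaknesses, can be sketched as a simple descriptive classifier. WMS-IV index scores are standard scores (mean 100, SD 15); the bands below follow conventional Wechsler-style qualitative descriptors and should be verified against the WMS-IV manual before use in any report.

```python
# Sketch of a descriptive classifier for WMS-IV index scores (standard
# scores, mean 100, SD 15). Bands are conventional Wechsler-style labels,
# assumed here rather than taken from the WMS-IV manual.

BANDS = [(130, "Very Superior"), (120, "Superior"), (110, "High Average"),
         (90, "Average"), (80, "Low Average"), (70, "Borderline")]

def describe(score):
    for cutoff, label in BANDS:
        if score >= cutoff:
            return label
    return "Extremely Low"

def summarise_indices(indices):
    """indices: dict mapping index name -> standard score."""
    return {name: (score, describe(score)) for name, score in indices.items()}
```

For example, `summarise_indices({"Auditory": 112, "Visual": 84, "Delayed": 79})` labels Auditory as a relative strength (High Average) against Low Average and Borderline scores, which is the kind of pattern the later interpretation steps then probe with subtest, qualitative, and contrast data.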
What does the individual's history suggest? What could be contributing to the memory problems? Factors Affecting Performance and Interpretation Sensory or Motor Deficits Intellectual Disability Language, Attentional, or Executive Functioning Deficits Spatial Impairments Poor Effort, Cooperation, or Fatigue Slowed Processing Key Considerations Memory is a multifaceted process that requires complex interpretation. The WMS-IV should be part of a comprehensive assessment but is insufficient as a standalone tool. Accurate administration and scoring are essential for valid results. Interpretation typically requires neuropsychological expertise. Week 9: The Assessment Interview Interview Types and Challenges Assessment interviews are crucial for psychological evaluations, providing context that can't be gathered from tests alone, and offering unique insights like behavioural observations. Interviews provide valuable data on the client’s unique features, testing approach, test performance, and personal history. Following ethical guidelines (APS) is essential to avoid misinterpretation, ensure validity, and make clinical observations. Skilled interviewers should focus on both content and process, observing body language and emotional cues to gather deeper insights. Types of interviews: Structured/semi-structured: Standardized questions, high reliability/validity, minimizes bias, efficient, less reliant on interviewer experience. Unstructured: Guided conversation, better rapport, flexibility, more detail on subjective experience. Bias in interviews: Interviewer biases: Halo effect (general impressions influencing other judgments), confirmatory bias (seeking evidence for initial assumptions). Participant biases: Faking good/bad, exaggerating traits, or omitting information. Unconscious biases: Affecting questions asked, rapport building, and interpretation. Can occur on both interviewer and client sides. 
Combatting bias: Self-awareness, using tools like the Harvard Implicit Association Test to identify biases, and taking steps to broaden perspectives. Reliability: Structured interviews offer higher consistency between interviewers; unstructured interviews have lower reliability. Validity: Structured interviews are more valid as they mitigate bias, especially cross-cultural distortions and interviewer biases. Key Interview Strategies Preparation and Preliminaries: Organize the environment, introduce yourself, explain the interview process, confidentiality, client rights, and fee arrangements. Open-ended Questions: Encourage expansive, detailed responses, revealing unique client characteristics. Direct Questions: Elicit specific, concise answers; useful for gathering targeted information. Combination of Open-ended & Direct: Start with open-ended questions to explore, then use direct questions to clarify and confirm understanding. Facilitation: Encourage clients to elaborate with prompts like “Tell me more…” to keep the conversation flowing. Clarification: Ask specific questions to resolve unclear or ambiguous responses. Empathic Statements: Build rapport by acknowledging feelings and validating the client’s experiences. Confrontation: Address inconsistencies in client information or behavior carefully, used by experienced interviewers. Culturally Relevant Information Cultural Exploration: Ask clients how they identify culturally, important cultural aspects, and whom they rely on for support. Client Strengths: Focus on cultural strengths such as acculturation, bilingual skills, intergenerational wisdom, and resilience against oppression. Types of Assessment Interviews Mental Status Evaluation (MSE): Reviews psychiatric functioning to guide case management, diagnosis, and integration with other tests. Diagnostic Interviews: Structured approaches to determine appropriate diagnoses based on referral questions. 
Structured Clinical Interview for DSM (SCID): Comprehensive diagnostic tool used by trained professionals, with moderate reliability. Kiddie Schedule for Affective Disorders and Schizophrenia (KSADS): A child-appropriate, time-consuming diagnostic tool used in clinical and research settings. Week 10: The Scientist-Practitioner Model and EBP Origins of the Scientist-Practitioner Model: Developed in clinical psychology in 1949 at the Boulder Conference. Integrates science and practice to inform both domains. Emphasizes training psychologists in research and practice simultaneously. Training Requirements in Australia: APAC mandates the scientific method and research critique in accredited psychology programs from undergraduate level. Evidence-Based Practice (EBP): Integrates best available research with clinical expertise, considering patient characteristics, culture, and preferences. Aims to enhance psychological practice and public health through empirically supported methods in assessment, case formulation, relationships, and intervention. EBP Components (Lilienfeld, 2013): Best Available Research: Hierarchy of evidence (meta-analyses and RCTs at the top, case studies at the bottom). Clinical Judgment & Experience: Involves rapid diagnosis and assessing risks and benefits of interventions. Client Preferences & Values: The client's preferences can influence treatment choice, even if evidence strongly supports a particular intervention. Strengths of Psychology: Combines scientific research with professional practice. Psychologists are well-positioned to design and interpret research for EBP. Challenges in EBP Implementation: Balancing different research methods and sample representativeness. Generalizing controlled research results to clinical practice. Addressing the limited scope of research on minority and marginalized populations. 
Types of Research Evidence: Refers to results from intervention strategies, clinical problems, and patient populations across different settings. Dimensions of Treatment Guidelines (APA, 2002): Treatment Efficacy: Scientific evaluation of treatment effectiveness. Clinical Utility: Feasibility and applicability of interventions in specific settings. Clinical Expertise: Involves assessment, decision-making, treatment planning, and interpersonal skills. Emphasizes continual learning, cultural understanding, and appropriate use of research evidence. Patient Characteristics, Culture, and Preferences: EBP requires considering the patient’s values, beliefs, preferences, and developmental factors. Cultural competence and humility are essential for effective treatment. Evidence for Psychological Interventions: Depression: Level I Evidence: Cognitive Behaviour Therapy (CBT), Interpersonal Psychotherapy (IPT), Brief Psychodynamic Psychotherapy, CBT-based self-help. Level II Evidence: Solution-Focused Brief Therapy, Dialectical Behaviour Therapy (DBT), Emotion-Focused Therapy, Psychoeducation. Lower-Level Evidence: Mindfulness-Based Cognitive Therapy (MBCT), Acceptance and Commitment Therapy (ACT). Insufficient Evidence for other interventions. Bipolar Disorder: Level II Evidence: CBT, IPT, Family Therapy, MBCT, Psychoeducation (as adjuncts to pharmacotherapy). Insufficient Evidence for other interventions. Generalised Anxiety Disorder (GAD): Level I Evidence: CBT. Level II Evidence: Psychodynamic Psychotherapy. Lower-Level Evidence: MBCT, CBT-based self-help. Insufficient Evidence for other interventions. Panic Disorder: Level I Evidence: CBT. Level II Evidence: CBT-based self-help, Psychoeducation. Insufficient Evidence for other interventions. Specific Phobia: Level I Evidence: CBT. Level II Evidence: CBT-based self-help. Insufficient Evidence for other interventions. Social Anxiety: Level I Evidence: CBT. 
Level II Evidence: CBT-based self-help, Psychodynamic Psychotherapy (with pharmacotherapy). Lower-Level Evidence: IPT, ACT. Insufficient Evidence for other interventions. Obsessive-Compulsive Disorder (OCD): Level I Evidence: CBT. Level II Evidence: CBT-based self-help. Lower-Level Evidence: ACT. Insufficient Evidence for other interventions. Posttraumatic Stress Disorder (PTSD): Level I Evidence: CBT. Level I Evidence (not in review): Eye Movement Desensitization and Reprocessing (EMDR). Insufficient Evidence for other interventions. Substance-Use Disorders: Level I Evidence: CBT, Motivational Interviewing (MI). Level II Evidence: CBT-based self-help, Solution-Focused Brief Therapy, DBT. Lower-Level Evidence: IPT, ACT, Psychodynamic Psychotherapy. Insufficient Evidence for other interventions. Anorexia Nervosa: Level II Evidence: Family Therapy, Psychodynamic Psychotherapy. Lower-Level Evidence: CBT. Insufficient Evidence for other interventions. Bulimia Nervosa: Level I Evidence: CBT. Level II Evidence: DBT, CBT-based self-help. Lower-Level Evidence: IPT. Insufficient Evidence for other interventions. Resistance to Evidence-Based Practice (EBP) Naïve Realism: o Clinicians may mistakenly attribute client improvement to an intervention without considering alternative explanations. o Intuitive judgments like "I saw the change with my own eyes" can lead to erroneous conclusions about treatment effectiveness. Myths and Misconceptions Regarding Human Nature: o Clinicians may rely on widely accepted, but unproven, beliefs about human behavior (e.g., repressed memories). o These misconceptions can lead to the use of ineffective or risky techniques like memory recovery procedures. Application of Group Probabilities to Individuals: o EBP often uses general, population-based findings (nomothetic), but clinicians work with unique cases (idiographic). o Clinicians may struggle to apply group-level data to individual clients in practice. 
Reversal of the Onus of Proof: o The burden of proof should rest with proponents of new or untested treatments, not on skeptics. o While scientists should remain open to new therapies, they require rigorous evidence to support their efficacy. Mischaracterisations of What EBP Is and Is Not: Common misconceptions include that EBP: o Stifles innovation. o Enforces a "one-size-fits-all" approach. o Neglects non-specific therapeutic influences. o Doesn't apply to individuals outside of controlled studies. o Exclusively focuses on Randomised Controlled Trials (RCTs). o Wrongly assumes that therapeutic change is quantifiable or predictable. Pragmatic, Educational, and Attitudinal Obstacles: o Practical barriers such as lack of time, steep learning curves, and access to resources. o Educational challenges with understanding the complex statistical methods used in EBP research. o Attitudinal barriers, including resistance from clinicians who perceive EBP as an "Ivory Tower" mentality. Treatment Monitoring Benefits: Guide treatment decisions Establish a reliable baseline and set measurable goals Help patients (and clinicians) recognise improvement o Subjective assessment of progress can be unreliable o Setting goals together can provide autonomy to the client o Useful when clients have limited insight into their own improvements/strengths Identify the need for additional professional education and training Make appropriate progress notes that meet ethical requirements Objections Practical (e.g., cost, time) – measure dependent o More administration o Management has funding goals Philosophical (e.g., relevance) o Set way of practicing o Gold standard measures of clinical practice o Concerns that frequent requests for feedback may harm rapport o Appropriateness of measures for different populations (e.g., neurodivergent) Clinicians who assess outcomes in practice are more likely to: Be younger Have a CBT orientation Work more hours per week Provide services for children and adolescents 
Work in institutional settings
Under the NMHP, public mental health services must collect and report information on:
Severity of symptoms
Psychosocial functioning
Level of disability
Focus of care
Consumers' self-assessment of their mental health status
Services report deidentified data to the National Outcomes and Casemix Collection (NOCC), which:
Builds a coherent picture of mental health outcomes in each state and territory.
Measures clinical effectiveness across different service sectors and age cohorts.
Lilienfeld et al. Required Reading
EBP vs. ESTs:
EBP refers to a broader clinical decision-making approach that integrates the best available research evidence, clinical expertise, and client preferences.
ESTs (empirically supported therapies) are specific therapeutic techniques validated by research, often mistaken for the entirety of EBP.
Resistance to EBP:
Many clinical psychologists are resistant to EBP, which has widened the gap between scientific research and clinical practice.
The authors argue that this resistance stems from misunderstandings, educational gaps, and long-standing misconceptions.
Six key sources of resistance to EBP:
1. Naïve realism:
o Clinicians often rely on their direct observations (e.g., "I saw the change with my own eyes"), assuming they can accurately judge therapy effectiveness without controlled research. This cognitive bias leads them to overestimate treatment effectiveness, missing rival explanations such as placebo effects, spontaneous remission, or regression to the mean.
2. Misconceptions about human nature:
o Deep-seated myths about psychology, such as belief in memory repression and recovery, can lead to skepticism about EBP. Therapists who believe in techniques like hypnosis to recover repressed memories may avoid EBP because they view these scientifically unsupported methods as essential to their practice.
o Misconceptions also persist about the lasting impact of early childhood experiences, leading some clinicians to overemphasise early trauma in treatment decisions.
3. Misunderstanding group probabilities and individual cases:
o Some clinicians reject EBP because they believe that group-based research (e.g., findings from randomised controlled trials) cannot be applied to individual clients. However, the paper argues that probabilistic data from groups can and should inform individual treatment, just as statistics in medicine (e.g., survival rates) are used to guide patient care.
4. Reversal of the burden of proof:
o Clinicians often place the burden of proof on skeptics to disprove the effectiveness of untested therapies, rather than demanding that proponents of those therapies provide evidence of efficacy. This leads to the continued use of unvalidated or potentially harmful treatments.
5. Mischaracterisation of EBP:
o EBP is sometimes mischaracterised as rigid or overly reliant on research data, neglecting the role of clinical expertise and client preferences. This misunderstanding fosters resistance, as some practitioners fear that EBP reduces clinical autonomy or flexibility in therapy.
6. Pragmatic, educational, and attitudinal barriers:
o Practical challenges, such as time constraints, discomfort with the complexity of psychotherapy research, and a steep learning curve for understanding research findings, contribute to resistance. Additionally, some clinicians feel alienated by the "ivory tower" mentality of academic researchers and believe EBP is disconnected from the realities of clinical work.
Consequences of resistance:
The authors argue that resistance to EBP has serious consequences: it may lead clinicians to use ineffective or even harmful treatments, widening the gap between science and practice.
Recommendations for addressing resistance:
Education: Clinical training should emphasise the importance of scientific evidence and teach clinicians how to critically evaluate research. Addressing misconceptions about EBP and human nature in graduate education is crucial.
Encouraging a scientific mindset: Practitioners should adopt a more scientific approach to clinical decision-making, including using research evidence to rule out alternative explanations for therapeutic outcomes.
Bridging the gap between research and practice: Efforts should be made to make research findings more accessible and applicable to clinicians. This could include simplifying the presentation of research and offering practical tools for integrating evidence into practice.
Cultural shift: Advocates for EBP should acknowledge the practical realities clinicians face and emphasise the flexibility of EBP, which allows clinical judgment and client preferences to be integrated alongside research evidence.
Conclusion:
The paper concludes that resistance to EBP is understandable but can be mitigated by targeted education, addressing misconceptions, and promoting a more balanced understanding of EBP. By overcoming these barriers, the gap between research and practice in clinical psychology can be reduced, improving treatment outcomes for clients.
Week 11: Assessment and Treatment Planning
Importance of understanding treatment mechanisms: Evidence-based practice requires knowing why and how treatments work.
'Splitters' vs 'Lumpers':
o Splitters: Focus on specific techniques.
o Lumpers: Emphasise common factors that facilitate change.
Three main approaches to treatment planning:
1. Focus on the therapy:
▪ Emphasises the type of therapy, e.g., psychoanalysis, behavioural, cognitive, person-centred.
▪ Early research (e.g., Rogers, Wampold) identified key components: genuineness, unconditional positive regard, and accurate empathy.
▪ Strong focus on the therapeutic relationship.
2. Focus on the symptoms:
▪ Known as differential therapeutics.
▪ Targets specific diagnoses or symptoms.
▪ Effective for anxiety-related conditions, but less so for depression, schizophrenia, sexual disturbances, and personality disorders.
3. Focus on client characteristics:
▪ Considers personal traits that may enhance or inhibit response to therapy.
▪ Over 200 characteristics have been investigated, with treatment effectiveness improving when these are addressed.
Intervention options
Within each approach to treatment planning, there are various intervention options. Groth-Marnat and Wright (2016) highlight six categories of intervention that can be recommended following an assessment:
Treatment: For example, therapy, medication, or neuropsychological rehabilitation.
Placement: Where the client can receive services.
Further evaluation: Identifying whether any issues need further investigation.
Alterations to a client's environment: Establishing whether environmental changes are needed to address the problem.
Education and self-help: For example, books, online programs, or apps.
Other: A range of miscellaneous recommendations.
Clinical decision-making considerations:
Select an evidence-based approach for the client's presenting issue.
Ensure recommendations are understandable and actionable by the client.
Factors driving clinical decision-making:
Case formulation: Describes the client's situation, including predisposing, precipitating, perpetuating, and protective factors.
Understanding the client's problem: Involves diagnosis, functional impairment, problem complexity, chronicity, and subjective distress.
Understanding the problem context: Takes into account coping style, social support, and current life circumstances.
Treatment-specific client characteristics: Considers resistance, stage of change, cultural background, personal preferences, and expectations.
Case formulation models:
1. Diathesis-stress model: Identifies diathesis, stressors, and outcomes.
2. Developmental model: Examines a developmental mismatch between functioning and environment.
3. Common function model: Focuses on the underlying need or desire maintaining the issue.
4. Complex model: Integrates multiple considerations, beyond a simple diathesis-stress account.
Understanding the problem:
Diagnosis: Key to prognosis and treatment barriers; requires mindful use of the DSM-5 and avoiding labels.
Functional impairment: Assesses impact across domains (e.g., social, occupational), informing treatment restrictiveness and intensity.
Problem complexity and chronicity: Complex problems involve unresolved conflicts and persist across situations, while non-complex problems are situational and transient.
Subjective distress: The client's own experience of distress, a fluctuating factor that influences engagement with therapy.
Understanding the problem context:
Coping style: Externalisers (who blame others) benefit from behavioural interventions; internalisers (who blame themselves) benefit from insight-based therapy.
Social support: Evaluates the quality of the client's social networks, which affects treatment duration and type.
Current life circumstances: Assesses recent life events, transitions, and cumulative stress.
Treatment-specific client characteristics:
Resistance: Reluctance to open up, viewed as protective; should be addressed without pathologising.
Stages of change: Treatment should be aligned with the client's stage in the change process.
Cultural background: Language barriers and religious or social expectations can influence treatment options.
Personal preferences: Incorporating client preferences about therapist characteristics reduces therapy drop-out.
Expectations: Managing client expectations about the therapy process and outcomes is crucial.
Systematic Treatment Selection Approach (Beutler et al.):
Links client variables (functional impairment, social support, problem complexity, etc.)
to treatment considerations (e.g., restrictiveness, cognitive-behavioural interventions, supportive interventions). The approach pairs each client variable with its treatment considerations:
Functional impairment: Restrictiveness (inpatient/outpatient); intensity (duration and frequency); medical versus psychosocial intervention; prognosis; urgency of achieving goals.
Social support: Cognitive behavioural versus relationship enhancement; duration of treatment; psychosocial intervention versus medication; possible group interventions.
Problem complexity/chronicity: Symptom focus versus resolution of thematic unresolved conflicts.
Coping style: External versus internal attribution.
Resistance: Supportive, nondirective or paradoxical versus structured, directive intervention.
Subjective distress: Increase or decrease arousal.
Stage of change: Supportive versus insight-oriented versus cognitive/behavioural interventions.
Week 12: Instruments for treatment planning, monitoring and outcome assessments
Why are brief instruments used?
Emergence of brief instruments: Linked to evidence-based practice and the need to demonstrate treatment effectiveness.
Helps identify clients who may need a different approach.
Demonstrates the financial efficacy of services.
Around 37% of professional psychologists use outcome measures in their client work.
Frequently used in research to evaluate treatment approaches.
Key considerations for using brief instruments:
How are outcomes defined (symptom reduction, functional improvements, deeper shifts)?
Can a brief self-report measure accurately capture these outcomes?
What are the risks and benefits of using brief measures in practice?
Selection of brief measures:
Consider who will administer or interpret the instrument (self-report, clinician-administered, or completed by others).
Time to complete should be under 15 minutes.
Measures should directly relate to treatment planning and outcome assessment.
Measures must be relevant to the client group (e.g., adults, children, older adults, diverse backgrounds).
Measures should be easily understood and sensitive enough to detect change and outcomes.
Measures of non-symptom outcomes:
Some instruments go beyond symptom alleviation to measure other therapeutic impacts, such as:
Difficulties in Emotion Regulation Scale
Self-Compassion Scale
Measures of self-concept
Measures of experiential avoidance
Measures of coping skills
Symptom Checklist-90-R (SCL-90-R) and Brief Symptom Inventory:
Developed by Derogatis (1993) to assess the type and severity of symptoms over a one-week period.
Contains 90 items and takes 12–15 minutes to complete.
Measures 9 symptom dimensions: Somatisation, Obsessive-Compulsive, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, Psychoticism.
Includes 3 global indices: Global Severity Index (overall intensity of distress and reported symptoms), Positive Symptom Distress Index (symptom intensity), Positive Symptom Total (breadth of symptoms).
Reliability and validity are good, but the Psychoticism dimension has weaker psychometric properties.
Useful for culturally diverse populations and a wide age range.
Beck Depression Inventory-II (BDI-II):
21-item self-report measure of depressive symptoms over the past two weeks.
Takes 5–10 minutes to complete; popular in research studies.
Scores range from 0 to 63, with severity levels: No/Minimal (0–13), Mild (14–19), Moderate (20–28), Severe (29–63).
High internal consistency, test-retest reliability, and validity, though the factor structure varies across studies.
Used across diverse populations; provides insight into depression but is not a standalone diagnostic tool.
State-Trait Anxiety Inventory (STAI):
40-item self-report measure assessing both state (transient) and trait (stable) anxiety.
Includes subscales for current anxiety and general worry.
Translated into 60 languages; revised to distinguish anxiety from depression and to differentiate state from trait anxiety.
Shows good internal consistency, with higher reliability for trait than for state anxiety.
Distinguishing anxiety from depression remains difficult because of their high comorbidity.
Four interpretive profiles: High T-anxiety, High S-anxiety, High S-/Low T-anxiety, High T-/Low S-anxiety.
Outcome Rating Scale (ORS) and Session Rating Scale (SRS):
Ultra-brief measures with four items each, used to monitor progress and therapeutic fit.
The ORS assesses individual, interpersonal, social, and overall functioning.
The SRS assesses the therapeutic relationship, goals, the therapist's approach, and how well the session fit the client.
Developed within the "practice-based evidence" movement, which emphasises tailored therapy and client feedback.
These instruments track symptoms and therapeutic progress, helping clinicians assess the effectiveness of treatment and make adjustments where needed.
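The scoring arithmetic behind two of the instruments above can be sketched in a few lines of code. This is a minimal, unofficial illustration: the BDI-II cut-offs are the ones listed in these notes, while the SCL-90-R index formulas (GSI as the mean of all item ratings, PST as the count of endorsed items, PSDI as the mean rating of endorsed items) are the commonly described definitions, not a substitute for the published scoring manuals. Function names are invented for this sketch, and interpretation of either instrument remains the job of a qualified clinician.

```python
def bdi_ii_severity(total: int) -> str:
    """Map a BDI-II total score (0-63) to the severity band listed in these notes."""
    if not 0 <= total <= 63:
        raise ValueError("BDI-II total scores range from 0 to 63")
    if total <= 13:
        return "No/Minimal"
    if total <= 19:
        return "Mild"
    if total <= 28:
        return "Moderate"
    return "Severe"


def scl90r_global_indices(items: list[int]) -> dict[str, float]:
    """Compute the three SCL-90-R global indices from 90 item ratings (0-4).

    GSI  = mean of all item ratings (overall distress)
    PST  = count of items rated above zero (breadth of symptoms)
    PSDI = sum of ratings divided by PST (intensity of endorsed symptoms)
    """
    if len(items) != 90:
        raise ValueError("The SCL-90-R has 90 items")
    total = sum(items)
    pst = sum(1 for rating in items if rating > 0)
    return {
        "GSI": total / len(items),
        "PST": float(pst),
        # Guard against division by zero when no symptoms are endorsed.
        "PSDI": total / pst if pst else 0.0,
    }
```

For example, a BDI-II total of 17 falls in the Mild band, and a response set in which only three SCL-90-R items are endorsed yields a PST of 3 with the PSDI reflecting only those three ratings.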