Psychological Testing and Assessment PDF


Summary

This document presents learning objectives and notes on psychological testing and psychological assessment, covering the tools, contexts, and processes involved in psychological assessment.

Full Transcript

Psychological Testing and Assessment

Learning Objectives:
- understand the difference between testing and assessment
- enumerate and define the different tools of psychological assessment
- identify the different contexts where psychological testing and assessment are employed
- understand the process of psychological assessment

Psychological Assessment
- refers to the gathering and integration of psychology-related data for the purpose of making a psychological evaluation, accomplished through the use of tools

Psychological Testing
- the process of measuring psychology-related variables by means of devices or procedures designed to obtain a sample of behavior
- the process of administering, scoring, and interpreting psychological tests

The Process of Assessment

Referral Question
- oftentimes referred to as the reason for referral, which pertains to the rationale for requesting that a client undergo a psychological evaluation

Example (Case of Shelly):
Shelly is a 22-year-old patient in a psychiatric hospital. Her psychiatrist referred her for assessment to clarify her diagnosis. The psychologist assigned to conduct the assessment did not understand why Shelly had been referred, because everybody who worked with her in the hospital, including the psychiatrist, felt certain that she had bipolar disorder. The psychologist investigated the case further and found that the outside agencies responsible for providing psychological treatment for Shelly had declined the hospital's request for intensive services. Shelly had a lot of drug problems, and the outside agency attributed her difficulties to substance abuse. The psychiatrist wanted the assessment done to add "ammunition" to the treatment team's efforts to obtain appropriate community services for Shelly, so that she could be safely discharged from the hospital setting.

Types of Referral Question
- Explicit (Direct): those that are clearly stated by the referral source. "Does Shelly have bipolar disorder?" is an explicit question.
- Implicit: those that the assessor tries to discover as he/she considers all the aspects of the case. "How certain is the diagnosis of bipolar disorder?", "Could her symptoms be due to drug abuse?", and "Does she need to have treatment for her condition?" are possible implicit questions.
Tools of Psychological Assessment

Test
- a measuring device or procedure
- Psychological Tests: devices or procedures designed to measure variables related to psychology
- test administration answers questions such as "How long is the test?", "Does it require additional instructions from the examiner?", "Can it be group administered?", "Are there separate time limits for each subtest?"

Interview
- a method of gathering information through direct communication involving reciprocal exchange

Behavioral Observation
- monitoring the actions of others or oneself by visual or electronic means while recording quantitative and/or qualitative information regarding those actions

Portfolio
- samples of one's ability and accomplishment

Case History Data
- refers to records, transcripts, and other accounts in written, pictorial, or other form that preserve archival information, official and informal accounts, and other data and items relevant to an assessee

Role-Play Test
- a tool of assessment wherein assessees are directed to act as if they were in a particular situation

Example: You are working in the customer support team of a retail firm. A customer who bought a ginger beer from one of your stores discovered that it had a dead snail in it after drinking and is now threatening to call the consumer watchdog. Contact the customer to resolve their issue.

Computers as Tools

Computer-Assisted Psychological Assessment (CAPA)
- refers to the assistance computers provide to the test user, not the testtaker

Computer Adaptive Testing (CAT)
- a form of computer-based test that adapts to the examinee's ability level
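The adaptive logic of a CAT can be illustrated with a minimal Python sketch. This is not from the source notes; the item bank, step size, and stopping rule are illustrative assumptions. Item difficulty rises after a correct response and falls after an incorrect one:

```python
# Minimal sketch of a computer adaptive test (CAT).
# Assumption: items are indexed by difficulty 1..10, and the next item's
# difficulty moves up after a correct answer and down after an incorrect
# one. Real CATs select items using IRT-based ability estimation.

def run_cat(answer_item, start_difficulty=5, step=1, n_items=10,
            min_d=1, max_d=10):
    """answer_item(difficulty) -> True/False for the examinee's response."""
    difficulty = start_difficulty
    administered = []
    for _ in range(n_items):
        correct = answer_item(difficulty)
        administered.append((difficulty, correct))
        # Adapt: harder item after a hit, easier after a miss.
        if correct:
            difficulty = min(max_d, difficulty + step)
        else:
            difficulty = max(min_d, difficulty - step)
    return administered

# Toy examinee who can solve items up to difficulty 7.
history = run_cat(lambda d: d <= 7)
print(history)  # administered difficulties hover around the examinee's level
```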
Types of Psychological Test

Tests According to Purposes of Measurement
- Intelligence Tests: measure general potential to solve problems, adapt to changing circumstances, think abstractly, and profit from experience
- Aptitude Tests: measure the potential for learning or acquiring a specific skill
- Achievement Tests: measure previous learning
- Interest Tests: measure an individual's likes and dislikes, usually along occupational preferences
- Personality Tests: tests which measure typical behavior such as traits, disposition, temperament, and attitudes
  - Structured Personality Tests: provide self-report statements which require the examinee to choose between two or more alternative responses
  - Projective Tests: provide an ambiguous or vague stimulus to which the examinee gives an open-ended response

Examples of Personality Tests
- Structured/Objective Personality Test (true-or-false items)
- Projective Test (images)

Tests According to Administration
- Individual Tests: tests that can be given to only one person at a time
- Group Tests: tests that can be administered to more than one person at a time by a single examiner

Characteristics of a Good Test
- Standardized: the test has uniformity of procedure in administration and scoring, covering the exact materials employed, time limits, and oral instructions to subjects
- Objectivity: implies that scoring and interpretation are objective insofar as they are independent of the subjective judgment of the individual examiner
- Reliability: implies that there is consistency of scores obtained by the same person(s) when retested with an identical test or an equivalent form
- Validity: refers to the degree to which the test actually measures what it purports to measure
- Appropriateness of Difficulty Level: refers to the level of difficulty of each item and of the test as a whole, and its appropriateness to the population for whom the test is designed
- Ease of Administration

Uses of Tests
- Classification: assigning a person to one category rather than another
  ✔ Placement
  ✔ Screening
  ✔ Certification
  ✔ Selection
- Diagnosis and Treatment: determine the nature and the underlying cause of a person's abnormal behavior and classify those behavioral patterns within an accepted diagnostic system
- Self-knowledge: psychological tests can also supply a potent source of self-knowledge about an individual's intelligence and personality characteristics
- Program Evaluation: another use of psychological tests is the systematic assessment and evaluation of educational and social programs
- Research: tests also play a major role in both applied and theoretical branches of behavioral research

Reference Sources of Information
- test catalogues, manuals, reference volumes, journal articles, on-line databases

Parties Involved in Psychological Testing
- test developer, test user, testtaker, society at large, etc.

The Process of Assessment

Assessment Proper
- the test administrator (or examiner), or assessor, must be familiar with the test materials and procedures and must have at the test site all the materials needed to properly administer the test; these might include a stopwatch, a supply of pencils, and a sufficient number of test protocols
- Protocol: refers to the form, sheet, or booklet on which the testtaker's responses are entered

Assessment of People with Disabilities
- people with disabilities are assessed for exactly the same reasons that people with no disabilities are assessed: to obtain employment, to earn a professional credential, to be screened for psychopathology, and so forth

Accommodation
- may be defined as the adaptation of a test, procedure, or situation, or the substitution of one test for another, to make the assessment more suitable for an assessee with exceptional needs

Alternate Assessment
- an evaluative or diagnostic procedure or process that varies from the usual, customary, or standardized way a measurement is derived, either by virtue of some special accommodation made to the assessee or by means of alternative methods designed to measure the same variable(s)

Guidelines for Accommodating Test Takers with Disabilities

Of Tests and Testing

Common Questions in Psychological Testing and Assessment
- What is this patient's diagnosis?
- Is this person competent to stand trial?
- Who should be hired, transferred, promoted, or fired?
- Which individual should gain entry to this special program or be awarded a scholarship?
- Who shall be granted custody of the children?

Some Assumptions About Psychological Testing and Assessment

Assumption 1: Psychological Traits and States Exist
- Traits: any distinguishable, relatively enduring way in which one individual varies from another
- States: any distinguishable, less enduring way in which one individual varies from another
- Construct: an informed, scientific concept developed or constructed to describe or explain behavior
- Overt Behavior: refers to an observable action or the product of an observable action, including test- or assessment-related responses

Assumption 2: Psychological Traits and States Can Be Quantified
- to be able to measure a trait or a state, it needs to be carefully defined (operationally defined)
- Domain Sampling: refers to either (1) a sample of behaviors from all possible behaviors that could be indicative of a particular construct, or (2) a sample of items from all possible items that could conceivably be used to measure a particular construct

Assumption 3: Test-Related Behavior Predicts Non-Test-Related Behavior
- "the objective of the test is to provide some indication of other aspects of the examinee's behavior, not to predict the future test-related behavior (e.g. grid blackening or key-pressing)"

Assumption 4: Tests and Other Measurement Techniques Have Strengths and Weaknesses
- the test user should know the process

Assumption 5: Various Sources of Error Are Part of the Assessment Process
- Error: refers to a long-standing assumption that factors other than what a test attempts to measure will influence performance on the test (extraneous variables)
- Error Variance: the component of a test score attributable to sources other than the trait or ability measured (fluctuation of the score due to error)
- Classical Test Theory (CTT): also known as true score theory; assumes that each testtaker has a true score on a test that would be obtained but for the action of measurement error

Assumption 6: Testing and Assessment Can Be Conducted in a Fair and Unbiased Manner
- generally, all major test publishers strive to develop instruments that are fair when used in strict accordance with guidelines in the test manual

Assumption 7: Testing and Assessment Benefit Society
- "How else might a world without tests or other assessment procedures be different from the world today?"
What Constitutes a "Good Test"?

Norm-Referenced Testing and Assessment
- a method of evaluation and a way of deriving meaning from test scores by evaluating an individual testtaker's score and comparing it to the scores of a group of testtakers

Norms
- the test performance data of a particular group of testtakers that are designed for use as a reference when evaluating or interpreting individual test scores
- the mean and SD of the population serve as the point of reference (baseline)

Normative Sample
- a group of people whose performance on a particular test is analyzed for reference in evaluating the performance of individual testtakers
- obtained by getting a group of people (a sample) from a population; the sample represents the population

Norming
- refers to the process of deriving norms

User Norms or Program Norms
- included in the test manual; consist of descriptive statistics based on a group of testtakers in a given period of time (the same group), rather than norms obtained by formal sampling methods

Sampling to Develop Norms

Standardization
- the process of administering a test to a representative sample (normative sample) of testtakers for the purpose of establishing norms
- in developing norms for a standardized test, the test developer administers the test according to the standard set of instructions that will be used with the test

Test Tryout

Sampling
- the process of selecting the portion of the universe deemed to be representative of the whole population
- probability sampling (e.g. stratified random sampling)
- nonprobability sampling (e.g. convenience and purposive sampling)

Sample
- a portion of the universe of people deemed to be representative of the whole population

Types of Norms
- Percentile: the percentage of people whose scores fall below a particular raw score
- Percentage Correct: the number of items answered correctly
- Age Norms (age-equivalent scores)
- Grade Norms (grade level)
- Developmental Norms (include age norms and grade norms)
- National Norms (represent a country)
- National Anchor Norms (based on a pre-existing test; a new test for the same population)
- Subgroup Norms (e.g. male or female; per stratum)
- Local Norms (locality)
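To make the percentile norm concrete, the short Python sketch below (the normative sample is invented for illustration) computes a percentile rank as the percentage of the normative group scoring below a given raw score:

```python
# Percentile rank: percentage of the normative sample whose raw
# scores fall below a given raw score (the "Percentile" norm above).

def percentile_rank(norm_scores, raw_score):
    below = sum(1 for s in norm_scores if s < raw_score)
    return 100.0 * below / len(norm_scores)

normative_sample = [12, 15, 18, 18, 20, 22, 25, 27, 30, 33]  # toy data
print(percentile_rank(normative_sample, 25))  # 60.0 -> 60th percentile
```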
Criterion-Referenced Testing and Assessment
- may be defined as a method of evaluation and a way of deriving meaning from test scores by evaluating an individual's score with reference to a set standard (criterion: a standard on which a judgment or decision may be based), like a qualifying exam

Examples:
- obtaining a Psychometrician/Psychologist license
- earning the privilege of driving an automobile
- getting a scholarship
- obtaining a Latin honor

Reminder: "In selecting a test for use, the responsible test user does some advance research on the test's available norms to check on how appropriate they are for use with the targeted testtaker population."

Reliability

Learning Objectives
- define and understand the concept of reliability
- identify the different sources of error variance
- recognize the different ways to estimate a test's reliability
- calculate and interpret reliability coefficients
- identify ways to increase reliability
- recognize how to estimate true scores using the standard error of measurement

Reliability
- refers to the consistency of scores obtained by the same person when re-examined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions

Reliability Coefficient
- a measure of the accuracy of a test or measuring instrument obtained by measuring the same individuals twice and computing the correlation of the two sets of measures

The Concept of Reliability

Classical Test Theory
- assumes that each person has a true score that would be obtained if there were no errors in measurement

X = T + E

where X is the observed score, T is the true score, and E is the error.
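The X = T + E model can be illustrated with a short simulation (all numbers are illustrative, and purely random error is assumed): because random error fluctuates unpredictably around zero, observed scores scatter around the true score, and their average approaches it:

```python
# Classical Test Theory sketch: each observed score X is a true
# score T plus random error E (X = T + E). With purely random error,
# the errors average out across many repeated measurements.

import random

random.seed(1)
true_score = 50
# 1000 hypothetical re-administrations with random error (SD = 4).
observed = [true_score + random.gauss(0, 4) for _ in range(1000)]

mean_observed = sum(observed) / len(observed)
print(round(mean_observed, 2))  # close to 50: random error cancels out
```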
Measurement Error
- refers to, collectively, all of the factors associated with the process of measuring some variable other than the variable being measured (extraneous variables)
- Random Error: a source of error in measuring a targeted variable caused by unpredictable fluctuations and inconsistencies of other variables in the measurement process (e.g. temperature, mood)
- Systematic Error: a source of error in measuring a variable that is typically constant or proportionate to what is presumed to be the true value

Sources of Error Variance
- Test Construction: item sampling or content sampling; terms that refer to variation among items within a test as well as to variation among items between tests
- Test Administration: sources of error variance that occur during test administration may influence the testtaker's attention or motivation
  ✔ Test Environment
  ✔ Testtaker Variables (e.g. mood)
  ✔ Examiner-related Variables (the test administrator)
- Test Scoring and Interpretation: scorers and scoring systems are potential sources of error variance

Reliability Estimates

Test-Retest Reliability (Time Sampling)
- an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test
- Coefficient of Stability: an estimate of test-retest reliability when the interval between testings is greater than six months
- sources of error variance: significant life events or accidents during the interval

Limitations of Test-Retest Reliability
- Carryover Effect: occurs when the first testing session influences the results of the second session, which can affect the test-retest reliability of a psychological measure
- Practice Effect: a type of carryover effect wherein the scores on the second test administration are higher than they were on the first

Parallel Forms and Alternate Forms Reliability
- Parallel Forms Reliability (Equivalent Forms Reliability): uses one set of questions divided into two equivalent sets ("forms"), where both sets contain questions that measure the same construct, knowledge, or skill
- Alternate Forms Reliability: an estimate of the extent to which different forms of the same test have been affected by item sampling error, or other error; alternate forms simply means different versions of a test that have been constructed so as to be parallel

Limitations of Parallel Forms and Alternate Forms Reliability
- one of the most rigorous and burdensome assessments of reliability, since test developers have to create two forms of the same test
- practical constraints make it difficult to retest the same group of individuals
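A test-retest estimate is simply the correlation between two administrations. The sketch below (scores are invented for illustration) computes the Pearson r that would serve as the reliability coefficient:

```python
# Test-retest reliability sketch: correlate the same people's scores
# on two administrations of the same test.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [10, 12, 15, 18, 20, 23, 25, 28]   # first administration
time2 = [11, 12, 14, 19, 19, 24, 26, 27]   # retest of the same people
print(round(pearson_r(time1, time2), 3))   # ~0.99: highly stable scores
```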
Internal Consistency Estimate of Reliability
- also called an estimate of inter-item consistency, which pertains to a measure based on the correlations between different items on the same test (or the same subscale on a larger test)
- Inter-item Consistency: the degree of correlation among all the items on a scale
- Homogeneity: tests are said to be homogeneous if they contain items that measure a single trait
- Heterogeneity: the degree to which a test measures different factors

Split-Half Reliability Estimates
- obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once

Three (3) steps of estimating split-half reliability:
Step 1. Divide the test into equivalent halves (e.g. odd- and even-numbered items).
Step 2. Calculate a Pearson r between scores on the two halves of the test.
Step 3. Adjust the half-test reliability using the Spearman-Brown formula.

Spearman-Brown Formula
- a statistic which allows a test developer to estimate what the correlation between the two halves would have been if each half had been the length of the whole test

Kuder-Richardson Formula 20 (KR20)
- the statistic used for calculating the reliability of a test in which the items are dichotomous, or scored as 0 or 1 (like an achievement test)
- above 0.50 is a reasonable coefficient, and the test is considered homogeneous if the reliability coefficient is above 0.90
- KR21: may be used when the items all have the same level of difficulty

Coefficient Alpha (Cronbach's Alpha)
- the preferred statistic for obtaining an estimate of internal consistency reliability
- may be thought of as the mean of all possible split-half correlations, corrected by the Spearman-Brown formula
- appropriate for use on tests containing non-dichotomous items

Inter-rater Reliability Estimate
- consistency of scoring between the scorers
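The three-step split-half procedure above can be sketched as follows; the 6-item response matrix is an illustrative assumption, and Step 3 applies the Spearman-Brown correction for doubling test length, r_full = 2r / (1 + r):

```python
# Split-half reliability with the Spearman-Brown correction,
# following the three steps in the notes.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Rows = examinees, columns = items on a 6-item dichotomous test.
responses = [
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1],
]

# Step 1: divide the test into equivalent halves (odd vs even items).
odd = [sum(row[0::2]) for row in responses]
even = [sum(row[1::2]) for row in responses]

# Step 2: Pearson r between the two half-test scores.
r_half = pearson_r(odd, even)

# Step 3: Spearman-Brown adjustment to full-test length.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 3), round(r_full, 3))  # the corrected value is higher
```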
Summary of Reliability Estimates

The Nature of the Test
- Homogeneity vs Heterogeneity: homogeneous tests (measuring one factor or trait) have reasonably higher internal consistency than heterogeneous tests (measuring multiple factors or traits)
- Dynamic vs Static Characteristics: the best reliability estimate for a test measuring a dynamic characteristic (e.g. anxiety, happiness) is internal consistency; for a static characteristic, the test-retest or alternate/parallel forms method would be appropriate
- Speed vs Power Tests: a speed test has a strict time limit and items of the same difficulty level, so not everyone finishes answering; a power test allows generous time but has items of increasing difficulty level

How Reliable is Reliable?
- test-retest, alternate forms, split-half, and KR-20 can be utilized to estimate reliability
- for basic research, a reliability coefficient of 0.70 to 0.80 is acceptable
- for clinical research, 0.90 or better is acceptable

What to do about Low Reliability?
✔ Increase the number of items
✔ Factor analysis and item analysis

True Score Model of Measurement and Alternatives to It

Generalizability Theory
- a modification of domain sampling theory which indicates that a person's test scores may vary from testing to testing because of variables in the testing situation
- considers facets: specific aspects of a trait or construct

Item Response Theory (IRT)
- a way to analyze responses to tests or questionnaires with the goal of improving measurement accuracy and reliability
- items increase in level of difficulty, and the test adjusts depending on the level of difficulty that the test taker can answer

Standard Error of Measurement (SEM)
- also known as the standard error of scores
- provides a measure of the precision of an observed test score
- provides an estimate of the amount of error inherent in an observed score of measurement
- the higher the SEM, the lower the reliability, and vice versa
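The inverse relationship between SEM and reliability noted above follows directly from the standard psychometric formula SEM = SD * sqrt(1 - r). A quick worked example (the SD and reliability values are illustrative):

```python
# Standard error of measurement: SEM = SD * sqrt(1 - r), where SD is
# the test's standard deviation and r is its reliability coefficient.
# As r rises toward 1, SEM shrinks toward 0, matching the note that
# higher SEM means lower reliability.

import math

sd = 15            # e.g. an IQ-style scale
reliability = 0.91
sem = sd * math.sqrt(1 - reliability)
print(round(sem, 2))  # 4.5: an observed score of 100 is roughly 100 +/- 4.5
```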
VALIDITY

Validity
- a judgment or estimate of how well a test measures what it purports to measure in a particular context
- Validation: the process of gathering and evaluating evidence about validity

Face Validity
- relates more to what a test appears to measure than to what the test actually measures
- a judgment concerning how relevant the test items appear to be
- concerns the presentation or physical appearance of the psychological test

Content Validity
- describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample ("Does your item reflect the variable being measured?")
- Test Blueprint: a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, and so forth

Content Validity Ratio (CVR)
- developed by C.H. Lawshe
- a method of gauging agreement among raters or judges regarding how essential a particular item is (essential; useful but not essential; not necessary)
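Lawshe's CVR has a simple closed form, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the total panel size. The sketch below (panel counts are invented) applies it:

```python
# Content validity ratio (CVR) via Lawshe's formula:
# CVR = (n_e - N/2) / (N/2). CVR ranges from -1 (nobody says
# "essential") through 0 (exactly half) to +1 (everybody).

def cvr(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

print(cvr(9, 10))   # 0.8 -> strong agreement the item is essential
print(cvr(5, 10))   # 0.0 -> only half the panel rates it essential
```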
Criterion-Related Validity
- a judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest, the measure of interest being the criterion
- Criterion: the standard against which a test or a test score is evaluated

Predictive Validity
- an index of the degree to which a test score predicts some criterion measure (the test is compared with a future criterion)
- Incremental Validity: the degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use

Concurrent Validity
- an index of the degree to which a test score is related to some criterion measure obtained at the same time (the test is compared with an immediately available criterion)
- e.g. comparing a new test with an available test

Construct Validity
- a judgment about the appropriateness of inferences drawn from test scores regarding individual standing on a variable
- Construct: an informed, scientific idea developed or hypothesized to describe or explain behavior
- evidenced, for example, by an expected change or a significant difference between groups
- (illustrated with an image in the original slides)

Validity, Bias, and Fairness

Test Bias
- a factor inherent in a test that systematically prevents accurate, impartial measurement; systematic variation in prediction

Rating Error
- a judgment resulting from the intentional or unintentional misuse of a rating scale
- Rating: a numerical or verbal judgment that places a person or an attribute along a continuum (a rating scale)
  ▪ Leniency Error (generous in giving scores)
  ▪ Severity Error (the opposite of leniency)
  ▪ Central Tendency Error (unsure raters give middle scores)
  ▪ Halo Effect (a positive impression influences the evaluation)

Test Fairness
- the extent to which a test is used in an impartial, just, and equitable way

UTILITY
- the usefulness of a particular test

Utility Analysis
- a family of techniques that entail a cost-benefit analysis designed to yield information relevant to a decision about the usefulness of a tool of assessment
- evaluates whether the benefits of using a test outweigh the costs
- helps in making decisions regarding whether:
  ✔ one test is preferable to another test for use for a specific purpose
  ✔ one tool of assessment is preferable to another tool of assessment for a specific purpose
  ✔ the addition of one or more tests to one or more tests that are already in use is preferable for a specific purpose
  ✔ no testing or assessment is preferable to any testing or assessment

Some Frequently Raised Utility Issues
✔ cost efficiency (price of the test)
✔ savings in time
✔ comparative utility (how useful)
✔ clinical utility (diagnostic assessment)
✔ diagnostic utility
✔ personnel selection

Factors That Affect a Test's Utility
- Psychometric Soundness: a reliable and valid test is a useful test; higher criterion-related (predictive) validity means higher utility
- Costs: in the context of test utility, refers to disadvantages, losses, or expenses in both economic and noneconomic terms; expenditures associated with testing or not testing, such as funds and facilities
- Benefits: refers to profits, gains, or advantages; some benefits that can be acquired from a well-designed testing program:
  ✔ an increase in the quality of workers' performance
  ✔ an increase in the quantity of workers' performance
  ✔ a decrease in the time needed to train workers
  ✔ a reduction in the number of accidents
  ✔ a reduction in worker turnover

How Is Utility Analysis Conducted?

Expectancy Data
- an indication of the likelihood (a probability) that a testtaker will score within some interval of scores on a criterion measure; an interval that may be categorized as "passing," "acceptable," or "failing"
- linear, with high predictive ability
- (an example expectancy table was shown in the original slides)

Taylor-Russell Tables
- provide an estimate of the extent to which inclusion of a particular test in the selection system will improve selection
- also provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs, given different combinations of three variables:
  - Test Validity
  - Selection Ratio (the number of available positions divided by the total number of applicants)
  - Base Rate (the percentage of people hired under the existing system for a particular position who were considered satisfactory)
- (an example Taylor-Russell table was shown in the original slides)

Naylor-Shine Tables
- used in determining the utility of a particular test by obtaining the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures

Hits and Misses
- Hit: a correct classification; a good prediction
- Miss: an incorrect classification; a mistake; a wrong prediction
- Hit Rate: the proportion of people that an assessment tool accurately identifies as possessing or exhibiting a particular trait, ability, behavior, or attribute
- Miss Rate: the proportion of people that an assessment tool inaccurately identifies
- False Positive: a specific type of miss whereby an assessment tool falsely indicates that the testtaker possesses or exhibits a particular trait, ability, behavior, or attribute
- False Negative: a specific type of miss whereby an assessment tool falsely indicates that the testtaker does not possess or exhibit a particular trait, ability, behavior, or attribute
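Hits, misses, false positives, and false negatives can be tallied directly from selection decisions paired with criterion outcomes. The following sketch uses invented decision data to compute the hit rate:

```python
# Hit-rate sketch: compare each predicted outcome against the actual
# criterion outcome for ten screened applicants (data are invented).

# (predicted_success, actual_success) pairs
decisions = [(True, True), (True, True), (True, False),      # 1 false positive
             (False, False), (False, False), (False, True),  # 1 false negative
             (True, True), (False, False), (True, True), (False, False)]

hits = sum(1 for pred, actual in decisions if pred == actual)
false_pos = sum(1 for pred, actual in decisions if pred and not actual)
false_neg = sum(1 for pred, actual in decisions if not pred and actual)

print(hits / len(decisions))   # 0.8 hit rate: proportion correctly classified
print(false_pos, false_neg)    # 1 1: the two kinds of misses
```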
performance (in dollars) of employees - 70% (like in school) ZM = mean (standardized) score on the test for selected applicants The Cut Score In Use C = cost of testing Multiple Cut Score Some Practical Considerations -the use of two or more cut scores with reference to one predictor for the purpose of categorizing The pool of job applicants testtakers -there is a limitless supply of potential - like in CvSU: employees looking for employment 1.00-1.45 (Full Academic Scholar) -some jobs have require unique skills or demand great sacrifice that there are relatively few 1.46-1.75 (Partial) people who would even apply, let alone be Multiple Hurdle selected - a cut score is in place for each predictor used -may also vary depending on economic climate - most used in company -how many people would actually accept the job offer? - in every process there is an existing cut score Some Practical Considerations The Cut Score In Use Compensatory Model of Selection understand the relevance of Mental Status Examination (MSE) -an assumption is made that high scores on one attribute can “balance out” or compensate for identify the different areas that can be low scores on another attribute. evaluated in a Mental Status Examination (MSE) Mental Status Examination (MSE) Methods for Setting Cut Scores is the part of the clinical assessment that The Angoff Method describes the sum total of the examiner’s -devised by William Angoff, this method for observations and impressions of the setting fixed cut scores can be applied to client at the time of the interview personnel selection tasks as well as to questions What are included in the MSE? regarding the presence or absence of a particular trait, attribute, or ability I. General Descriptions - personnel selection like a checker A. Appearance Methods for Setting Cut Scores description of the client’s appearance and overall physical impression The Known Groups Method conveyed to the clinician -collection of data on the predictor of interest examples of items in the appearance from groups known to possess, and not to category include body type, posture, possess a trait, attribute, or ability of interest. poise, clothes, grooming, hair, and nails - using the pre-existing data as basis in creating anxiety indicators/signs: moist hands, a cut off score to newly developed test perspiring forehead, tense posture, wide Methods for Setting Cut Scores eyes IRT – Based Methods What are included in the MSE? - cut scores are typically set based on testtakers’ B. Behavior and Psychomotor Activity performance across all the items on the test it refers to both the quantitative and -some # of items on the test must be marked qualitative aspects of the client’s motor “correct” in order for the test to be passed behavior Methods for Setting Cut Scores it includes mannerisms, tics, gestures, twitches, stereotyped behavior, Bookmark Method echopraxia, hyperactivity, agitation, -a standard setting method used to establish one combativeness, flexibility, rigidity, and or more cut scores associated with interpretable agility levels of performance on an assessment. What are included in the MSE? - increasing difficulty C. Attitude Toward Examiner Mental Status Examination the client’s attitude toward the examiner Learning Objectives can be described as cooperative, friendly, attentive, interested, frank, seductive, defensive, contemptuous, perplexed, apathetic, hostile, playful, III. 
Mental Status Examination (MSE)

Learning Objectives
- understand the relevance of the Mental Status Examination (MSE)
- identify the different areas that can be evaluated in a Mental Status Examination (MSE)

Mental Status Examination (MSE)
- the part of the clinical assessment that describes the sum total of the examiner's observations and impressions of the client at the time of the interview

What is included in the MSE?

I. General Description

A. Appearance
- a description of the client's appearance and the overall physical impression conveyed to the clinician
- examples of items in the appearance category include body type, posture, poise, clothes, grooming, hair, and nails
- anxiety indicators/signs: moist hands, perspiring forehead, tense posture, wide eyes

B. Behavior and Psychomotor Activity
- refers to both the quantitative and qualitative aspects of the client's motor behavior
- includes mannerisms, tics, gestures, twitches, stereotyped behavior, echopraxia, hyperactivity, agitation, combativeness, flexibility, rigidity, and agility

C. Attitude Toward Examiner
- the client's attitude toward the examiner can be described as cooperative, friendly, attentive, interested, frank, seductive, defensive, contemptuous, perplexed, apathetic, hostile, playful, ingratiating, evasive, or guarded; any number of adjectives can be used
- the level of rapport should also be recorded

II. Mood and Affect

A. Mood
- statements about the client's mood should include depth, intensity, duration, and fluctuations
- common adjectives used to describe mood include depressed, irritable, anxious, angry, euphoric, empty, guilty, awed, frightened, and other adjectives that describe the subjective state of the client

B. Affect
- what the clinician infers from the client's facial expression, including the amount and the range of expressive behavior
- it can be constricted, blunted, or flat
- in a normal range of affect, there is variation in facial expression, tone of voice, use of hands, and body movements

C. Appropriateness
- the appropriateness of the patient's emotional responses can be considered in the context of the subject matter the patient is discussing

III. Speech
- can be described in terms of its quantity, rate of production, and quality
- may be described as talkative, silent, unspontaneous, or normally responsive to cues from the interviewer
- speech may be rapid or slow, pressured, hesitant, dramatic, monotonous, loud, or whispered

IV. Perceptual Disturbances
- perceptual disturbances such as hallucinations and illusions may be experienced in reference to the self or the environment
- sample questions: "Have you ever heard voices or other sounds that no one else could hear or when no one else was around?", "Have you experienced any strange sensations in your body that others do not seem to experience?", "Have you ever had visions or seen things that other people do not seem to see?"
V. Thought

A. Process or Form of Thought
- either an overabundance or a poverty of ideas
- rapid thinking, which, if carried to the extreme, is called flight of ideas; thinking may also be slow or hesitant
- Blocking: an interruption of the train of thought before an idea has been completed; an inability to recall what was being said or intended to be said
- Coherence: whether the client's thoughts are organized well enough that they make sense to the listener
- Logic: are the conclusions the client reaches based on sound or flawed logic?

B. Content of Thought
- involves the actual statements, themes, and beliefs presented by the client
- delusions, preoccupations, obsessions, compulsions, phobias, plans, intentions, and recurrent ideas about homicide or suicide

VI. Sensorium and Cognition
- can be assessed using the Mini-Mental Status Examination (orientation, memory, calculation, reading and writing capacity, visuospatial ability, and language)

A. Alertness and Level of Consciousness
- Clouding of consciousness: an overall reduced awareness of the environment
- is the client responsive to environmental stimuli, and can he/she sustain goal-directed thinking or behavior?

B. Orientation
- clinicians must determine whether the client can give his/her name, the approximate date and time of day, and where he/she currently is

C. Memory
- divided into four areas: remote memory, recent past memory, recent memory, and immediate retention and recall

D. Concentration and Attention
- cognitive disorder, anxiety, depression, and internal stimuli such as auditory hallucinations may contribute to impaired concentration
- subtracting serial 7s from 100 is a simple task that requires both concentration and cognitive capacities to be intact; was the patient able to subtract 7 from 100 and keep subtracting 7s?

E. Capacity to Read and Write
- the client should be asked to read a sentence (for example, "Close your eyes") and then do what the sentence says
- you can also ask your client to write a simple but complete sentence

F. Visuospatial Ability
- you can ask your client to copy a figure, such as a clock face or interlocking pentagons

G. Abstract Thinking
- the ability to deal with concepts
- can the patient explain similarities, such as those between an apple and a pear, or those between truth and beauty, or other concepts?

H. Fund of Information and Intelligence
- if possible cognitive impairment is suspected, does the client have trouble with mental tasks, such as counting the change after a purchase of $6.37? If the task is too difficult, ask easier items and determine whether they can be solved.

VII. Impulse Control
- is the client capable of controlling sexual, aggressive, and other impulses?
- assessment of impulse control is essential in measuring the client's potential danger to self and others

VIII. Judgment and Insight
- Judgment: the ability to make good decisions concerning the appropriate thing to do in various situations
- Insight: the degree of awareness and understanding that one is ill

Levels of Insight
1. Complete denial of illness
2. Slight awareness of being sick and needing help, but denying it at the same time
3. Awareness of being sick, but blaming it on others, on external factors, or on an organic factor
4. Awareness that the illness is due to something unknown in the client
5. Intellectual insight and true emotional insight

IX. Reliability
- the mental status part of the report concludes with the clinician's impressions of the client's reliability and capacity to report his or her situation accurately
- it includes an estimate of the assessor's impression of the client's truthfulness or veracity
Interviewing Techniques

Interview
- a method of gathering information by talk, discussion, or direct questions

Case of Maria
Maria was being considered for a high-level public relations position with the computer firm for which she worked. The job duties would require her to interact with a variety of people, ranging from heads of state and corporation presidents to rank-and-file employees and union officials. In addition, the position would involve making formal policy statements for news media. Any poorly phrased statement or inappropriate reaction on her part could result in adverse publicity, which could cost the firm millions of dollars. The application process therefore involved an elaborate testing procedure, including two lengthy interviews. The first was with the firm's personnel selection officer, and the second was with the firm's clinical psychologist.

Structured Interview
- the interviewer asks a specific set of questions

Case of Maria (structured interview with the personnel selection officer):

Officer: I've read your application form and have gone over your qualifications. Would you now please outline your educational experiences, beginning with high school?

Maria: I graduated from high school in June 2001 with an emphasis in history and social studies. I began attending college in September 2001. I graduated in June 2006 with a major in psychology and a minor in business management. I then entered the university's graduate program in business. I earned my master's degree in business administration in 2008.

Officer: What is your work history? Begin with your first full-time employment.

Other questions in the structured interview included:
- How do your education and experience relate to the job for which you are applying?
- What educational experiences have you had that might help you function in the job for which you are applying?
- What employment experiences have you had that might help you function in the job for which you are applying?
- Identify the deficiencies in your educational and work experiences.

Unstructured Interview
- there are no specific questions or guidelines for the interviewer to follow

Case of Maria (unstructured interview with the clinical psychologist):

Psychologist: Maria, why don't you tell me a little bit about yourself?

Maria: Where do you want me to begin?

Psychologist: Oh, it doesn't matter. Just tell me about yourself.

Maria: I graduated from high school in June of 2001. I majored in history and social studies.

Psychologist: Yes, I see.

Maria: I then attended college and finally finished graduate school in 2008. My master's degree should help me assume the duties of the new position.

Psychologist: You feel that your master's degree is a useful asset in your application.

Maria: Yes, my graduate experiences taught me how to work with others.

Psychologist: With these graduate experiences, you learned the art of working with other people.

Maria: Well, I guess I didn't learn it all in graduate school. I've always managed to get along with others.

Psychologist: As far as you can tell, you work pretty well with people.

Maria: That's right. As the oldest of four children, I've always had the responsibility for supervising others. You know what I mean?

Psychologist: Being the oldest, you were given extra responsibilities as a child.

Maria: Not that I resented it. Well, maybe sometimes. It's just that I never had much time for myself.

Psychologist: And having time for yourself is important to you.

Maria: Yes, of course it is. I guess everybody needs some time alone.

Psychologist: As a person who deals with others all day long, you must treasure those few moments you have to yourself.

Maria: I really do. Whenever I get a chance, I like to drive up to the lake all by myself and just think.

Psychologist: Those moments are precious to you.

Semi-Structured Interview
- a type of interview in which the interviewer asks only a few predetermined questions, while the rest of the questions are not planned in advance
Clinical Interview
- a dialogue between psychologist and patient that is designed to help the psychologist diagnose and plan treatment for the patient

Intake Interview
- the initial interview with a client by a therapist or counselor, conducted to obtain both information regarding the issues or problems that have brought the client into assessment, therapy, or counseling and preliminary information regarding personal and family history

Panel Interview
- a type of interview which includes one applicant and several interviewers, often representatives of different departments within a company, such as the hiring manager and a member of the human resource recruitment team

Group Interview
- a screening process where you interview multiple candidates at the same time

The Interviewer

General Skills
✔ Quieting yourself
✔ Being self-aware
✔ Developing positive working relationships

Specific Behaviors

Active Listening
- fully concentrating on what is being said rather than just passively 'hearing' the message of the speaker

Attending Behaviors
- demonstrate that you respect a person and are interested in what he/she has to say
✔ Eye contact
✔ Body language
✔ Vocal qualities
✔ Verbal tracking
✔ Referring to the client by proper name

Components of the Interview

Rapport
- a positive and comfortable relationship between interviewer and client

Techniques in Interviewing
- Squarely face your client
- Open posture
- Lean forward
- Eye contact
- Relaxed demeanor

Directive Style
- interviewers get exactly the information they need by asking clients specifically for it

Non-Directive Style
- the interviewer allows the client to determine the course of the interview

Closed-Ended Questions
- questions that can be answered with "yes" or "no" or a specific response
- used to acquire specific information from a client, or to shut down an overly talkative client

Open-Ended Questions
- questions that cannot be answered briefly

Other Things to Consider in an Interview
- note taking
- audio or video recording
- interview room
- confidentiality

Reciprocal Nature of Interviewing
- all interviews involve mutual interaction whereby participants are interdependent; that is, they influence each other
- interview participants also affect each other's mood
- Social Facilitation: the tendency for people to behave like the models around them; interviewers should provide a relaxed and safe atmosphere through social facilitation

Principles of Effective Interviewing

Proper Attitudes
- warmth, genuineness, acceptance, understanding, openness, honesty, and fairness

Effective Responses (Responses to Keep the Interaction Flowing)
- Transitional Phrase: "Yes" or "I see"
- Verbatim Playback: repeats the interviewee's exact words
- Paraphrasing and Restatement: repeats the interviewee's response using different words
- Summarizing: pulls together the meaning of several responses
- Clarification Response: clarifies the interviewee's response
- Empathy and Understanding: communicates understanding
- Reflection of Feelings: a statement intended to highlight the feelings or attitudes implicitly expressed in a client's communication and to draw them out so that they can be clarified
- Confrontation: similar to clarification, but focuses on apparently contradictory information provided by clients
- Effective Probing Statements (shown as an image in the original slides)

Responses to Avoid
- Judgmental or Evaluative Statements (good, bad, excellent, terrible, disgusting, disgraceful, stupid)
- Probing questions that start with "Why"
- Hostile Statements: direct anger toward the interviewee
- False Reassurance: "Don't worry, everything will be alright"

Learning Check
Read the following list of statements below, and provide a response that will keep the interaction flowing.
1. I hate school.
2. My boyfriend is a jerk.
3. Most people are liars.
4. We were ahead until the last minute of the game.
5. All men are cheaters.
6. I hope I passed the test.
