COUN 6070 Midterm Exam Study Guide PDF

Summary

This document is a study guide for a COUN 6070 midterm exam. It covers topics like assessment procedures, different types of tests, and measurement scales. The guide also includes information on the history of assessments and relevant legislation.

Full Transcript

COUN 6070 Midterm Exam Study Guide
*This is meant to be a general overview of topics that may appear on the midterm

Chapter I
 Define Assessment
   o Procedure for gathering client information to facilitate clinical decisions and provide information to clients. Test information is integrated with information from other sources for a broader representation.
 Four Broad Steps in Assessing Clients
   o Assessing the client – Initial intake
   o Conceptualizing and defining the problem – Watch for confirmatory bias
   o Selecting and implementing effective treatments – Consistent monitoring
   o Evaluating the counseling – Documentation
 Types of Assessment Tools
   o Standardized vs. Nonstandardized
      ▪ Standardized: Fixed instructions for administering and scoring an instrument
      ▪ Nonstandardized: Content may not stay constant or may not have been tested on a representative sample; may lack fixed administration instructions, such as observing a client
   o Objective vs. Subjective
      ▪ Objective: Predetermined method for scoring the assessment – no judgments are made by the scorer
      ▪ Subjective: Requires the scorer to make a professional judgment
   o Cognitive vs. Affective
      ▪ Cognitive: Assess cognitive abilities such as memory, processing, and abstract and concrete thinking – an IQ test, for example
      ▪ Affective: Assess personality/temperament, values, etc.
 History of Assessments
   o First edition of defined standards published by the APA in the 1950s – now the Standards for Educational and Psychological Testing
   o Racism visible in early assessments – one of the first examples came in 1895, when a Psychological Review article asserted that Native American and Black participants had faster reaction times than White participants, treating quick, reflexive responding as a mark of lower intellect
   o FERPA 1974 – Family Educational Rights and Privacy Act; protects records and gives individuals the right to access their records
   o No Child Left Behind Act 2001 – since replaced by the Every Student Succeeds Act; sets standards all schools must meet so that students reach the same level by graduation, but one drawback is that it encourages teachers to teach to the test

Chapter II
 Measurement
   o The application of specific procedures for assigning numbers to objects
 Measurement Scales
   o Nominal: Numbers name mutually exclusive groups, such as 1 = freshman, 2 = sophomore, 3 = junior, 4 = senior
   o Ordinal: Degree of magnitude indicated by rank ordering, such as highest, middle, and lowest scores
   o Interval: Units are in equal intervals, so the distance from 70 to 90 equals the distance from 110 to 130
   o Ratio: Interval data plus a meaningful zero, such as miles run – 0 miles means none, 1 mile in 8 minutes, 2 miles in 16 minutes, etc.
 Norm-Referenced vs. Criterion-Referenced (Benefits & Drawbacks)
   o Norm-referenced: Scores compared with the scores of other individuals who took the same assessment, such as grading a student's exam on a curve against their peers, the ACT, IQ tests, or personality inventories
      ▪ Drawbacks: Who defines the norm? May not be culturally responsive
   o Criterion-referenced: Scores compared with an established standard or criterion, which tends to include a mastery component, such as needing 80% to pass a class, the NCE, a college writing entrance or exit exam, or a driver's license test
      ▪ Drawbacks: Ignores norms and assumes everyone starts at the same level
 Frequency Distribution
   o Scores (x) on the first line, frequency (f) of people receiving each score on the line beneath, summarized in a chart
   o N = number of participants
 Measures of Central Tendency
   o Mean (average), median (middle), and mode (most frequent) – see the sketch below
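A minimal sketch (the scores are invented for illustration) of building a frequency distribution and computing the three measures of central tendency:

```python
# Hypothetical scores, invented for illustration.
from collections import Counter
from statistics import mean, median, mode

scores = [70, 75, 80, 80, 85, 85, 90, 90, 90, 100]

freq = Counter(scores)   # maps each score (x) to its frequency (f)
N = len(scores)          # N = number of participants

print("x:", sorted(freq))                      # 70 75 80 85 90 100
print("f:", [freq[x] for x in sorted(freq)])   # 1  1  2  2  3  1
print("N =", N)                                # 10
print("mean =", mean(scores))      # average: 84.5
print("median =", median(scores))  # middle score: 85.0
print("mode =", mode(scores))      # most frequent score: 90
```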
 Measures of Variability
   o Range: Spread between the highest and lowest score (highest minus lowest)
      ▪ Drawbacks: Determined entirely by the two extreme scores, so outliers distort it and it says nothing about where the majority of scores lie
   o Deviation: Subtract the mean from each score; the deviations always sum to zero
      ▪ d = X − M, and Σ(X − M) = 0
   o Deviation²: Square each deviation separately, then add them all together
      ▪ Σ(X − M)²
   o Variance: The average of the squared deviations
      ▪ S² = Σ(X − M)² / N
   o Standard Deviation: The square root of the variance (see the sketch below)
      ▪ S = √(Σ(X − M)² / N)
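A minimal sketch (data invented) walking the variability steps in the guide's order – deviations, squared deviations, variance, then standard deviation:

```python
# Hypothetical scores, invented for illustration.
scores = [70, 80, 80, 90, 100]
N = len(scores)

M = sum(scores) / N                     # mean = 84.0
deviations = [x - M for x in scores]    # X - M for each score
squared = [d ** 2 for d in deviations]  # (X - M)^2

print(sum(deviations))        # 0.0 -- deviations always sum to zero
variance = sum(squared) / N   # average squared deviation = 104.0
sd = variance ** 0.5          # square root of the variance, ~10.2
print(variance, sd)
```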
 Normal Distribution Percentages
   o 68% of scores fall within one standard deviation of the mean
   o 95% within two standard deviations
   o 99.7% within three standard deviations
 Positive/Negative Skews
   o Positive: Most of the scores sit on the lower end, with the tail stretching toward high scores (MMM – mean, median, mode – fall lower)
   o Negative: Most of the scores sit on the higher end, with the tail stretching toward low scores (MMM fall higher)
 Standard Scores
   o Raw: Unadjusted score on an instrument before it has been converted
   o Z score: Number of standard deviations above or below the mean
      ▪ Subtract the mean from the raw score and divide by the standard deviation
      ▪ Fixed mean of 0 and standard deviation of 1
      ▪ Z = (Raw − M) / SD
   o T score: Z scores converted so they are positive and whole
      ▪ Multiply the z score by 10 and add 50
      ▪ Fixed mean of 50 and standard deviation of 10
      ▪ T = 50 + 10(z)
   o Stanine: "Standard nine," ranges from 1–9
      ▪ Fixed mean of 5 and standard deviation of ~2
   o Percent: Percentage of the group who received a given score
      ▪ Number of people who received the score divided by the total number of people
   o Percentile: Percentage of people in the norming group who scored at or below a given raw score
      ▪ Add the percentages of people at each score at or below it
      ▪ The sketch below converts one raw score through z, T, and percentile
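A minimal sketch (scale values assumed: mean 100, SD 15, like an IQ-style test) converting one raw score to a z score, a T score, and an approximate percentile on a normal distribution:

```python
from statistics import NormalDist  # Python 3.8+

raw, M, SD = 115, 100, 15   # hypothetical raw score and scale

z = (raw - M) / SD          # 1.0 -> one SD above the mean
T = 50 + 10 * z             # 60.0
pct = NormalDist().cdf(z) * 100   # ~84th percentile

# 84th percentile matches the normal-curve rule above: 50% below
# the mean plus half of 68% between the mean and +1 SD.
print(z, T, round(pct))     # 1.0 60.0 84
```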
Chapter III
 Classical Test Theory
   o Reliability
      ▪ The degree to which an instrument is free from error – all about measuring consistency!
      ▪ Often calculated from the degree of consistency between two sets of scores
   o Reliability coefficient: Estimates how much of the variance is TRUE variance and how much is ERROR
   o Example: A reliability coefficient of 0.90 tells us that 90% of the variance is true, and the ratio of error variance to observed variance is 0.10, or 10%
      ▪ Reliability = 1.00 − error proportion
      ▪ 1.00 − 0.10 (error) = 0.90 (reliability coefficient)
 Systematic Error vs. Random Error
   o Systematic: Consistent errors everyone experiences, such as a typo on a test
   o Random: Inconsistent errors that do not impact everyone, such as test anxiety
 Correlation Coefficient & Pearson Product-Moment Correlation
   o Correlation coefficient: A number from −1.00 to 1.00 indicating the relationship between two sets of data – the closer to ±1, the stronger the relationship; exactly 0 means no relationship
   o For reliability, a coefficient around 0.9 is impactful and around 0.7 is acceptable
   o Pearson product-moment correlation: Convert each individual's scores to z scores, multiply each person's first and second z scores, add the products, and divide by the number of individuals (see the sketch below)
      ▪ r = Σ(zx · zy) / N
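A minimal sketch (paired data invented) of the Pearson steps exactly as listed – z scores, products, sum, divide by N:

```python
from statistics import mean, pstdev  # population SD, matching the /N formulas above

x = [2, 4, 6, 8, 10]   # hypothetical first set of scores
y = [1, 3, 5, 9, 12]   # hypothetical second set of scores
N = len(x)

zx = [(v - mean(x)) / pstdev(x) for v in x]   # convert each score to z
zy = [(v - mean(y)) / pstdev(y) for v in y]

r = sum(a * b for a, b in zip(zx, zy)) / N    # r = sum(zx * zy) / N
print(round(r, 2))   # 0.99 -- a strong positive relationship
```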
 Types of Reliability Coefficients
   o Test-Retest: Correlate performance on a first and second administration of the same instrument by the same group of individuals – variation in scores should reflect random error, because everything else should be the same
   o Alternate/Parallel Forms: Correlate performance on one form with performance on an alternate/parallel form
   o Split-Half: Internal reliability; the instrument is administered once and then split in half, typically by odd and even questions, by content area, or randomly to avoid problems
 Standard Error of Measurement (SEM): An estimate of the range of scores that would be obtained if someone took an instrument over and over again – where the individual's true score would fall! Increases as reliability decreases (see the sketch below)
   o SEM = SD × √(1 − r)
 Standard Error of Difference: Used to compare certain aspects within a client, such as more aptitude in reading versus math
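A minimal sketch (SD and reliability values assumed) of the SEM formula and the band where a true score likely falls; by the normal-curve percentages above, the ±1 SEM band covers about 68% of repeated administrations:

```python
SD = 10      # hypothetical instrument standard deviation
r = 0.91     # hypothetical reliability coefficient

SEM = SD * (1 - r) ** 0.5    # SD x sqrt(1 - r) = 3.0

observed = 52
band = (observed - SEM, observed + SEM)   # ~68% chance the true score is here
print(SEM, band)   # 3.0 (49.0, 55.0)
```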
Chapter IV
 Validity vs. Reliability: Reliability is consistency; validity is whether the instrument measures what it intends to
   o Validity: The extent to which an instrument measures what it says it intends to measure – there is no clear-cut decision that an instrument "has" validity, and it is the uses of an instrument, not the instrument itself, that are validated
   o To establish validity, determine the construct you are measuring (anxiety? depression?), then determine whether the instrument captures all domains of the construct (construct representation) and measures only that construct (construct relevance)
      ▪ Construct underrepresentation: The degree to which an instrument fails to capture significant aspects of the construct
      ▪ Construct irrelevance: The degree to which scores or results are affected by factors extraneous to the instrument's intended purpose
 Validity Types
   o Content-related: How well do test items represent the domain of knowledge the test measures?
   o Criterion-related: How well does the test predict an individual's performance?
   o Construct: Does the test measure the construct it is designed to measure?

Chapter V
 Knowledge of the ACA Code of Ethics as It Relates to Assessment Procedures
 Client & Counselor Rights & Responsibilities
   o Client rights: To be assessed with instruments that meet current standards; to basic information about the instrument, how results will be used, and confidentiality; to know the consequences of not taking an assessment; to know how results will be disseminated; to informed consent obtained from the client or a legal representative (except when testing is mandated by law, performed as part of school activities, or consent is implied); to receive information in an accessible manner; and to know policies related to retesting
   o Client responsibilities: Be prepared to take the test, follow the directions of the test administrator, represent themselves honestly, protect the security of the test, and request accommodations as warranted
   o Counselor rights and responsibilities:
      ▪ 1. Validity of interpretation: Ensure requirements, codes, and standards are met for administration and interpretation; be knowledgeable about the assessment's manual and research; be clear about the reasoning for using it; be aware of potential scoring errors or misuses and take appropriate action; do not interpret results in isolation; inform clients of available accommodations and ensure they are appropriate
      ▪ 2. Disseminating information: Inform others of appropriate uses, how the instrument will be administered, factors associated with scoring, record maintenance, and to whom and under what circumstances results may be released; provide results in a timely manner with supplemental information to minimize misinterpretation; inform the client of their rights if concerns about a result's integrity arise; inform the client of any option to retake an assessment and the relevant policies; ensure protection of the client's and the institution's privacy
      ▪ 3. Instrument security and protection of copyrights: Keep assessment content, scoring, and interpretation information secure; do not reproduce or create electronic versions of copyrighted materials without consent
      ▪ The counselor is ultimately responsible for the entire process!
 Legislation Linked to Assessments
   o Counselors must adhere to law arising from legislation (governmental bodies passing laws) and litigation (law formed by judicial rulings interpreting the US Constitution, federal law, state law, and common law in a particular case); litigation matters in areas of assessment where judicial decisions are influential
   o Civil Rights Act of 1991: Outlaws discrimination in employment, requires hiring procedures to be connected to job duties, and bans separate norms in employment tests
   o Americans with Disabilities Act Amendments Act of 2008 (ADAAA): Allows more individuals to qualify as disabled and be protected; bans discrimination in employment and in access to services on the basis of disability; tests must be administered to individuals with disabilities with reasonable accommodations
   o Individuals with Disabilities Education Act of 2004 (IDEA): Each state must have a system to evaluate children who may have a disability and learn the nature of services the child might need to develop an IEP; evaluation should be based on assessment tools, with attention to issues of race, culture, and language
   o Health Insurance Portability and Accountability Act of 1996 (HIPAA): Clients must be notified of how their information might be used and disclosed and how to access it; rights include requesting restrictions on use and disclosure, receiving confidential communications of protected information, inspecting and copying protected information, amending information, and receiving an accounting of disclosures; counselors are ultimately responsible for developing, maintaining, and accounting for disclosures of private information for six years
   o Family Educational Rights and Privacy Act of 1974 (FERPA): Parents and students over 18 have access to their educational records, which cannot be released without parental or adult-student permission to anyone other than those with a legitimate educational interest; no student should be required, without parental permission, to submit to psychological examination, testing, or treatment that could reveal information about the student's family
Chapter VI
 Etic vs. Emic Perspective
   o Etic: A universal perspective in which an assessment is believed to measure a universal trait or construct – ET
   o Emic: Culture is always considered – Empathy
 Test Fairness
   o Responsiveness to individual characteristics and testing contexts so that instrument scores yield valid interpretations – identify and remove construct-irrelevant barriers to maximize performance for any examinee
   o Accessibility and universal design are also important for test fairness, including adaptation, accommodation, and modification
 Instrument Bias
   o Content bias: Information on the test may be more familiar or appropriate for one group than another – language and interpretation can play a role
   o Internal structure: Examining the instrument's reliability to see how it differs among client populations
 General Recommendations for Practice
   o Recognize the importance of social justice advocacy; integrate understanding of cultural factors and personal characteristics to provide appropriate assessment and diagnostic techniques; select instruments that are appropriate and effective for diverse client populations; recognize challenges inherent in assessing persons and seek to provide administration and scoring that respect personal characteristics; acknowledge the importance of social justice advocacy in interpreting and communicating results; and seek training and supervision to ensure appropriate services are provided
 Differences in Performance Considering Race/Ethnicity
   o Race is not a psychological construct but a sociopolitical construct
   o Systemic and structural issues within society disproportionately impact minority groups, and conclusions drawn from scores can be stereotypical
   o Average scores on intelligence, aptitude, and achievement tests are typically lower for minoritized groups than for White Americans
   o Black/African American students are overrepresented in special education classes compared with students who have comparable issues
 Linguistic Background
   o Language skills almost always influence an individual's performance to some degree: reading or listening skills are typically needed to follow test instructions, so results may reflect proficiency with English more than the intended construct – this impacts nonnative speakers and others
 Assessing Individuals with Disabilities
   o IDEA: Assess clients in their most proficient language
   o In the US, about 19% of individuals have disabilities, more than half of them severe
   o There is no general consensus across states about certain accommodations
   o Be an empathic listener, advisor, educator, advocate, and intermediary, and ensure involvement in documenting a disability or the accommodations implemented during a testing situation

Chapter VII
 Best Practices for Administering an Assessment
   o Read administration materials and directions ahead of time; become familiar with procedural aspects
   o Engage in preparations (set up the room, gather materials, etc.)
   o Be aware of timed sections or any time schedule the assessment requires
   o Practice reading instructions ahead of time, especially if they are scripted
 Best Practices for Scoring an Assessment
   o Read scoring instructions ahead of time to determine whether self-scoring is appropriate
   o Practice scoring the instrument if it is hand-scored
   o Ensure you know how to use the computer software if it is computer-scored
   o Utilize consultation, the manual, and the available literature if subjective scoring is used, and know the difference between performance assessment (of tasks and activities) and authentic assessment ("real" applications of learning)
 Feedback Sessions
   o Immediately or shortly after an assessment: summarize its purpose; connect it to the presenting problem or referral question; explain that results are just one source of information and not fully sufficient to define or assess skills, abilities, traits, or symptoms; explain any limitations; explain results clearly; and get the client's input and reactions
 Therapist Factors & Related Research
   o Knowledge of the instrument and interpretation of results
   o Ability to build rapport with the client
   o Skill in connecting assessment results with treatment
   o Skill in explaining results in different ways based on the client
 Guidelines for Communicating Results
   o Keep your tone neutral; make sure the client or family have their questions addressed and can understand what you explain; be considerate of language and approach (meet the client where they're at – no psych jargon)
