Summary

This document contains practice questions on psychological assessment. The questions cover the referral questions assessment answers (cognitive abilities, personality traits, emotional functioning, and mental health concerns), the scales of measurement (nominal, ordinal, interval, ratio), descriptive statistics such as z scores and correlation, reliability and its estimation, and test development and item analysis.

Full Transcript


14. What types of questions are answered by psychologists through assessment? Diagnosis, along with questions about an individual's cognitive abilities, personality traits, emotional functioning, and potential mental health concerns, including things like: "Does this person have a learning disability?", "What is the severity of their anxiety?", "How is their cognitive function impacted by a brain injury?", "Are they suitable for a particular job?", or "What is the best course of treatment for their specific needs?" Psychologists answer these by using various psychological tests and assessments to gather information about strengths and weaknesses.

15. In what settings do psychologists assess and what is their primary responsibility in each?
- *Clinical settings:* Diagnose mental health disorders and develop treatment plans.
- *Educational settings:* Assess learning disabilities, giftedness, and academic accommodations.
- *Forensic settings:* Evaluate criminal responsibility, competency, and risk.
- *Occupational settings:* Conduct employee selection, leadership evaluations, and workplace mental health assessments.
- *Counseling settings:* Provide career guidance, personal growth assessments, and therapy-related evaluations.

16. What are the three properties of scales that make scales different from one another? Describe each.
1. Magnitude: values can be ordered as more or less of the attribute.
2. Equal intervals: the difference between adjacent points is the same everywhere on the scale.
3. Absolute (true) zero: a score of zero means a complete absence of the attribute.

17. Know the four scales of measurement and be able to differentiate between these scales.
1. Nominal: categories only; none of the three properties.
2. Ratio: all three properties.
3. Interval: magnitude and equal intervals, but lacks a true zero.
4. Ordinal: magnitude, but lacks equal intervals and a true zero.

27. What is a z score? How is it calculated? A z score measures how many standard deviations a data point falls from the mean: subtract the mean from the data point and divide by the SD, i.e., z = (X − M) / SD.

28. How are T scores different from Z scores? A T score is a linear rescaling of the z score onto a distribution with a mean of 50 and an SD of 10 (T = 50 + 10z). It conveys the same information without negative values or decimals.

33. What are the five characteristics of a good theory? A good theory should be:
- generalizable (broad scope)
- falsifiable
- applicable (generative or fruitful)
- simple (Occam's razor)
- systematic and coherent
- high in explanatory power (explain as much of the variation between variables as possible)

39. What is the Pearson product moment correlation? What meaning do the values -1.0 to 1.0 have? It is *r*: the strength and direction of the linear relationship between two variables. -1 is a perfect negative correlation, +1 is a perfect positive correlation, and 0 is no correlation.

41. What is the standard error of estimate? What is its relationship to the residuals? It is the typical distance that points fall from the regression line. A residual is the distance between a single point and the line; the standard error of estimate is the standard deviation of all those residuals.

45. What is the coefficient of determination? What is the purpose of the coefficient of determination? R squared, the square of the correlation. It shows the amount of shared variance, i.e., the proportion of variance in one variable predicted or explained by the other.
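The statistics in questions 27-45 can be checked numerically. Below is a minimal Python sketch using invented data (the scores, hours, and variable names are hypothetical, not from the lecture):

```python
import math
import statistics

# Hypothetical raw test scores and a paired predictor (study hours)
scores = [52, 60, 75, 48, 81, 66, 70, 58]
hours  = [2, 3, 6, 1, 8, 5, 6, 3]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)          # population SD

# Q27: z = (X - M) / SD -- distance from the mean in SD units
z = [(x - mean) / sd for x in scores]

# Q28: T = 50 + 10z -- same information, rescaled to mean 50, SD 10
t = [50 + 10 * zi for zi in z]

# Q39: Pearson r -- strength and direction of a linear relationship
mx, my = statistics.mean(hours), statistics.mean(scores)
cov = sum((x - mx) * (y - my) for x, y in zip(hours, scores)) / len(scores)
r = cov / (statistics.pstdev(hours) * statistics.pstdev(scores))

# Q45: r^2 -- proportion of variance in scores shared with hours
r_squared = r ** 2

# Q41: standard error of estimate -- the SD of the residuals around the
# regression line, computable directly from r and the SD of Y
see = sd * math.sqrt(1 - r_squared)

print(f"r = {r:.2f}, r^2 = {r_squared:.2f}, SEE = {see:.2f}")
```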
50. What contributes to measurement error?
1.
2.
3.
4.

54. Test reliability is usually estimated in one of what three ways? Know the major concepts in each way. Test-retest, alternate forms, and internal consistency (split-half with the Spearman-Brown correction, KR-20 for true/false items, Cronbach's alpha for Likert items); inter-rater reliability (kappa coefficient) is also used when multiple raters are involved.

55. What is a carryover effect? Taking the test once changes the score on a later administration (through practice, memory, or fatigue), which biases test-retest reliability.

56. Define parallel/alternate forms reliability. What are its advantages and disadvantages? Two equivalent forms of the same test are built and the scores on them are correlated. Advantages: it is rigorous and removes carryover effects. Disadvantage: creating truly equivalent forms is difficult and expensive.

57. Define split half reliability. How is this measured? Split the test into two meaningful halves (e.g., odd vs. even items), correlate the halves, then apply the Spearman-Brown formula to estimate the reliability of the full-length test.

62. What does the standard error of measurement do? It explains why observed scores differ from true scores: it gives an estimate of how much observed scores will fluctuate around the true score.

63. What factors should be considered when choosing a reliability coefficient?
- Dichotomous items: KR-20.
- Continuous or Likert-scale items: Cronbach's alpha.
- Multiple raters: inter-rater reliability.
- Predictions over time (the construct should stay constant): test-retest.

64. What types of irregularities might make reliability coefficients biased or invalid? Carryover effects, not using the whole scale, and floor or ceiling effects. **Time sampling error** affects test-retest reliability; **item sampling error** affects internal consistency.

| Irregularity | Effect on reliability coefficient |
| --- | --- |
| Small sample size | Unstable, fluctuating coefficient |
| Restricted range | Underestimates reliability |
| Extreme score variability | Overestimates reliability |
| Floor/ceiling effects | Underestimates reliability |
| Random measurement error | Lowers reliability |
| Short test-retest interval | Inflates reliability |
| Long test-retest interval | Deflates reliability |
| Poor item quality | Low Cronbach's alpha |
| Unequal item difficulty in split-half | Artificially low split-half reliability |

65. How can one address/improve low reliability? Use a bigger sample; drop items that are poor or do not discriminate (identified via factor analysis, etc.); adjust for measurement error via structural equation modeling or Spearman's correction for attenuation.
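A sketch of the internal-consistency estimates named in questions 54-63, written from the standard textbook formulas; the response matrix here is invented for illustration:

```python
import statistics

# Hypothetical: 6 examinees x 4 dichotomous items (1 = correct, 0 = incorrect)
data = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
]
n_items = len(data[0])
totals = [sum(row) for row in data]
var_total = statistics.pvariance(totals)

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total-score variance)
item_vars = [statistics.pvariance([row[i] for row in data]) for i in range(n_items)]
alpha = (n_items / (n_items - 1)) * (1 - sum(item_vars) / var_total)

# KR-20: the dichotomous special case, with item variances written as p*(1-p)
ps = [statistics.mean([row[i] for row in data]) for i in range(n_items)]
kr20 = (n_items / (n_items - 1)) * (1 - sum(p * (1 - p) for p in ps) / var_total)

# Split-half: correlate odd vs. even halves, then apply Spearman-Brown
# r_full = 2r / (1 + r) to estimate reliability of the full-length test
odd  = [sum(row[0::2]) for row in data]
even = [sum(row[1::2]) for row in data]
mo, me = statistics.mean(odd), statistics.mean(even)
cov = sum((o - mo) * (e - me) for o, e in zip(odd, even)) / len(data)
r_half = cov / (statistics.pstdev(odd) * statistics.pstdev(even))
split_half = 2 * r_half / (1 + r_half)
```

Note that for strictly 0/1 items, KR-20 and Cronbach's alpha give the same value, which is why KR-20 is described as the dichotomous special case.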
68. What are the stages of test development?
1.
2.
3.
4.
5.

72. What are the two major formats of summative scales, as given in lecture? What type of data do they create? Likert and category formats. Strictly they create ordinal data, though in practice they are often treated as interval.

75. In creating a category format, the use of what will reduce error variance? Clearly defined anchor points.

76. When does the category format begin to reduce reliability? At around 9-10 response categories.

77. What are the four questions that should be asked when generating a pool of candidate test items?
1. What construct should the items cover?
2. How many items are needed?
3. What are the demographics of my population? (make items clear and culturally appropriate)
4. How shall I word the items? (avoid double-barreled items, etc.)

78. What are the four ways to score tests and how is each differentiated from the others?
- Cumulative scoring (summative scales like Likert): one total score.
- Subscale scoring: subscales are scored individually.
- Class/category scoring (e.g., DSM-5): the person is placed into a classification based on how many criteria they meet.
- Ipsative scoring (forced choice): subscale scores are compared against the person's other subscale scores (the individual against themselves).

79. Define item analysis. What two methods are closely associated with item analysis? Item analysis evaluates how well each item did its job, which tells you whether your questions were effective. The two closely associated methods are item difficulty and item discriminability.

80. Define item difficulty. What does the proportion of people getting the item correct indicate? How hard the item was. The item difficulty index is the proportion of test takers who got the item right; the ideal value is halfway between chance and a perfect score.

81. Define item discriminability. What is good discrimination? What are two ways to test item discriminability? Whether getting the item right correlates with doing better on the test overall. Two methods: the point-biserial method (the correlation between answering the item correctly and the total test score; the closer to 1 the better) and the extreme group method (d = proportion correct in the top quartile minus proportion correct in the bottom quartile).

84. Define and explain how the extreme group and point biserial methods differ. See question 81: the point-biserial method correlates a single item with the total score across all examinees, while the extreme group method compares only the highest- and lowest-scoring groups.

85. Define item characteristic curve. Know what information the X and Y axes give as well as slope. The X axis plots ability (high vs. low scorers); the Y axis plots the probability of a correct response on the item. A good curve is a wide S shape with a high positive slope; a negative slope means low scorers outperform high scorers, which signals a flawed item.

88. Know ceiling effects, floor effects, and indiscriminant items. Ceiling effect: scores are concentrated at the top of the scale. Floor effect: scores are concentrated at the bottom of the scale. Both limit variability and thus the ability to discriminate among test takers.
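The item-analysis statistics from questions 80-84 can likewise be sketched in Python; the response matrix below is hypothetical:

```python
import statistics

# Hypothetical: rows = examinees, columns = items, 1 = correct
responses = [
    [1, 1, 0], [1, 1, 1], [1, 0, 1], [1, 1, 0],
    [0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 0, 0],
]
totals = [sum(row) for row in responses]
n = len(responses)

# Q80: item difficulty index = proportion answering the item correctly
def difficulty(item):
    return sum(row[item] for row in responses) / n

# Q81 (point-biserial): correlation between the item score (0/1) and the
# total test score; values closer to +1 mean better discrimination
def point_biserial(item):
    xs = [row[item] for row in responses]
    mx, mt = statistics.mean(xs), statistics.mean(totals)
    cov = sum((x - t_mean) * 0 for x, t_mean in []) or sum(
        (x - mx) * (t - mt) for x, t in zip(xs, totals)) / n
    return cov / (statistics.pstdev(xs) * statistics.pstdev(totals))

# Q81/84 (extreme group): d = p(correct in top quartile)
#                           - p(correct in bottom quartile)
def extreme_group_d(item, frac=0.25):
    k = max(1, int(n * frac))
    ranked = sorted(responses, key=sum)
    bottom, top = ranked[:k], ranked[-k:]
    return (sum(r[item] for r in top) / k) - (sum(r[item] for r in bottom) / k)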

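For the item characteristic curve in question 85, the notes describe only the shape of the curve. One common way to formalize that S shape is the two-parameter logistic model from item response theory; the model choice and parameter values here are an added assumption, not something the lecture specifies:

```python
import math

def icc(theta, a, b):
    """P(correct | ability theta) under an assumed 2PL logistic model.
    a = discrimination (slope of the S-curve), b = difficulty (location)."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# A steep positive slope (a = 2.0) marks a good, discriminating item:
# low-ability examinees rarely pass it, high-ability examinees usually do.
# A nearly flat curve (a = 0.3) barely separates low from high scorers.
for theta in [-2, -1, 0, 1, 2]:
    print(f"theta={theta:+d}  steep item: {icc(theta, 2.0, 0.0):.2f}  "
          f"flat item: {icc(theta, 0.3, 0.0):.2f}")
```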