Psychometrics and Psychological Testing Practice Test for Written Comps
[Psychometrics Overview] 1. The correct hierarchy for psychological testing, from most *comprehensive* to most *specific*, is a. Items; Tests; Scales; Batteries b. **Batteries; Tests; Scales; Items** c. Batteries; Tests; Items; Scales d. Items; Scales; Tests; Batteries 2. All the following are examples of cognitive functioning tests, except e. **Projective Tests** f. Aptitude Tests g. Intelligence Tests h. Achievement Tests 3. Personality tests look at measuring i. Intelligence and global ability j. Interest, attitudes, or values k. Ability and achievement l. **States or traits** 4. If a test is evaluating an individual against a set of norms collected from a particular population, it is norm reference. If it is evaluating against a predetermined set of criteria, it is m. Trait referenced n. Not a valid psychological testing measure o. **Criterion referenced** p. Interval referenced 5. \_\_\_\_ tests emphasize speed over difficulty, whereas \_\_\_\_ tests emphasize difficulty over speed. q. Achievement; Intelligence r. Norm-referenced; Criterion-referenced s. **Speed; Power** t. Personality; Cognitive 6. Positive skews indicate that a test has an inadequate \_\_\_\_, i.e., the test is too \_\_\_\_ for the test-takers. u. Ceiling; Easy v. Ceiling; Difficult w. **Floor; Difficult** x. Floor; Easy 7. The solution for positive skews is to y. **Replace harder items with easier items** z. Replace easier items with harder items a. Add more difficult items to balance it out b. The test building needs to start over again from scratch 8. Negative skews indicate that a test has an inadequate \_\_\_\_, i.e., the test is too \_\_\_\_ for the test-takers. c. **Ceiling; Easy** d. Floor; Easy e. Ceiling; Difficult f. Floor; Difficult 9. The solution for negative skews is to g. Add more easy items to balance it out h. Replace harder items with easier items i. **Replace easier items with harder items** j. The test building needs to start over again from scratch 10. In both positive and negative skews, the median is always in the middle. But in positive skews, the mode is \_\_\_\_ the median and the mean is \_\_\_\_ the median. k. Above; Below l. **Below; Above** m. All three measures of central tendency are always at the same point n. The order can only be determined on a case-by-case basis 11. \_\_\_\_ tails approach but never reach zero and are a property of normal distributions. o. Symptotic p. Platykurtic q. Leptokurtic r. **Asymptotic** 12. On a normal distribution with M = 60 and SD = 6, what percentage of scores will fall between 54 and 66? s. 95% t. 50% u. **68%** v. Cannot be determined with the information provided 13. On a normal distribution with M = 60 and SD = 6, what percentage of scores will fall at or below 60? w. Cannot be determined with the information provided x. **50%** y. 68% z. 34% 14. On a normal distribution with M = 60 and SD = 6, what percentage of scores will fall at or below 66? a. Cannot be determined with the information provided b. 68% c. **84%** d. 34% 15. The WIAT is an example of a psychological test that uses e. Personality measures f. Projective testing g. **Developmental norms** h. Within-group norms 16. Within group norms compare the examinee directly against their peers and includes all of the following, except i. Z-Scores j. Percentile ranks and standard scores k. T-Scores l. **All of these are included in within group norms** 17. The \_\_\_\_ is the number of people who obtained a raw score equal to or lower than a given raw score. m. T-Score n. 
**Cumulative frequency** o. Z-Score p. Frequency distribution 18. The percentile rank is calculated directly from q. T-Scores r. **Frequency distributions** s. Z-Scores t. Cumulative frequency 19. The distance between two adjacent percentile ranks is not equal across the scale, meaning percentiles are on u. An interval scale v. A ratio scale w. A nominal scale x. **An ordinal scale** 20. If someone's z-score is 0, then y. Their standard score cannot be determined z. There is no relationship between their score and others a. The results of their assessment have been invalidated b. **They scored exactly at the mean** 21. Standard scores use \_\_\_\_ and \_\_\_\_ to transform a raw score into a new score to tell us where an examinee scores relative to their peers. c. **Mean; SD** d. Z-scores; T-scores e. SD; T-scores f. SD; Z-scores 22. Someone who scored exactly at the mean g. Has a Z-score of 0 h. Scored higher than 50% of their peers i. Has a T-score of 50 j. **All of these are true** 23. If *r* = 0, then k. **X and Y are not related** l. Higher scores on X are associated with lower scores on Y m. Higher scores on X are associated with higher scores on Y n. There is a significant effect 24. Restricted range \_\_\_\_ the magnitude of the calculated *r*. o. **Reduces** p. Strengthens q. Eliminates r. Does not influence 25. The coefficient of determination is found by s. Dividing the correlation coefficient by 2, i.e., r/2 t. Doubling the correlation coefficient, i.e., 2*r* u. **Squaring the correlation coefficient, i.e., r²** v. None of these are true 26. If the correlation between IQ and reading test scores is +0.70, then w. The coefficient of determination is 0.49 x. About half of the variance in the reading test scores is due to factors other than IQ scores y. 49% of the variance in the reading test scores is predictable (i.e., explained by IQ scores) z. **All of these are true** 27. Subtracting the mean from the raw score and dividing by the standard deviation gives you a. The SEM b. **The Z-score** c. The T-score d. The standard score 28. Multiplying the Z-score by 10 and then adding 50 gives you the T-score, which is e. Not used on any psychological measures f. Tells you nothing about how you did on the test compared to your peers g. **A linear transformation of the Z-score** h. The same as the standard score 29. When the distribution is spread out (i.e., the correlation coefficient does not accurately reflect the relationship), it is said to be \_\_\_\_. \_\_\_\_ is better because it is more evenly distributed. i. Homoscedastic; Heteroscedasticity j. Insignificant; High effect size k. **Heteroscedastic; Homoscedasticity** l. Significant; Low effect size 30. When a variable has restricted range, there is \_\_\_\_ in *r*. m. Great confidence n. Little variation o. Homoscedasticity p. **Little confidence** 31. One of the assumptions with factor analysis is that q. Factors will not have any relationship with one another r. The more unrelated the factors are, the better so that it does not confound the results s. Information must be made more complex in order to provide a deeper understanding of unifying themes t. **Variables in the matrix correlate because of one or more underlying themes that link some of the variables together** 32. In factor analysis, shifting the factors to see if they are correlated or not is referred to as u. Factor loading v. Manipulation w. **Rotation** x. Matrix
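The z-score, T-score, and coefficient-of-determination items above (21-28) reduce to a few one-line formulas. A minimal Python sketch, using only the relationships stated in those items (the function names and example numbers are illustrative, not part of the test):

```python
def z_score(raw, mean, sd):
    """Item 27: subtract the mean from the raw score and divide by the SD."""
    return (raw - mean) / sd

def t_score(z):
    """Item 28: a linear transformation of z, multiply by 10 and add 50."""
    return 10 * z + 50

def coefficient_of_determination(r):
    """Item 25: square the correlation coefficient."""
    return r ** 2

# Item 26: r = +0.70 between IQ and reading scores, so 49% of the variance
# in reading scores is explained by IQ.
print(round(coefficient_of_determination(0.70), 2))  # 0.49
# A raw score of 66 on a test with M = 60, SD = 6 (items 12-14) gives z = 1.0, T = 60.
print(t_score(z_score(66, mean=60, sd=6)))           # 60.0
```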
33. The correlation between each of the original variables and each factor is known as y. Rotated factor z. **Factor loading** a. Factor extraction b. Unrotated factor 34. Eigenvalue is c. **The amount of variance associated with each factor** d. The magnitude of a relationship in Pearson's *r* e. The theoretical basis for classical test theory f. An indication of a test's reliability 35. The maximum number of factors that can be extracted is \_\_\_\_ the number of variables. g. **Equal to** h. Double i. Disproportionate to j. Half 36. A scree plot is a graphical representation in which the \_\_\_\_ represents the *x* axis and the \_\_\_\_ represents the *y* axis. k. Eigenvalues; Number of factors l. **Number of factors; Eigenvalues** m. Factor loadings; Number of factors n. Rotations; Eigenvalues 37. The place where there is a large drop on the scree plot will tell you o. Which type of factor analysis needs to be performed p. **The number of factors to extract in a factor analysis** q. How strong the factor loading is r. What type of relationship the factors have with one another 38. A unidimensional test has all items loading on a single factor, which means s. The test is measuring multiple constructs t. **The test measures a single construct** u. The items can be grouped together into two or more separate factors v. The test is not effective 39. Orthogonal assumes that w. Extracted factors are not correlated with multiple variables x. Extracted factors are correlated with one another y. **Extracted factors are not correlated with one another** z. All extracted factors are correlated with the same variable 40. When we extract many factors and the factors are oblique (i.e., correlated), we can repeat the process and factor analyze the factors themselves. These are known as a. Twice-extracted factors b. Convergent factors c. Metafactors d. **Second-order factors** 41. The idea that your obtained score consists of your true score plus error is an example of e. **Classical reliability theory** f. Thurstone's criteria g. The impact of the observer's effect h. Systematic error 42. X_o = X_t + e i. Is the formulaic representation of factor loading j. Is the formulaic representation of the impact of Thurstone's criteria k. **Is the formulaic representation of the classical reliability theory** l. Is the formulaic representation of discovering systematic error 43. The practice effect and time interval are concerns when it comes to m. Inter-rater reliability n. Item sampling o. **Time sampling** p. Internal consistency 44. How do you calculate the common variance? q. You subtract the reliability of the test from 1.00 r. You add together the common variance and error variance and subtract the sum from 1.00 s. **It is the sum of the squared factor loadings for the test** t. All of these are correct ways of calculating the common variance 45. How do you calculate the error variance? u. **You subtract the reliability of the test from 1.00** v. You add together the common variance and error variance and subtract the sum from 1.00 w. It is the sum of the squared factor loadings for the test x. All of these are correct ways of calculating error variance 46. How do you calculate the specific variance? y. It is the sum of the squared factor loadings for the test z. You subtract the reliability of the test from 1.00 a. **You add together the common variance and error variance and subtract the sum from 1.00** b. All of these are correct ways of calculating specific variance
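Items 44-46 above give three complementary variance formulas for a single test. A minimal sketch of that arithmetic in Python (the reliability and loadings below are made-up illustrative values, not from the test):

```python
def common_variance(factor_loadings):
    """Item 44: the sum of the squared factor loadings for the test."""
    return sum(loading ** 2 for loading in factor_loadings)

def error_variance(reliability):
    """Item 45: 1.00 minus the reliability of the test."""
    return 1.0 - reliability

def specific_variance(reliability, factor_loadings):
    """Item 46: 1.00 minus (common variance + error variance)."""
    return 1.0 - (common_variance(factor_loadings) + error_variance(reliability))

# Hypothetical test with reliability .90 and loadings of .60 and .50 on two factors.
reliability, loadings = 0.90, [0.60, 0.50]
print(round(common_variance(loadings), 2))                 # 0.61
print(round(error_variance(reliability), 2))               # 0.1
print(round(specific_variance(reliability, loadings), 2))  # 0.29
```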
47. Error tends to \_\_\_\_ across respondents. c. Increase d. Remain equal e. Decrease f. **Cancel out** 48. Because the variance of the observed scores is always larger than the variance of the true scores, g. The observed score is virtually meaningless h. Error makes people look more similar than they are i. Error is not as random as we think it is j. **Error makes people look more different from one another than they actually are** 49. The proportion of the variance in observed scores that is due to true differences among test-takers on the trait being measured is another way of stating k. The classical reliability theory l. Error variance m. **Reliability** n. Specific variance 50. If the reliability is .95, then .05 is the o. Pooled variance p. Common variance q. **Error variance** r. Specific variance 51. In actuality, we never calculate the reliability of a test but rather s. Factor analysis t. **The effect of different sources of error** u. The validity of a test v. Individual differences 52. This typical source of measurement error comes from when the test is given, because even stable traits can often change from day to day w. **Time sampling** x. Internal consistency y. Inter-rater differences z. Item sampling 53. In order to correct for this, the test-retest method gives the same exact test to the same exact group at a later date to obtain the test-retest coefficient, also known as the test's a. **Stability** b. Reliability c. Adaptability d. Consistency 54. If the interval between administrations is too short, then the test-retest method will cause the reliability of the test to be \_\_\_\_ because \_\_\_\_. e. Underestimated; Similar conditions may still be impacting testing f. Overestimated; Real changes in the trait being measured may have occurred g. **Overestimated; Similar conditions may still be impacting testing** h. Underestimated; Real changes in the trait being measured may have occurred 55. If the interval between the administrations is too long, then it will cause the reliability of the test to be \_\_\_\_ because \_\_\_\_. i. Underestimated; Similar conditions may still be impacting testing j. Overestimated; Real changes in the trait being measured may have occurred k. Overestimated; Similar conditions may still be impacting testing l. **Underestimated; Real changes in the trait being measured may have occurred** 56. Larger, heterogeneous samples tend to have more accurate reliability estimates than smaller and/or homogeneous samples because of the \_\_\_\_ in the smaller and/or homogeneous samples. m. Sample error n. **Restriction of range** o. Error variance p. Common variance 57. The practice effect will \_\_\_\_ the correlation between test and retest and lead to an \_\_\_\_ of time sampling error. q. **Decrease; Over-estimation** r. Increase; Over-estimation s. Decrease; Under-estimation t. Increase; Under-estimation 58. The \_\_\_\_ the test-retest reliability, the \_\_\_\_ the regression toward the mean. u. Lower; Lesser v. Higher; Greater w. **Lower; Greater** x. None of these are true 59. This typical source of measurement error comes from which items are selected for the test, because we cannot be certain that we selected items randomly. y. **Item sampling** z. Internal consistency a. Inter-rater differences b. Time sampling
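A small simulation can make items 42 and 47-50 above concrete: observed scores are true scores plus random error, the error roughly cancels out across respondents, and reliability is the proportion of observed-score variance attributable to true-score variance. A hedged sketch with simulated data (the means, SDs, and sample size are arbitrary choices for illustration):

```python
import random

random.seed(1)
true_scores = [random.gauss(100, 15) for _ in range(10_000)]   # X_t
errors      = [random.gauss(0, 5) for _ in range(10_000)]      # e, random with mean 0
observed    = [t + e for t, e in zip(true_scores, errors)]     # X_o = X_t + e (item 42)

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

print(round(sum(errors) / len(errors), 2))                   # near 0: error cancels out (item 47)
print(variance(observed) > variance(true_scores))            # True: error spreads people out (item 48)
print(round(variance(true_scores) / variance(observed), 2))  # reliability as true/observed variance (item 49)
```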
60. In order to correct for this, we use the \_\_\_\_ in which two parallel forms of the same test are constructed that are equivalent but not identical in terms of items, and then administer them on the same day to the same sample with the order being counterbalanced. c. Split-half method d. **Alternate form method** e. Counterbalance method f. Inter-item correlation method 61. This typical source of measurement error comes from whether the items are all measuring the trait or construct of interest, because items are usually not equally good measures of a construct g. Item sampling h. Inter-rater differences i. **Internal consistency** j. Time sampling 62. What is a major problem with the split-half method? k. It can be used on speed tests but not power tests l. It will underestimate the amount of error associated with item sampling m. It will overestimate the alternate form reliability n. **Reliability is related to test length because there is more opportunity for errors to cancel out on a longer test** 63. If a scale lacks internal consistency, then o. **We can't be sure that the total score on the scale always has the same meaning** p. We will be oversimplifying constructs q. People will be able to malinger on the test r. All of these are true 64. The \_\_\_\_ allows us to predict what the alternate form reliability would be from the split-half reliability and allows us to calculate how many items we would have to add to the test to achieve a desired reliability. s. Spearman's rho t. Pearson's *r* u. **Spearman-Brown formula** v. None of these 65. When adding items to or eliminating items from a test to change the reliability, they must be selected from the same domain, i.e., equivalent in terms of measurement properties and w. **Selected or deleted randomly** x. Carefully and purposefully selected or deleted y. It does not matter which way you add or delete items z. It matters only if you are deleting but not if you are adding 66. Inter-item correlation, item-total correlation, the Kuder-Richardson formula, and Cronbach's alpha are all ways of assessing a. Common variance b. Item sampling error c. Inter-rater reliability d. **Internal consistency** 67. What is the minimal acceptable value for Cronbach's alpha? e. 2.5 f. **0.6** g. 0.3 h. 1.0 68. Cronbach's alpha i. Is the average of all possible split-half correlations j. Is the most common statistic for estimating the internal consistency of a scale k. Measures internal consistency on a scale from 0 to 1 with larger values having greater internal consistency l. **All of these are true** 69. What should you do if you have a low alpha score? m. Throw out the test because it's useless n. Create more items to increase the reliability of the scale o. **Consider using factor analysis to create smaller subscales** p. Randomly delete items from the scale and test it again 70. Very large alpha values could be indicative of q. **Item redundancy** r. Low item-total correlation s. Common variance t. Low internal consistency 71. This typical source of measurement error comes from whether different raters assign the same scores when judgment must be exercised in scoring responses u. Time sampling v. Internal consistency w. **Inter-rater differences** x. Item sampling 72. Percent agreement is one method of correcting for inter-rater error in which the percentage is taken for all cases for which both raters make the same decision.
However, it can \_\_\_\_ inter-rater reliability and so \_\_\_\_ is the preferred method for assessing inter-rater reliability because it takes chance agreement into account. y. **Overestimate; Kappa** z. Underestimate; Omega a. Overestimate; Alpha b. Underestimate; Alpha 73. Reliability coefficients apply to the test itself. The standard error of measurement (SEM) permits us to estimate how much error is likely to be present in an individual examinee's score. The SEM c. Is calculated by taking the square root of (1 -- reliability) and multiplying the answer by the SD of the test d. Will be high if reliability is low e. Will be low if reliability is high f. **All of these are true** 74. Confidence intervals tell us g. The amount that error has impacted their observed score h. The actual true score of the individual i. **The range in which the person's true score is likely to fall with a specified degree of certainty** j. All of these are true 75. If you multiply the standard error of measurement (SEM) by the reliability k. **You'll get the standard error of estimate (SEE)** l. You'll be able to give the individual their true score m. You'll get the confidence intervals (CI) n. None of these are true 76. Because of regression towards the mean o. The estimated true score will be greater than the observed score when the observed score is below the mean p. The estimated true score will be lower than the observed score when the observed score is above the mean q. Estimated true scores will always be closer to the mean r. **All of these are true** 77. The difference between estimated true scores and observed scores will be \_\_\_\_ when reliability is lower, and/or the observed score is farther from the mean. The difference will be \_\_\_\_ when the reliability is higher, and/or the observed score is closer to the mean. s. Symmetrical; Greater t. Less; Greater u. Greater; Symmetrical v. **Greater; Less** 78. Validity does not pertain to the test itself, but rather w. Empirical evidence x. Theory y. The interpretation of scores z. **All of these** 79. Including irrelevant content or failing to include constructs that should be included are two ways of compromising a. Face validity b. **Content validity** c. Criterion validity d. Construct validity 80. The \_\_\_\_ expresses the correlation between the test and the criterion and can be affected by any factors that affect the correlation coefficient (i.e., restriction of range, heteroscedasticity, non-linear relationship). e. Criterion correlation f. Criterion coefficient g. **Validity coefficient** h. Validity correlation 81. Face validity and content validity both rely on judgment, but face validity relies on the judgment of \_\_\_\_ and content validity relies on the judgment of \_\_\_\_. i. Test-takers; Non-experts j. **Non-experts; Experts** k. Non-experts; Test-takers l. Experts; Non-experts 82. When referring to validity, alternate ways of measuring the construct is referred to as m. Vertical constructs n. Content o. Horizontal constructs p. **Criterion** 83. Restricted range will always result in \_\_\_\_ correlation between test and criterion than would be obtained if range was unrestricted. q. A more positive r. **Lower** s. Higher t. Equal 84. \_\_\_\_ also affects the validity coefficient and happens when the assessment of the criterion is not independent of the test results, causing the validity coefficient to \_\_\_\_. u. Criterion unreliability; Either inflate or deflate v. Criterion contamination; Inflate w. 
**Criterion contamination; Either inflate or deflate** x. Criterion unreliability; Inflate 85. A test shows adequate overall validity but much higher validity for males than females. This test is showing y. **Differential validity** z. Discriminant validity a. Criterion contamination b. Divergent validity 86. If those who assign criterion ratings have knowledge of the test scores, this is an example of c. **Criterion contamination** d. Criterion unreliability e. Differential validity f. None of these accurately describe what is happening 87. The type of validity that refers to a test that exhibits higher validity when used for one subgroup of individuals than when used for another subgroup is g. Discriminant validity h. Divergent validity i. Descriptive validity j. **Differential validity** 88. A test and a criterion might show a lower-than-expected validity coefficient because of measurement error that affects both the test and criterion. In many cases, the criteria that are employed have lower reliability than the tests whose validity is being examined. This describes k. Criterion contamination l. **Criterion unreliability** m. The validity coefficient n. Differential validity 89. In the hypothetical test that shows higher validity for males than females, the sex of the individual is o. The differential variable p. **The moderator variable** q. The divergent variable r. The discriminant variable 90. \_\_\_\_ tells us if the test is improving upon other tests we are already using or if adding this test to a battery we are already using will allow us to predict the criterion more accurately. s. **Incremental validity** t. Content validity u. Predictive validity v. Convergent validity 91. To determine if a test improves upon other tests, you look to see if the correlation between the criterion and the new test is significantly greater in magnitude than the correlation between the criterion and other tests. To see if adding the test to a battery will improve our ability to predict the criterion more accurately, you use w. **Hierarchical multiple regression** x. Double cross validation y. The same method z. Pearson's *r* 92. There is no single definitive test of construct validity because a construct is an abstraction that cannot be directly observed. Evidence for construct validity a. Is more subjective than other forms of validity b. **Builds over time resulting in a nomological net** c. Is decided upon by expert panels and previous empirical findings of similar constructs d. All of these are true 93. Cross-validation is done to e. Determine if the formula can be generalized beyond the derivation sample f. Determine if shrinkage has actually occurred g. Estimate the degree of shrinkage by applying the regression formula to a new sample h. **All of these are true** 94. \_\_\_\_ always occurs when applying a regression formula calculated on a sample to a new sample. i. Inflation j. **Shrinkage** k. Multicollinearity l. Cross-validation 95. \_\_\_\_ validity assesses whether a test correlates with a criterion with which it should correlate, whereas \_\_\_\_ validity assesses whether a test does not correlate with criteria with which it should not correlate. m. Convergent; Divergent n. Congruent; Incongruent o. **Convergent; Discriminant** p. Consummate; Discriminant 96. Studying group differences, such as administering a test that measures creativity to a group of professional artists and comparing that to a group of bankers, is one method of building q. **Construct validity** r.
Incremental validity s. Convergent validity t. Content validity 97. If a test cannot differentiate between the intended construct and closely related constructs, it is said to lack specificity and has u. Adequate discriminant but inadequate convergent validity v. **Adequate convergent but inadequate discriminant validity** w. Adequate divergent but inadequate convergent validity x. Adequate convergent but inadequate divergent validity 98. \_\_\_\_ is used to look at construct validity by examining the test's factor structure to determine if it fits what is predicted or theoretically expected. y. Predictive validity z. Convergent validity a. **Factorial validity** b. Construct validity 99. You have built a test to measure assertiveness. You want to see if there is a significant difference in scores on that measure between those who have completed an assertiveness class and those placed on a waiting list. Conducting research to test hypotheses about the construct is one way of building c. Discriminant validity d. Convergent validity e. Incremental validity f. **Construct validity** 100. \_\_\_\_ evaluates how well a test can make a binary decision and depends on a cutting score. g. Face validity h. **Decision theory** i. Forced-choice theory j. Discriminant validity 101. Increasing the cutting score will \_\_\_\_ the sensitivity and \_\_\_\_ the specificity. k. Increase; Increase l. Decrease; Decrease m. **Decrease; Increase** n. Increase; Decrease 102. In terms of item difficulty and cutting scores, higher values mean \_\_\_\_ items. o. **Easier** p. Inconsistent q. Harder r. Consistent 103. A \_\_\_\_ test has items that all have the same difficulty level. s. Flat t. Steep u. **Peaked** v. Fixed 104. If an item has a difficulty of.60, this means w. We cannot determine how many people got the item correct or incorrect from this information alone x. 40% of examinees got the item correct y. 60% of examinees got the item incorrect z. **60% of examinees got the item correct** 105. Discriminability refers to the correlation between an item score and \_\_\_\_. It refers to the degree to which an item can differentiate among test-takers on the trait being measured. a. Scores of other participants on that same item b. **The total test score** c. Scores on different items d. Scores on similar items 106. Item characteristic curves are plots that show the percentage of those passing the item on the *y* axis against the total score on the test on the *x* axis. The \_\_\_\_ the slope, the \_\_\_\_ the discriminability of the item. e. Steeper; Weaker f. Flatter; Greater g. Flatter; More unknown h. **Steeper; Greater** 107. Item response theory, also known as \_\_\_\_, involves the use of mathematical models to define the item characteristic curve. i. Individual characteristic theory j. Individual trait theory k. **Latent trait theory** l. Latent characteristic theory 108. Constructing the item characteristic curve (ICC) attempts to reflect the relationship between the probability of passing an item and m. **Standing on a scale that does not vary across different samples** n. Standing on a scale that predicts traits about the individual o. The probability of scoring below the average p. The probability of failing another related item 109. If one group obtains higher scores on a test than the other, even though the test predicts the criterion score equally well for both groups, this is known as q. Test bias r. Slope bias s. **Intercept bias** t. Item bias 110. 
A question about pregnancy is an example of \_\_\_\_ because it assesses a trait that is not equal across all subgroups. u. **Item bias** v. Slope bias w. Test bias x. Intercept bias 111. Another name for differential validity is y. Test bias z. Item bias a. **Slope bias** b. Intercept bias 112. A test that predicts criterion with different accuracy for two groups is said to have c. **Test bias** d. Intercept bias e. Slope bias f. Item bias [Psychological Testing Overview] 1. A parent demands that you give their 5-year-old child an IQ test. Which would be your best option? a. The WISC-V b. The WAIS-IV c. The WIAT d. **None of these** 2. You are performing an intelligence test on a high schooler who is 17 years old. The best option is e. The WISC-V f. The WIAT g. The MMPI-2 h. **The WAIS-IV** 3. Not including the supplemental subtests, which of these is found in the PRI scale for the WAIS but not the WISC? i. **Matrix Reasoning** j. Visual Puzzles k. Coding l. Block Design 4. Which of these is true for the FRI subscale and the WAIS? m. Picture Concepts is part of the FRI subscale on the WAIS n. **The FRI is not a subscale of the WAIS and is only found on the WISC** o. Matrix Reasoning is part of the FRI subscale of the WAIS p. Figure Weights is part of the FRI subscale for the WAIS 5. Picture Concepts is a WISC-specific subtest that looks at abstract reasoning and is analogous to Similarities but is more concrete. Which index scale does it fall under? q. **FRI** r. WMI s. PRI t. VCI 6. Picture Span is a WISC-specific subtest that gives pictures with the instruction to remember the order in which the pictures were seen. Which index scale does it fall under? u. **WMI** v. VCI w. FRI x. PRI 7. If the VCI and PRI differ by \_\_\_\_ points or more, the FSIQ is considered invalid. y. 18 z. 15 a. **20** b. 25 8. The GAI uses the \_\_\_\_ and excludes the \_\_\_\_. c. WMI and PSI; VCI and PRI d. VCI and WMI; PRI and PSI e. **VCI and PRI; WMI and PSI** f. WMI and PRI; VCI and PSI 9. Compared to the FSIQ, the GAI g. Incorporates the WMI and PSI indexes h. Focuses more on global abilities i. Is considered a more reliable estimator of intelligence j. **Focuses more on reasoning abilities** 10. A normative strength for a standard score is considered \_\_\_\_ and above, and a normative weakness is considered \_\_\_\_ and below. k. **115; 85** l. 120; 80 m. 105; 95 n. 110; 90 11. On the subtests of the WAIS and WISC, a normative strength is considered \_\_\_\_ and above and a normative weakness is considered \_\_\_\_ and below. o. 15; 5 p. 11; 9 q. 115; 85 r. **13; 7** 12. For the MMPI-2, the test-taker must be at least \_\_\_\_ years old and have at least a \_\_\_\_ grade reading level. s. 18; 4th t. **18; 6th** u. 16; 6th v. 14; 5th 13. The MMPI-A is w. **For children between ages 14 and 18** x. An MMPI that focuses on alcoholism and addiction y. The adult version of the MMPI-2 z. The same as the MMPI-2 but shorter 14. VRIN and TRIN are both a. Effort to present oneself in a positive light measures b. **Response consistency measures** c. Underreporting measures d. Overreporting measures 15. F, Fb, and Fp are all e. Effort to present oneself in a positive light measures f. Response consistency measures g. Underreporting measures h. **Overreporting measures** 16. L, or the "lie" scale, and K, or the "correction" scale, comprise the \_\_\_\_ measures. i. Effort to present oneself in a positive light measures j. Response consistency measures k. **Underreporting measures** l. Overreporting measures
17. When 65 or above, this scale identifies those who present themselves as highly virtuous, responsible people who are free of psychological problems (i.e., "faking good") m. K n. VRIN o. L p. **S** 18. \_\_\_\_ is the validity scale of the MMPI-2 that looks at inconsistent or random responding, and a score greater than \_\_\_\_ invalidates the test. q. VRIN; 90 r. TRIN; 90 s. **VRIN; 79** t. TRIN; 79 19. \_\_\_\_ is the validity scale of the MMPI-2 that identifies people who indiscriminately give true or indiscriminately give false responses, and a score greater than \_\_\_\_ invalidates the test. u. TRIN; 90 v. VRIN; 79 w. VRIN; 90 x. TRIN; 79 20. If the F scale is 90 or higher, then the profile should not be interpreted unless y. The L scale is higher than 65 z. The Fp scale is higher than 100 a. The results are invalid and should never be interpreted b. **The Fp scale is less than 100** 21. The Fp scale is known as the \_\_\_\_ scale because it consists of items answered infrequently by both psychiatric and normative populations. A score of \_\_\_\_ or more invalidates the entire profile because of overreporting, exaggeration, and the possibility of faking bad. c. "Infrequency-personality"; 100 d. **"Infrequency-psychopathology"; 100** e. "Infrequency-psychiatric"; 90 f. "Infrequency-psychotic"; 90 22. The Fb scale is comprised of additional items developed to assess overreporting and/or inconsistent responding in the second part of the test. If the score is 90 or greater, then g. The entire report is invalid and should not be interpreted h. **The content and supplemental scales should not be interpreted, but the clinical scales can be interpreted as long as Fp is less than 100** i. The content and supplemental scales should not be interpreted, but the clinical scales can be interpreted as long as L is higher than 65 j. You can interpret the content and supplemental scales with caution 23. Elevated scores on the F scale, or "infrequency" scale, suggest deviant or atypical ways of responding. If the F scale is between 80 and 89 and VRIN > 79, then it is likely the result of random responding. If the F scale is between 80 and 89 and VRIN < 79, then k. It is likely that they have a high degree of self-criticism l. It is likely that they are deliberately defensive or lack insight m. **It is likely overreporting, exaggeration, or a cry for help** n. None of these are true 24. An MMPI-2 profile is invalid and uninterpretable if more than \_\_\_\_ items have been omitted. o. 20 p. 25 q. 50 r. **30** 25. Scores of \_\_\_\_ and lower on \_\_\_\_ suggest a high degree of self-criticism and possible exaggeration of problems. s. 40; L t. 50; K u. **40; K** v. 50; L 26. Someone with the following scales, F < 80, L < 60, and K between 40 and 60, can be described as w. Self-critical x. Malingering y. **Open and non-defensive** z. Mildly defensive 27. A person who responded to the test in a defensive manner and tended to minimize or overlook any problems or difficulties would likely score at least a \_\_\_\_ on the \_\_\_\_ scale. a. 65; L b. 60; K c. 60; L d. **65; K** 28. A person who wishes to present themselves in an unrealistically virtuous light, free from even commonplace human weaknesses or shortcomings, would likely score at least a \_\_\_\_ on the \_\_\_\_ scale. e. 65; K f. 60; L g. **65; L** h. 60; K
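Items 18-28 above amount to a checklist of cutting scores for deciding whether an MMPI-2 protocol can be interpreted. A minimal sketch of that decision logic, assuming scale scores passed in a plain dictionary (an illustration of the cutoffs as the items state them, not an official scoring routine):

```python
def mmpi2_validity_flags(scores, items_omitted):
    """Apply the validity cutoffs described in items 18-28; return interpretive flags."""
    flags = []
    if items_omitted > 30:                              # item 24
        flags.append("Invalid: more than 30 items omitted")
    if scores["VRIN"] > 79:                             # item 18
        flags.append("Invalid: inconsistent or random responding (VRIN > 79)")
    if scores["Fp"] >= 100:                             # item 21
        flags.append("Invalid: overreporting / possible faking bad (Fp >= 100)")
    if scores["F"] >= 90 and scores["Fp"] < 100:        # item 20
        flags.append("F >= 90 but interpretable because Fp < 100")
    if scores["K"] >= 65:                               # item 27
        flags.append("Defensive responding; minimizes problems (K >= 65)")
    if scores["K"] <= 40:                               # item 25
        flags.append("High self-criticism, possible exaggeration (K <= 40)")
    if scores["L"] >= 65:                               # item 28
        flags.append("Unrealistically virtuous self-presentation (L >= 65)")
    return flags

# Hypothetical profile, for illustration only.
print(mmpi2_validity_flags({"VRIN": 60, "F": 92, "Fp": 80, "K": 38, "L": 50}, items_omitted=5))
```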
29. Moderate elevations of the L scale are \_\_\_\_ and suggest that the person might have been deliberately defensive and/or might be rigid, moralistic, and lacking insight into their own motivations. i. Higher than 65 j. Between 55 and 60 k. Between 50 and 60 l. **Between 60 and 64** 30. T-score elevations of 65-69 can be described as m. Severe n. **Mild** o. Moderate p. No pathology for scores in this range 31. T-scores must be \_\_\_\_ or higher to be considered indicative of severe pathology. q. **75** r. 69 s. 65 t. 70 32. A spike is when only one scale is 65 or higher and the highest scale is more than \_\_\_\_ higher than the second highest scale. u. 3T v. **5T** w. 8T x. 10T 33. The AAS and APS/MAC-R scales differ in that y. **High scores on the AAS indicate open acknowledgement of substance abuse behaviors, whereas high scores on APS or MAC-R indicate that one is very prone to substance abuse, or drugs are prominent in their life but there is more denial present** z. Low scores on the AAS indicate open acknowledgement of substance abuse behaviors whereas high scores on APS or MAC-R indicate that one is very prone to substance abuse, or drugs are prominent in their life but there is more denial present a. High scores on the APS or MAC-R indicate open acknowledgement of substance abuse behaviors whereas high scores on AAS indicate that one is very prone to substance use or drugs are prominent in their life but there is more denial present b. Low scores on the APS or MAC-R indicate open acknowledgement of substance abuse behaviors, whereas high scores on AAS indicate that one is very prone to substance abuse, or drugs are prominent in their life but there is more denial present [True or False] 1. True/False: It is possible to take a non-normal distribution and normalize the scores. 3. **True**/False: If variables have a curvilinear relationship, they cannot be described by Pearson's *r*. 4. **True**/False: A test can have strong reliability without having strong validity. 5. True/False: The split-half method cannot be used with speed tests. 7. **True**/False: Reliability is related to test length. 8. True/False: The mean of the observed scores is equal to the mean of the true scores. 11. **True**/False: The average effect of error across respondents is zero. 12. **True**/False: Random error is impossible to measure and/or eliminate. 13. True/False: One criticism of a test having low face validity is that it could cause test-takers to become wary and thus introduce error. 16. True/False: If the criterion measures we use are poor (in terms of reliability), then we are unlikely to find strong evidence for validity even when the test is actually a good measure of the intended construct. 18. True/False: Multicollinearity, or when two variables are really measuring the same construct (i.e., are redundant), can result in larger shrinkage. 20. **True**/False: The Information subtest is found in the VCI scale for the WISC, but it is supplemental. For the WAIS, it is not supplemental. 21. True/False: Picture Span is only found on the WISC and not the WAIS. 23. **True**/False: If the VCI and WMI differ by 20 points or more, the FSIQ is considered invalid. 24. True/False: If the PRI and PSI differ by 20 points or more, the FSIQ is considered invalid. [Application of Psychometrics] 1. What is the formula for converting a z-score into a t-score? a. T = (z - 100)/15 b. **T = 10z + 50** c. T = 15z + 100 d. T = (z - 50)/10
2. What is the formula for converting a t-score into a z-score? e. Z = 10t + 50 f. Z = 15t + 100 g. Z = (T - 100)/15 h. **Z = (T - 50)/10** 3. What is the formula for converting a standard score into a z-score? i. **Z = (SS - 100)/15** j. Z = 10SS + 50 k. Z = 15SS + 100 l. Z = (SS - 50)/10 4. What is the formula for converting a z-score into a standard score? m. **SS = 15z + 100** n. SS = 10z + 50 o. SS = (z - 100)/15 p. SS = (z - 50)/10 5. What is the standard score when the t-score is 64? q. 89 r. **121** s. 135 t. 105 6. How do you calculate the z-score from an individual raw score? u. Z = (x - SD)/M v. Z = (SS - 100)/15 w. **Z = (x - M)/SD** x. Z = (x + M)/SD 7. Given that M = 40 and SD = 6, what is the z-score for a raw score of 38? y. 1.33 z. 0.33 a. **-0.33** b. -1.33 8. Given that M = 40 and SD = 6, what is the standard score for a raw score of 48? c. **120** d. 100 e. 115 f. 125 [Types of Variance] \_\_C\_\_ Percentage of total variance in a test that is due to random measurement error. \_\_B\_\_ Percentage of total variance in a test that is unique to that test and not shared with other tests in the factor analysis. \_\_A\_\_ Percentage of total variance in a test that is shared with the other tests in the factor analysis. A. Common Variance B. Specific Variance C. Error Variance [Validity Scales for MMPI-2] \_\_C\_\_ Correction \_\_G\_\_ Infrequency \_\_D\_\_ Response consistency \_\_E\_\_ Infrequency, 2nd part \_\_A\_\_ Lie \_\_F\_\_ Superlative \_\_B\_\_ Infrequency-Psychopathology A. L B. Fp C. K D. VRIN, TRIN E. Fb F. S G. F [Clinical Scales of MMPI-2] \_\_H\_\_ Anxiety and self-doubts \_\_J\_\_ Overactivity and impulsivity \_\_B\_\_ Excessive bodily concerns \_\_D\_\_ Somatization and denial \_\_E\_\_ Rebelliousness and non-conformity \_\_A\_\_ Shyness and social avoidance \_\_I\_\_ Odd and eccentric thoughts and beliefs \_\_C\_\_ Depressive symptoms \_\_G\_\_ Suspiciousness and oversensitivity \_\_F\_\_ Non-conformity to traditional gender traits A. Scale 0: Social Introversion B. Scale 1: Hypochondriasis C. Scale 2: Depression D. Scale 3: Hysteria E. Scale 4: Psychopathic Deviate F. Scale 5: Masculinity-Femininity G. Scale 6: Paranoia H. Scale 7: Psychasthenia I. Scale 8: Schizophrenia J. Scale 9: Mania
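Finally, the conversion formulas in the Application of Psychometrics section above can be checked mechanically. A brief Python sketch that verifies the worked answers to items 5, 7, and 8 of that section (the function names are illustrative, not part of the test):

```python
def z_from_raw(x, mean, sd):
    return (x - mean) / sd              # item 6: Z = (x - M)/SD

def z_from_t(t):
    return (t - 50) / 10                # item 2: Z = (T - 50)/10

def ss_from_z(z):
    return 15 * z + 100                 # item 4: SS = 15z + 100

print(round(ss_from_z(z_from_t(64))))           # item 5: T = 64 -> SS = 121
print(round(z_from_raw(38, 40, 6), 2))          # item 7: raw 38, M = 40, SD = 6 -> z = -0.33
print(round(ss_from_z(z_from_raw(48, 40, 6))))  # item 8: raw 48 -> SS = 120
```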