Psychological Assessment (1-10, Aly)

Questions and Answers

What term describes the science of psychological measurement?

  • Psychometrics (correct)
  • Psychotherapy
  • Psychoanalysis
  • Psychomotor

Which component of a psychological test involves the specific stimulus to which a person responds overtly?

  • Format
  • Item (correct)
  • Score
  • Administration Procedure

What is the purpose of a 'Cut-Score' in psychological testing?

  • To divide a set of data based on judgment (correct)
  • To arrange the layout of a test
  • To summarize the test statements
  • To score individual performances

What term is used for the summary statement that reflects an evaluation of performance on a test?

  • Score (correct)

Assigning scores to performances is referred to as what?

  • Scoring (correct)

Which term refers to a professional who uses, analyzes, and interprets psychological data?

  • Psychometrist (correct)

What is actuarial assessment characterized by?

  • Application of empirically demonstrated statistical rules (correct)

Which level of interpretation is primarily concerned with minimal interpretation and lacks concern with underlying constructs?

  • Level I (correct)

Which interpretation level involves developing a coherent and inclusive theory of the individual’s life?

  • Level III (correct)

What does mechanical prediction in data interpretation use?

  • Algorithmic and statistical methods (correct)

What's a common objective of a test battery?

  • To measure different variables but with a common objective (correct)

Which role in psychological assessment involves creating the tests or other methods of assessment?

  • Test Author/Developer (correct)

What does the term 'overt behavior' refer to in psychological assessment?

  • An observable action or the product of an observable action (correct)

Which party is primarily responsible for marketing and selling psychological tests?

  • Test Publishers (correct)

What is the key focus of Level II interpretation in psychological assessment?

  • Descriptive generalizations and hypothetical constructs (correct)

In psychological assessment, which type of prediction involves using computer algorithms?

  • Mechanical prediction (correct)

Which type of test primarily relies on predictive validity?

  • Aptitude Test (correct)

What kind of performance do typical performance tests measure?

  • Usual or habitual performance (correct)

Which type of personality test requires choosing between two or more alternative responses?

  • Structured Personality Tests (correct)

What is the primary characteristic of projective personality tests?

  • Unstructured with ambiguous stimuli or responses (correct)

What does the mental status examination primarily determine?

  • Mental status of the patient (correct)

What can be inferred about the reliability of a test in different contexts?

  • Reliability of a test can vary depending on the context in which it is used (correct)

Which type of interview allows a client to express feelings without fear of disapproval?

  • Non-Directive (correct)

What does the 'error' component in Classical Test Theory refer to?

  • The part of the observed test score unrelated to the testtaker's ability (correct)
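
In Classical Test Theory notation (a standard rendering, not a formula quoted from this lesson), the observed score is the sum of a true component and an error component:

$$X = T + E$$

where $X$ is the observed score, $T$ the true score, and $E$ the error.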

Which component of psychological assessment relies mainly on content validity?

  • Achievement Test (correct)

What component is essential in the Cumulative Scoring method besides appropriate test items?

  • Appropriate scoring and result interpretation methods (correct)

Which type of test measures individual dispositions and preferences?

  • Personality Test (correct)

Which test's primary purpose is to measure the speed at which the test taker can complete it correctly?

  • Speed Test (correct)

What does the reliability coefficient represent in psychological testing?

  • The ratio between true score variance and total variance (correct)
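
Written out in the usual Classical Test Theory symbols (shown here for reference), the reliability coefficient is:

$$r_{xx} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}$$

where $\sigma^2_T$ is true-score variance, $\sigma^2_E$ error variance, and $\sigma^2_X$ the total observed-score variance.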

Which assumption states that tasks in tests mimic the actual behaviors they attempt to understand?

  • Assumption 3 (correct)

Which kind of interview typically involves more than one interviewer?

  • Panel Interview (correct)

Which factor is NOT a potential source of error variance identified in the content?

  • True scores (correct)

What is error variance defined as in the content?

  • The variance in test scores due to factors other than the trait or ability measured (correct)

Which assumption highlights that various measurement techniques have both strengths and weaknesses?

  • Assumption 4 (correct)

Which of the following is true about testing and assessment as per Assumption 7?

  • They can lead to critical decision-making that benefits society (correct)

In which theory is the true score considered an unobtainable ideal?

  • Classical Test Theory (correct)

What does Fleiss Kappa measure?

  • Agreement between two or more raters on a categorical scale (correct)

How does restriction of range affect the correlation coefficient?

  • Decreases the correlation coefficient (correct)
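
The effect is easy to demonstrate with simulated data. The sketch below uses NumPy with invented scores, so the exact values are illustrative only; the point is that selecting a narrow slice of test scores shrinks the observed correlation with the criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a test score and a correlated criterion for 10,000 people (made-up data)
n = 10_000
test = rng.normal(size=n)
criterion = 0.6 * test + 0.8 * rng.normal(size=n)

full_r = np.corrcoef(test, criterion)[0, 1]

# Restrict the range: keep only people scoring above the median on the test
selected = test > np.median(test)
restricted_r = np.corrcoef(test[selected], criterion[selected])[0, 1]

print(f"Full-range correlation:       {full_r:.2f}")
print(f"Restricted-range correlation: {restricted_r:.2f}")  # noticeably smaller
```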

What is the main goal of reliability in psychological assessment?

  • To estimate errors and devise techniques to reduce them (correct)

Which type of error is caused by unpredictable fluctuations and inconsistencies in the measurement process?

  • Random Error (correct)

Which theory emphasizes the problem of using a limited number of items to represent a larger, more complicated construct?

  • Domain Sampling Theory (correct)

According to the true score formula, what does $r_{xx}$ represent?

  • The correlation coefficient (correct)
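
For context, the estimated true score transformation (Kelley's formula) uses the reliability coefficient to pull an observed score toward the group mean; a standard rendering is:

$$T' = \bar{X} + r_{xx}\,(X - \bar{X})$$

where $T'$ is the estimated true score, $X$ the observed score, $\bar{X}$ the group mean, and $r_{xx}$ the reliability coefficient.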

According to Generalizability Theory, what should happen if all facets in the universe are identical during testing?

  • The exact same test score should be obtained (correct)

Which measure is specifically designed to evaluate inter-rater reliability with only two raters?

  • Cohen's Kappa (correct)
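
Cohen's kappa corrects the observed proportion of agreement for the agreement expected by chance; the standard formula (included for reference, not quoted from the lesson) is:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

where $p_o$ is the observed proportion of agreement between the two raters and $p_e$ the proportion expected by chance.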

What is a major source of error variance related to the environment during test administration?

  • Testtaker's motivation or attention (correct)

What is a key characteristic of a dynamic trait when measuring internal consistency?

  • Presumed to be ever-changing as a function of situational and cognitive experiences (correct)

What happens to the test score when the test-retest interval is short and the second test is influenced by the first?

  • The correlation is inflated due to carryover effects (correct)

What is the best description of a power test?

  • A test with a long enough time limit to allow test takers to attempt all items (correct)

What is the effect of measurement error that consistently affects the test score in the same direction?

  • Systematic Error (correct)

What increases the reliability of a test?

  • Increasing the number of items (correct)
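
The Spearman-Brown prophecy formula quantifies this relationship: lengthening a test by a factor of $n$ with comparable items raises the predicted reliability as shown below (standard formula, given as an illustration).

$$r_{new} = \frac{n\,r_{xx}}{1 + (n - 1)\,r_{xx}}$$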

What is the primary focus of Item Response Theory?

  • Item difficulty (correct)

Which term describes the degree to which an item differentiates among individuals with varying levels of trait or ability?

  • Discrimination (correct)

Which type of reliability is obtained from correlating pairs of scores from the same people on two different administrations of the test?

  • Test-Retest Reliability (correct)

What element of variance is described as the difference between the observed score and the true score?

  • Measurement Error (correct)

Which type of error is associated with consistent bias in scores due to a particular factor?

  • Systematic error (correct)

Which statement about error variance is correct?

  • Error variance can either increase or decrease the test score by varying amounts (correct)

Which statistical tools are used to assess the Coefficient of Stability?

  • Pearson R, Spearman Rho (correct)
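
Both coefficients are available in SciPy. The sketch below computes a coefficient of stability from two hypothetical administrations of the same test; the scores are invented purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical scores of 8 test takers on two administrations of the same test
first_admin = np.array([12, 18, 25, 30, 22, 15, 28, 20])
second_admin = np.array([14, 17, 27, 29, 24, 16, 26, 21])

pearson_r, _ = stats.pearsonr(first_admin, second_admin)
spearman_rho, _ = stats.spearmanr(first_admin, second_admin)

print(f"Coefficient of stability (Pearson r):    {pearson_r:.2f}")
print(f"Coefficient of stability (Spearman rho): {spearman_rho:.2f}")
```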

What is the main error associated with Split-Half Reliability?

  • Item sample: nature of split (correct)

Which measure is used for inter-item consistency of dichotomous items with unequal variances?

  • KR-20 (correct)
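
For dichotomously scored items, the Kuder-Richardson 20 coefficient is usually written as follows (standard form, shown for reference):

$$KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma^2_X}\right)$$

where $k$ is the number of items, $p_i$ the proportion of test takers passing item $i$, $q_i = 1 - p_i$, and $\sigma^2_X$ the variance of total test scores.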

Which term describes a test that measures a single trait?

  • Homogeneity (correct)

What is the purpose of counterbalancing in test administration?

  • To avoid carryover effects for parallel forms (correct)

Which type of reliability is established when at least two different versions of the test yield almost the same scores?

  • Parallel Forms/Alternate Forms Reliability (correct)

What is the error associated with Internal Consistency (Inter-Item Reliability)?

  • Item sampling homogeneity (correct)

Which error should be corrected by just removing the first test of absentees?

  • Mortality (correct)

Which measure evaluates internal consistencies by focusing on the degree of differences between item scores?

  • Average Proportional Distance (correct)

What should be equal for two forms in Parallel Forms Reliability?

  • Means and error variances (correct)

Which term describes the failure to capture important components of a construct in a test blueprint?

  • Construct underrepresentation (correct)

What happens when test scores are influenced by factors irrelevant to the construct?

  • Construct-irrelevant variance (correct)

Which statistical concept is used to calculate the Content Validity Ratio (CVR)?

  • Lawshe's formula (correct)
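
Lawshe's content validity ratio for a single item is commonly written as below, where $n_e$ is the number of panelists rating the item essential and $N$ the total number of panelists; the content validity index (CVI) is then typically the mean CVR across the retained items (standard formulation, not quoted from the lesson).

$$CVR = \frac{n_e - \frac{N}{2}}{\frac{N}{2}}$$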

If the CVI is low, what should be done to the items with low CVR values?

  • Remove or modify (correct)

Which type of validity involves assessing whether test scores can predict future performance?

  • Predictive Validity (correct)

What describes the degree to which an additional predictor explains something not explained by predictors already in use?

  • Incremental Validity (correct)

What type of validity covers all other types and is both statistical and logical?

  • Construct Validity (correct)

Which method demonstrates that test scores vary predictably as a function of group membership?

  • Method of Contrasted Groups (correct)

What is indicated by a validity coefficient showing little relationship between test scores and unrelated variables?

  • Discriminant Evidence (correct)

Which type of validity measures the relevancy, validity, and uncontaminated nature of a standard for judgment?

  • Criterion Validity (correct)

Which stage of Factor Analysis involves estimating or extracting factors and deciding how many factors must be retained?

  • Exploratory Factor Analysis (correct)

What is the term used for the revalidation of a test to a criterion based on a different group from the original group?

  • Cross-Validation (correct)

Which type of rating error involves the rater's ratings clustering in the middle of the rating scale?

  • Central Tendency Error (correct)

Which table is used in Utility Analysis to indicate the extent to which a test taker will score within some interval of scores on a criterion measure?

  • Expectancy Table (correct)

Which factor analysis approach tests the degree to which a hypothetical model fits the actual data?

  • Confirmatory Factor Analysis (correct)

What describes the term 'Factor Loading' in the context of factor analysis?

  • The extent to which a factor determines the test scores (correct)

Which approach is used to prevent bias during the development of a test?

  • Estimated True Score Transformation (correct)

What is the term for the procedure that entails comparing the cost and benefits to yield information about the usefulness of an assessment tool?

  • Utility Analysis (correct)

What error might occur due to a rater's inability to discriminate among conceptually distinct aspects of a ratee’s behavior?

  • Halo Effect (correct)

What is the relationship between higher criterion-related validity and utility in the context of psychological assessment?

  • Higher criterion-related validity increases utility (correct)

What does the Confidence Interval in psychological testing represent?

  • A range of test scores likely to contain the true score (correct)

How can the Standard Error of the Difference assist a test user?

  • By determining if a difference between scores is statistically significant (correct)

Which condition will result in a larger confidence interval?

  • Lower reliability (correct)

What is measured by test sensitivity?

  • True positives (correct)
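
In the usual decision-table terms, sensitivity is the proportion of actual positives the test flags correctly, and specificity the proportion of actual negatives it clears correctly (standard definitions, included as a reminder):

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}$$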

Which type of reliability estimate involves the nature of a test being either homogenous or heterogeneous?

  • Internal consistency reliability (correct)

What does a high selection ratio imply in psychological testing?

  • A large number of candidates are being hired (correct)

Which statement best defines face validity?

  • Test appears to measure what it claims to the test-taker (correct)

In which type of study is conceptual validity most emphasized?

  • Clinician-based evaluations (correct)

What is the primary goal of content validity?

  • To ensure the assessment instrument samples behavior representative of the construct (correct)

Which term is used to describe the proportion of the population that actually possesses the characteristic of interest?

  • Base rate (correct)

Study Notes

Psychological Assessment

  • Definition: A psychological test is a device or procedure designed to measure variables related to psychology
  • Components:
    • Content: Subject matter
    • Format: Form, plan, structure, arrangement, layout
    • Item: A specific stimulus to which a person responds overtly; this response is scored or evaluated
    • Administration Procedures: One-to-one basis or group administration
    • Score: A code or summary statement, usually but not necessarily numerical in nature, that reflects an evaluation of performance on a test
    • Scoring: The process of assigning scores to performances
    • Cut-Score: Reference point derived by judgment and used to divide a set of data into two or more classifications
    • Psychometric Soundness: Technical quality
    • Psychometrics: The science of psychological measurement
    • Psychometrist or Psychometrician: Refers to a professional who uses, analyzes, and interprets psychological data

Types of Psychological Tests

  • Ability or Maximal Performance Test: Assess what a person can do
  • Achievement Test: Measurement of previous learning, used to assess mastery
  • Aptitude Test: Refers to the potential for learning or acquiring a specific skill
  • Intelligence Test: Refers to a person's general potential to solve problems, adapt to changing environments, abstract thinking, and profit from experience
  • Typical Performance Test: Measures usual or habitual thoughts, feelings, and behavior
  • Personality Test: Measures individual dispositions and preferences
  • Structured Personality Test: Provides statements, usually self-report, and requires the subject to choose between two or more alternative responses
  • Projective Personality Test: Unstructured, and the stimuli or required responses are ambiguous
  • Attitude Test: Elicits personal beliefs and opinions
  • Interest Inventories: Measures likes and dislikes as well as one's personality orientation towards the world of work
  • Other Tests: Speed, Power, Values Inventory, Trade, Neuropsychological, Norm-Referenced, and Criterion-Referenced Tests

Psychological Assessment Methods

  • Interview: Method of gathering information through direct communication involving reciprocal exchange
  • Standardized/Structured Interview: Questions are prepared
  • Non-Standardized/Unstructured Interview: Questions are not prepared in advance, allowing ideas to be explored in depth
  • Semi-Standardized/Focused Interview: May probe further on a specific number of questions
  • Non-Directive Interview: The subject is allowed to express feelings without fear of disapproval
  • Mental Status Examination: Determines the mental status of the patient
  • Intake Interview: Determines why the client came for assessment; provides a chance to inform the client about the policies, fees, and process involved
  • Social Case: Biographical sketch of the client
  • Employment Interview: Determine whether the candidate is suitable for hiring
  • Panel Interview (Board Interview): More than one interviewer participates in the assessment
  • Motivational Interview: Used by counselors and clinicians to gather information about some problematic behavior, while simultaneously attempting to address it therapeutically
  • Portfolio: Samples of one's ability and accomplishment
  • Case History Data: Records, transcripts, and other accounts in written, pictorial, or other form that preserve archival information, official and informal accounts, and other data and items relevant to an assessee
  • Case Study: A report or illustrative account concerning a person or an event that was compiled on the basis of case history data
  • Groupthink: Result of the varied forces that drive decision-makers to reach a consensus

Data Collection and Interpretation

  • Data Collection: Gathering information through various methods
  • Data Interpretation: Analyzing and making sense of the collected data
  • Hit Rate: The proportion of cases in which a test accurately predicts success or failure
  • Profile: Narrative description, graph, table, or other representations of the extent to which a person has demonstrated certain targeted characteristics as a result of the administration or application of tools of assessment
  • Actuarial Assessment: An approach to evaluation characterized by the application of empirically demonstrated statistical rules as the determining factor in assessors' judgment and actions
  • Mechanical Prediction: Application of computer algorithms together with statistical rules and probabilities to generate findings and recommendations

Levels of Interpretation

  • Level I: Minimal concern with intervening processes; data are treated primarily as samples or correlates of behavior
  • Level II: Descriptive generalizations and hypothetical constructs; the assumption of an inner state that goes logically beyond the description of visible behavior
  • Level III: The effort to develop a coherent and inclusive theory of the individual's life or a "working image" of the patient

Parties in Psychological Assessment

  • Test Author/Developer: Creates the tests or other methods of assessment
  • Test Publishers: Publish, market, sell, and control the distribution of tests
  • Test Reviewers: Prepare evaluative critiques based on the technical and practical aspects of the tests
  • Test Users: Use tests or other methods of assessment
  • Test Takers: Those who take the tests
  • Test Sponsors: Institutions or government agencies that contract test developers for various testing services

Assumptions About Psychological Testing and Assessment

  • Assumption 1: Psychological Traits and States Exist: Psychological traits and states are assumed to exist and can be measured
  • Assumption 2: Psychological Traits and States can be Quantified and Measured: Traits and states can be measured using tests and other methods
  • Assumption 3: Test-Related Behavior Predicts Non-Test-Related Behavior: The tasks in some tests mimic the actual behaviors that the test user is attempting to understand
  • Assumption 4: Test and Other Measurement Techniques have Strengths and Weaknesses: Competent test users understand and appreciate the limitations of the test they use
  • Assumption 5: Various Sources of Error are Part of the Assessment Process: Error refers to factors other than what the test attempts to measure that influence test performance; it is an expected component of the measurement process
  • Assumption 6: Testing and Assessment can be Conducted in a Fair and Unbiased Manner: Tests are tools and can be used properly or improperly
  • Assumption 7: Testing and Assessment Benefit Society: Considering the many critical decisions that are based on testing and assessment procedures, we can readily appreciate the need for tests

Psychological Assessment

  • A field of study that focuses on the development and use of tools to measure and evaluate human behavior, abilities, and characteristics.

Reliability

  • Refers to the consistency of a test's measurements.
  • Mortality: Loss of test takers between sessions (handled by simply removing the first-session tests of absentees).
  • Coefficient of Stability: Statistical tool used to measure reliability, e.g., Pearson R, Spearman Rho.
  • Parallel Forms/Alternate Forms Reliability: Established when at least two different versions of the test yield almost the same scores.
  • Split-Half Reliability: Obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.
  • Internal Consistency (Inter-Item Reliability): Measures the consistency among items within the test (e.g., coefficient alpha; see the sketch after this list).
  • Error: Factors that affect reliability, e.g., Carryover Effects, Practice Effect, Item Sampling.
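
As a sketch of inter-item consistency in practice, the function below computes coefficient (Cronbach's) alpha from a people-by-items score matrix using the standard variance-based formula; the responses are invented, so treat the result as illustrative rather than a prescribed procedure.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha; rows are test takers, columns are items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_score_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)

# Hypothetical responses of 6 test takers to a 5-item scale (made-up data)
responses = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [1, 2, 1, 1, 2],
    [4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```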

Error

  • Carryover Effects: Occur when the test-retest interval is short and the second test is influenced by the first test.
  • Practice Effect: Scores in the second session are higher because of experience gained in the first session of testing.
  • Scorer Differences: The degree of agreement or consistency between two or more scorers with regard to a particular measure.

Tests

  • Power Tests: Have a time limit long enough to allow test takers to attempt all items.
  • Speed Tests: Contain items of uniform level of difficulty with a time limit.
  • Criterion-Referenced Tests: Designed to provide an indication of where a test taker stands with respect to some variable or criterion.

Theories

  • Classical Test Theory: Everyone has a "true score" on a test, which is affected by random error.
  • Domain Sampling Theory: Addresses the problem of using a limited number of items to represent a larger, more complicated construct.
  • Generalizability Theory: Based on the idea that a person's test scores vary from testing to testing because of the variables in the testing situations.

Item Response Theory

  • The probability that a person with X ability will be able to perform at a level of Y in a test.
  • Focuses on item difficulty.

Latent-Trait Theory

  • A system of assumptions about measurement and the extent to which each item measures the trait.
  • The computer is used to focus on the range of item difficulty that helps assess an individual's ability level.

Standard Error of Measurement

  • Provides an estimate of the amount of error inherent in an observed score or measurement.
  • Higher reliability, lower SEM.
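
Given a test's standard deviation and reliability, the usual computation and its use in building a confidence interval around an observed score can be summarized as (standard formulas, shown for reference):

$$SEM = \sigma_X \sqrt{1 - r_{xx}}, \qquad CI_{95\%} = X \pm 1.96 \times SEM$$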

Validity

  • A judgment or estimate of how well a test measures what it is supposed to measure.
  • Content Validity: Degree of representativeness and relevance of the assessment instrument to the construct being measured.
  • Criterion Validity: A judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest.
  • Construct Validity: Judgment about the appropriateness of inferences drawn from test scores regarding individual standing on a variable called construct.

Test Blueprint

  • A plan regarding the types of information to be covered by items, the number of items tapping each area of coverage, and so forth.

#BLEPP

  • Source: Cohen & Swerdlik (2018), Kaplan & Saccuzzo (2018), Groth & Wright (2016), Psych Pearls.

Additional Concepts

  • Factor Analysis: Designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ (see the sketch after this list).
  • Bias: Factor inherent in a test that systematically prevents accurate, impartial measurement.
  • Rating: Numerical or verbal judgment that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors.
  • Fairness: The extent to which a test is used in an impartial, just, and equitable way.
  • Utility: The practical value of testing to improve efficiency.
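
As a minimal sketch of the factor-analytic idea (assuming scikit-learn is available; the six measures and two latent factors are invented for illustration), the code below extracts two factors and prints the factor loadings:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 300 people on 6 measures driven by two latent factors (made-up structure)
n = 300
verbal = rng.normal(size=n)
spatial = rng.normal(size=n)
observed = np.column_stack([
    verbal + 0.3 * rng.normal(size=n),   # e.g., vocabulary
    verbal + 0.3 * rng.normal(size=n),   # e.g., reading comprehension
    verbal + 0.3 * rng.normal(size=n),   # e.g., verbal analogies
    spatial + 0.3 * rng.normal(size=n),  # e.g., block design
    spatial + 0.3 * rng.normal(size=n),  # e.g., mental rotation
    spatial + 0.3 * rng.normal(size=n),  # e.g., mazes
])

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(observed)

# Each row is a factor, each column a measured variable: the factor loadings
print(np.round(fa.components_, 2))
```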

Description

A quiz on psychological assessment based on Cohen & Swerdlik (2018), Kaplan & Saccuzzo (2018), and Groth & Wright (2016). Review questions to help prepare for boards.
