Questions and Answers
What is the acceptable range for the item difficulty index?
What does a high item discrimination index indicate?
What should be done if the standardized factor loading is below the threshold?
What does an item-total/item-rest correlation indicate?
What is the meaning of a negative item discrimination index?
Study Notes
Test Development Overview
- Test development is an umbrella term encompassing all aspects of creating a test.
- Test development involves conceptualization, construction, tryout, item analysis, and revision.
Test Conceptualization
- The process involves translating preliminary ideas about a construct into a concrete test design.
- An emerging phenomenon or behavior pattern can inspire the test.
- Pilot studies are essential to evaluate items for inclusion in the final test.
Norm-Referenced Tests
- These tests compare test takers' performance to a norm group (similar age/grade).
- Scores indicate how someone performed relative to others.
Criterion-Referenced Tests
- These tests assess a test taker's performance using specific criteria.
- Cut scores often determine if a test taker met the required standard.
- Examples include licensure exams for professions, civil service exams, etc.
Test Construction
- This involves developing and evaluating a test with a specified psychological function.
- It combines writing test items, formatting, setting rules, and overall test design.
- Scaling involves assigning numbers to reflect measured attributes or traits.
- Scaling methods exist for various measurement types (e.g., expert rankings, equal-appearing intervals, absolute scaling, Likert scales, Guttman scales, etc.).
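As an illustration of scaling, a Likert-type scale can be sketched as a mapping from response labels to numbers that are then summed into a scale score. The labels and point values below are hypothetical examples, not a prescribed standard.

```python
# Illustrative Likert-type scaling: each response label is assigned an
# integer, and a respondent's scale score is the sum of the item values.
LIKERT_5 = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def likert_score(responses):
    """Sum the numeric values assigned to a list of response labels."""
    return sum(LIKERT_5[r.lower()] for r in responses)

print(likert_score(["Agree", "Strongly agree", "Neutral"]))  # 4 + 5 + 3 = 12
```

In practice, negatively worded items would be reverse-scored before summing.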
Test Construction (Continued)
- Item format includes variables such as form, structure, and the arrangement of items.
- Test construction utilizes selected-response (multiple-choice, matching, true/false) and constructed-response (essay, short answer) formats.
- Item pool is the source from which test questions are drawn.
- Item banks are large collections of test questions.
- Item branching dynamically adjusts the test based on test taker responses.
- Computer adaptive tests (CATs) tailor the test content and order based on past answers.
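The branching idea can be sketched with a toy rule that moves the test taker between difficulty levels after each response. This is a deliberately simplified illustration, not an operational CAT algorithm (real CATs typically use IRT-based item selection); the levels and rule here are made up.

```python
# Minimal sketch of item branching: after each response, the next item is
# drawn from an easier or harder pool depending on whether the last
# answer was correct. Levels and the step rule are hypothetical.
def next_item_level(current_level, last_correct, min_level=1, max_level=5):
    """Move one difficulty level up after a correct answer, down after an error."""
    if last_correct:
        return min(current_level + 1, max_level)
    return max(current_level - 1, min_level)

level = 3                                # start at medium difficulty
for correct in [True, True, False]:      # simulated response pattern
    level = next_item_level(level, correct)
print(level)  # 3 -> 4 -> 5 -> 4
```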
Test Scoring Models
- Cumulative scoring assesses the number of correct answers to reflect a construct.
- Class/category scoring classifies individuals based on responses.
- Ipsative scoring compares a test taker's performance across different sections of the same test (a within-person comparison).
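The three scoring models can be contrasted on one answer sheet. The data, cut score, and section names below are made up for illustration.

```python
# Sketch contrasting the three scoring models on the same (made-up) data.
def cumulative_score(correct_flags):
    """Cumulative scoring: total number of correct answers."""
    return sum(correct_flags)

def category_score(score, cut_score=6):
    """Class/category scoring: place the test taker in a group via a cut score."""
    return "pass" if score >= cut_score else "fail"

def ipsative_profile(section_scores):
    """Ipsative scoring: rank sections within one person, strongest first."""
    return sorted(section_scores, key=section_scores.get, reverse=True)

flags = [1, 1, 0, 1, 1, 1, 0, 1]
total = cumulative_score(flags)              # 6 correct answers
print(total, category_score(total))          # 6 pass
print(ipsative_profile({"verbal": 6, "numerical": 4, "spatial": 5}))
```

Note that the ipsative profile says nothing about how the person compares to others, only which sections are relatively stronger for that individual.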
Writing Test Items
- Clearly define the measured concept.
- Create a diverse question pool.
- Avoid excessively long items.
- Maintain an appropriate difficulty level for the intended test takers.
- Mix positively and negatively worded items.
Test Tryout
- The test is administered to a representative group similar to the intended target audience.
- A minimum of 20 participants per item is preferred.
- A good test helps discriminate between test takers well.
Item Analysis
- Item analysis evaluates both the reliability and the validity of individual items.
- Item Reliability: measures the internal consistency of an item; a strong correlation between the item and the total test score is desired.
- Item Validity: determines whether the item measures what it is intended to measure. A standardized factor loading is commonly used; an indicator loading of at least .50 is often required.
- Factor Analysis: constructs are not directly observable; test takers' answers serve as indicators from which constructs are measured.
- Item Difficulty: the proportion of respondents answering an item correctly; acceptable indices typically range from 0.30 to 0.70.
Item Discrimination
- Measures how effectively an item differentiates high from low scorers.
- A high item discrimination index means high scorers tend to answer the item correctly while low scorers do not.
- A value above .30 is usually preferred.
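A classical discrimination index is the difference in proportion correct between the high-scoring and low-scoring groups. In the sketch below the two groups are supplied directly with made-up item results; in a real analysis the groups are often formed from the top and bottom scorers (commonly the top and bottom 27%).

```python
# Sketch of a classical discrimination index: p(upper) - p(lower).
def discrimination_index(upper_correct, lower_correct):
    """Difference in proportion correct between high and low scorers on one item."""
    p_upper = sum(upper_correct) / len(upper_correct)
    p_lower = sum(lower_correct) / len(lower_correct)
    return p_upper - p_lower

upper = [1, 1, 1, 0, 1]   # item results for the top-scoring group (made up)
lower = [0, 1, 0, 0, 0]   # item results for the bottom-scoring group (made up)
print(round(discrimination_index(upper, lower), 2))  # 0.8 - 0.2 = 0.6
```

A negative value would mean low scorers outperform high scorers on the item, which usually signals a flawed or mis-keyed item.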
Test Revision
- A stage in test development in which an existing test is adapted or a new edition is created.
- This process can apply to newly developed and existing tests.
- Cross-validation (also called rotation estimation or out-of-sample testing) revalidates a test on a new sample of test takers.
- Co-validation validates two or more tests on the same sample group.
Description
Explore the intricacies of test development, including the crucial steps of conceptualization, construction, and evaluation. Understand the differences between norm-referenced and criterion-referenced tests, as well as the significance of pilot studies in refining test items. This quiz aims to enhance your knowledge of psychological assessment methods.