Questions and Answers
In the context of test score interpretation, what nuanced distinction delineates the responsibilities of test developers versus test users when validity evidence for a specific interpretation is absent?
- Test developers must retroactively generate validity evidence for any interpretation, irrespective of its intended use, whereas test users are absolved of responsibility if the interpretation is deemed 'common'.
- Test developers and users share joint responsibility in all circumstances, necessitating collaborative generation of validity evidence before any interpretation is attempted, regardless of prior validation efforts.
- Test developers should caution against unsupported interpretations, and test users are then responsible for justifying any such interpretation with a rationale and new evidence. (correct)
- Test developers are primarily responsible for articulating potential interpretations, while test users bear the onus of justifying unsupported interpretations through supplementary data collection alone.
When contemplating the use of a test for assigning individuals to alternative treatment modalities, under what precise conditions should evidence of differential outcomes be considered mandatory rather than discretionary?
- Evidence of differential outcomes is exclusively mandatory when the test is the sole determinant in assigning individuals, irrespective of whether a common criterion exists for evaluating treatment efficacy.
- Evidence of differential outcomes is required only when the alternative treatments are mutually exclusive and designed to address fundamentally distinct psychological constructs.
- Evidence of differential outcomes is inherently discretionary, contingent solely on the availability of resources and the risk tolerance of the practitioner, rather than any stringent methodological necessity.
- Evidence of differential outcomes becomes mandatory when the treatments can be reasonably compared on a common criterion, making it feasible to support such evidence. (correct)
In the sphere of test validation, what methodological consideration becomes critically paramount when validation evidence is substantively predicated on the opinions rendered by expert judges or raters regarding the test's content or construct alignment?
- The focal point should be the exhaustive documentation of the procedures governing the selection of expert judges and the methods employed in eliciting their judgments or ratings. (correct)
- The predominant objective should be to obfuscate the specific aims of the test from the expert judges to circumvent potential Hawthorne effects in their evaluations.
- The primary concern should revolve around the minimization of inter-rater reliability, thereby necessitating statistical adjustments to account for potential biases introduced by subjective evaluations.
- Emphasis should be placed on ensuring that expert opinions are triangulated with empirical data, thereby mitigating the inherent subjectivity associated with qualitative assessments.
When a test is modified to remove barriers to the accessibility of the construct it is intended to measure, what evidentiary standard must be met to ensure the ongoing validity of score interpretations?
Under what explicit circumstances is a test user ethically and methodologically obligated to scrutinize the validity of score interpretations specifically for test takers with limited proficiency in the test's language?
What is the most critical consideration when standardized tests or procedures are modified for specific subgroups of test-takers to ensure comparability of scores between the original and modified versions?
When evaluating the reliability/precision of test scores, what critical element must be explicitly delineated alongside a rationale for its adoption, considering the nuances inherent in varying testing scenarios?
How should test developers or users address the potential for differential prediction across relevant subgroups when using criterion-related validity evidence for test score-based predictions of future performance?
In educational testing, what multifaceted approach should be implemented when individual student scores from disparate tests are juxtaposed, ensuring a holistic educational decision-making process?
When a test is designed or used for multiple purposes in educational settings, what comprehensive evidentiary standard must be satisfied?
What is the primary responsibility of test users when they contemplate altering the mode of administration or the language used in administering a test, deviating from the standardized protocol?
In what precise manner should users of tests who conduct program evaluations meticulously delineate the demographic characteristics of the population that the program is actually designed to serve?
When tests are selected for use in program evaluation or accountability settings, what conditions must be met regarding the description of intended uses and expected consequences?
When multiple test scores, or test scores and non-test information, are integrated in making a decision, what information is essential?
Prior to development and implementation of an employment or credentialing test, what specific documentation must be produced?
For test security, what should the documentation entail?
When raw scores are intended to be directly interpretable, what should be described and justified?
When norms are used to characterize groups of test takers, what should be clearly defined so that it supports the intended use or interpretation?
What elements should a test specification include?
When tests require constructed responses, what must test developers provide?
What aspects of the validity evidence should test users know?
When should one refrain from using a test?
What should test developers do to remove construct-irrelevant barriers for relevant subgroups?
When should test takers be explicitly told of accommodations?
When should supporting test documents be provided?
Study Notes
Chapter 1. Validity
- Standard 1.0: It's important to clearly state how test scores should be understood for a specific purpose, and to back up these interpretations with solid evidence.
- Standard 1.1: Test creators must clearly explain how test scores should be interpreted and applied.
- Standard 1.2: Each way you interpret test scores for a specific use needs a rationale, plus a summary of supporting evidence and theory.
- Standard 1.3: Warn users if a common interpretation lacks validity evidence or clashes with available data, and caution against unsupported interpretations.
- Standard 1.4: Users must justify unvalidated interpretations of test scores with a rationale and new evidence if necessary.
- Standard 1.5: When suggesting a specific outcome from test score interpretation, provide the reasons and supporting evidence.
- Standard 1.6: If testing is promoted for indirect benefits beyond score interpretation, explain the reasoning behind expecting those benefits.
- Standard 1.7: If practice or coaching is said to have little impact on test performance, document how test performance changes with such instruction.
- Standard 1.8: Describe the group of test takers used for validity evidence in detail, including socio-demographic and developmental traits.
- Standard 1.9: Fully describe the process for selecting experts and gathering their judgments when validation relies on expert opinions.
- Standard 1.10: When using statistical analyses for validity evidence, detail data collection conditions so users can judge relevance to their context.
- Standard 1.11: Justify test content based on how well it represents the intended population and the construct it measures.
- Standard 1.12: Provide theoretical or empirical evidence if score interpretation relies on test takers' psychological or cognitive processes.
- Standard 1.13: Provide evidence about the test's internal structure if score interpretation depends on relationships between test items or parts.
- Standard 1.14: Support any interpretation of subscores, score differences, or profiles with rationale and relevant evidence.
- Standard 1.15: When interpreting performance on specific or small subsets of items, provide rationale and supporting evidence.
- Standard 1.16: Offer a rationale for selecting additional variables when validity evidence includes empirical analyses of item responses with other data.
- Standard 1.17: Report information on the suitability and technical quality of criteria when validation depends on links between test scores and criterion variables.
- Standard 1.18: Provide information on criterion performance levels associated with specific test score levels when asserting test performance predicts criterion performance.
- Standard 1.19: Include additional relevant variables in predictor-criterion relationship analyses when using test scores with other variables to predict outcomes.
- Standard 1.20: Report uncertainty indices for effect size measures used to draw inferences beyond the sample data.
- Standard 1.21: Report both adjusted and unadjusted coefficients, the adjustment procedure used, and all relevant statistics when making statistical adjustments (a worked sketch follows this list).
- Standard 1.22: Ensure test and criterion variables are comparable to those in summarized studies when using meta-analysis for test-criterion relationship evidence.
- Standard 1.23: Clearly describe meta-analytic evidence, including methodological choices and corrections for artifacts.
- Standard 1.24: Provide evidence of differential outcomes when recommending a test for assigning people to alternative treatments with comparable outcomes.
- Standard 1.25: Investigate whether unintended consequences arise from test sensitivity to other traits or failure to represent the intended construct.
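The statistical adjustments named in Standard 1.21 (and again in Standard 2.20) are often corrections for range restriction or criterion unreliability. Below is a minimal Python sketch of one such adjustment, the Thorndike Case II range-restriction correction; all numbers are invented for illustration, and the sketch reports both the unadjusted and adjusted coefficients, as the standard requires.

```python
# Hypothetical illustration of Standard 1.21: report both the unadjusted
# and the range-restriction-adjusted validity coefficient.
import math

def correct_range_restriction(r_restricted: float,
                              sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case II: correct r to the unrestricted (applicant) group."""
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(1 - r_restricted**2 + (r_restricted * u) ** 2)

r_obs = 0.30        # unadjusted test-criterion correlation in the selected sample
sd_applicants = 10  # predictor SD in the full applicant pool (made-up)
sd_selected = 6     # predictor SD among those actually selected (made-up)

r_adj = correct_range_restriction(r_obs, sd_applicants, sd_selected)
print(f"unadjusted r = {r_obs:.2f}, adjusted r = {r_adj:.2f}")  # ~0.46
```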
Chapter 2. Reliability/Precision and Errors of Measurement
- Standard 2.0: Provide appropriate reliability/precision evidence for the interpretation supporting each intended score use.
- Standard 2.1: State the range of replications for evaluating reliability/precision and the rationale for this choice.
- Standard 2.2: Ensure reliability/precision evidence aligns with the testing procedures and the interpretations for each intended score use.
- Standard 2.3: Report estimates of relevant reliability/precision indices for each total score, subscore, or score combination.
- Standard 2.4: Provide reliability/precision data, including standard errors, for differences between individual or group scores.
- Standard 2.5: Reliability estimation procedures should align with the test structure.
- Standard 2.6: Avoid interpreting a reliability or generalizability coefficient interchangeably with indices addressing other variability types, unless measurement error definitions are equivalent.
- Standard 2.7: Provide evidence of interrater consistency and within-examinee consistency over repeated measurements when subjective judgment is involved in test scoring.
- Standard 2.8: Gather and report reliability/precision data for local scoring of constructed-response tests when adequate samples are available.
- Standard 2.9: Report reliability/precision evidence for both long and short test versions, preferably based on independent administrations with separate test taker samples.
- Standard 2.10: Provide separate reliability/precision analyses for scores under major variations in tests or test administration procedures if sample sizes are adequate.
- Standard 2.11: Provide reliability/precision estimates as soon as feasible for each relevant subgroup the test is recommended for.
- Standard 2.12: Provide reliability/precision data for each age or grade-level subgroup if a test is used across several grades or ages and has separate norms.
- Standard 2.13: Provide the standard error of measurement, both overall and conditional, in units of each reported score.
- Standard 2.14: Report conditional standard errors of measurement at multiple score levels unless the standard error is consistent across score levels (see the sketch after this list).
- Standard 2.15: Investigate and report the extent and impact of differences if conditional standard errors of measurement or test information functions vary substantially across subgroups.
- Standard 2.16: Estimate the percentage of consistently classified test takers across two procedure replications when using a test for classification decisions.
- Standard 2.17: Ensure tested groups are representative of a larger population when average test scores are the interpretive focus.
- Standard 2.18: Subsets of items can be assigned randomly to different subsamples of examinees when measuring group instead of individual performance.
- Standard 2.19: Clearly describe and express each method of quantifying score reliability/precision in appropriate statistical terms.
- Standard 2.20: Report both adjusted and unadjusted coefficients and the adjustment procedure if reliability coefficients are adjusted for range restriction or variability.
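To make Standards 2.13–2.14 concrete: the overall standard error of measurement (SEM) follows from a scale's SD and reliability and can band an observed score. A minimal sketch with invented values follows; a single overall SEM is adequate only when error is roughly constant across score levels, which is exactly what Standard 2.14 asks developers to verify.

```python
# Minimal sketch of the overall standard error of measurement (Standard 2.13).
import math

def sem(sd: float, reliability: float) -> float:
    """Overall standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

sd, rel = 15.0, 0.91   # scale SD and reliability estimate (made-up values)
observed = 108         # one examinee's reported score

e = sem(sd, rel)       # = 4.5 score points
lo, hi = observed - 1.96 * e, observed + 1.96 * e
print(f"SEM = {e:.1f}; approximate 95% band: [{lo:.1f}, {hi:.1f}]")
```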
Chapter 3. Fairness in Testing
- Standard 3.0: Design all testing steps to minimize construct-irrelevant variance and promote valid score interpretations for all examinees.
- Standard 3.1: Those responsible for test development, revision, and administration should design all steps of the testing process to promote valid score interpretations for intended score uses for the widest possible range of individuals and relevant subgroups in the intended population.
- Standard 3.2: Test developers are responsible for developing tests that measure the intended construct and for minimizing the potential for scores to be affected by construct-irrelevant characteristics.
- Standard 3.3: Include relevant subgroups in validity, reliability/precision, and other preliminary test construction studies.
- Standard 3.4: Treat all test takers comparably during test administration and scoring.
- Standard 3.5: Specify and document test administration and scoring provisions to remove construct-irrelevant barriers for all test-taker subgroups.
- Standard 3.6: Examine validity evidence for score interpretations for intended uses for individuals from relevant subgroups where credible evidence indicates differing score meanings.
- Standard 3.7: Evaluate the possibility of differential prediction for relevant subgroups when using criterion-related validity evidence for test score predictions (illustrated after this list).
- Standard 3.8: Collect and report evidence of score interpretation validity for relevant subgroups when tests require scoring constructed responses.
- Standard 3.9: Develop and provide test accommodations to remove construct-irrelevant barriers to examinees' ability to demonstrate standing on target constructs.
- Standard 3.10: Document standard provisions for using test accommodations and monitor accommodation implementation.
- Standard 3.11: Obtain and document validity evidence for intended uses of changed test scores when the test is altered to remove accessibility barriers.
- Standard 3.12: Describe methods used to establish adequacy of adaptation and document empirical or logical evidence for validity of test score interpretations when translating a test.
- Standard 3.13: Administer tests in the language most relevant and appropriate for the test's purpose.
- Standard 3.14: When testing requires an interpreter, the interpreter should follow standardized procedures and be fluent in both the language and content of the test and the examinee's native language and culture.
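The differential-prediction check in Standard 3.7 can be pictured as fitting the test-criterion regression separately within each subgroup and comparing slopes and intercepts. The sketch below uses synthetic data generated from one common model, so the fitted lines should agree; materially different lines in real data would flag differential prediction, and an operational analysis would add formal tests of slope and intercept differences.

```python
# Synthetic illustration of a differential-prediction check (Standard 3.7).
import numpy as np

rng = np.random.default_rng(0)

for group in ("A", "B"):
    scores = rng.normal(50, 10, size=200)                   # predictor (test) scores
    criterion = 0.6 * scores + rng.normal(0, 8, size=200)   # same true model for both groups
    slope, intercept = np.polyfit(scores, criterion, deg=1)
    print(f"group {group}: criterion ~ {slope:.2f} * score {intercept:+.2f}")

# Similar slopes/intercepts across groups (as here, by construction) support a
# common regression line; clear differences would signal differential prediction.
```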
Chapter 4. Design and Development
- Standard 4.0: Design tests and testing programs to support the validity of score interpretations for intended uses.
- Standard 4.1: Test specifications should describe the purpose(s) of the test, the definition of the construct or domain measured, the intended examinee population, and interpretations for intended uses.
- Standard 4.2: Define the test content, length, item formats, desired psychometric properties, and item/section order in addition to describing intended uses.
- Standard 4.3: Document the administration, scoring, and reporting rules used in computer-adaptive, multistage-adaptive, or other algorithm-driven tests.
- Standard 4.4: Document content and psychometric specifications when creating different test versions with specification changes.
- Standard 4.5: Identify permissible variations in administration conditions if conditions are allowed to vary across test takers or groups.
- Standard 4.6: Have relevant experts review test specifications to evaluate their appropriateness for intended score uses and fairness for intended test takers.
- Standard 4.7: Document the procedures used to develop, review, try out, and select items for the item pool.
- Standard 4.8: The test review process should include empirical analyses and/or the use of expert judges to review items and scoring criteria.
- Standard 4.9: Document the procedures used to select the sample(s) of test takers, as well as the resulting characteristics of the sample(s), when item or test form tryouts are conducted.
- Standard 4.10: Document the model used when a test developer evaluates the psychometric properties of items (a classical item-analysis sketch follows this list).
- Standard 4.11: Conduct cross-validation studies when selecting items or tests primarily based on empirical relationships rather than content or theoretical factors.
- Standard 4.12: Document the extent to which a test's content domain represents the domain defined in the test specifications.
- Standard 4.13: Investigate sources of irrelevant variance when credible evidence suggests it could affect test scores.
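In the spirit of Standards 4.8 and 4.10, here is a small, hypothetical sketch of classical item statistics a developer might document from a tryout: item difficulty (proportion correct) and corrected item-total discrimination. The response matrix is invented; operational programs would use far larger samples and often an IRT model instead.

```python
# Hypothetical classical item analysis: difficulty and discrimination.
import numpy as np

responses = np.array([  # rows = examinees, cols = items, 1 = correct
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

difficulty = responses.mean(axis=0)   # p-value (proportion correct) per item
total = responses.sum(axis=1)         # each examinee's total score

# Corrected item-total correlation: correlate each item with the total
# score of the remaining items, so the item doesn't correlate with itself.
discrimination = []
for j in range(responses.shape[1]):
    rest = total - responses[:, j]
    discrimination.append(np.corrcoef(responses[:, j], rest)[0, 1])

for j, (p, rpb) in enumerate(zip(difficulty, discrimination)):
    print(f"item {j + 1}: difficulty p = {p:.2f}, discrimination r = {rpb:+.2f}")
```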
Chapter 5. Scores, Scales, Norms, Score Linking, and Cut Scores
- Standard 5.0: Derive test scores to support interpretations for the proposed test uses.
- Standard 5.1: Provide clear explanations of scale score characteristics, meaning, intended interpretation, and limitations.
- Standard 5.2: Describe the scale construction procedures and their rationale clearly.
- Standard 5.3: Explicitly caution test users if specific score scale misinterpretations are likely.
- Standard 5.4: Describe and justify raw score meanings, intended interpretations, and limitations as you would for scale scores, when raw scores are intended to be directly interpretable.
- Standard 5.5: Explain the rationale for recommended score interpretations clearly when raw or scale scores are designed for criterion-referenced interpretation.
- Standard 5.6: Conduct periodic checks on the stability of the scale on which scores are reported for testing programs maintaining a common scale over time.
- Standard 5.7: Provide evidence of score comparability on changed versions with scores on the original versions when standardized tests or procedures are changed for subgroups.
- Standard 5.8: Norms, if used, should be tied to clearly described populations.
- Standard 5.9: Reports of norming studies should include specification of the population sampled, sampling procedures and participation rates, sample weighting, the dates of testing, and descriptive statistics.
- Standard 5.10: Define the statistics used to summarize examinee groups and the norms to which these statistics are referred; both should support the intended use or interpretation.
- Standard 5.11: Renorm tests with sufficient frequency to permit continued accurate and appropriate score interpretations; this remains the test publisher's responsibility for as long as the test remains in print.
- Standard 5.12: Provide a clear rationale and supporting evidence for using scale scores earned on alternate test forms interchangeably (an equating sketch follows this list).
- Standard 5.13: Provide detailed technical information on the method used and accuracy of equating functions when form-to-form score equivalence claims are based on equating procedures.
- Standard 5.14: Describe methods of establishing such equivalence when equating studies rely on the statistical equivalence of examinee groups receiving different forms.
- Standard 5.15: Present anchor test characteristics and similarity to the forms being equated, including content specifications and empirical relationships among test scores when the equating studies use an anchor test design.
- Standard 5.16: When test scores are based on model-based psychometric procedures, documentation should indicate that the scores have comparable meaning over alternate sets of test items.
- Standard 5.17: Provide direct evidence of score comparability and specify the examinee population when linking scores on tests that cannot be equated.
- Standard 5.18: Describe the construction, intended interpretation, and limitations of linking scores when linking procedures are used to relate scores on tests or test forms that are not closely parallel.
- Standard 5.19: Provide evidence to show no distortions of scale scores, cut scores, or norms for different versions or score linkings between them when creating tests by taking a subset or rearranging items.
- Standard 5.20: Identify test specification changes from one version to the next, and indicate that converted scores for the two versions may not be strictly equivalent even when statistical procedures have been used.
- Standard 5.21: Document the rationale and procedures used to establish cut scores clearly when proposed score interpretations involve one or more cut scores.
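One simple way to ground Standards 5.12–5.14: under a randomly-equivalent-groups design, linear (mean-sigma) equating places Form X scores on the Form Y scale by matching means and standard deviations. The moments below are hypothetical, and an operational program would also document the accuracy of the equating function, per Standard 5.13.

```python
# Minimal linear (mean-sigma) equating sketch with made-up moments.
def linear_equate(x: float, mean_x: float, sd_x: float,
                  mean_y: float, sd_y: float) -> float:
    """Map a Form X raw score onto the Form Y scale by matching means and SDs."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

mean_x, sd_x = 31.0, 6.0   # Form X moments in its equating sample (hypothetical)
mean_y, sd_y = 29.0, 5.5   # Form Y moments in its equating sample (hypothetical)

for raw in (20, 31, 40):
    eq = linear_equate(raw, mean_x, sd_x, mean_y, sd_y)
    print(f"Form X {raw} -> Form Y {eq:.1f}")
```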
Chapter 6. Test Administration, Scoring, Reporting, and Interpretation
- Standard 6.0: Assessment instruments should have established procedures for test administration, scoring, reporting, and interpretation to support useful score interpretations.
- Standard 6.1: Test administrators should follow the standardized administration and scoring procedures specified by the test developer.
- Standard 6.2: Test takers should be informed in advance of established procedures for requesting and receiving accommodations.
- Standard 6.3: Document and report changes or disruptions to standardized test administration procedures or scoring.
- Standard 6.4: The testing environment should furnish reasonable comfort with minimal distractions to avoid construct-irrelevant variance.
- Standard 6.5: Test takers should be provided appropriate instructions, practice, and support to reduce construct-irrelevant variance.
- Standard 6.6: Make reasonable efforts to ensure test score integrity by eliminating opportunities for test takers to attain scores fraudulently.
- Standard 6.7: Test users are responsible for protecting the security of test materials at all times.
Chapter 7. Supporting Documentation for Tests
- Standard 7.0: Information relating to tests should be clearly documented so that those who use tests can make informed decisions regarding which test to use for a specific purpose, how to administer the chosen test, and how to interpret test scores.
- Standard 7.1: Document test rationale, recommended uses, support for such uses, and score interpretation assistance.
- Standard 7.2: Document the intended test population and specifications.
- Standard 7.3: Test documents should cite a representative set of the available studies pertaining to the test's uses.
- Standard 7.4: Test documentation should summarize test development procedures, the results of statistical analyses that were used in the development of the test, evidence of the reliability/precision of scores and the validity of their recommended interpretations, and the methods for establishing performance cut scores.
- Standard 7.5: Record the relevant characteristics of individuals or groups who participated in data collection associated with test development or validation.
- Standard 7.6: When a test is available in more than one language, the test documentation should provide information on the procedures that were employed to translate and adapt the test.
- Standard 7.7: Test documents should specify user qualifications needed to administer, score, and accurately interpret a test.
- Standard 7.8: Test documentation should include detailed instructions on how a test is to be administered and scored.
- Standard 7.9: Explain the steps for protecting test materials and preventing information exchange during administration sessions if test security is critical.
- Standard 7.10: Accompany tests designed for test takers to score and interpret with scoring instructions and interpretive materials in understandable language.
- Standard 7.11: Interpretive materials for tests should provide examples illustrating prospective test takers' diversity.
- Standard 7.12: When test scores are used to make predictions about future behavior, the evidence supporting those predictions should be provided to the test user.
- Standard 7.13: Make supporting documents available to the appropriate people in a timely manner.
- Standard 7.14: Test documentation should be amended, supplemented, or revised when substantial changes are made to the test.
Chapter 8. The Rights and Responsibilities of Test Takers
- Standard 8.0: Test takers should have adequate information to help them properly prepare for a test, so that scores support accurate interpretations.
- Standard 8.1: Information about test content and purposes that is available to any test taker before testing should be available to all test takers.
- Standard 8.2: Provide test takers with enough information about the test, testing process, intended use, scoring criteria, policy, accommodations, and confidentiality, consistent with getting valid responses.
- Standard 8.3: Supply information about the characteristics of each format when the test taker is offered a choice of test format.
- Standard 8.4: Obtain informed consent before testing begins, except (a) when testing without consent is mandated by law or governmental regulation, (b) when testing is conducted as a regular part of school activities, or (c) when consent is clearly implied, such as in employment settings.
- Standard 8.5: Policies for the release of test scores with identifying information should be carefully considered and clearly communicated to those who have access to the scores.
- Standard 8.6: Protect test data maintained or transmitted in data files from improper access, use, or disclosure with physical, technical, and administrative protections.
- Standard 8.7: Choose labels that reflect the intended inferences and describe them precisely when score reports assign test takers to categories.
- Standard 8.8: Test takers should have timely access to a report of test scores when making decisions or recommendations, unless waived or prohibited by law.
- Standard 8.9: Inform test takers that cheating is unacceptable and may result in sanctions.
- Standard 8.10: Notify a test taker if their score report is expected to be delayed due to irregularities.
- Standard 8.11: The type of evidence and general procedures used to investigate irregularities should be explained to all test takers.
- Standard 8.12: Test takers are entitled to fair treatment and a dispute resolution process regarding charges of testing irregularities.
Chapter 9. The Rights and Responsibilities of Test Users
- Standard 9.0: Test users are responsible for knowing the validity evidence that supports the intended interpretations of test scores and the consequences of common uses.
- Standard 9.1: Responsibility for test use should be assumed by, or delegated only to, individuals qualified through appropriate training, certification, or experience.
- Standard 9.2: Prior to adopting or using a published test, study and evaluate the materials provided by the test developer.
- Standard 9.3: Have a clear rationale for intended test uses in terms of the validity of interpretations based on the scores.
- Standard 9.4: When a test is used for a purpose with little or no supporting validity evidence, the user is responsible for documenting the rationale for selecting the test and gathering evidence of validity.
- Standard 9.5: Test users should be alert to the possibility of scoring errors.
- Standard 9.6: Test users should be alert to potential misinterpretations of test scores and take steps to minimize them.
- Standard 9.7: Verify that score interpretations remain appropriate when significant changes occur in the population of test takers, the mode of administration, or the purposes of testing.
- Standard 9.8: When test results are released, those responsible should provide supplemental information to minimize possible misinterpretations of the data.
- Standard 9.9: Have a sound rationale, and evidence where possible, that validity is not compromised when altering the test format, mode of administration, instructions, or language.
- Standard 9.10: Test users should not rely on computer-generated interpretations of test results unless they have the expertise to evaluate their appropriateness in individual cases.
- Standard 9.11: Investigate the validity of score interpretations for test takers with limited proficiency in the language of the test.
- Standard 9.12: Criteria used to describe the status of examinees should be clearly defined and strictly adhered to.
- Standard 9.13: A test taker's score should not be interpreted in isolation; other relevant information should be taken into account.
- Standard 9.14: Inform test takers about available accommodations and provide appropriate accommodations when needed.
- Standard 9.15: Those with legitimate interests in an assessment should be informed about the purposes of testing, how tests will be administered, and how responses will be scored and used.
- Standard 9.16: Provide test takers with a timely report of test results and be prepared to explain the information.
- Standard 9.17: Inform test takers of the process and their rights if the integrity of their scores is questioned.
- Standard 9.18: Explain to test takers their opportunities, if any, to retake a test, and indicate whether earlier as well as later scores will be reported to those entitled to receive them.
- Standard 9.19: Test users should protect the privacy of examinees and institutions.
- Standard 9.20: Establish a policy for releasing test score information to the public and apply it consistently over time.
- Standard 9.21: Protect the security of tests, including copyrights.
- Standard 9.22: Notify test takers of policies for electronic test administration.
Chapter 10. Psychological Testing and Assessment
- Standard 10.1: Users should confine their testing and assessment activities to their areas of training and experience.
- Standard 10.2: Those who select tests and interpret scores should be prepared to articulate a logical analysis that supports all facets of the assessment and the inferences made.
- Standard 10.3: Verify through supervision that those administering tests have the knowledge and skills required for proper administration and scoring.
- Standard 10.4: When tests are combined into batteries, the combination should fit the purposes of the assessment.
- Standard 10.5: Tests should be suitable for the characteristics and background of the test taker.
- Standard 10.6: When differential diagnosis is needed, professionals should choose tests whose scores credibly distinguish between the relevant diagnostic groups.
- Standard 10.7: Provide test takers with appropriate introductory information before testing.
- Standard 10.8: Ensure that equipment and settings are accurately calibrated and that administration follows the documented instructions.
- Standard 10.9: Technology-based administration should fit the test taker's capabilities so that technology does not introduce construct-irrelevant barriers.
Chapter 11: Workplace Testing and Credentialing
- Standard 11.1: Prior to developing and implementing an employment or credentialing test, produce a clear statement of the intended interpretations of test scores for the specified uses.
- Standard 11.2: Validity evidence based on test content requires a thorough and explicit definition of the content domain.
- Standard 11.3: When test content is the primary source of validity evidence, demonstrate a close link between test content and the requirements of the occupation.
- Standard 11.4: Inferences of validity for a new situation should be justified by the validity evidence on which they rest.
- Standard 11.5: Local validation studies, where conducted, should support the interpretation of the test-criterion relationship.
- Standard 11.6: The decision to conduct a local validation study should depend on its technical feasibility (e.g., adequate sample size and suitable criterion measures).
- Standard 11.7: Criterion measures used as evidence should reflect work behavior or outcomes relevant to the organization or construct.
- Standard 11.8: Identify and account for artifacts (e.g., range restriction, criterion unreliability) that may have influenced validation findings.
- Standard 11.9: When relying on a previously conducted validation study, verify that it remains the most current and applicable evidence.
- Standard 11.10: When scores are linked to expected job performance, provide evidence for the linkage (e.g., the likelihood of acceptable performance at given score levels).
- Standard 11.11: Validity evidence transported from another setting should come from situations substantially the same as the current one.
- Standard 11.12: When score interpretation relies on a test-criterion relationship, clearly describe both the test construct and the criterion construct.
- Standard 11.13: The content domain of a credentialing test should be clearly defined and justified in terms of its importance for competent practice in the occupation.
- Standard 11.14: Provide estimates of the reliability/consistency of credentialing decisions.
Chapter 12. Educational Testing and Assessment
- Standard 12.1: When educational tests are mandated by school, state, or other authorities, clearly describe how the results are intended to be used.
- Standard 12.2: Provide evidence of validity, reliability/precision, and fairness for each intended use.
- Standard 12.3: Those responsible for educational testing programs should take steps to provide appropriate and equitable testing experiences for all test takers.
- Standard 12.4: Provide evidence that the test samples the knowledge and processes of the target domain.
- Standard 12.5: Develop local norms when they are needed to support intended interpretations.
- Standard 12.6: Algorithms used to score responses should be evaluated and documented.
- Standard 12.7: Potential adverse consequences of test use and scoring for students should be examined.
- Standard 12.8: When results contribute to promotion or graduation decisions, students should have had an adequate opportunity to learn the content being tested.
Chapter 13. Uses of Tests for Program Evaluation, Policy Studies, and Accountability
- Standard 13.1: Clearly describe the population the program is designed to serve and whether the tested sample represents it.
- Standard 13.2: Report the limitations of the scores used.
- Standard 13.3: Describe and justify the methods used to aggregate and analyze scores.
- Standard 13.4: Make evidence of the validity and reliability/precision of the scores available.
- Standard 13.5: Take steps to promote accurate interpretations of results and minimize misinterpretation.
- Standard 13.6: Describe the performance and composition of the groups actually tested.
- Standard 13.7: Evaluate the consequences of test use in the evaluation or accountability system.
- Standard 13.8: Identify and monitor potential positive and negative consequences of test use.
- Standard 13.9: Interpret test results in conjunction with other relevant information.