Validity and Reliability PDF
Uploaded by CompactMagnesium
Tarlac National High School
Summary
This document explains the concepts of validity and reliability in measurement. It covers different types of validity, such as content validity, face validity, construct validity, and criterion-related validity, including concurrent and predictive validity. The document also discusses reliability, including test-retest, split-half, and inter-rater reliability.
Full Transcript
VALIDITY AND RELIABILITY

Validity denotes the extent to which an instrument measures what it is supposed to measure.

Content Validity (content-related)
Whether the individual items of a test represent what you actually want to assess.

Face Validity (content-related)
Also known as logical validity; involves an analysis of whether the instrument uses a valid scale. The researcher determines face validity by inspecting the features of the instrument, such as the size of the font or typeface, spacing, size of the paper used, and other details that should not distract the respondents while they answer the questionnaire.

Construct Validity
The extent to which a test measures the theoretical construct or concept it is intended to measure.

Criterion-Related Validity
A method for assessing the validity of an instrument by comparing its scores with another criterion already known to measure the same trait or skill.

Concurrent Validity (criterion-related)
The extent to which the results of a particular test or measurement align with those of an established test administered at the same time.

Predictive Validity (criterion-related)
The extent to which a procedure allows accurate predictions about a subject's future behavior.

Reliability
The consistency of measurements. A reliable instrument produces similar scores across various conditions and situations, including different evaluators and testing environments.

Test-Retest Reliability
Indicates that subjects tend to obtain the same score when tested at different times.
Split-Half Reliability
Sometimes referred to as internal consistency. Indicates that subjects' scores on some trials consistently match their scores on other trials.

Inter-Rater Reliability
Involves having two raters independently observe and record specified behaviors, such as hitting, crying, yelling, and getting out of the seat, during the same time period. The target is a specific behavior the observer is looking to record.

Alternate-Forms Reliability
Also known as parallel-forms reliability. Obtained by administering two equivalent tests to the same group of examinees, with items matched for difficulty on each test. The time frame between giving the two forms should be as short as possible.