Readability and Validity of Instruments
Summary
This document summarizes key aspects of measurement instrument design, focusing on readability, reliability, and validity. It defines several types of reliability and validity, such as content validity, construct validity, and convergent validity, and describes methods for assessing the reliability and validity of measurement tools. It is aimed at researchers and educational professionals.
Full Transcript
Readability level: conducted to determine the participants' ability to read and understand the items on an instrument - Researchers should report the level of education needed to read the instrument - Readability must be appropriate to promote the reliability and validity of an instrument

Reliability: indicates the consistency of the measurement a method or instrument obtains of an attribute, concept, or situation in a study or clinical practice. In simpler terms: how consistent and dependable the measurement is when measuring something like a person's health or a specific situation. If a measurement is reliable, it will give you similar results every time you use it under the same conditions (for example, weighing yourself on a scale). Example of reliability: the CES-D, developed to assess depression in mental health patients. A scale that has low reliability values is considered unreliable and results in increased measurement error.

Reliability testing: examines the amount of random error in an instrument that is used in a study (reliability decreases as random error increases).

Stability reliability: concerned with the consistency of repeated measures of the same variable or attribute with the same scale or measurement method over time. In simpler terms: when you measure something using the same method or tool multiple times, you want to get similar results. Example: blood pressure. If you check your blood pressure several times with the same device, a reliable measurement will show readings that stay close to each other. It is all about making sure the measurements are STABLE and trustworthy. Adequate test-retest reliability means participants complete the scale in a similar way from one time to the next, often summarized with an intraclass correlation coefficient (ICC).

Equivalence reliability: compares two versions of the same scale or instrument measuring the same event. A Pearson r value below 0.70 should generate concern about the reliability of the data; values of 0.80-0.90 are best.

Alternate forms reliability: compares two versions of a test or scale. Example: alternate forms reliability can be used to test the reliability of multiple forms of the Graduate Record Examination.

Internal consistency: used with multi-item scales, where each item on a scale is correlated with all other items on the scale to determine consistency of measurement. Example: Cronbach's alpha coefficient (can be used to calculate the error for a scale with a specific population); see the sketch after these definitions.

Validity: the extent to which an instrument accurately reflects the concept it was developed to measure.

Content validity: examines the extent to which the measurement method includes all the major elements relevant to the concept being measured. The evidence for content validity of an instrument or scale includes: 1. How well the items of the scale reflect the description of the concept in the literature. 2. The content experts' evaluation of the relevance of the items on the scale, which might be reported as an index. 3. The study participants' responses to scale items.

Construct validity: focuses on determining whether the instrument actually measures the theoretical construct; it ensures the test truly represents the concept it stands for.

Convergent validity: comparing a newer instrument with an existing instrument that measures the same concept.

Divergent validity: scores from an existing instrument are correlated with the scores from an instrument measuring an opposite concept (a negative correlation supports validity).
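To make the reliability coefficients above more concrete, here is a minimal Python sketch. The data, the use of numpy, and the helper name cronbach_alpha are illustrative assumptions, not part of the original notes: it computes a Pearson r for a test-retest measurement and Cronbach's alpha for a multi-item scale.

```python
# Minimal sketch (hypothetical data) of two reliability coefficients
# mentioned above: Pearson r for repeated measurements and
# Cronbach's alpha for a multi-item scale.
import numpy as np

# Test-retest: the same 6 participants measured twice with the same scale.
time1 = np.array([12, 18, 9, 22, 15, 11], dtype=float)
time2 = np.array([13, 17, 10, 21, 16, 12], dtype=float)

# Pearson r; by the rule of thumb in the notes, r < 0.70 is a concern,
# and 0.80-0.90 is preferred.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest Pearson r = {r:.2f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item scale completed by 6 participants.
scores = np.array([
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [4, 5, 4, 4, 5],
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4],
], dtype=float)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

In practice these coefficients are obtained from standard statistical software; the sketch only shows what the numbers summarize.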
Validity from contrasting groups: tested by identifying groups that are expected or known to have contrasting scores on an instrument and then asking both groups to complete it. If the two groups have contrasting scores, the validity of the instrument is strengthened.

Successive verification validity: achieved when an instrument is used in several studies with a variety of study participants in various settings; older scales that are often used in studies tend to accumulate this type of validity evidence.

Criterion-related validity: established by using a study participant's score on an instrument or scale to infer his or her performance on a criterion.
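The validity checks above are usually examined with simple correlations and group comparisons. The following minimal Python sketch uses made-up scores and the scipy library purely for illustration (none of the numbers or names come from the document): it correlates a new scale with an existing one (convergent validity) and compares two groups expected to differ (validity from contrasting groups).

```python
# Minimal sketch (hypothetical data) of two common validity checks:
# convergent validity via correlation with an existing instrument, and
# validity from contrasting (known) groups via a group comparison.
import numpy as np
from scipy import stats

# Convergent validity: scores on a new scale vs. an established scale
# measuring the same concept; a strong positive correlation is expected.
new_scale = np.array([14, 22, 9, 18, 25, 11, 20], dtype=float)
existing_scale = np.array([15, 21, 10, 17, 27, 12, 19], dtype=float)
r, p = stats.pearsonr(new_scale, existing_scale)
print(f"convergent validity: r = {r:.2f} (p = {p:.3f})")

# Contrasting groups: a group expected to score high vs. a group expected
# to score low; a clear difference strengthens the validity evidence.
group_expected_high = np.array([24, 27, 22, 26, 25], dtype=float)
group_expected_low = np.array([10, 12, 9, 14, 11], dtype=float)
t, p = stats.ttest_ind(group_expected_high, group_expected_low)
print(f"contrasting groups: t = {t:.2f} (p = {p:.3f})")
```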