REVIEWER IN PSYCHOLOGICAL ASSESSMENT: Aspects of Validity
Summary
This document discusses various aspects of validity in psychological assessment, including face validity, content-related validity, criterion-related validity, and construct validity. It explains the meaning and importance of each type of validity, providing examples and highlighting the relationship between validity and reliability.
Full Transcript
Face “Validity”
- The mere appearance that a measure has validity; a judgment concerning how relevant the test items appear to be.
- Face validity is really not validity at all, because it does not offer evidence to support conclusions drawn from test scores.
- Nevertheless, face validity is important: appearances can help motivate test takers, because they can see that the test is relevant.
- A test’s lack of face validity could contribute to a lack of confidence in the perceived effectiveness of the test, with a consequent decrease in the test taker’s cooperation or motivation to do his or her best.

Content-Related Validity
- Considers the adequacy of representation of the conceptual domain the test is designed to cover.
- A judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.
- It is the only type of evidence (besides face validity) that is logical rather than statistical.
- In looking for content validity evidence, we attempt to determine whether a test has been constructed adequately. This determination is often made by expert judgment.
- Construct underrepresentation
  ○ The failure to capture important components of a construct.
- Construct-irrelevant variance
  ○ Scores are influenced by factors irrelevant to the construct.
- Content validity: everything that you want to measure should be present in the test.

Criterion-Related Validity
- A measure of how well a test corresponds with a particular criterion.
- A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest (the criterion).

Concurrent Validity
- An index of the degree to which a test score is related to some criterion measure obtained at the same time.
- Examples:
  ○ How well a new test of a specific construct relates to a more established measure of the same construct.
  ○ Job samples, i.e., one method of assessment is correlated with other forms of assessment in employee selection.

Predictive Validity
- The forecasting function of tests; it measures the relationship between test scores and a criterion measure obtained at a future time.

Validity Coefficient
- The relationship between a test and a criterion is usually expressed as a correlation called a validity coefficient.
- This coefficient tells the extent to which the test is valid for making statements about the criterion.
- Typically, the Pearson r is used to determine the validity between the two measures, as in the sketch below.
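As a minimal sketch of how a validity coefficient is computed (the scores below are hypothetical, invented purely for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: scores on a new selection test (predictor) and
# supervisor ratings of job performance (criterion) for ten employees.
test_scores = np.array([12, 15, 9, 18, 14, 11, 16, 13, 17, 10])
criterion = np.array([3.1, 3.8, 2.4, 4.5, 3.5, 2.9, 4.0, 3.2, 4.3, 2.6])

# The validity coefficient is the Pearson r between test and criterion.
# If both measures are collected at the same time, this is concurrent
# validity evidence; if the criterion is collected later, it is
# predictive validity evidence.
r, p = pearsonr(test_scores, criterion)
print(f"validity coefficient r = {r:.2f} (p = {p:.4f})")
```

Squaring the coefficient (r²) gives the proportion of variance in the criterion that the test scores account for.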
Construct Validity
- Construct: an informed, scientific idea developed or hypothesized to describe or explain behavior.
  ○ We are trying to measure things that are intangible.
- Constructs are unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance.
- The researcher investigating a test’s construct validity will formulate hypotheses about the expected behavior of high scorers and low scorers on the test.
- A judgment about the appropriateness of inferences drawn from test scores regarding individuals’ standings on a variable called a construct.
- Construct validation involves assembling evidence about what a test means. This is done by showing the relationship between the test and other tests and measures.
- This process is typically required when no instrument yet exists to measure a specific construct, or when the criterion is not well defined.
- The result of your test should reflect your behavior.
  ○ Ex.: a person who is high in intelligence should be good at problem solving, have leadership skills, etc.

Convergent Validity (Convergent Evidence)
- When a measure correlates well with other tests believed to measure the same construct.

Discriminant Validity (Discriminant Evidence, Divergent Validity)
- A demonstration of uniqueness.
- To demonstrate discriminant evidence for validity, a test should have low correlations with measures of unrelated constructs; this is evidence for what the test does not measure.
- A construct that has nothing to do with what the test is trying to measure should show little relationship to its scores.

Validity and Reliability
- Attempting to define the validity of a test will be futile if the test is not reliable.
- Reliability and validity are related concepts.
  ○ It is difficult to obtain evidence for validity unless a measure has reasonable reliability.
  ○ On the other hand, a measure can have high reliability without supporting evidence for its validity.
- Reliability → Validity: reliability is necessary, but not sufficient, for validity.
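A standard result from classical test theory (the attenuation bound, not stated in the original notes but consistent with the points above) makes this dependence concrete: the correlation between a test x and a criterion y cannot exceed the square root of the product of their reliability coefficients.

```latex
% Attenuation bound: the observed validity coefficient is capped
% by the reliabilities of the test (r_xx) and the criterion (r_yy).
r_{xy} \le \sqrt{r_{xx}\, r_{yy}}
```

For example, if a test’s reliability is r_xx = .64 and the criterion is measured perfectly (r_yy = 1), the validity coefficient can be at most √.64 = .80; an unreliable test therefore cannot show strong validity evidence.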