



CORRELATION AND REGRESSION

CORRELATION
In correlational analysis, we ask whether two variables covary. In other words, does Y get larger as X gets larger? For example, does the patient feel dizzier when the doctor increases the dose of a drug? Do people get more diseases when they are under more stress?

Correlational analysis is designed primarily to examine linear relationships between variables. A correlation coefficient is a mathematical index that describes the direction and magnitude of a relationship.

POSITIVE CORRELATION: High scores on Y are associated with high scores on X, and low scores on Y correspond to low scores on X.

NEGATIVE CORRELATION: Higher scores on Y are associated with lower scores on X, and lower scores on Y are associated with higher scores on X.

NO CORRELATION: There is no consistent relationship between scores on X and scores on Y.

REGRESSION
Simple regression is used to examine the relationship between one dependent and one independent variable. After performing an analysis, the regression statistics can be used to predict the dependent variable when the independent variable is known.

MEASURING CORRELATION: COEFFICIENTS

1) PEARSON PRODUCT-MOMENT CORRELATION
It is the most commonly used because most often we want to find the correlation between two continuous variables. Continuous variables such as height, weight, and intelligence can take on any values over a range of values.

2) SPEARMAN'S RHO
Spearman's rho is a method of correlation for finding the association between two sets of ranks. The rho coefficient is easy to calculate and is often used when the individuals in a sample can be ranked on two variables but their actual scores are not known or do not have a normal distribution.
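The two coefficients above can be sketched in plain Python. This is a minimal illustration, not a production routine; the dose/dizziness numbers are made up to echo the drug example earlier in the section. Spearman's rho is simply the Pearson r computed on ranks.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two continuous variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(values):
    """Convert raw scores to ranks (1 = smallest); ties receive averaged ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend the group while the next sorted value ties with this one
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied 1-based positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson r computed on the ranks of the scores."""
    return pearson_r(ranks(x), ranks(y))

dose = [10, 20, 30, 40, 50]   # hypothetical drug doses
dizziness = [1, 3, 2, 5, 6]   # hypothetical dizziness ratings
print(round(pearson_r(dose, dizziness), 3))   # strong positive linear relationship
print(round(spearman_rho(dose, dizziness), 3))
```

Note how the two coefficients differ slightly: Pearson uses the raw scores, while Spearman discards everything but the ordering.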
3) BISERIAL COEFFICIENT
Biserial correlation expresses the relationship between a continuous variable and an artificial dichotomous variable.

4) POINT-BISERIAL COEFFICIENT
Point-biserial correlation expresses the relationship between a continuous variable and a true dichotomous variable.

5) PHI COEFFICIENT
When both variables are dichotomous and at least one of the dichotomies is "true," then the association between them can be estimated using the phi coefficient.

6) TETRACHORIC CORRELATION
If both dichotomous variables are artificial, we might use a special correlation coefficient known as the tetrachoric correlation.

THE CORRELATION-CAUSATION PROBLEM
Just because two variables are correlated does not necessarily imply that one has caused the other. For example, a correlation between aggressive behavior and the number of hours spent viewing television does not mean that excessive viewing of television causes aggression. This relationship could mean that an aggressive child might prefer to watch a lot of television. A correlation alone does not prove causality, although it might lead to other research that is designed to establish the causal relationships between variables.

PSYCHOLOGICAL TESTING AND ASSESSMENT: NORMS & BASIC STATISTICS FOR TESTING

Psychological Test: Tests are devices used to translate observations into numbers.

Why do we need statistics?
1) To describe (DESCRIPTIVE STATISTICS)
2) To provide inferences (INFERENTIAL STATISTICS)

1) DESCRIPTIVE STATISTICS
Consists of methods used to provide a concise description of a collection of quantitative information. Numbers provide convenient summaries and allow us to evaluate some observations relative to others.

2) INFERENTIAL STATISTICS
Inferential statistics are methods used to make inferences from observations of a small group of people known as a sample to a larger group of individuals known as a population.
1. Exploratory data analysis: the detective work of gathering and displaying clues.
2. Confirmatory data analysis: clues are evaluated against rigid statistical rules.

"Data don't make any sense; we will have to resort to statistics."

SCALES OF MEASUREMENT
Measurement is the application of rules for assigning numbers to objects. The rules are the specific procedures used to transform qualities of attributes into numbers.

PROPERTIES OF SCALES
- Magnitude is the property of "moreness." A scale has the property of magnitude if we can say that a particular instance of the attribute represents more, less, or equal amounts of the given quantity than does another instance.
- A scale has the property of equal intervals if the difference between two points at any place on the scale has the same meaning as the difference between two other points that differ by the same number of scale units.
- An absolute 0 is obtained when nothing of the property being measured exists. For example, if you are measuring heart rate and observe that your patient has a rate of 0 and has died, then you would conclude that there is no heart rate at all.

1. NOMINAL SCALE
A nominal scale does not have the property of magnitude, equal intervals, or an absolute 0. Nominal scales are really not scales at all; their only purpose is to name or label.

2. ORDINAL SCALE
A scale with the property of magnitude but not equal intervals or an absolute 0 is an ordinal scale. This scale allows you to rank individuals or objects but not to say anything about the meaning of the differences between the ranks.

3. INTERVAL SCALE
When a scale has the properties of magnitude and equal intervals but not an absolute 0, we refer to it as an interval scale.

4. RATIO SCALE
A scale that has all three properties (magnitude, equal intervals, and an absolute 0) is called a ratio scale.

PSYCHOLOGICAL TESTING AND ASSESSMENT: RELIABILITY
By Camae Gangcuangco, RPm

In everyday conversation, reliability is a synonym for dependability or consistency.
In the language of psychometrics, reliability refers to consistency in measurement. And whereas in everyday conversation reliability always connotes something positive, in the psychometric sense it refers only to something that is consistent: not necessarily consistently good or bad, but simply consistent.

RELIABILITY COEFFICIENT
A reliability coefficient is an index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance.

CLASSICAL TEST THEORY
A score on an ability test is presumed to reflect not only the testtaker's true score on the ability being measured but also error. If we use X to represent an observed score, T to represent a true score, and E to represent error, then the fact that an observed score equals the true score plus error may be expressed as follows:

X = T + E

A statistic useful in describing sources of test score variability is the variance (the standard deviation squared). Variance from true differences is true variance, and variance from irrelevant, random sources is error variance.

MEASUREMENT ERROR
Refers to all of the factors associated with the process of measuring some variable, other than the variable being measured.

RANDOM ERROR
A source of error in measuring a targeted variable caused by unpredictable fluctuations and inconsistencies of other variables in the measurement process.

SYSTEMATIC ERROR
Refers to a source of error in measuring a variable that is typically constant or proportionate to what is presumed to be the true value of the measurement.

SOURCES OF ERROR VARIANCE
Sources of error variance include test construction, administration, scoring, and/or interpretation.
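The X = T + E decomposition and the reliability coefficient (true variance over total variance) can be illustrated with a small simulation. This is a sketch under arbitrary assumptions: the normal distributions, their means, and their standard deviations below are invented for illustration.

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Classical test theory sketch: each observed score X is a true score T
# plus random error E. When E is independent of T, total variance is
# approximately true variance + error variance, so
# reliability = true variance / total variance.
n = 10_000
true_scores = [random.gauss(100, 10) for _ in range(n)]  # T: variance ~ 100
errors = [random.gauss(0, 5) for _ in range(n)]          # E: variance ~ 25
observed = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

def variance(xs):
    """Population variance (mean squared deviation from the mean)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))  # should land close to 100 / (100 + 25) = 0.8
```

The simulation makes the definition concrete: the less error variance the measurement process adds, the closer the ratio climbs toward 1.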
TEST CONSTRUCTION
One source of variance during test construction is item sampling or content sampling, terms that refer to variation among items within a test as well as to variation among items between tests.

TEST ADMINISTRATION
Sources of error variance that occur during test administration may influence the testtaker's attention or motivation; those influences are the source of one kind of error variance. (Test environment, testtaker variables, examiner-related variables.)

TEST SCORING AND INTERPRETATION
Scorers and scoring systems are potential sources of error variance. A test may employ objective-type items amenable to computer scoring of well-documented reliability. Yet even then, the possibility of a technical glitch contaminating the data is possible.

RELIABILITY ESTIMATES
1. Test-retest reliability estimates
2. Parallel forms / alternate forms
3. Internal consistency
   - Split-half reliability (odd-even reliability)
   - Inter-item consistency
4. The Kuder-Richardson formulas
5. Coefficient alpha
6. Average Proportional Distance (APD)
7. Inter-scorer reliability

TEST-RETEST RELIABILITY ESTIMATE
Test-retest reliability is an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test. The longer the time that passes, the greater the likelihood that the reliability coefficient will be lower. When the interval between testings is greater than six months, the estimate of test-retest reliability is often referred to as the coefficient of stability.

PARALLEL/ALTERNATE FORMS RELIABILITY ESTIMATES
The degree of the relationship between various forms of a test can be evaluated by means of an alternate-forms or parallel-forms coefficient of reliability, which is often termed the coefficient of equivalence.
SPLIT-HALF RELIABILITY
An estimate of split-half reliability is obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.
Step 1. Divide the test into equivalent halves.
Step 2. Calculate a Pearson r between scores on the two halves of the test.
Step 3. Adjust the half-test reliability using the Spearman-Brown formula.

INTER-ITEM CONSISTENCY
Refers to the degree of correlation among all the items on a scale. A measure of inter-item consistency is calculated from a single administration of a single form of a test. An index of inter-item consistency, in turn, is useful in assessing the homogeneity of the test.

Tests are said to be homogeneous if they contain items that measure a single trait. A heterogeneous (or nonhomogeneous) test is composed of items that measure more than one trait.

KUDER-RICHARDSON FORMULA (KR-20)
KR-20 is the statistic of choice for determining the inter-item consistency of dichotomous items, primarily those items that can be scored right or wrong. The KR-21 formula may be used if there is reason to assume that all the test items have approximately the same degree of difficulty.

COEFFICIENT ALPHA
In contrast to KR-20, which is appropriately used only on tests with dichotomous items, coefficient alpha is appropriate for use on tests containing nondichotomous items.

AVERAGE PROPORTIONAL DISTANCE (APD)
A measure used to evaluate the internal consistency of a test that focuses on the degree of difference that exists between item scores.

INTER-SCORER RELIABILITY
Variously referred to as scorer reliability, judge reliability, observer reliability, and inter-rater reliability, inter-scorer reliability is the degree of agreement or consistency between two or more scorers (or judges or raters) with regard to a particular measure. The correlation coefficient is referred to as a coefficient of inter-scorer reliability.
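Two of the estimates above have compact standard formulas: the Spearman-Brown adjustment for a split-half correlation, r_SB = 2r / (1 + r), and KR-20, r = (k / (k - 1)) * (1 - Σpq / σ²), where p is each item's proportion correct, q = 1 - p, and σ² is the variance of total scores. The sketch below applies both; the half-test correlation of .70 and the 5-person, 4-item right/wrong matrix are hypothetical.

```python
def spearman_brown(r_half):
    """Adjust a half-test correlation up to full-test length (split-half Step 3)."""
    return 2 * r_half / (1 + r_half)

def kr20(item_scores):
    """KR-20 inter-item consistency for dichotomous (0/1) items.

    item_scores: one list per person, one 0/1 entry per item.
    """
    n_people = len(item_scores)
    k = len(item_scores[0])                       # number of items
    totals = [sum(person) for person in item_scores]
    mean_total = sum(totals) / n_people
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_people
    sum_pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in item_scores) / n_people  # proportion correct
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# A half-test correlation of .70 adjusts upward for full-test length.
print(round(spearman_brown(0.70), 2))

# Hypothetical right/wrong matrix: 5 testtakers x 4 items.
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 2))
```

Coefficient alpha generalizes KR-20 by replacing Σpq with the sum of per-item variances, which is why it also works for nondichotomous items.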
USING AND INTERPRETING A COEFFICIENT OF RELIABILITY

"How high should the coefficient of reliability be?" "On a continuum relative to the purpose and importance of the decisions to be made on the basis of scores on the test."

Reliability is a mandatory attribute in all tests we use. However, we need more of it in some tests, and we will admittedly allow for less of it in others. If a test score carries with it life-or-death implications, then we need to hold that test to some high standards, including relatively high standards with regard to coefficients of reliability. If a test score is routinely used in combination with many other test scores and typically accounts for only a small part of the decision process, that test will not be held to the highest standards of reliability.

THE NATURE OF THE TEST: HOMOGENEITY OF THE TEST
Recall that a test is said to be homogeneous in items if it is functionally uniform throughout. Tests designed to measure one factor, such as one ability or one trait, are expected to be homogeneous in items. For such tests, it is reasonable to expect a high degree of internal consistency. By contrast, if the test is heterogeneous in items, an estimate of internal consistency might be low relative to a more appropriate estimate of test-retest reliability.

VALIDITY
Validity, as applied to a test, is a judgment or estimate of how well a test measures what it purports to measure in a particular context. More specifically, it is a judgment based on evidence about the appropriateness of inferences drawn from test scores. (COHEN)

Validity can be defined as the agreement between a test score or measure and the quality it is believed to measure. Validity is sometimes defined as the answer to the question, "Does the test measure what it is supposed to measure?" (KAPLAN)

Inherent in a judgment of an instrument's validity is a judgment of how useful it is for a particular purpose with a particular population of people.
As a shorthand, assessors may refer to a particular test as a "valid test." However, what is really meant is that the test has been shown to be valid for a particular use with a particular population of testtakers at a particular time. No test or measurement technique is "universally valid" for all time, for all uses, with all types of testtaker populations.

VALIDATION
Validation is the process of gathering and evaluating evidence about validity. Both the test developer and the test user may play a role in the validation of a test for a specific purpose. It is the test developer's responsibility to supply validity evidence in the test manual. It may sometimes be appropriate for test users to conduct their own validation studies with their own groups of testtakers. Such local validation studies may yield insights regarding a particular population of testtakers as compared to the norming sample described in a test manual. Local validation studies are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test.

ASPECTS OF VALIDITY
1) FACE VALIDITY
2) CONTENT VALIDITY
3) CRITERION VALIDITY
   a) PREDICTIVE
   b) CONCURRENT
4) CONSTRUCT VALIDITY
   a) CONVERGENT
   b) DISCRIMINANT

1) FACE VALIDITY
Face validity is the mere appearance that a measure has validity. We often say a test has face validity if the items seem to be reasonably related to the perceived purpose of the test.

2) CONTENT VALIDITY
Content-related evidence for validity of a test or measure considers the adequacy of representation of the conceptual domain the test is designed to cover. (KAPLAN)
Content validity describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample. (COHEN)
In looking for content validity evidence, we attempt to determine whether a test has been constructed adequately.
For example, we ask whether the items are a fair sample of the total potential content. Establishing content validity evidence for a test requires good logic, intuitive skills, and perseverance. The content of the items must be carefully evaluated.

Two new concepts are relevant to content validity:
Construct underrepresentation describes the failure to capture important components of a construct.
Construct-irrelevant variance occurs when scores are influenced by factors irrelevant to the construct.

3) CRITERION VALIDITY
Criterion validity evidence tells us just how well a test corresponds with a particular criterion. Such evidence is provided by high correlations between a test and a well-defined criterion measure. A criterion is the standard against which the test is compared. (KAPLAN)

Concurrent validity is an index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently). Predictive validity is an index of the degree to which a test score predicts some criterion measure (predictor variable and criterion).

1) Validity Coefficient
The relationship between a test and a criterion is usually expressed as a correlation called a validity coefficient. This coefficient tells the extent to which the test is valid for making statements about the criterion. Criterion-related validity evidence obtained in one situation may not be generalized to other similar situations. Generalizability refers to the evidence that the findings obtained in one situation can be generalized, that is, applied to other situations. This is an issue of empirical study rather than judgment.

2) Expectancy Data
Using a score obtained on some test(s) or measure(s), expectancy tables illustrate the likelihood that the testtaker will score within some interval of scores on a criterion measure, an interval that may be seen as "passing," "acceptable," and so on.
An expectancy table shows the percentage of people within specified test-score intervals who subsequently were placed in various categories of the criterion (for example, placed in a "passed" category or a "failed" category).

Incremental Validity
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.

Base rate is the extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion).
Hit rate may be defined as the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute.
Miss rate may be defined as the proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute. Here, a miss amounts to an inaccurate prediction.
False positive is a miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not.
False negative is a miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did.

3) CONSTRUCT VALIDITY
Construct validity evidence is established through a series of activities in which a researcher simultaneously defines some construct and develops the instrumentation to measure it. This process is required when "no criterion or universe of content is accepted as entirely adequate to define the quality to be measured." (KAPLAN)
Construct validity is a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct. A construct is an informed, scientific idea developed or hypothesized to describe or explain behavior. (COHEN)
Intelligence is a construct that may be invoked to describe why a student performs well in school.
Anxiety is a construct that may be invoked to describe why a psychiatric patient paces the floor. Other examples of constructs are job satisfaction, personality, bigotry, clerical aptitude, depression, motivation, self-esteem, emotional adjustment, potential dangerousness, executive potential, and creativity.

CONVERGENT VALIDITY
When a measure correlates well with other tests believed to measure the same construct, convergent evidence for validity is obtained.

DISCRIMINANT VALIDITY
To demonstrate discriminant evidence for validity, a test should have low correlations with measures of unrelated constructs, or evidence for what the test does not measure.

RELIABILITY AND VALIDITY RELATIONSHIP
Attempting to define the validity of a test will be futile if the test is not reliable. Because validity coefficients are not usually expected to be exceptionally high, a modest correlation between the true scores on two traits may be missed if the test for each of the traits is not highly reliable. Sometimes we cannot demonstrate that a reliable test has meaning. In other words, we can have reliability without validity. However, it is logically impossible to demonstrate that an unreliable test is valid.
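The point that unreliable tests mask true relationships has a standard algebraic form, the correction for attenuation: the estimated true-score correlation is the observed correlation divided by the square root of the product of the two tests' reliabilities. The sketch below uses hypothetical values to show how low reliabilities depress an observed validity coefficient.

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Estimated correlation between TRUE scores on two traits.

    r_xy: observed correlation between the two tests
    r_xx, r_yy: reliability coefficients of each test
    """
    return r_xy / math.sqrt(r_xx * r_yy)

# An observed r of .30 between two unreliable tests (reliabilities .50
# and .60) corresponds to a noticeably stronger true-score relationship.
print(round(correct_for_attenuation(0.30, 0.50, 0.60), 2))
```

Note the limiting case: with perfectly reliable tests (r_xx = r_yy = 1) the observed correlation already equals the true-score correlation, which is exactly why "we can have reliability without validity" but not the reverse.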
