



EDWARDS—RESEARCH METHODS LECTURE NOTES

Topic #2: VARIABLES AND MEASUREMENT

VARIABLES

Some definitions of a variable include the following:

1. A symbol that can assume a range of numerical values.
2. Some property of an organism or event that has been measured.
3. An aspect of a testing condition that can change or take on different characteristics under different conditions.
4. An attribute of a phenomenon.

Types of Variables

1. Independent and Dependent Variables

A. An independent variable is the condition manipulated or selected by the researcher to determine its effect on behavior. The independent variable is the ANTECEDENT variable and has at least two forms or levels that define the variation.

B. A dependent variable is a measure of the behavior of the participant that reflects the effects of the independent variable.

A variable is not ALWAYS an independent or a dependent variable. Whether a variable is defined as independent or dependent can change (or stay the same) as a function of the particular study.

2. Continuous and Discrete Variables

A. A continuous variable is one that falls along a continuum and is not limited to a certain number of values (e.g., distance or time).

B. A discrete variable is one that falls into separate categories with no intermediate values possible (e.g., male/female, alive/dead, French/Dutch).

A distinction can be drawn between naturally and artificially discrete variables (e.g., the male/female dichotomy of sex is natural, while the young/old dichotomy of age is artificial).

Commonly used methods to generate artificially discrete variables are: (1) the mean split, (2) the median split, and (3) extreme groups.

The continuous/discrete distinction is important because it usually influences the choice of statistical procedures or tests:

A. Pearson's correlation—assumes that both variables are continuous.
B.
Point–biserial—most appropriate when one variable is measured in the form of a true dichotomy, and we cannot assume a normal distribution.
C. Biserial—most appropriate when one variable is measured in the form of an artificial dichotomy, and we can assume a normal distribution.
D. Phi coefficient (φ)—used when both variables are measured as dichotomies.

3. Quantitative and Qualitative Variables

A. A quantitative variable is one that varies in amount (e.g., reaction time or speed of response).

B. A qualitative variable is one that varies in kind (e.g., college major or sex).

The distinction between quantitative and qualitative variables can be rather fine at times (e.g., normal/narcissistic and introversion/extroversion). It is usually, but NOT always, true that quantitative variables are continuous, whereas qualitative variables are discrete.

MEASUREMENT

Definition of measurement—the assignment of numbers to events or objects according to rules that permit important properties of the events or objects to be represented by properties of the number system.

Measurement is closely associated with the concept of operational definitions. To scientifically study anything, we must be able to measure it. To do so, we first operationally define the construct and then use measurement rules to quantify it. This permits us to study the construct. The key is that properties of the events are represented by properties of the number system.

Levels of Measurement

A scale is a measuring device used to assess a person's score or status on a variable. The five basic types of scales (levels of measurement) are:

1. Labels
Numbers are used as a way of keeping track of things, without any suggestion that the numbers can be subjected to mathematical analyses.
Examples include Social Security numbers and participant ID numbers.

2.
Nominal Scale
Objects or people are grouped into categories without any specified quantitative relationships among the categories.
Examples include coding all males as 1 and all females as 2.

3. Ordinal Scale
The most common type is rank order.
People or objects are ordered from "most" to "least" with respect to an attribute.
There is no indication of "how much," in an absolute sense, any of the objects possess the attribute.
There is no indication of how far apart the objects are with respect to the attribute.
Rank ordering is basic to all higher forms of measurement but conveys only meager information.
Examples include college football polls and the top 50 "Best Places to Work."

4. Interval Scale
Measures how much of a variable or attribute is present.
The rank order of persons/objects is known with respect to an attribute.
How far apart the persons/objects are from one another with respect to the attribute is known (i.e., the intervals between persons/objects are known).
Does not provide information about the absolute magnitude of the attribute for any object or person.
Examples include how well you like this course, where 1 = do not like at all and 5 = like very much.

5. Ratio Scale
Has the properties of the preceding scales in addition to a true zero point.
The rank order of persons/objects is known with respect to an attribute.
How far apart the persons/objects are from one another with respect to the attribute is known (i.e., the intervals between persons/objects are known).
The distance from a true zero point (or rational zero) is known for at least one of the objects/persons.
Ratio scales are extremely rare in the behavioral and organizational sciences.
Examples include Kelvin temperature, which has a nonarbitrary zero point (0 K = particles have zero kinetic energy), and speed (0 = no motion).

Evaluation of Measurement Methods and Instruments

The extent to which data obtained from a method fit a mathematical model:
A. Reliability—consistency over time, place, occasion, etc.
B. Validity—the extent to which a method measures what it is supposed to measure.

Reliability

Reliability refers to the consistency of scores obtained by the same person when examined with the same test (or equivalent forms) on different occasions, at different times, in different places, etc. For a measurement to be of any use in science, it must have both reliability and validity.

Reliability, like validity, is based on correlations. Correlation coefficients (reliability [rxx] and validity [rxy]) can be computed by the Pearson formula:

rxy = Σ(X − X̄)(Y − Ȳ) / √[Σ(X − X̄)² · Σ(Y − Ȳ)²]

where X̄ and Ȳ are the means of X and Y. Correlation coefficients measure the degree of relationship or association between two variables. A correlation coefficient is a point on a scale ranging from –1.00 to +1.00. The closer the coefficient is to either of these limits, the stronger the relationship between the two variables.

Methods for Assessing the Reliability of a Test

All things being equal, the more items a test has, the more reliable it will be.

1. Test–retest reliability—repeated administration of the same test.
2. Alternate–forms reliability—a measure of the extent to which two separate forms of the same test are equivalent.
3. Split–half, odd–even (or random split) reliability—the primary issue here is obtaining comparable halves.
4. Kuder–Richardson (KR20) reliability and coefficient alpha—measures of inter–item consistency (i.e., the consistency of responses to all items on the test). This is an indication of the extent to which each item on the test measures the same thing as every other item. The more homogeneous the domain (test), the higher the inter–item consistency. KR20 is used for right/wrong and true/false items; Cronbach's alpha is used for Likert–type scales.
5. Scorer reliability or inter–rater reliability—the extent to which two or more raters are consistent.
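Coefficient alpha (point 4 above) can be computed directly from an item-score matrix. Below is a minimal sketch; the function name and the sample responses are invented for illustration, and rows are assumed to be respondents and columns test items:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, k_items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    For dichotomous (0/1) items this formula reduces to KR20.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 4 respondents x 3 items.
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [1, 2, 1]]
print(round(cronbach_alpha(scores), 3))
```

Note how alpha rises as items covary: if every item gave identical scores, the function would return exactly 1.0.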
Test and Measurement Validity

The validity of a test concerns WHAT it measures and HOW WELL it does so. It tells us what can be inferred from test scores.

The validity of a test cannot be reported in general terms. Validity depends on the USE of the test; no test can be said to have "high" or "low" validity in the abstract. Test validity must be established with reference to the particular use for which the test is being considered (i.e., the appropriateness of the inferences drawn from the data). For example, the SAT may be valid for predicting performance in college, but will it validly predict aggressive behavior?

Validity is a key criterion (maybe THE key criterion) in the evaluation of a test or measure. The validity of a test or measure is the extent to which inferences drawn from the test scores are appropriate.

Note: The square root of a test's reliability sets the upper limit of its validity.

"Types" of Test Validity: criterion–related, content–related, and construct–related.

1. Criterion–related validity—the effectiveness of a test in predicting an individual's behavior in specific situations. That is, the test or measure is intended as an indicator or predictor of some other behavior (one that typically will not be observed until some future date). With criterion–related procedures, performance on the test, predictor, or measure is checked against a criterion (i.e., a direct and independent measure of that which the test is designed to predict).

As mentioned earlier, validity is assessed using a correlation coefficient. As such, validity coefficients can range from –1.0 to +1.0. The absolute value is used to compare different validity coefficients in terms of magnitude.

Types of criterion–related validity:
A. Concurrent
B. Predictive
C.
Postdictive

The differences between these "types" of criterion–related validity have to do with differences in the time frames in which criterion and predictor data are collected.

2. Content–related validity—for some tests and measures, validity depends primarily on the adequacy with which a specified content domain is sampled. Content–related validity involves the degree to which a predictor covers a representative sample of the behavior being assessed (e.g., classroom tests). It involves a systematic examination of test content to determine whether it covers a representative sample of the behavior domain being measured. Content–related validity is typically rational and nonempirical, in contrast to criterion–related validity, which is empirical. Content validity is based on expert judgment and is not evaluated with a correlation coefficient. The content domain to be tested should be fully described in advance, in very specific terms.

3. Construct–related validity—the extent to which a test or measure may be said to measure a theoretical construct or trait. A construct is a label for a theoretical dimension on which people are thought to differ. A construct represents a hypothesis (usually only half–formed) that a variety of behaviors will correlate with one another in studies of individual differences and/or will be similarly affected by experimental treatments.

Types of construct–related validity:
A. Convergent validity—different measures of the SAME construct should be correlated or related to each other.
B. Divergent or discriminant validity—measures of DIFFERENT constructs should not be correlated or related to each other.

The MULTI–TRAIT/MULTI–METHOD MATRIX is the best method for assessing the construct–related validity of a test or measure.
In the example below, the Edwards Workplace Stress measure is the measure being validated.

                                 METHOD
                   Paper–and–pencil              Physiological
  T
  R  Stress        (A) Edwards Workplace         (B) Cortisol levels, heart
  A                    Stress Scale                  rate, blood pressure
  I
  T  Verbal        (C) Wesman Personnel          (D) Physiological indicators of
     ability           Classification Test           brain neural information
                       (verbal subscale)             processing efficiency
                                                     (EEG—brain wave patterns)

A and B = converge (convergent validity)
A and C = diverge (divergent/discriminant validity)
C and D = converge (convergent validity)
B and D = diverge (divergent/discriminant validity)
A and D = diverge (divergent/discriminant validity)

4. Face validity—has to do with the extent to which a test or measure looks like it measures what it is supposed to measure; the test–taker is in the best position to evaluate the face validity of a test or measure.
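The logic of the multi–trait/multi–method matrix can be checked numerically. The sketch below uses simulated data (all names, noise levels, and sample sizes are invented for illustration): measures A and B share a "stress" trait and C and D share a "verbal ability" trait, so same–trait correlations should come out high (convergent) while cross–trait correlations stay near zero (discriminant):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical sample size

# Latent traits each simulated person carries.
stress = rng.normal(size=n)
verbal = rng.normal(size=n)

# Four observed measures, one per cell of the matrix;
# each adds its own method-specific noise to the underlying trait.
A = stress + 0.4 * rng.normal(size=n)  # paper-and-pencil stress scale
B = stress + 0.4 * rng.normal(size=n)  # physiological stress indicators
C = verbal + 0.4 * rng.normal(size=n)  # paper-and-pencil verbal test
D = verbal + 0.4 * rng.normal(size=n)  # physiological verbal indicator

r = np.corrcoef([A, B, C, D])  # 4x4 correlation matrix of the measures

convergent = (r[0, 1], r[2, 3])             # A-B and C-D: same trait, different method
discriminant = (r[0, 2], r[1, 3], r[0, 3])  # A-C, B-D, A-D: different traits
```

With these settings the convergent correlations land well above the discriminant ones, which is exactly the pattern the matrix above labels "converge" and "diverge."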
