Assessment as an Integral Part of Teaching

Summary

This document provides an overview of assessment types, quality, and trends in teaching and learning. It discusses diagnostic, formative, and summative assessments, as well as traditional and authentic assessments. The document also explores different types of validity in assessment, including criterion-related validity, construct validity, and content validity.

Full Transcript

Assessment as an Integral Part of Teaching
OVERVIEW OF ASSESSMENT TYPES, QUALITY, AND TRENDS

1. Assessment in the Context of Teaching and Learning
Effective assessment guides teaching strategies and learner development.

1.1 Diagnostic Assessment
Diagnostic assessments identify students' pre-existing knowledge, skills, and areas of difficulty. Examples include pre-tests or interviews before instruction.

1.2 Formative Assessment
Formative assessments are conducted during instruction to provide feedback and guide teaching decisions. Examples: quizzes, class activities, and reflections.

1.3 Summative Assessment
Summative assessments evaluate learning outcomes at the end of an instructional period. Examples include final exams, projects, and standardized tests.

2. Traditional and Authentic Assessment
Traditional assessment focuses on standard measures, while authentic assessment involves real-world tasks.

2.1 Traditional Assessment
2.1.1 Selected-response Type: selected-response assessments ask students to select an answer from given options, like multiple-choice questions (MCQs).
2.1.2 Constructed-response Type: constructed-response assessments require students to create their own answers, like essays, short answers, or problem-solving tasks.

2.2 Authentic Assessment
Authentic assessments involve tasks that reflect real-life challenges, such as portfolios, presentations, and performance tasks.

3. Norm- and Criterion-referenced Assessment
Norm-referenced assessment compares students to their peers, while criterion-referenced assessment measures performance against specific standards.

3.1 Norm-referenced Assessment
Norm-referenced assessments rank students in comparison to their peers. Example: SAT exams.

3.2 Criterion-referenced Assessment
Criterion-referenced assessments evaluate performance against specific learning objectives. Example: competency-based tests. (A short code sketch contrasting the two scoring approaches appears after the list of validity factors below.)

4. Contextualized and Decontextualized Assessments
Contextualized assessments are situated in real-world contexts, while decontextualized assessments focus on abstract, isolated skills.

4.1 Contextualized Assessments
Contextualized assessments apply learning to real-life situations. Example: problem-solving in business or everyday scenarios.

4.2 Decontextualized Assessments
Decontextualized assessments focus on specific skills or content without real-world application. Example: basic arithmetic tests.

5. Marks of Quality Assessment
Quality assessments align with active learning, validity, reliability, and fairness.

5.1 In Accordance with Active Learning and Motivation
Assessments should support active learning and intrinsic motivation. Example: self-assessment tools to promote reflection.

5.2 Validity in Assessment
Validity ensures that the assessment measures what it is intended to measure. Example: a reading test measuring comprehension, not just decoding.

Why Evaluate Tests?
✓ To make sure that a test measures the skill, trait, or attribute it is supposed to measure
✓ To yield reasonably consistent results for the same individual
✓ To measure with a reasonable degree of accuracy
A good test must first of all be valid.

Some Factors that Affect the Validity of a Test
1. Reading vocabulary and sentence structure
2. Pattern of the answers
3. Arrangement of the test items
4. Poorly constructed test items
5. Ambiguity
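The distinction drawn in section 3 is easy to demonstrate: the same raw score can look average against peers yet still miss a fixed standard. The following Python sketch is an illustration only; the peer scores and the 75% mastery cutoff are invented, not taken from the document.

```python
# Illustrative only: the peer scores and the 75% mastery cutoff are invented.

def percentile_rank(score, peer_scores):
    """Norm-referenced: the share of peers scoring below this student."""
    below = sum(1 for s in peer_scores if s < score)
    return 100 * below / len(peer_scores)

def meets_criterion(score, max_score, cutoff=0.75):
    """Criterion-referenced: mastery judged against a fixed standard."""
    return score / max_score >= cutoff

peers = [55, 60, 62, 70, 71, 74, 80, 85, 90, 95]
student = 74

print(f"Percentile rank: {percentile_rank(student, peers):.0f}")      # 50 -- average vs. peers
print(f"Mastery reached: {meets_criterion(student, max_score=100)}")  # False -- below the 75% standard
```

Here the same score of 74 is exactly average in norm-referenced terms (50th percentile) yet fails the criterion-referenced standard, which is why the two assessment types answer different questions.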
Properties of Validity
Validity has three important properties:
1. Validity is a relative term. For example, a test of statistical ability is valid only for measuring statistical ability, because it is built solely for that purpose; it would be worthless for measuring other subjects such as history or geography.
2. Validity is not a fixed property of the test.
3. Validity, like reliability, is a matter of degree and not an all-or-none property.

Types of Validity
1. Criterion-related validity
2. Construct validity
3. Content or curricular validity
4. Face validity

Criterion-related Validity
Criterion-related validity is a very common and popular type of test validity. As its name implies, it is obtained by comparing (or correlating) test scores with scores on a criterion that is available at present or will become available in the future. Also referred to as instrumental validity, it requires that the criteria be clearly defined by the tester in advance. The measure must take other tester criteria into account to be standardized, and its accuracy must be demonstrated by comparison with another measure or procedure that has already been shown to be valid.

Two Subtypes of Criterion Validity
i) Predictive validity
ii) Concurrent validity

Predictive Validity
- Also called empirical validity or statistical validity.
- Evaluates the capability of a measurement or assessment to foretell future occurrences or results.
- The criterion measure matters here because it is the subject's future outcome that is being predicted.

Example: Suppose you want to find out whether a college entrance math test can predict a student's future performance in an engineering study program. A student's GPA is a widely accepted marker of academic performance and can be used as a criterion variable. To assess the predictive validity of the math test, you compare how students scored on that test to their GPA after the first semester in the engineering program. If high test scores were associated with individuals who later performed well in their studies and achieved a high GPA, then the math test would have strong predictive validity.

Concurrent Validity
- The degree to which the test agrees or correlates with a criterion set up as an acceptable measure.
- The criterion is always available at the time of testing.
- Applicable to tests employed for the diagnosis of existing status rather than for the prediction of future outcomes.

Example: Let's say a group of nursing students take two final exams to assess their knowledge. One exam is a practical test and the second exam is a paper test. If the students who score well on the practical test also score well on the paper test, then concurrent validity has occurred. If, on the other hand, students who score well on the practical test score poorly on the paper test (and vice versa), then you have a problem with concurrent validity. In this particular example, you would question the ability of either test to assess knowledge.
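Both subtypes reduce to correlating test scores with a criterion; what differs is when the criterion becomes available. The Python sketch below, using invented scores, estimates both as Pearson correlations (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation  # Pearson's r; available since Python 3.10

# Invented data for illustration only.
entrance_test = [55, 62, 70, 74, 81, 88, 93]          # predictor, taken before the program
first_sem_gpa = [2.1, 2.4, 2.9, 3.0, 3.3, 3.6, 3.8]   # criterion, available only later

practical_exam = [60, 65, 72, 78, 84, 90, 95]          # both criteria available
paper_exam     = [58, 68, 70, 80, 82, 88, 97]          # at the time of testing

# Predictive validity: test scores vs. a future criterion (GPA).
print(f"Predictive validity estimate: r = {correlation(entrance_test, first_sem_gpa):.2f}")

# Concurrent validity: two measures of the same knowledge, taken together.
print(f"Concurrent validity estimate: r = {correlation(practical_exam, paper_exam):.2f}")
```

A coefficient near +1 supports validity in both cases; a weak or negative r would signal the kind of disagreement described in the nursing example above.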
Construct Validity
The term "construct validity" was first introduced in 1954 in the Technical Recommendations of the American Psychological Association and has since been frequently used by measurement theorists. Construct validity is the extent to which the test measures a theoretical trait. This involves such tests as those of understanding, appreciation, and interpretation of data. Examples are intelligence and mechanical aptitude tests.

The process of validation involves the following steps:
I. Specifying the possible different measures of the construct: the investigator defines the construct in clear words and states one or more supposed measures of it. For example, to specify the different measures of the construct "intelligence", the investigator must first define the term "intelligence" and, in light of that definition, specify the different measures. Such specifications might include quick decision-making in difficult tasks, the ability to learn, and so on.
II. Determining the extent of correlation between all or some of the measures of the construct: the next step is to determine whether those well-specified measures actually measure the construct in question. This is done by correlating the measures with one another; if the correlations are high, there is good evidence that they are measuring the same construct.
III. Determining whether or not all or some measures act as if they were measuring the construct: the final step is to determine whether the measures behave in an expected manner with reference to other variables of interest. If they behave as expected, this provides evidence for construct validity.

Content Validity
Content validity refers to the connections between the test items and the subject-related tasks. Psychometricians hold that content validity requires both item validity and sampling validity. Item validity is concerned with whether the test items represent measurement in the intended content area. Sampling validity (sometimes called logical validity) is how well the test covers all of the areas you want it to cover.

The technique commonly used for defining the intended content of a test is the establishment of a table of specifications.

Table of Specifications (TOS)
A Table of Specifications is a tool used to ensure that a test or assessment measures the content and thinking skills that the test intends to measure; when used appropriately, it can provide evidence of content and construct validity. The purpose of a Table of Specifications is to identify the achievement domains being measured and to ensure that a fair and representative sample of questions appears on the test. It also allows the teacher to construct a test which focuses on the key areas and weights those areas based on their importance.
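As an illustration of the weighting idea the document describes, here is a minimal Python sketch of a TOS. The content areas, weights, cognitive levels, and 40-item total are all invented for demonstration.

```python
# Illustrative TOS: the areas, weights, level splits, and 40-item total are invented.
TOTAL_ITEMS = 40

# Each content area gets a weight reflecting its instructional importance;
# each area's items are then split across cognitive levels.
tos = {
    # area: (weight, {cognitive level: fraction of the area's items})
    "Fractions":   (0.50, {"Remember": 0.25, "Apply": 0.50, "Analyze": 0.25}),
    "Decimals":    (0.30, {"Remember": 0.25, "Apply": 0.50, "Analyze": 0.25}),
    "Percentages": (0.20, {"Remember": 0.25, "Apply": 0.50, "Analyze": 0.25}),
}

for area, (weight, levels) in tos.items():
    area_items = round(TOTAL_ITEMS * weight)
    cells = {lvl: round(area_items * frac) for lvl, frac in levels.items()}
    # With awkward weights, rounding can leave the total off by an item;
    # in practice the teacher adjusts a cell by hand.
    print(f"{area:12s} {area_items:2d} items -> {cells}")
```

Weighting the areas this way is what produces the "fair and representative sample" of the achievement domain that the document calls for.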
