
PED 4 Reviewer



Basic Concepts in Assessment of Learning

TEST
An instrument, vehicle, or tool for measuring a sample of behavior by posing a set of questions in a uniform manner. It is used to measure knowledge.

TESTING
The administration of a test; the method (procedure) used to measure the level of achievement or performance of the learners.

MEASUREMENT
A process of quantifying the degree to which someone or something possesses a given trait, such as a quality, characteristic, or feature. It assigns a number (a numerical description) to a student's performance, product, skill, or behavior, based on a pre-determined procedure or set of criteria.

ASSESSMENT
The process of getting feedback from a student about his or her learning. It involves gathering and interpreting information about a student's level of attainment of learning goals. It includes paper-and-pencil tests, extended responses (e.g., essays), and performance assessments; performance tasks are usually referred to as "authentic assessment" tasks (e.g., presentation of research work).

EVALUATION
A process of making judgments about the quality of a student's performance, product, skill, or behavior. It involves using some basis to judge worth or value. It is the process of getting feedback from the teacher about the student's performance, based on the results of the assessment.

Types of Measurement: Norm-referenced
A test designed to measure the performance of a student compared with other students. Each individual is compared with the other examinees and assigned a score, usually expressed as a percentile, a grade-equivalent score, or a stanine. It is "grading on a curve," which refers to the process of adjusting student grades to ensure that a test or assignment has the proper distribution throughout the class. The purpose is to rank each student with respect to the achievement of others in broad areas of knowledge, and to discriminate between high and low achievers.
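Since norm-referenced scores are reported as percentiles or stanines, the scoring just described can be sketched in a few lines. This is a minimal illustration with hypothetical class scores; the function names are illustrative, and the stanine band cutoffs follow the standard 4-7-12-17-20-17-12-7-4 percent distribution:

```python
def percentile_rank(scores, raw):
    """Percent of examinees scoring below `raw`, counting ties as half."""
    below = sum(s < raw for s in scores)
    ties = sum(s == raw for s in scores)
    return 100.0 * (below + 0.5 * ties) / len(scores)

def stanine(pr):
    """Map a percentile rank to a stanine (1-9) using the standard
    cumulative-percent cutoffs for the 4-7-12-17-20-17-12-7-4 bands."""
    cutoffs = [4, 11, 23, 40, 60, 77, 89, 96]
    return 1 + sum(pr > c for c in cutoffs)

class_scores = [10, 12, 15, 15, 18, 20, 22, 25, 28, 30]
pr = percentile_rank(class_scores, 20)   # 55.0
print(pr, stanine(pr))                   # 55.0 5
```

A raw score of 20 here beats 55% of the class, which falls in the middle band (stanine 5), illustrating how the same raw score is meaningful only relative to the group.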
Types of Measurement: Criterion-referenced
A test designed to measure the performance of students with respect to some particular criterion or standard. Each individual is compared with a pre-determined standard of acceptable achievement; the performance of the other examinees is irrelevant. A student's score is usually expressed as a percentage, and student achievement is reported for individual skills. The purpose is to determine whether each student has achieved specific skills or concepts, and to find out how much students know before instruction begins and after it has finished.

Assessment FOR Learning
Includes three types of assessment done before and during instruction.
1.) Placement - Done prior to (before) instruction. Its purpose is to assess the needs of the learners as a basis for planning relevant instruction. The results of this assessment place students in specific learning groups to facilitate teaching and learning.
2.) Formative - Done during instruction. It measures the students' grasp of the material that is currently being taught. It can also measure readiness.
3.) Diagnostic - Done during instruction. It is used to determine students' recurring or persistent difficulties. It identifies the weaknesses in an individual's achievement in any field, which serve as the basis for remedial instruction.

Assessment OF Learning
This is done after instruction.
1.) Summative - Done after instruction. It is used to certify what students know and can do, and their level of proficiency or competency. Its results reveal whether or not instruction has successfully achieved the curriculum outcomes. The information from assessment of learning is usually expressed as marks or letter grades, and the results are communicated to the students, parents, and other stakeholders for decision making.

Assessment AS Learning
This is done for teachers to understand their role in assessing FOR and OF learning.
1.) Teachers should undergo training - Teachers are required to undergo training on how to assess learning and to be equipped with the competencies needed to perform their work as assessors.

Modes of Assessment

MODE: Traditional
DESCRIPTION: An objective paper-and-pencil test, which usually assesses low-level thinking skills.
EXAMPLES: Standardized tests; teacher-made tests.
ADVANTAGES: Administration is easy because students can take the test at the same time; scoring is objective.
DISADVANTAGES: Preparation of the instrument is time consuming; prone to cheating.

MODE: Performance
DESCRIPTION: A mode of assessment that requires actual demonstration of skills or creation of products of learning.
EXAMPLES: Practical tests; oral tests; projects.
ADVANTAGES: Preparation of the instrument is relatively easy; measures behaviors that cannot be faked.
DISADVANTAGES: Scoring tends to be subjective without rubrics; administration is time consuming.

MODE: Portfolio
DESCRIPTION: A process of gathering multiple indicators of student progress to support course goals in a dynamic, ongoing, and collaborative process.
EXAMPLES: Working portfolios; show portfolios; documentary portfolios.
ADVANTAGES: Measures the student's growth and development; intelligence-fair.
DISADVANTAGES: Development is time consuming; rating tends to be subjective without rubrics.

Different Types of Test
Speed Test - items have the same difficulty and are taken with a time limit.
Power Test - items are arranged in increasing difficulty, with no time limit.
Diagnostic Test - created to identify the weaknesses and strengths of students.
Achievement Test - describes what a person has learned.
Aptitude Test - used to predict the likelihood of a student's success in a course.
Standardized Test - made by experts; has high validity.
Teacher-made Test - made by teachers; has lower validity and is sometimes prone to errors.
Norm-referenced Test - one versus the whole class or group of test takers.
Criterion-referenced Test - one versus a criterion or set of criteria.
Objective Test - yields consistent results/answers.
Subjective Test - yields varying results/answers.

Lesson 1: Principles of High Quality Classroom Assessment
1. Clear and Appropriate Learning Targets
2. Appropriate Assessment Methods
3. Balanced
4. Validity
5. Reliability
6. Fairness
7. Practicality and Efficiency
8. Assessment should be a continuous process
9. Authenticity
10. Communication
11. Positive Consequences

1.) Clear and Appropriate Learning Targets
Learning targets should be clearly stated, specific, and centered on what is truly important.

LEARNING TARGETS
Knowledge - the student's mastery of substantive subject matter.
Reasoning - the student's ability to use knowledge to reason and solve problems.
Skills - the student's ability to demonstrate achievement-related skills.
Products - the student's ability to create achievement-related products.
Affective/Disposition - the student's attainment of affective states such as attitudes, values, interests, and self-efficacy.

2.) Appropriate Assessment Methods
Objective Supply - short answer; completion test.
Objective Selection - matching type; true or false; MCQ (multiple choice question).
Essay - restricted response; extended response.
Performance-Based - presentations; projects; athletics; demonstrations; portfolios.
Oral Question - oral examination; interviews.
Self-Report - surveys; inventories.

Types of Test According to Format
1.) Selective Response - provides choices for the answer.
Multiple Choice - consists of a stem, which describes the problem, and three or more alternatives, which give the suggested solutions. The incorrect alternatives are the distractors.
True-False or Alternative Response - consists of a declarative statement that one has to mark true or false, correct or incorrect, yes or no, fact or opinion, and the like.
Matching Type - consists of two parallel columns: Column A, the column of premises from which a match is sought, and Column B, the column of responses from which the selection is made.
2.) Supply Test
Short Answer - uses a direct question that can be answered by a word, a phrase, a number, or a symbol.
Completion Test - consists of an incomplete statement to be filled in.
3.) Essay Test
Restricted Response - limits the content of the response by restricting the scope of the topic.
Extended Response - allows the students to select any factual information that they think is pertinent and to organize their answers in accordance with their best judgment.

3.) Balanced
A balanced assessment sets targets in all domains of learning (cognitive, affective, and psychomotor) or domains of intelligences (verbal-linguistic, logical-mathematical, bodily-kinesthetic, visual-spatial, musical-rhythmic, interpersonal-social, intrapersonal-introspection, physical world-naturalist, and existential-spiritual). A balanced assessment makes use of both traditional and alternative assessment.

4.) Validity
The degree to which the assessment instrument measures what it intends to measure. It also refers to the usefulness of the instrument for a given purpose. It is the most important criterion of a good assessment instrument.

Ways of Establishing Validity
1.) Face Validity - done by examining the physical appearance of the instrument.
2.) Content Validity - done through a careful and critical examination of the objectives of assessment, so that the instrument reflects the curricular objectives.
3.) Criterion-related Validity - also called concrete validity; refers to a test's correlation with a concrete outcome. It is established statistically, such that a set of scores revealed by the measuring instrument is correlated with the scores obtained from another external predictor or measure. For example, a company could administer a sales personality test to its sales staff to see if there is an overall correlation between their test scores and a measure of their productivity. It has two purposes: a.)
Concurrent Validity - describes the present status of the individual by correlating the sets of scores obtained from two measures given concurrently. Example: relating reading test results with the pupils' average grades in reading given by the teacher.
b.) Predictive Validity - describes the future performance of an individual by correlating the sets of scores obtained from two measures given at a longer time interval. Example: the entrance examination scores of a freshman class at the beginning of the school year are correlated with their average grades at the end of the school year.

Factors Influencing the Validity of an Assessment Instrument
Unclear directions.
Reading vocabulary and sentence structures that are too difficult.
Ambiguity.
Inadequate time limits.
Test items inappropriate for the outcomes being measured.
Poorly constructed test items.
A test that is too short.
Improper arrangement of items.
An identifiable pattern of answers.

5.) Reliability
Reliability refers to the consistency of scores obtained by the same person when retested using the same instrument or its parallel form, or when compared with other students who took the same test.

Methods to Measure Reliability
1.) Test-Retest - measures stability. It is the reliability of a test measured over time: give the same test twice to the same people at different times to see if the scores are the same. For example, test on a Monday, then again the following Monday.
2.) Equivalent Forms (Parallel Forms) - measures equivalence. Two tests that are equivalent, in the sense that they contain the same kinds of items of equal difficulty but not the same items, are administered to the same individuals.
3.) Split-Half - measures internal consistency. Split a test into two halves. For example, one half may be composed of the even-numbered questions while the other half is composed of the odd-numbered questions. Administer each half to the same individual, then find the correlation between the scores for both halves.
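The split-half procedure just described reduces to correlating two sets of half-test scores. Below is a minimal sketch with hypothetical 0/1 item responses; the final Spearman-Brown step-up, which projects the half-test correlation to the reliability of the full-length test, is the standard companion to the split-half method even though the notes do not name the formula:

```python
def pearson(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_matrix):
    """item_matrix: one row of 0/1 item scores per examinee.
    Correlate odd-item and even-item half scores, then apply the
    Spearman-Brown formula to estimate full-test reliability."""
    odd = [sum(row[0::2]) for row in item_matrix]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_matrix]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return (2 * r_half) / (1 + r_half)

responses = [            # 5 examinees x 4 items (hypothetical data)
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(split_half_reliability(responses), 2))  # 0.72
```

Note that the Spearman-Brown correction is needed because the raw correlation is between two half-length tests, and a shorter test is less reliable than the full instrument.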
The higher the correlation between the two halves, the higher the internal consistency of the test or survey. Ideally, you would like the correlation between the halves to be high, because this indicates that all parts of the test contribute equally to what is being measured.

When to Use Split-Half Reliability
The split-half method is an easy way to measure internal consistency, but it should only be used when the following two conditions are present:
1. The test has a large number of questions. Split-half reliability works best for tests with many questions (e.g., 100), because the correlation we calculate will then be more reliable.
2. All of the questions on the test or survey measure the same construct or knowledge area. If a test measures several different constructs, such as leadership skills, communication skills, programming skills, and other professional skills, then split-half reliability is not appropriate, since many of the responses are not expected to be correlated anyway.

Improving Test Reliability
Test Length - in general, a longer test is more reliable than a shorter one, because longer tests sample the instructional objectives more adequately.
Spread of Scores - the type of students taking the test can influence reliability. A group of students with heterogeneous ability will produce a larger spread of test scores than a group with homogeneous ability.
Item Difficulty - in general, tests composed of items of moderate or average difficulty (0.30 to 0.70) will have more influence on reliability than those composed primarily of very easy or very difficult items.
Item Discrimination - in general, tests composed of more discriminating items will have greater reliability than those composed of less discriminating items.
Time Limits - since not all students work at the same pace, a time factor adds another criterion that causes discrimination among examinees, thus improving reliability.

6.) Fairness
Fairness provides all students with an equal opportunity to demonstrate achievement. Students:
- are given an equal opportunity to learn;
- are free from teacher stereotypes;
- are free from biased assessment tasks and procedures.

7.) Practicality and Efficiency
Factors to consider: the teacher's familiarity with the method, ease of scoring, time required, ease of interpretation, complexity of administration, and cost.

8.) Assessment Should Be a Continuous Process
It takes place in all phases of instruction.

9.) Authenticity
- Meaningful performance tasks.
- Clear standards and public criteria.
- Quality products and performances.
- Positive interaction between the assessee and the assessor.
- Emphasis on metacognition and self-evaluation.
- Learning that transfers.

10.) Communication
- Assessment targets and standards.
- Assessment results.

11.) Positive Consequences
It should motivate students to learn. It should help teachers improve the effectiveness of their instruction.

Lesson 2: PRODUCTIVE AND UNPRODUCTIVE USES OF TESTS

PRODUCTIVE USES OF TESTS
1. LEARNING ANALYSIS
2. IMPROVEMENT OF CURRICULUM
3. IMPROVEMENT OF THE TEACHER
4. IMPROVEMENT OF INSTRUCTIONAL MATERIALS
5. INDIVIDUALIZATION
6. PLACEMENT
7. SELECTION
8. GUIDANCE AND COUNSELLING
9. RESEARCH
10. SELLING AND INTERPRETING THE SCHOOL TO THE COMMUNITY
11. IDENTIFICATION OF EXCEPTIONAL CHILDREN
12. EVALUATION OF LEARNING PROGRAMS

LEARNING ANALYSIS - Tests are used to identify the reasons or causes why students do not learn, and the solutions that help them learn.
IMPROVEMENT OF CURRICULUM - If the entire class does poorly, the curriculum needs to be revised, or special units need to be developed for the class to continue.
IMPROVEMENT OF INSTRUCTIONAL MATERIALS - Tests measure how effective instructional materials are in bringing about intended changes.
INDIVIDUALIZATION - Effective tests always indicate differences in students' learning. These can serve as bases for individual help.
SELECTION - When enrollment opportunity is limited, a test can be used to screen those who are more qualified.
PLACEMENT - Tests can be used to determine to which category a student belongs.
GUIDANCE AND COUNSELLING - Results from appropriate tests, particularly standardized tests, can help teachers and counselors guide students in assessing future academic and career possibilities.
RESEARCH - Tests can be feedback tools used to find effective methods of teaching and to learn more about students: their interests, goals, and achievements.
SELLING AND INTERPRETING THE SCHOOL TO THE COMMUNITY - Effective tests help the community understand what the students are learning, since test items are representative. Tests can also be used to diagnose general school-wide weaknesses and strengths that require community or government support.
IDENTIFICATION OF EXCEPTIONAL CHILDREN - Tests can reveal exceptional students inside the classroom. More often than not, these students are overlooked and left unattended.
EVALUATION OF LEARNING PROGRAMS - Ideally, tests should evaluate the effectiveness of each element in a learning program, not just give blanket information about the total learning environment.

UNPRODUCTIVE USES OF TESTS
1. GRADING
2. LABELING
3. THREATENING
4. UNANNOUNCED TESTING
5. RIDICULING
6. TRACKING
7. ALLOCATING FUNDS

GRADING - Tests should not be used as the only determinant in grading a student. Most tests do not accurately reflect a student's performance or true abilities.
LABELING - Negative labels may lead students to believe the label and act accordingly. Positive labels may lead students to underachieve, to avoid standing out as different, or to become overconfident and no longer exert effort.
THREATENING - Tests lose their validity when used as disciplinary measures.
UNANNOUNCED TESTING - Surprise tests are generally not recommended.
Surprise tests create anxiety on the part of the students, particularly those who are already fearful of tests; they do not give students adequate time to prepare; and they do not promote efficient learning or higher achievement.
RIDICULING - Using tests to deride students.
TRACKING - Students are grouped into categories according to deficiencies revealed by tests, without continuous re-evaluation.
ALLOCATING FUNDS - Tests are exploited to solicit funding.

TYPES OF TEST ACCORDING TO MODE OF RESPONSE
1. Oral Test (viva voce)
- Answers are spoken.
- Used to measure oral communication skills.
- Used to check students' understanding of concepts, theories, and procedures.
2. Written Test
- Activities wherein students either select or provide a response to a prompt.
- Can be administered to a large group at one time.
- Can measure students' written communication skills.
- Used to assess lower and higher levels of cognition.
3. Performance Test
- Activities that require students to demonstrate their skills or ability to perform specific actions.
- Tasks are designed to be authentic, meaningful, in-depth, and multidimensional.

Week 5: INSTRUCTIONAL OBJECTIVES
Instructional objectives should be stated in behavioral terms. They must be SMART:
S - Specific
M - Measurable
A - Attainable
R - Result-oriented
T - Time-bounded

An instructional objective consists of two essential components: behavior and content.
The behavior component tells what a learner is expected to perform (expressed as a verb).
The content component specifies the topic or subject matter a student is expected to learn (expressed as a noun phrase).

Examples:
1. Solve a system of linear equations.
2. Identify the parts of a sentence.
3. Name the parts of the body.
4. Describe the function of the digestive system.

It should be noted that behavioral objectives are observable and measurable. The use of the five senses (sight, hearing, smell, taste, and touch) makes the objectives observable. Measurable means that the objectives can be translated into objective test items.
List of some observable and non-observable behaviors:
OBSERVABLE: draw, build, list, recite, add
NON-OBSERVABLE: understand, appreciate, value, know, be familiar with

An instructional objective also contains two optional components: condition and criterion level.
∆ Condition is the situation under which learning will take place. It may be the materials, tools, places, or other resources that can facilitate the learning process.
∆ Criterion level refers to the acceptable level of performance (the standard). It tells how well a particular behavior is to be performed. It could be stated in terms of a percentage, the number of items answered correctly, completion of a task within a prescribed time limit, or completion of a task to a certain extent or degree of frequency.

Example:
Given a world map (condition), locate (behavior) ten Asian countries (content) with 90% correctness (criterion).

LEARNING OUTCOMES
Learning outcomes are the end results of instructional objectives. NOT all action verbs specify learning outcomes; sometimes they specify learning activities (means to an end).

LEARNING OUTCOMES (ends):
1. Listed the four primary colors.
2. Recited the poem "A Tree".
3. Drawn the parts of the nervous system.
4. Proven trigonometric identities.

LEARNING ACTIVITIES (means):
1. Studied the four primary colors.
2. Practiced the poem "A Tree".
3. Watched a film about the nervous system.
4. Memorized the different trigonometric identities.

TAXONOMY OF INSTRUCTIONAL OBJECTIVES
Benjamin S. Bloom (1956), a well-known American psychologist and educator, and his associates prepared a taxonomy of instructional objectives categorized into three domains: cognitive, psychomotor, and affective.

Cognitive Domain (HEAD) calls for outcomes of mental activity such as memorizing, reading, problem solving, analyzing, synthesizing, and drawing conclusions.

Cognitive Domain (Knowledge)
It consists of objectives that relate to mental or thinking processes.
These objectives are arranged hierarchically, from the lowest and simplest to the highest and most complex forms.

Bloom's original taxonomy (Benjamin Bloom, lowest to highest): Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation.
Revised taxonomy (Lorin Anderson, lowest to highest): Remembering, Understanding, Applying, Analysing, Evaluating, Creating.

Knowledge "involves the recall of specifics and universals, the recall of methods and processes, or the recall of a pattern, structure, or setting."
Comprehension "refers to a type of understanding or apprehension such that the individual knows what is being communicated and can make use of the material or idea being communicated without necessarily relating it to other material or seeing its fullest implications."
Application refers to the "use of abstractions in particular and concrete situations."
Analysis represents the "breakdown of a communication into its constituent elements or parts such that the relative hierarchy of ideas is made clear and/or the relations between the ideas expressed are made explicit."
Synthesis involves the "putting together of elements and parts so as to form a whole."
Evaluation engenders "judgments about the value of material and methods for given purposes."

Psychomotor Domain (HAND) is characterized by progressive levels of behaviors, from observation to mastery of physical skills.

Psychomotor Domain (Skills)
In the early seventies, E. Simpson, Dave, and A. S. Harrow recommended categories for the psychomotor domain, which included physical coordination, movement, and the use of the motor skills of body parts. Its levels, from lowest to highest: Observing, Imitating, Practicing, Internalizing.

Affective Domain (HEART) describes the learning objectives that emphasize a feeling tone, an emotion, or a degree of acceptance or rejection.

Affective Domain (Attitude)
The affective domain refers to the way in which we deal with things emotionally, such as feelings, appreciation, enthusiasm, motivation, values, and attitudes.
The taxonomy is ordered into five levels as the person progresses toward internalization, in which the attitude or feeling consistently guides the person's behavior.
