The Art and Science of Teaching PDF


Summary

This book presents Robert J. Marzano's framework for effective teaching, balancing research-based strategies with an understanding of individual students' needs. It is a practical guide to instructional design organized around ten key questions for planning successful lessons.

Full Transcript


Though classroom instructional strategies should clearly be based on sound science and research, knowing when to use them and with whom is more of an art. In The Art and Science of Teaching: A Comprehensive Framework for Effective Instruction, author Robert J. Marzano presents a model for ensuring quality teaching that balances the necessity of research-based data with the equally vital need to understand the strengths and weaknesses of individual students. He articulates his framework in the form of 10 questions that represent a logical planning sequence for successful instructional design:

What will I do to establish and communicate learning goals, track student progress, and celebrate success?
What will I do to help students effectively interact with new knowledge?
What will I do to help students practice and deepen their understanding of new knowledge?
What will I do to help students generate and test hypotheses about new knowledge?
What will I do to engage students?
What will I do to establish or maintain classroom rules and procedures?
What will I do to recognize and acknowledge adherence and lack of adherence to classroom rules and procedures?
What will I do to establish and maintain effective relationships with students?
What will I do to communicate high expectations for all students?
What will I do to develop effective lessons organized into a cohesive unit?

For classroom lessons to be truly effective, educators must examine every component of the teaching process with equal resolve. Filled with charts, rubrics, and organizers, this methodical, user-friendly guide will help teachers examine and develop their knowledge and skills, so they can achieve a dynamic fusion of art and science that results in exceptional teaching and outstanding student achievement.

$26.95 U.S. Browse excerpts from ASCD books: www.ascd.org/books. Study guide online. Many ASCD members received this book as a member benefit upon its initial release. Learn more at: www.ascd.org/memberbooks.

The Art and Science of Teaching: A Comprehensive Framework for Effective Instruction
Robert J. Marzano
Association for Supervision and Curriculum Development, Alexandria, Virginia USA

Association for Supervision and Curriculum Development, 1703 N. Beauregard St., Alexandria, VA 22311-1714 USA. Phone: 800-933-2723 or 703-578-9600. Fax: 703-575-5400. Web site: www.ascd.org. E-mail: [email protected]. Author guidelines: www.ascd.org/write

Gene R. Carter, Executive Director; Nancy Modrak, Director of Publishing; Julie Houtz, Director of Book Editing & Production; Ernesto Yermoli, Project Manager; Reece Quiñones, Senior Graphic Designer; Circle Graphics, Typesetter; Vivian Coss, Production Specialist

Copyright © 2007 by the Association for Supervision and Curriculum Development (ASCD). All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission from ASCD. Readers who wish to duplicate material copyrighted by ASCD may do so for a small fee by contacting the Copyright Clearance Center (CCC), 222 Rosewood Dr., Danvers, MA 01923, USA (phone: 978-750-8400; fax: 978-646-8600; Web: www.copyright.com). For requests to reprint rather than photocopy, contact ASCD's permissions office: 703-575-5749 or [email protected]. Translation inquiries: [email protected]. Printed in the United States of America. Cover art copyright © 2007 by ASCD.

ASCD publications present a variety of viewpoints. The views expressed or implied in this book should not be interpreted as official positions of the Association.

ASCD Member Book, No. FY07-8 (July 2007, PC). ASCD Member Books mail to Premium (P), Comprehensive (C), and Regular (R) members on this schedule: Jan., PC; Feb., P; Apr., PCR; May, P; July, PC; Aug., P; Sept., PCR; Nov., PC; Dec., P.

PAPERBACK ISBN: 978-1-4166-0571-3. ASCD product #107001. Also available as an e-book through ebrary, netLibrary, and many online booksellers (see Books in Print for the ISBNs). Quantity discounts for the paperback edition only: 10–49 copies, 10%; 50+ copies, 15%; for 1,000 or more copies, call 800-933-2723, ext. 5634, or 703-575-5634. For desk copies: [email protected].

Library of Congress Cataloging-in-Publication Data
Marzano, Robert J.
The art and science of teaching : a comprehensive framework for effective instruction / Robert J. Marzano.
p. cm. Includes bibliographical references.
ISBN 978-1-4166-0571-3 (pbk. : alk. paper)
1. Effective teaching—United States. 2. Classroom management—United States. 3. Teaching—Aids and devices. 4. Learning, Psychology of. I. Title.
LB1025.3M3387 2007
371.102—dc22 2007005994

To Richard Strong: The best teacher I ever saw

Introduction: A Question Answered
Chapter 1: What will I do to establish and communicate learning goals, track student progress, and celebrate success?
Chapter 2: What will I do to help students effectively interact with new knowledge?
Chapter 3: What will I do to help students practice and deepen their understanding of new knowledge?
Chapter 4: What will I do to help students generate and test hypotheses about new knowledge?
Chapter 5: What will I do to engage students?
Chapter 6: What will I do to establish or maintain classroom rules and procedures?
Chapter 7: What will I do to recognize and acknowledge adherence and lack of adherence to classroom rules and procedures?
Chapter 8: What will I do to establish and maintain effective relationships with students?
Chapter 9: What will I do to communicate high expectations for all students?
Chapter 10: What will I do to develop effective lessons organized into a cohesive unit?
Afterword
References
Index
About the Author

Introduction: A Question Answered

Strange as it might sound to modern-day educators, there was a time in the not too distant past when people questioned the importance of schools and teachers. Specifically, the 1966 report entitled Equality of Educational Opportunity, commonly referred to as the Coleman report in deference to its senior author (Coleman et al., 1966), involved more than 640,000 students in grades 1, 3, 6, 9, and 12 and concluded the following: "Taking all these results together, one implication stands above all: that schools bring little to bear on a child's achievement that is independent of his background and general social context" (p. 235). This was a devastating commentary on the potential (or lack thereof) of schools and teachers to positively influence student achievement. In general, these results were interpreted as strong evidence that schools (and by inference the teachers within them) make little difference in the academic lives of students.

Since then a number of studies have provided evidence for a different conclusion (for a discussion, see Marzano, 2003b). Indeed, those studies demonstrate that effective schools can make a substantial difference in the achievement of students. In the last decade of the 20th century, the picture of what constitutes an effective school became much clearer. Among elements such as a well-articulated curriculum and a safe and orderly environment, the one factor that surfaced as the single most influential component of an effective school is the individual teachers within that school.

Many studies have quantified the influence an effective teacher has on student achievement that is relatively independent of anything else that occurs in the school (for discussions see Haycock, 1998; Marzano, 2003b; Nye, Konstantopoulos, & Hedges, 2004). Of these studies, the one by Nye, Konstantopoulos, and Hedges is the most compelling because it involved random assignment of students to classes controlled for factors such as the previous achievement of students, socioeconomic status, ethnicity, gender, class size, and whether or not an aide was present in class. The study involved 79 elementary schools in 42 school districts in Tennessee.

Among a number of findings, the study dramatically answers the question of how much influence the individual classroom teacher has on student achievement. Nye and colleagues (2004) summarize the results as follows:

These findings would suggest that the difference in achievement gains between having a 25th percentile teacher (a not so effective teacher) and a 75th percentile teacher (an effective teacher) is over one-third of a standard deviation (0.35) in reading and almost half a standard deviation (0.48) in mathematics.
Similarly, the difference in achievement gains between having a 50th percentile teacher (an average teacher) and a 90th percentile teacher (a very effective teacher) is about one-third of a standard deviation (0.33) in reading and somewhat smaller than half a standard deviation (0.46) in mathematics.... These effects are certainly large enough effects to have policy significance. (p. 253)

Figures I.1 and I.2 depict Nye and colleagues' findings.

Figure I.1. Greater Achievement Gains for Students with a 75th Percentile Teacher Versus a 25th Percentile Teacher (bar graph: 14 percentile points in reading, 18 percentile points in mathematics)

Figure I.2. Greater Achievement Gains for Students with a 90th Percentile Teacher Versus a 50th Percentile Teacher (bar graph: 13 percentile points in reading, 18 percentile points in mathematics)

Figure I.1 indicates that students who have a teacher at the 75th percentile in terms of pedagogical competence will outgain students who have a teacher at the 25th percentile by 14 percentile points in reading and 18 percentile points in mathematics. Figure I.2 indicates that students who have a 90th percentile teacher will outgain students who have a 50th percentile teacher by 13 percentile points in reading and 18 percentile points in mathematics. Again, Nye and colleagues (2004) note that these differences are significant enough to imply a need for policy changes.

It is important to remember that the Nye study was conducted in lower elementary grades. However, given the statistical controls employed and the consistency of their findings with other studies at different grade levels, one can conclude that the question as to whether effective teachers make a significant difference in student achievement has been answered. They do!

Whereas Nye and colleagues' (2004) study was not intended to identify specific characteristics of effective teachers, this book is. However, just as Nye's team qualified its findings, I too must qualify the recommendations made in this book. Notice that it is titled The Art and Science of Teaching. In this text I present a fair amount of research. One might conclude from this that I believe teaching to be a science. It is certainly true that research provides us with guidance as to the nature of effective teaching, and yet I strongly believe that there is not (nor will there ever be) a formula for effective teaching. This is not an unusual claim. Many researchers and those who try to apply research (a category into which I place myself) would probably agree. Commenting on educational research in the 1970s and 1980s, Willms (1992) notes, "I doubt whether another two decades of research will... help us specify a model for all seasons—a model that would apply to all schools in all communities at all times" (p. 65). A similar sentiment is credited to the famous mathematical statistician George Box, who is reported to have said that all mathematical models are false but some are useful (de Leeuw, 2004). In effect, Box warned that mathematical models that form the basis of all quantitative research are approximations only of reality, yet they can help us understand the underlying dynamics of a specific situation.
Reynolds and Teddlie (2000) address the issue in the following way: "Sometimes the adoption of ideas from research has been somewhat uncritical; for example, the numerous attempts to apply findings from one specific context to another entirely different context when research has increasingly demonstrated significant contextual differences" (p. 216).

Even though the comments of Willms (1992) and Reynolds and Teddlie (2000) address the broader issue of school reform, they are quite applicable to research on classroom instruction. No amount of further research will provide an airtight model of instruction. There are simply too many variations in the situations, types of content, and types of students encountered across the K–12 continuum.

Riehl (2006) offers an interesting perspective as a result of her contrast of educational research with medical research. She notes that medical research employs a variety of methodologies that range from randomized clinical trials to single subject case studies. But the findings from these studies are anything but absolute. She explains: "Even the seemingly most determinant causal association in medicine (such as the relationship between smoking and lung cancer) is really just a probability" (p. 26). Riehl further comments:

When reported in the popular media, medical research often appears as a blunt instrument, able to obliterate skeptics or opponents by the force of its evidence and arguments.... Yet repeated visits to the medical journals themselves can leave a much different impression. The serious medical journals convey the sense that medical research is an ongoing conversation and quest, punctuated occasionally by important findings that can and should alter practice, but more often characterized by continuing investigations. These investigations, taken cumulatively, can inform the work of practitioners who are building their own local knowledge bases on medical care. (pp. 27–28)

The individual medical practitioner must sift through a myriad of studies and opinions to build a local knowledge base for interacting with patients. So too must the practitioner in education. Educational research is not a blunt instrument that shatters all doubt about best practice. Rather it provides general direction that must be interpreted by individual districts, schools, and teachers in terms of their unique circumstances.

In short, research will never be able to identify instructional strategies that work with every student in every class. The best research can do is tell us which strategies have a good chance (i.e., high probability) of working well with students. Individual classroom teachers must determine which strategies to employ with the right students at the right time. In effect, a good part of effective teaching is an art—hence the title, The Art and Science of Teaching.

Viewing teaching as part art and part science is not a new concept. Indeed, in his article entitled "In Pursuit of the Expert Pedagogue," Berliner (1986) ultimately concludes that effective teaching is a dynamic mixture of expertise in a vast array of instructional strategies combined with a profound understanding of the individual students in class and their needs at particular points in time. In effect, in different words, Berliner characterized effective teaching as part art and part science more than two decades ago.

A Confluence of Previous Works

To a great extent this text represents the confluence of suggestions from a number of previous works in which I have been involved. Specifically, the text What Works in Schools (Marzano, 2003b) presents a framework for understanding the characteristics of effective schools and effective teachers within those schools. Three general characteristics of effective teaching are articulated in that framework:

1. Use of effective instructional strategies
2. Use of effective classroom management strategies
3. Effective classroom curriculum design

The book Classroom Instruction That Works (Marzano, Pickering, & Pollock, 2001) and its related text A Handbook for Classroom Instruction That Works (Marzano, Norford, Paynter, Pickering, & Gaddy, 2001) address the first general characteristic. The book Classroom Management That Works (Marzano, 2003a) and its related text A Handbook for Classroom Management That Works (Marzano, Gaddy, Foseid, Foseid, & Marzano, 2005) address the second characteristic of effective teaching. The third characteristic is addressed in a chapter in What Works in Schools but not in a separate text. From the outset, I tried to acknowledge that these three characteristics are highly interdependent and that to separate them is an artificial distinction. As noted in the book Classroom Instruction That Works,

We need to make one final comment on the limitations of the conclusions that educators can draw from reading this book. Although the title of this book speaks to instruction in a general sense, you should note that we have limited our focus to instructional strategies. There are certainly other aspects of classroom pedagogy that affect student achievement. In fact, we might postulate that effective pedagogy involves three related areas: (1) the instructional strategies used by the teacher, (2) the management techniques used by the teacher, and (3) the curriculum designed by the teacher. (Marzano, Pickering, & Pollock, 2001, pp. 9–10)

In short, the components of effective pedagogy can be symbolized in part as shown in Figure I.3.

Figure I.3. Three Components of Effective Classroom Pedagogy (diagram): effective classroom pedagogy comprises the use of effective instructional strategies, the use of effective classroom management strategies, and the use of effective classroom curriculum design strategies.

This book combines my previous works on classroom instruction and management just cited along with information from Classroom Assessment and Grading That Work (Marzano, 2006). It does so in the context of a comprehensive framework of effective teaching. It is a framework offered as a model of what I believe every district or school should develop on its own. Specifically, I recommend that schools and districts generate their own models using this one as a starting point. There are other models, in addition to this one, that districts and schools might consult in their efforts (see, for example, Good and Brophy, 2003; Mayer, 2003; Stronge, 2002).

The comprehensive model offered in this book is articulated in the form of 10 design questions. They are listed in Figure I.4. The remaining chapters address these questions in some detail. They represent a logical planning sequence for effective instructional design.
Question 10 is an omnibus question in that it organizes the previous nine into a framework for thinking about units of instruction and the lessons within those units.

Figure I.4. Instructional Design Questions
1. What will I do to establish and communicate learning goals, track student progress, and celebrate success?
2. What will I do to help students effectively interact with new knowledge?
3. What will I do to help students practice and deepen their understanding of new knowledge?
4. What will I do to help students generate and test hypotheses about new knowledge?
5. What will I do to engage students?
6. What will I do to establish or maintain classroom rules and procedures?
7. What will I do to recognize and acknowledge adherence and lack of adherence to classroom rules and procedures?
8. What will I do to establish and maintain effective relationships with students?
9. What will I do to communicate high expectations for all students?
10. What will I do to develop effective lessons organized into a cohesive unit?
© 2005 by Marzano & Associates. All rights reserved.

Chapter 1: What will I do to establish and communicate learning goals, track student progress, and celebrate success?

Arguably the most basic issue a teacher can consider is what he or she will do to establish and communicate learning goals, track student progress, and celebrate success. In effect, this design question includes three distinct but highly related elements: (1) setting and communicating learning goals, (2) tracking student progress, and (3) celebrating success. These elements have a fairly straightforward relationship. Establishing and communicating learning goals are the starting place. After all, for learning to be effective, clear targets in terms of information and skill must be established. But establishing and communicating learning goals alone do not suffice to enhance student learning. Rather, once goals have been set it is natural and necessary to track progress. This assessment does not occur at the end of a unit only but throughout the unit. Finally, given that each student has made progress in one or more learning goals, the teacher and students can celebrate those successes.

In the Classroom

Let's start by looking at a classroom scenario as an example. Mr. Hutchins begins his unit on Hiroshima and Nagasaki by passing out a sheet of paper with the three learning goals for the unit:

Goal 1. Students will understand the major events leading up to the development of the atomic bomb, starting with Einstein's publication of the theory of special relativity in 1905 and ending with the development of the two bombs Little Boy and Fat Man in 1945.

Goal 2. Students will understand the major factors involved in making the decision to use atomic weapons on Hiroshima and Nagasaki.

Goal 3. Students will understand the effects that using atomic weapons had on the outcome of World War II and the Japanese people.

At the bottom of the page is a line on which students record their own goal for the unit. To facilitate this step, Mr. Hutchins has a brief whole-class discussion and asks students to identify aspects of the content about which they want to learn more. One student says: "By the end of the unit I want to know about the Japanese Samurai." Mr.
Hutchins explains that the Samurai were warriors centuries before World War II but that the Samurai spirit definitely was a part of the Japanese view of combat. He says that sounds like a great personal goal.

For each learning goal, Mr. Hutchins has created a rubric that spells out specific levels of understanding. He discusses each level with students and explains that these levels will become even more clear as the unit goes on. Throughout the unit, Mr. Hutchins assesses students' progress on the learning goals using quizzes, tests, and even informal assessments such as brief discussions with students. Each assessment is scored using the rubric distributed on the first day.

As formative information is collected regarding student progress on these goals, students chart their progress using graphs. At first some students are dismayed by the fact that their initial scores are quite low—1s and 2s on the rubric. But throughout the unit students see their scores gradually rise. They soon realize that even if you begin the unit with a score of 0 for a particular learning goal, you can end up with a score of 4. By the end of the unit virtually all students have demonstrated that they have learned, even though everyone does not end up with the same final score.

Progress is celebrated for each student. For each learning goal, Mr. Hutchins recognizes those students who gained one point on the scale, each student who gained two points on the scale, and so on. Virtually every student in class has a sense of accomplishment by the unit's end.

Research and Theory

As demonstrated by the scenario for Mr. Hutchins's class, this design question includes a number of components, one of which is goal setting. Figure 1.1 summarizes the findings from a number of synthesis studies on goal setting.

Figure 1.1. Research Results for Goal Setting
Synthesis Study | Focus | Number of Effect Sizes | Average Effect Size | Percentile Gain
Wise & Okey, 1983 (a) | General effects of setting goals or objectives | 3 | 1.37 | 41
Wise & Okey, 1983 (a) | General effects of setting goals or objectives | 25 | 0.48 | 18
Lipsey & Wilson, 1993 (b) | General effects of setting goals or objectives | 204 | 0.55 | 21
Walberg, 1999 | General effects of setting goals or objectives | 21 | 0.40 | 16
(a) Two effect sizes are listed because of the manner in which effect sizes are reported. Readers should consult that study for more details.
(b) The review includes a wide variety of ways and contexts in which goals might be used.

To interpret these findings, it is important to understand the concept of an effect size. Briefly, in this text an effect size tells you how much larger (or smaller) you might expect the average score to be in a class where students use a particular strategy as compared to a class where the strategy is not used. In Figure 1.1 three studies are reported, and effect sizes are reported for each. Each of these studies is a synthesis study, in that it summarizes the results from a number of other studies. For example, the Lipsey and Wilson (1993) study synthesizes findings from 204 reports. Consider the average effect size of 0.55 from those 204 effect sizes. This means that in the 204 studies they examined, the average score in classes where goal setting was effectively employed was 0.55 standard deviations greater than the average score in classes where goal setting was not employed. Perhaps the easiest way to interpret this effect size is to examine the last column of Figure 1.1, which reports percentile gains.
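The arithmetic behind that last column can be made concrete with a small sketch of my own (not code from the book): assuming the percentile gain is obtained by passing the effect size through the standard normal cumulative distribution function, the values reported in Figure 1.1 are reproduced. The function name percentile_gain is hypothetical.

```python
# Minimal sketch (mine, not the book's): convert an effect size, expressed in
# standard deviation units, into the expected percentile-point gain by passing
# it through the standard normal cumulative distribution function.
from statistics import NormalDist

def percentile_gain(effect_size: float) -> float:
    """Expected percentile-point gain for a student who starts at the 50th percentile."""
    return NormalDist().cdf(effect_size) * 100 - 50

# Effect sizes from Figure 1.1; the printed gains match the table (41, 18, 21, 16).
for d in (1.37, 0.48, 0.55, 0.40):
    print(f"effect size {d:.2f} -> about {percentile_gain(d):.0f} percentile points")
```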
For the Lipsey and Wilson effect size of 0.55, the percentile gain is 21. This means that the average score in classes where goal setting was effectively employed would be 21 percentile points higher than the average score in classes where goal setting was not employed. (For a more detailed discussion of effect sizes and their interpretations, see Marzano, Waters, & McNulty, 2005.)

One additional point should be made about the effect sizes reported in this text. They are averages. Of the 204 effect sizes, some are much larger than the 0.55 average, and some are much lower. In fact, some are below zero, which indicates that the classrooms where goals were not set outperformed the classrooms where goals were set. This is almost always the case with research regarding instructional strategies. Seeing effect sizes like those reported in Figure 1.1 tells us that goal setting has a general tendency to enhance learning. However, educators must remember that the goal-setting strategy and every other strategy mentioned in this book must be done well and at the right time to produce positive effects on student learning.

As illustrated in Mr. Hutchins's scenario, feedback is intimately related to goal setting. Figure 1.2 reports the findings from synthesis studies on feedback.

Figure 1.2. Research Results for Feedback
Synthesis Study | Focus | Number of Effect Sizes | Average Effect Size | Percentile Gain
Bloom, 1976 | General effects of feedback | 8 | 1.47 | 43
Lysakowski & Walberg, 1981 (a) | General effects of feedback | 39 | 1.15 | 37
Lysakowski & Walberg, 1982 | General effects of feedback | 94 | 0.97 | 33
Haller, Child, & Walberg, 1988 (b) | General effects of feedback | 115 | 0.71 | 26
Tennenbaum & Goldring, 1989 | General effects of feedback | 16 | 0.66 | 25
Bangert-Drowns, Kulik, Kulik, & Morgan, 1991 | General effects of feedback | 58 | 0.26 | 10
Kumar, 1991 (c) | General effects of feedback | 5 | 1.35 | 41
Walberg, 1999 | General effects of feedback | 20 | 0.94 | 33
Haas, 2005 | General effects of feedback | 19 | 0.55 | 21
(a) Reported in Fraser, Walberg, Welch, & Hattie, 1987.
(b) Feedback was embedded in general metacognitive strategies.
(c) The dependent variable was engagement.

Notice that the effect sizes in Figure 1.2 tend to be a bit larger than those reported in Figure 1.1. This makes intuitive sense. Goal setting is the beginning step only in this design question. Clear goals establish an initial target. Feedback provides students with information regarding their progress toward that target. Goal setting and feedback used in tandem are probably more powerful than either one in isolation. In fact, without clear goals it might be difficult to provide effective feedback.

Formative assessment is another line of research related to the research on feedback. Teachers administer formative assessments while students are learning new information or new skills. In contrast, teachers administer summative assessments at the end of learning experiences, for example, at the end of the semester or the school year. Major reviews of research on the effects of formative assessment indicate that it might be one of the more powerful weapons in a teacher's arsenal. To illustrate, as a result of a synthesis of more than 250 studies, Black and Wiliam (1998) describe the impact of effective formative assessment in the following way:

The research reported here shows conclusively that formative assessment does improve learning.
The gains in achievement appear to be quite considerable, and as noted earlier, amongst the largest ever reported for educational interventions. As an illustration of just how big these gains are, an effect size of 0.7, if it could be achieved on a nationwide scale, would be equivalent to raising the mathematics attainment score of an "average" country like England, New Zealand, or the United States into the "top five" after the Pacific rim countries of Singapore, Korea, Japan, and Hong Kong. (p. 61)

One strong finding from the research on formative assessment is that the frequency of assessments is related to student academic achievement. This is demonstrated in the meta-analysis by Bangert-Drowns, Kulik, and Kulik (1991). Figure 1.3 depicts their analysis of findings from 29 studies on the frequency of assessments.

Figure 1.3. Achieved Gain Associated with Number of Assessments over 15 Weeks
Number of Assessments | Effect Size | Percentile Gain
0 | 0 | 0
1 | 0.34 | 13.5
5 | 0.53 | 20.0
10 | 0.60 | 22.5
15 | 0.66 | 24.5
20 | 0.71 | 26.0
25 | 0.78 | 28.5
30 | 0.82 | 29.0
Note: Effect sizes are from data reported by Bangert-Drowns, Kulik, & Kulik, 1991.

To interpret Figure 1.3, assume that we are examining the learning of a particular student who is involved in a 15-week course. (For a discussion of how this figure was constructed, see Marzano, 2006, Technical Note 2.2.) Figure 1.3 depicts the increase in learning one might expect when differing quantities of formative assessments are employed during that 15-week session. If five assessments are employed, a gain in student achievement of 20 percentile points is expected. If 25 assessments are administered, a gain in student achievement of 28.5 percentile points is expected, and so on. This same phenomenon is reported by Fuchs and Fuchs (1986) in their meta-analysis of 21 controlled studies. They report that providing two assessments per week results in an effect size of 0.85 or a percentile gain of 30 points.

A third critical component of this design question is the area of research on reinforcing effort and providing recognition for accomplishments. Reinforcing effort means that students see a direct link between how hard they try at a particular task and their success at that task. Over the years, research has provided evidence for this intuitively appealing notion, as summarized in Figure 1.4. Among other things, reinforcing effort means that students see a direct relationship between how hard they work and how much they learn. Quite obviously, formative assessments aid this dynamic in that students can observe the increase in their learning over time.

Providing recognition for student learning is a bit of a contentious issue—at least on the surface. Figure 1.5 reports the results of two synthesis studies on the effects of praise on student performance. The results reported by Wilkinson (1981) are not very compelling, in that praise does not seem to have much of an effect on student achievement. The 6 percentile point gain shown in those studies is not that large. On the other hand, the results reported by Bloom (1976) are noteworthy; a 21 percentile point gain is considerable. A plausible reason for the discrepancy is that these two studies were very general in nature, in that praise was defined in a wide variety of ways across studies.

Figure 1.4. Research Results for Reinforcing Effort
Synthesis Study | Focus | Number of Effect Sizes | Average Effect Size | Percentile Gain
Stipek & Weisz, 1981 (a) | Reinforcing effort | 17 | 0.54 | 21
Schunk & Cox, 1986 | Reinforcing effort | 3 | 0.93 | 32
Kumar, 1991 (b) | Reinforcing effort | 6 | 1.72 | 46
Hattie, Biggs, & Purdie, 1996 (c) | Reinforcing effort | 8 | 1.42 | 42
Hattie, Biggs, & Purdie, 1996 (c) | Reinforcing effort | 2 | 0.57 | 22
Hattie, Biggs, & Purdie, 1996 (c) | Reinforcing effort | 2 | 2.14 | 48
Hattie, Biggs, & Purdie, 1996 (c) | Reinforcing effort | 2 | 0.97 | 33
(a) These studies also dealt with students' sense of control.
(b) The dependent variable was engagement.
(c) Multiple effect sizes are listed because of the manner in which effect sizes are reported. Readers should consult that study for more details.

Figure 1.5. Research Results on Praise
Synthesis Study | Focus | Number of Effect Sizes | Average Effect Size | Percentile Gain
Bloom, 1976 | General effects of praise | 12 | 0.54 | 21
Wilkinson, 1981 (a) | General effects of praise | 14 | 0.16 | 6
(a) Reported in Fraser et al., 1987.

Other synthesis studies—particularly research on the effects of reward on intrinsic motivation—have been more focused in their analyses. Figure 1.6 summarizes findings from two major synthesis studies on the topic.

Figure 1.6. Research Results on Rewards
Synthesis Study | Measure Used to Assess Intrinsic Motivation | Number of Effect Sizes | Average Effect Size | Percentile Gain
Cameron & Pierce, 1994 | Free-choice behavior | 57 | -0.06 | -2
Cameron & Pierce, 1994 | Interest/attitude | 47 | 0.21 | 8
Deci, Koestner, & Ryan, 2001 | Free-choice behavior | 101 | -0.24 | -9
Deci, Koestner, & Ryan, 2001 | Interest/attitude | 84 | 0.04 | 2

Among other things, both studies in Figure 1.6 examined the impact of what is commonly referred to as extrinsic rewards on what is referred to as intrinsic motivation. Both are somewhat fuzzy concepts that allow significant variation in how they are defined. (For a discussion, see Cameron & Pierce, 1994.) Considered at face value though, external reward is typically thought of as some type of token or payment for success. Intrinsic motivation is necessarily defined in contrast to extrinsic motivation. According to Cameron and Pierce (1994):

Intrinsically motivated behaviors are ones for which there is no apparent reward except the activity itself (Deci, 1971). Extrinsically motivated behaviors, on the other hand, refer to behaviors in which an external controlling variable [such as reward] can be readily identified. (p. 364)

The average effect sizes in Figure 1.6 show an uneven pattern—two effect sizes are below zero, and two effect sizes are above zero. However, the two effect sizes below zero are for studies that used free-choice behavior as the measure of intrinsic motivation. Typically these studies examine whether students (i.e., subjects) tend to engage in the task for which they are being rewarded even when they are not being asked to do the task. In both synthesis studies, the effect of extrinsic reward on free-choice behavior was negative. In contrast, positive effects (albeit small for the Deci, Koestner, & Ryan, 2001, study) are reported when the measure of intrinsic motivation is students' interest. Typically student interest is assessed by some form of self-report. The contradictory findings for student interest versus student free-choice behavior do not provide any clear direction, but they do demonstrate the highly equivocal nature of the research on rewards and intrinsic motivation. A possible answer is found, however, by examining more carefully the distinction between free-time behavior and interest, as shown in Figure 1.7.

Figure 1.7. Influence of Abstract Versus Tangible Rewards
Synthesis Study | Measure Used to Assess Intrinsic Motivation | Number of Effect Sizes | Average Effect Size | Percentile Gain
Cameron & Pierce, 1994 | Verbal on interest/attitude | 15 | 0.45 | 17
Cameron & Pierce, 1994 | Verbal on free time | 15 | 0.42 | 16
Cameron & Pierce, 1994 | Tangible on interest/attitude | 37 | 0.09 | 4
Cameron & Pierce, 1994 | Tangible on free time | 51 | -0.20 | -8
Deci, Koestner, & Ryan, 2001 | Verbal on interest/attitude | 21 | 0.31 | 12
Deci, Koestner, & Ryan, 2001 | Verbal on free time | 21 | 0.33 | 13
Deci, Koestner, & Ryan, 2001 | Tangible on interest/attitude | 92 | -0.34 | -13
Deci, Koestner, & Ryan, 2001 | Tangible on free time | 70 | -0.07 | -3

This research indicates that when verbal rewards are employed (e.g., positive comments about good performance, acknowledgments of knowledge gain) the trend is positive when intrinsic motivation is measured either by interest/attitude or by free-choice behavior. Even these results must be interpreted cautiously. Certainly, factors such as the age of students and the context in which rewards (verbal or otherwise) are given can influence their effect on students. It is safe to say, however, that when used appropriately verbal rewards and perhaps also tangible rewards can positively affect student achievement. Deci, Ryan, and Koestner (2001) share the following observations:

As our research and theory have always suggested, there are ways of using even tangible rewards that are less likely to have a negative effect and may, under limited circumstances, have a positive effect on intrinsic motivation. However, the use of rewards as a motivational strategy is clearly a risky proposition, so we continue to argue for thinking about educational practices that will engage students' interest and support the development of their self-regulation. We believe that it is an injustice to the integrity of our teachers and students to simply advocate that educators focus on the use of rewards to control behavior rather than grapple with the deeper issues of (a) why many students are not interested in learning within our educational system and (b) how intrinsic motivation and self-regulation can be promoted among these students. (p. 50)

Action Steps

Action Step 1. Make a Distinction Between Learning Goals and Learning Activities or Assignments

Even though the term learning goal is commonly used by practitioners, there appears to be some confusion as to its exact nature. For example, consider the following list, which typifies learning goals one might find in teachers' planning books:

Students will successfully complete the exercises in the back of Chapter 3.
Students will create a metaphor representing the food pyramid.
Students will be able to determine subject/verb agreement in a variety of simple, compound, and complex sentences.
Students will understand the defining characteristics of fables, fairy tales, and tall tales.
Students will investigate the relationship between speed of air flow and lift provided by an airplane wing.

Some of these statements—the first, second, and last—involve activities as opposed to learning goals. As the name implies, activities are things students do. As we will see in Design Questions 2, 3, and 4, activities are a critical part of effective teaching. They constitute the means by which the ends or learning goals are accomplished. However, they are not learning goals. A learning goal is a statement of what students will know or be able to do. For example, Figure 1.8 lists learning goals for science, language arts, mathematics, and social studies, which differ from the related activities.

Figure 1.8. Learning Goals and Activities
Subject | Learning Goals | Activities
Science | Students will understand that the sun is the largest body in the solar system, that the moon and earth rotate on their axes, and that the moon orbits the earth while the earth orbits the sun. | Students will watch the video on the relationship between the earth and the moon and the place of these bodies in the solar system.
Language Arts | Students will be able to sound out words that are not in their sight vocabulary but are known to them. | Students will observe the teacher sounding and blending a word.
Mathematics | Students will be able to solve equations with one variable. | Students will practice solving 10 equations in cooperative groups.
Social Studies | Students will understand the defining characteristics of the barter system. | Students will describe what the United States might be like if it were based on the barter system as opposed to a monetary system.

The learning goals presented in Figure 1.8 have a distinct format that emphasizes the knowledge students would potentially gain. Teachers provide the related activities to help students attain those learning goals. I will explain how some activities are designed to introduce students to new content in Chapter 2, how some activities are designed to help students practice and deepen their understanding of new content in Chapter 3, and how some activities are designed to help students generate and test hypotheses about content in Chapter 4. Teachers would most likely use the science and language arts activities in Figure 1.8 to introduce new content to students. The mathematics activity would most likely serve as a practice activity. The social studies activity would most likely promote generating and testing hypotheses.

In general, I recommend that learning goals be stated in one of the following formats:

Students will be able to ______________________.
or
Students will understand ____________________.

These formats represent different types of knowledge and have been suggested by those who have constructed taxonomies of learning (Anderson et al., 2001; Marzano & Kendall, 2007). The reason for the two formats is that content knowledge can be organized into two broad categories: declarative knowledge and procedural knowledge.

Chapter 4 addresses these two types of knowledge in some depth. Briefly, though, declarative knowledge is informational in nature. Procedural knowledge involves strategies, skills, and processes. In Figure 1.8, the learning goals for science and social studies are declarative or informational in nature. Hence they employ the stem "students will understand...." The mathematics and language arts goals are procedural or strategy oriented. Hence they employ the stem "students will be able to...."

Occasionally a learning goal involves a substantial amount of declarative and procedural knowledge. In such cases, the following format can be useful:

Students will understand ___________ and be able to ___________.

To illustrate, the following 3rd grade learning goal for number sense includes both declarative and procedural knowledge: "Students will understand the defining characteristics of whole numbers, decimals, and fractions with like denominators, and will be able to convert between equivalent forms as well as represent factors and multiples of whole numbers through 100."

Action Step 2. Write a Rubric or Scale for Each Learning Goal

Once learning goals have been established, the next step is to state them in rubric format. There are many different approaches to designing rubrics. The one presented here is explained in depth in the book Classroom Assessment and Grading That Work (Marzano, 2006) and has some research supporting its utility (see Flicek, 2005a, 2005b; Marzano, 2002). For reasons articulated in Classroom Assessment and Grading That Work, I prefer to use the term scale as opposed to the term rubric. Figure 1.9 shows what I refer to as the simplified scale.

Figure 1.9. Simplified Scale
Score 4.0: In addition to Score 3.0, in-depth inferences and applications that go beyond what was taught.
Score 3.0: No major errors or omissions regarding any of the information and/or processes (simple or complex) that were explicitly taught.
Score 2.0: No major errors or omissions regarding the simpler details and processes but major errors or omissions regarding the more complex ideas and processes.
Score 1.0: With help, a partial understanding of some of the simpler details and processes and some of the more complex ideas and processes.
Score 0.0: Even with help, no understanding or skill demonstrated.
© 2004 by Marzano & Associates. All rights reserved.

The simplified scale contains five whole-point values only—4.0, 3.0, 2.0, 1.0, and 0.0—as contrasted with a more detailed scale that has half-point scores—3.5, 2.5, 1.5, and 0.5. Although the simplified scale is generally less precise than the complete scale, I have found it a good starting place for teachers who are not familiar with using scales of this design. Additionally, in some situations half-point scores are difficult to discern or simply do not make much sense.

To demonstrate how the scale shown in Figure 1.9 can be used, assume that a health teacher wishes to score an assessment on the topic of obesity. The lowest score value on the scale is a 0.0, representing no knowledge of the topic—even with help the student demonstrates no understanding. A score of 1.0 indicates that with help the student shows partial knowledge of the simpler details and processes as well as the more complex ideas and processes regarding obesity. To be assigned a score of 2.0, the student independently demonstrates understanding and skill related to the simpler details and processes but not the more complex ideas and processes regarding obesity. For example, the student knows the general definition of obesity and some of the more obvious causes. A score of 3.0 indicates that the student demonstrates understanding of the simple and complex content that was taught in class. For example, the student understands the relationship between obesity and the chances of developing diseases such as heart disease as an adult. Additionally, the student understands risk factors for becoming obese as an adult even if you are not obese as a child. Finally, a score of 4.0 indicates that the student demonstrates inferences and applications that go beyond what was taught in class. For example, the student is able to identify his or her risk for becoming obese and personal actions necessary to avoid obesity, even though those actions were not specifically addressed in class.

The simplified scale has intuitive appeal and is easy to use. However, measurement theory tells us that the more values a scale has, the more precise the measurement (Embretson & Reise, 2000).
To illustrate, assume that a teacher used a scale with only two values—pass and fail—to score a test. Also assume that to pass the test students had to answer 60 percent of the items correctly. In this scenario, the student who answered all items correctly would receive the same score (pass) as the student who answered 60 percent of the items correctly. Similarly, the student who answered no items correctly would receive the same score (fail) as the student who answered 59 percent of the items correctly. In general, the more score points on a scale, the more precise that scale can be. Figure 1.10 presents the complete scale.

Figure 1.10. Complete Scale
Score 4.0: In addition to Score 3.0 performance, in-depth inferences and applications that go beyond what was taught.
  Score 3.5: In addition to Score 3.0 performance, partial success at inferences and applications that go beyond what was taught.
Score 3.0: No major errors or omissions regarding any of the information and/or processes (simple or complex) that were explicitly taught.
  Score 2.5: No major errors or omissions regarding the simpler details and processes and partial knowledge of the more complex ideas and processes.
Score 2.0: No major errors or omissions regarding the simpler details and processes but major errors or omissions regarding the more complex ideas and processes.
  Score 1.5: Partial knowledge of the simpler details and processes but major errors or omissions regarding the more complex ideas and processes.
Score 1.0: With help, a partial understanding of some of the simpler details and processes and some of the more complex ideas and processes.
  Score 0.5: With help, a partial understanding of some of the simpler details and processes but not the more complex ideas and processes.
Score 0.0: Even with help, no understanding or skill demonstrated.
© 2004 by Marzano & Associates. All rights reserved.

The scale in Figure 1.10 has half-point scores, whereas the scale in Figure 1.9 does not. The half-point scores are set off to the right to signify that they describe student response patterns between the whole-point scores and therefore allow for more precision in scoring an assessment. The half-point scores allow for partial credit to be assigned to items. To illustrate, a score of 3.0 indicates that a student has answered all items or tasks correctly that involve simpler details and processes as well as all items or tasks that involve more complex ideas and processes. A score of 2.0 indicates that the student has answered all items or tasks correctly that involve simpler details and processes but has missed all items or tasks that involve more complex ideas and processes. However, what score should be assigned if a student has answered all items or tasks correctly regarding simpler details and processes and some items or tasks correctly involving more complex ideas and processes or has received partial credit on those items or tasks? Using the simplified scale a teacher would have to assign a score of 2.0. Using the complete scale a teacher would assign a score value of 2.5. The second option allows for much more precision of measurement.

The complete scale, then, is a logical extension of the simplified scale. Teachers can use them interchangeably. When the type of assessment allows for determining partial credit, the teacher uses the complete scale.
When the type of assessment does not allow for determining partial credit, the simplified scale is used.

The generic scales depicted in Figures 1.9 and 1.10 are easily translated into scales for specific learning goals. To illustrate, consider Figure 1.11, which shows a scale for the previously mentioned 3rd grade learning goal for number sense.

Figure 1.11. Scale for Number Sense in 3rd Grade
Score 4.0: In addition to Score 3.0 performance, in-depth inferences and applications that go beyond what was taught.
  Score 3.5: In addition to Score 3.0 performance, partial success at inferences and applications that go beyond what was taught.
Score 3.0: The student demonstrates number sense by ordering and comparing whole numbers (millions), decimals (thousandths), and fractions with like denominators; converting between equivalent forms of fractions, decimals, and whole numbers; and finding and representing factors and multiples of whole numbers through 100. The student exhibits no major errors or omissions.
  Score 2.5: No major errors or omissions regarding the simpler details and processes and partial knowledge of the more complex ideas and processes.
Score 2.0: The student exhibits no major errors or omissions regarding the simpler details and processes: basic terminology (for example, millions, thousandths, like denominator, factor, multiple) and basic solutions (for example, 5.15 is greater than 5.005; 3/4 is the same as 0.75; 4 is a factor of 12). However, the student exhibits major errors or omissions regarding the more complex ideas and processes stated in Score 3.0.
  Score 1.5: Partial knowledge of the simpler details and processes but major errors or omissions regarding the more complex ideas and processes.
Score 1.0: With help, a partial understanding of some of the simpler details and processes and some of the more complex ideas and processes.
  Score 0.5: With help, a partial understanding of some of the simpler details and processes but not the more complex ideas and processes.
Score 0.0: Even with help, no understanding or skill demonstrated.
Source: Adapted from Marzano & Haystead, in press.

The scale in Figure 1.11 is basically identical to the generic form of the complete scale in Figure 1.10 except that the score values 3.0 and 2.0 identify specific elements. Although it is also possible to fill in specific elements for the score value of 4.0, I have found that many school and district leaders wish to leave this up to individual teachers. For a more detailed discussion, the reader should consult Classroom Assessment and Grading That Work (Marzano, 2006). When learner goals have been articulated in scale format as in Figure 1.11, the teacher and students have clear direction about instructional targets as well as descriptions of levels of understanding and performance for those targets.

Action Step 3. Have Students Identify Their Own Learning Goals

One way to enhance student involvement in an instructional unit's subject matter is to ask students to identify something that interests them beyond the teacher-identified learning goals. During a unit on habitats, for example, a particular student might decide that she wants to find out about a particular animal—the falcons she sometimes sees flying over the field next to her bedroom. Even though personal applications might not seem obvious to students at first, a little guidance can go a long way in demonstrating to students that they can relate their own interests to the content addressed in class.
To illustrate, a teacher once shared with me a personal goal a student had identified during a mathematics unit on polynomials. The student wanted to know what types of polynomials were used when rating quarterbacks in football. As a result of some Internet research, the student identified and could explain three formulas for rating quarterbacks:

National Football League Quarterback Rating Formula
a = (((Comp/Att) × 100) − 30) / 20
b = ((TDs/Att) × 100) / 5
c = (9.5 − ((Int/Att) × 100)) / 4
d = ((Yards/Att) − 3) / 4
a, b, c, and d cannot be greater than 2.375 or less than 0.
QB Rating = (a + b + c + d) / 0.06

Arena Football League Quarterback Rating Formula
a = (((Comp/Att) × 100) − 30) / 20
b = ((TDs/Att) × 100) / (20/3)
c = (9.5 − ((Int/Att) × 100)) / 4
d = ((Yards/Att) − 3) / 4
a, b, c, and d cannot be greater than 2.375 or less than 0.
QB Rating = (a + b + c + d) / 0.06

National Collegiate Athletic Association Quarterback Rating Formula
a = (Comp/Att) × 100
b = (TDs/Att) × 100
c = (Int/Att) × 100
d = Yards/Att
QB Rating = a + (3.3 × b) − (2 × c) + (8.4 × d)

Key: Comp = pass completions, Att = pass attempts, TDs = completed touchdown passes, Int = interceptions thrown, Yards = passing yards.

Once students have identified their personal goals, they should write them in a format similar to the one used by the teacher:

When this unit is completed I will better understand _________.
or
When this unit is completed I will be able to ________________.

Students might also use a simplified version of the scale to keep track of their progress:

4. I did better than I thought I would do.
3. I accomplished my goal.
2. I didn't accomplish everything I wanted to, but I learned quite a bit.
1. I tried but didn't really learn much.
0. I didn't really try to accomplish my goal.

Action Step 4. Assess Students Using a Formative Approach

As described in the research and theory section, formative assessment is not only a powerful measurement tool but also a powerful instructional tool because it allows students to observe their own progress. As I explained, formative assessments are used while students are learning new content. In the case of a unit of instruction, formative assessments are used from the beginning to the end. The scale discussed in Action Step 2 is designed specifically for formative assessment because each score on the scale describes specific progress toward a specific learning goal. That is, a score of 4.0 indicates that the student has gone beyond the information and skill taught by the teacher. A score of 3.0 indicates that the student has learned the target knowledge as articulated by the teacher. A score of 2.0 indicates that the student understands or can perform the simpler information and skills relative to the learning goal but not the more complex information or processes. A score of 1.0 indicates
To design an assessment regarding this topic, the teacher would make sure she has items that represent score values of 4.0, 3.0, and 2.0. She would in- clude some items or tasks on the test that require students to order and compare whole numbers to the millions, decimals to thousandths, and fractions with like denominators. She would include items or tasks that require students to convert between equivalent forms of fractions, decimals, and whole numbers. Likewise she would include some items or tasks that require students to represent factors and multiples of whole numbers through 100. Success on these tasks would indicate a score value of 3.0. To determine whether students should receive a score value of 2.0, the teacher would include items that address simpler aspects of the learning goal. She might assess knowledge of basic terminology such as millions, thousandths, like denominator, factor, and multiple. Finally, to determine whether students deserve a score value of 4.0, she would include items or tasks that go beyond what she had addressed in class. For example, she might include items or tasks that require stu- dents to convert composite numbers that had not been addressed in class. Scoring assessments designed around the simplified or complete scale is a matter of examining the pattern of responses for each student. (For a detailed discussion, see Classroom Assessment and Grading That Work [Marzano, 2006].) In the beginning of the unit, students would most likely receive low scores on these assessments. However, by the end of the unit students should show growth in their scores. This is at the heart of formative assessment—examining the gradual increase in knowledge for specific learning goals throughout a unit. Action Step 5. Have Students Chart Their Progress on Each Learning Goal Because formative assessments are designed to provide a view of students’ learn- ing over time, one useful activity is to have students chart their own progress on each learning goal. To do so, the teacher provides a blank chart for each learning goal that resembles the one shown in Figure 1.12. The chart in Figure 1.12 has already been filled out. The first column repre- sents an assessment given by the teacher on October 5. This student received a score of 1.5 on that assessment. The second column represents the assessment on 10408-02_CH01.indd 25 6/27/07 1:28:18 PM 26 The Art and Science of Teaching FIGURE 1.12 Student Progress Chart Keeping Track of My Learning Name: I. H. Learning Goal: Probability My score at the beginning: 1.5 My goal is to beat 3 by Nov. 30 Specific things I am going to do to improve: Work 15 min. three times a week Measurement Topic: Probability 4 3 2 1 0 a b c d e f g h i j a. Oct. 5 f. Nov. 26 b. Oct.12 g. c. Oct. 20 h. d. Oct. 30 i. e. Nov. 12 j. Source: Reprinted from Marzano, 2006, p. 90. October 12. This student received a score of 2.0 on that assessment; and so on. Having each student keep track of his or her scores on learning goals in this fash- ion provides them with visual views of their progress. It also allows for powerful discussions between teacher and students. The teacher can discuss progress with each student regarding each learning goal. Also, in a tracking system such as this one the student and teacher are better able to communicate with parents regarding the student’s progress in specific areas of information and skill. 
Finally, note that the chart has places for students to identify the progress they wish to make and the things they are willing to do to make that progress. Action Step 6. Recognize and Celebrate Growth One of the most powerful aspects of formative assessment is that it allows stu- dents to see their progress over time, as depicted in Figure 1.12. In a system like 10408-02_CH01.indd 26 6/27/07 1:28:19 PM Chapter 1 27 this one, virtually every student will succeed in the sense that each student will increase his or her knowledge relative to specific learning goals. One student might have started with a score of 2.0 on a specific learning goal and then increased to a score of 3.5; another student might have started with a 1.0 and increased to a 2.5—both have learned. Knowledge gain, then, is the currency of student success in a formative assessment system. Focusing
