Agricultural-Management-Practices-March-2018 PDF

Document Details


Uploaded by PoshApostrophe1982

Lehlabile Secondary School

2018

The Statistical Information and Research (SIR) Unit

Tags

agricultural management practices, examination questioning, cognitive demand, educational assessment

Summary

This document is an exemplar book on effective questioning in agricultural management practices, compiled by the Statistical Information and Research (SIR) Unit in March 2018.

Full Transcript


Exemplar Book on Effective Questioning: Agricultural Management Practices
Compiled by the Statistical Information and Research (SIR) Unit
March 2018

PREFACE

The National Senior Certificate (NSC) examinations are set and moderated in part using tools which specify the types of cognitive demand and the content deemed appropriate for Agricultural Management Practices at Grade 12 level. Until recently, the level of cognitive demand made by a question was considered to be the main determinant of the overall level of cognitive challenge of an examination question. However, during various examination evaluation projects conducted by Umalusi from 2008-2012, evaluators found the need to develop more complex tools to distinguish between questions which were categorised at the same cognitive demand level but which were not of comparable degrees of difficulty.

For many subjects, a three-level degree of difficulty designation (easy, moderate and difficult) was developed for each type of cognitive demand. Evaluators first decided on the type of cognitive process required to answer a particular examination question, and then decided on the degree of difficulty, as an attribute of the type of cognitive demand, of that examination question. Whilst this practice, which offered the wider options of easy, moderate and difficult levels of difficulty for each type of cognitive demand, overcame some limitations of a one-dimensional cognitive demand taxonomy, other constraints emerged.

Bloom's Taxonomy of Educational Objectives (BTEO) (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) and the Revised Bloom's Taxonomy are based on the assumption that a cumulative hierarchy exists between the different categories of cognitive demand (Bloom et al., 1956; Bloom, Hastings & Madaus, 1971). The practice of 'levels of difficulty' did not necessarily correspond to a hierarchical model of increasing complexity of cognitive demand. A key problem with using the level of difficulty as an attribute of the type of cognitive demand of examination questions is that questions categorised at a higher level of cognitive demand are not necessarily more difficult than questions categorised at lower levels of cognitive demand. For example, during analyses a basic recognition or recall question could be considered more difficult than an easy evaluation question.

Research further revealed that evaluators often struggled to agree on the classification of questions at so many different levels. The finer categorisation for each level of cognitive demand, and the process of trying to match questions to pre-set definitions of levels of difficulty, made the process of making judgments about cognitive challenge overly procedural. The complex two-dimensional multi-level model also made findings about the cognitive challenge of an examination very difficult for the Umalusi Assessment Standards Committee (ASC) to interpret.

In an Umalusi report, Developing a Framework for Assessing and Comparing the Cognitive Challenge of Home Language Examinations (Umalusi, 2012), it was recommended that the type and level of cognitive demand of a question and the level of a question's difficulty should be analysed separately. Further, it was argued that the ability to assess cognitive challenge lay in experts' abilities to recognise subtle interactions and make complicated connections that involved the use of multiple criteria simultaneously.
However, the tacit nature of such judgments can make it difficult to generate a common understanding of what constitutes criteria for evaluating the cognitive challenge of examination questions, despite the descriptions given in the policy documents of each subject. The report also suggested that the Umalusi external moderators and evaluators be provided with a framework for thinking about question difficulty which would help them identify where the main sources of difficulty or ease in questions might reside. Such a framework should provide a common language for evaluators and moderators to discuss and justify decisions about question difficulty. It should also be used for building the capacity of novice or less experienced moderators and evaluators to exercise the necessary expert judgments by making them more aware of key aspects to consider in making such judgments.

The revised Umalusi examination moderation and evaluation instruments for each subject draw on research and literature reviews, together with the knowledge gained through the subject workshops. At these workshops, the proposed revisions were discussed with different subject specialists to attain a common understanding of the concepts, tools and framework used, and to test whether the framework developed for thinking about question difficulty 'works' for different content subjects. Using the same framework to think about question difficulty across subjects will allow for greater comparability of standards across subjects and projects.

An important change that has been made to the revised examination evaluation instrument is that the analysis of the type of cognitive demand of a question and the analysis of the level of difficulty of each question are now treated as two separate judgments involving two different processes. Accordingly, the revised examination evaluation instrument now includes assessment of difficulty as well as cognitive demand.

LIST OF ABBREVIATIONS

ASC – Assessment Standards Committee
BTEO – Bloom's Taxonomy of Educational Objectives
CAPS – Curriculum Assessment Policy Statement
DBE – Department of Basic Education
FET – Further Education and Training
IEB – Independent Examinations Board
IKS – Indigenous Knowledge System
NSC – National Senior Certificate
NQF – National Qualifications Framework
QAA – Quality Assurance of Assessment
QCC – Qualifications, Curriculum and Certification
SIR – Statistical Information and Research

LIST OF TABLES

TABLE 1: Levels and types of cognitive demand for Agricultural Management Practices according to the CAPS document
TABLE 2: Examples of questions at Level 1: Knowledge
TABLE 3: Examples of questions at Level 2: Comprehension and Application
TABLE 4: Examples of higher order questions: Level 3 – Analysis, Synthesis and Evaluation
TABLE 5: Levels of difficulty of examination questions
TABLE 6: Framework for thinking about question difficulty
TABLE 7: Examples of questions at difficulty Level 1 – Easy
TABLE 8: Examples of questions at difficulty Level 2 – Moderately challenging
TABLE 9: Examples of questions at difficulty Level 3 – Difficult
TABLE 10: Examples of questions at difficulty Level 4 – Very difficult

ACKNOWLEDGEMENTS

This Agricultural Management Practices exemplar book is informed by Umalusi research reports of previous years, especially the report by Reeves (Umalusi, 2012) titled 'Developing a framework for assessing and comparing the cognitive challenge of Home Language examinations'.
In addition, Agricultural Management Practices subject experts and practitioners are acknowledged for their contribution to the content of this exemplar book. Included in this group are: Umalusi External Moderators and Maintaining Standards Subject Teams and Team Leaders, together with the South African Comprehensive Assessment Institute and the Independent Examinations Board (IEB) Examiners and Internal Moderators. We also acknowledge the contributions of the members of the Umalusi Quality Assurance of Assessment (QAA), Qualifications, Curriculum and Certification (QCC) and Statistical Information and Research (SIR) Units.

We specifically acknowledge the contribution made by the individuals listed below:

- Ms Agnes Mohale, who was responsible for the management and coordination of the Exemplar Books Project.
- Dr Cheryl Reeves, who was responsible for developing the framework that underpinned the design of the exemplar books.
- Mr Thapelo Rangongo, Ms Sisanda Loni and Ms Shannon Doolings, for their assistance and support in the administration of the project.
- The review team, which included the following members: Dr Fourten Khumalo and Mr Stanley Gcwensa.

This exemplar book was prepared by Mr Thebeyamotse Tshabang.

TABLE OF CONTENTS

PREFACE
LIST OF ABBREVIATIONS
LIST OF TABLES
ACKNOWLEDGEMENTS
1. INTRODUCTION
2. CONTEXT
3. PURPOSE OF THE EXEMPLAR BOOK
4. MODERATION AND EVALUATION OF ASSESSMENT
5. COGNITIVE DEMANDS IN ASSESSMENT
6. EXPLANATIONS AND EXAMPLES OF QUESTIONS ASSESSED AT THE DIFFERENT COGNITIVE DEMAND LEVELS IN THE AGRICULTURAL MANAGEMENT PRACTICES TAXONOMY ACCORDING TO CAPS
7. ANALYSING THE LEVEL OF DIFFICULTY OF EXAMINATION QUESTIONS
7.1 Question difficulty is assessed independently of the type and level of cognitive demand
7.2 Question difficulty is assessed at four levels of difficulty
7.3 Question difficulty is determined against the assumed capabilities of the ideal 'envisaged' Grade 12 Agricultural Management Practices NSC examination candidate
7.4 Question difficulty is determined using a common framework for thinking about question difficulty
7.5 Question difficulty entails distinguishing unintended sources of difficulty or ease from intended sources of difficulty or ease
7.6 Question difficulty entails identifying differences in levels of difficulty within a single question
8. EXAMPLES OF QUESTIONS ASSESSED AT DIFFERENT LEVELS OF DIFFICULTY
9. CONCLUDING REMARKS
REFERENCES

1. INTRODUCTION

The rules of assessment are essentially the same for all types of learning because to learn is to acquire knowledge or skills, while to assess is to identify the level of knowledge or skill that has been acquired (Fiddler, Marienau & Whitaker, 2006). Nevertheless, the field of assessment, in South Africa and elsewhere in the world, is fraught with contestation. A review of the research literature on assessment indicates difficulties, misunderstanding and confusion in how terms describing educational measurement concepts, and the relationships between them, are used (Frisbie, 2005). Umalusi believes that if all role players involved in examination processes can achieve a common understanding of key terms, concepts and processes involved in setting, moderating and evaluating examination papers, much unhappiness can be avoided.
This exemplar book presents a particular set of guidelines for both novice and experienced Agricultural Management Practices national examiners, internal and external moderators, and evaluators to use in the setting, moderation and evaluation of examinations at the National Senior Certificate (NSC) level.

The remainder of the exemplar book is organised as follows: First, the context in which the exemplar book was developed is described (Part 2), followed by a statement of its purpose (Part 3). Brief summaries follow of the roles of moderation and evaluation (Part 4) and of cognitive demand (Part 5) in assessment. Examination questions selected from the NSC Agricultural Management Practices examinations of the assessment bodies, the Department of Basic Education (DBE) and/or the Independent Examinations Board (IEB), are used to illustrate how to identify different levels of cognitive demand as required by the Curriculum and Assessment Policy Statement (CAPS) Agricultural Management Practices document (Part 6). Part 7 explains the protocols for identifying different levels of difficulty within a question paper. Application of the Umalusi framework for determining difficulty described in Part 7 is illustrated, with reasons, by another set of questions from a range of Agricultural Management Practices examinations (Part 8). Concluding remarks complete the exemplar book (Part 9).

2. CONTEXT

Umalusi has the responsibility to quality assure qualifications, curricula and assessments of National Qualifications Framework (NQF) Levels 1-5. This is a legal mandate assigned by the General and Further Education and Training Act (Act 58 of 2001) and the National Qualifications Framework Act (Act 67 of 2008). To operationalise its mandate, Umalusi, amongst other things, conducts research and uses the findings of this research to enhance the quality and standards of curricula and assessments.

Since 2003, Umalusi has conducted several research studies that have investigated examination standards. For example, Umalusi conducted research on the NSC examinations, commonly known as 'Matriculation' or Grade 12, in order to gain an understanding of the standards of the new examinations (first introduced in 2008) relative to those of the previous NATED 550 Senior Certificate examinations (Umalusi, 2009a, 2009b).

Research undertaken by Umalusi has assisted the organisation to arrive at a more informed understanding of what is meant by assessing the cognitive challenge of the examinations, and of the processes necessary for determining whether the degree of cognitive challenge of examinations is comparable within a subject, across subjects and between years. Research undertaken by Umalusi has revealed that different groups of examiners, moderators and evaluators do not always interpret cognitive demand in the same way, posing difficulties when comparisons of cognitive challenge were required. The research across all subjects also showed that using the type and level of cognitive demand of a question as the only measure for judging the cognitive challenge of a question is problematic, because cognitive demand levels on their own do not necessarily distinguish between degrees of difficulty of questions.

The new Umalusi framework for thinking about question difficulty described in this exemplar book is intended to support all key role players in making complex decisions about what makes a particular question challenging for Grade 12 examination candidates.
3. THE PURPOSE OF THE EXEMPLAR BOOK

The overall goal of this exemplar book is to ensure the consistency of standards of examinations across the years in the Further Education and Training (FET) sub-sector, and Grade 12 in particular. The specific purpose is to build a shared understanding among teachers, examiners, moderators, evaluators and other stakeholders of the methods used for determining the type and level of cognitive demand, as well as the level of difficulty, of examination questions.

Ultimately, the common understanding that this exemplar book seeks to foster is based on the premise that the process of determining the type and level of cognitive demand of questions and that of determining the level of difficulty of examination questions are two separate judgements involving two different processes, both necessary for evaluating the cognitive challenge of examinations. This distinction between cognitive demand and difficulty posed by questions needs to be made in the setting, moderation, evaluation and comparison of Agricultural Management Practices examination papers.

The exemplar book includes an explanation of the new Umalusi framework, which is intended to provide all role players in the setting of Agricultural Management Practices examinations with a common language for thinking and talking about question difficulty. The reader of the exemplar book is taken through the process of evaluating examination questions: first in relation to determining the type and level of cognitive demand made by a question, and then in terms of assessing the level of difficulty of a question. This is done by providing examples of a range of questions which make different types of cognitive demands on candidates, and examples of questions at different levels of difficulty. Each question is accompanied by an explanation of why it was judged as being of a particular level of cognitive demand or difficulty, and the reasoning behind the judgements made is explained. The examples of examination questions provided were sourced by Agricultural Management Practices evaluators, during various Umalusi workshops, from previous DBE and IEB Agricultural Management Practices question papers set both pre- and post- the implementation of CAPS.

This exemplar book is an official document. The process of revising the Umalusi examination evaluation instrument and of developing a framework for thinking about question difficulty for both moderation and evaluation purposes has been a consultative one, involving the DBE and the IEB assessment bodies. The new framework for thinking about question difficulty is to be used by Umalusi in the moderation and evaluation of Grade 12 Agricultural Management Practices examinations, and by all the assessment bodies in the setting of the question papers, in conjunction with the CAPS documents.

4. MODERATION AND EVALUATION OF ASSESSMENT

A fundamental requirement, ethically and legally, is that assessments are fair, reliable and valid (American Educational Research Association [AERA], American Psychological Association [APA] and National Council on Measurement in Education [NCME], 1999). Moderation is one of several quality assurance assessment processes aimed at ensuring that an assessment is fair, reliable and valid (Downing & Haladyna, 2006). Ideally, moderation should be done at all levels of an education system, including the school, district, provincial and national level, in all subjects.
The task of Umalusi examination moderators is to ensure that the quality and standards of a particular examination are maintained each year. Part of this task is for moderators to alert examiners to details of questions, material and/or any technical aspects in examination question papers that are deemed to be inadequate or problematic and that, therefore, challenge the validity of that examination. In order to do this, moderators need to pay attention to a number of issues as they moderate a question paper; these are briefly described below.

Moderation of the technical aspects of examination papers includes checking correct question and/or section numbering, and ensuring that visual texts and/or resource material included in the papers are clear and legible. The clarity of instructions given to candidates, the wording of questions, the appropriateness of the level of language used, and the correct use of terminology need to be interrogated. Moderators are expected to detect question predictability (for example, when the same questions regularly appear in different examinations) and bias in examination papers. The adequacy and accuracy of the marking memorandum (marking guidelines) need to be checked to ensure that they reflect and correspond with the requirements of each question asked in the examination paper being moderated.

In addition, the task of moderators is to check that papers adhere to the overall examination requirements set out by the relevant assessment body with regard to format and structure (including the length and the type of texts or reading selections prescribed). This includes assessing compliance with assessment requirements by ensuring that the content is examined at an appropriate level and in the relative proportions (weightings) of content and/or skills areas required by the assessment body.

The role of Umalusi examination evaluators is to analyse examination papers after they have been set, moderated and approved by the Umalusi moderators. This type of analysis entails applying additional expert judgments to evaluate the quality and standard of finalised examination papers before they are written by candidates in a specific year. The overall aim of this evaluation is to judge the comparability of an examination against the previous years' examination papers to ensure that consistent standards are maintained over the years.

The results of the evaluators' analyses, together with the moderators' experiences, provide the Umalusi Assessment Standards Committee (ASC) with valuable information which is used in the process of statistical moderation of each year's examination results. This information therefore forms an important component of the essential qualitative data informing the ASC's final decisions in the standardisation of the examinations. In order for the standardisation process to work effectively, efficiently and fairly, it is important that examiners, moderators and evaluators have a shared understanding of how the standard of an examination paper is assessed, and of the frameworks and main instruments that are used in this process.

5. COGNITIVE DEMANDS IN ASSESSMENT

The Standards for educational and psychological testing (AERA, APA, & NCME, 1999) require evidence to support interpretations of test scores with respect to cognitive processes.
Therefore, valid, fair and reliable examinations require that the levels of cognitive demand required by examination questions are appropriate and varied (Downing & Haladyna, 2006). Examination papers should not be dominated by questions that require reproduction of basic information or replication of basic procedures, while under-representing questions invoking higher level cognitive demands. Accordingly, the Grade 12 CAPS NSC subject examination specifications state that examination papers should be set in such a way that they reflect proportions of marks for questions at various levels of cognitive demand. NSC examination papers are expected to comply with the specified cognitive demand levels and weightings. NSC examiners have to set, and NSC internal moderators have to moderate, examination papers as reflecting the proportions of marks for questions at different levels of cognitive demand specified in the documents. Umalusi's external moderators and evaluators are similarly tasked with confirming compliance of the examinations with the CAPS cognitive demand levels and weightings, and Umalusi's revised examination evaluation instruments continue to reflect this requirement.

Despite the fact that subject experts, examiners, moderators and evaluators are familiar with the levels and explanations of the types of cognitive demand shown in the CAPS documents, Umalusi researchers have noted that individuals do not always interpret and classify the categories of cognitive demand provided in the CAPS in the same way. In order to facilitate a common interpretation and classification of the cognitive demands made by questions, the next section of this exemplar book provides a clarification of each cognitive demand level for Agricultural Management Practices, followed by illustrative examples of examination questions that have been classified at that level of cognitive demand.

6. EXPLANATIONS AND EXAMPLES OF QUESTIONS ASSESSED AT THE DIFFERENT COGNITIVE DEMAND LEVELS IN THE AGRICULTURAL MANAGEMENT PRACTICES TAXONOMY ACCORDING TO CAPS

The taxonomies of cognitive demand for each school subject in the CAPS documents are mostly based on the Revised Bloom's Taxonomy (Anderson and Krathwohl, 2001), but resemble the original Bloom's Taxonomy in that categories of cognitive demand are arranged along a single continuum. Bloom's Taxonomy of Educational Objectives (BTEO) (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) and the Revised Bloom's Taxonomy imply that each more advanced or successive category of cognitive demand subsumes all categories below it. The CAPS taxonomies of cognitive demand make a similar assumption (Crowe, 2012).

Note: In classifying the type and level of cognitive demand, each question is classified at the highest level of cognitive process involved. Thus, although a particular question may involve recall of knowledge, as well as comprehension and application, the question is classified as an 'analysis' question if that is the highest level of cognitive process involved. If 'evaluating' is the highest level of cognitive process involved, the question as a whole should be classified as an 'evaluation' question. On the other hand, if one or more sub-sections of the question and the marks allocated for each sub-section can stand independently, then the level of cognitive demand for each sub-section of the question should be analysed separately.

The CAPS documents for many subjects also give examples of descriptive verbs that can be associated with each of the levels of cognitive demand.
However, it is important to note that such 'action verbs' can be associated with more than one cognitive level, depending on the context of a question. The Agricultural Management Practices CAPS document states that Grade 12 NSC Agricultural Management Practices examination papers should examine three levels of cognitive demand (Table 1).

TABLE 1: LEVELS AND TYPES OF COGNITIVE DEMAND FOR AGRICULTURAL MANAGEMENT PRACTICES ACCORDING TO THE CAPS DOCUMENT

Cognitive levels: LEVEL 1 | LEVEL 2 | LEVEL 3
Cognitive skill assessed: Knowledge | Comprehension & Application | Analysis, Synthesis and Evaluation

SOURCE: CAPS AGRICULTURAL MANAGEMENT PRACTICES FET (2011), p. 39

To facilitate reading of this section, each of the above cognitive demand levels in the Agricultural Management Practices taxonomy is explained, and the explanation is followed by at least three examples of questions from previous Agricultural Management Practices NSC examinations classified at each of the levels of cognitive demand shown in Table 1 above. These examples were selected to represent the best and clearest examples of each level of cognitive demand that the Agricultural Management Practices experts could find. The discussion below each example question explains the reasoning behind the classification of the question at that particular type of cognitive demand (Table 2 to Table 4).

Note: Be mindful that analyses of the level of cognitive process of a question and of the level of difficulty of each question are to be treated as two separate judgments involving two different processes. Therefore, whether the question is easy or difficult should not influence the categorisation of the question in terms of the type and level of cognitive demand. Questions should NOT be categorised as higher order evaluation/synthesis questions simply because they are difficult questions. Some questions involving the cognitive process of recall or recognition may be more difficult than other recall or recognition questions. Not all comprehension questions are easier than questions involving analysis or synthesis. Some comprehension questions may be very difficult, for example, explanation of complex scientific processes. For these reasons, you need to categorise the level of difficulty of questions separately from identifying the type of cognitive process involved.

TABLE 2: EXAMPLES OF QUESTIONS AT LEVEL 1: KNOWLEDGE

Example 1: Question (4.5, 2010, November): The farmer can sell produce in the free market system.
a) Briefly describe TWO main aspects that characterise the free market system. (2)
b) State TWO advantages of the free market system to a farmer. (2)
c) State TWO disadvantages of the free market system to a farmer. (2)

Discussion: The verbs 'describe' and 'state' indicate that all three questions involve low order cognitive processes. Indeed, the questions require simple factual recall of knowledge regarding the free market system. To answer the questions, Grade 12 candidates need to recall basic information they have learnt in class on the free market system.

Memorandum/Marking guidelines
(a) TWO main aspects that characterise the free market system:
- Produce sold to whoever. ✓
- At any time. ✓
- At a negotiated price. ✓
- At any place. ✓
(Any 2) (2)
(b) TWO advantages of the free market system:
- Money immediately available. / Payments made in cash/cheque/bank deposit/internet. ✓
- Results in higher quality products as a result of competition. ✓
- Minimal marketing cost. ✓
- Larger profits. ✓
- Sell directly to the consumer/no middleman. ✓
(Any 2) (2)
(c) TWO disadvantages of the free market system:
- Results in the forming of monopolies/conglomerates. ✓
- Low bargaining power. ✓
- Farmer needs training/time to market products. ✓
- No coordination between producers, which leads to a surplus. ✓
- Farmer can be exposed to a high level of competition. ✓
(Any 2) (2)

Example 2: Question (2.4, 2012, November): Farm labourers work in harsh weather conditions. Most of the time it is difficult to find labourers that are willing to work on a farm. The labourers have difficulty doing their tasks because of a lack of experience. The secondary industries often offer the workers better salaries. The amount of work on a farm fluctuates from season to season and therefore farmers tend not to recruit many labourers. The farmer ensures good quality meals and provides protective clothing for the workers. The farmer acquires new and modern equipment for the workers to work more efficiently. Better remuneration of the workers will be implemented and even certificates of appreciation for excellent work will be handed out to the workers.
a) Name FOUR general problems related to labour as a production factor as contained in the scenario above. (4)
b) Identify FOUR ways in the scenario in which the farmer improved the working conditions of the farm workers. (4)

Discussion: The action verbs 'name' and 'identify' indicate that these can be classified as low order questions. Both questions require candidates to recognise, locate and retrieve explicitly stated information from the simple text provided. Candidates can extract the correct responses directly from the case study provided. All Grade 12 candidates should, in any case, be familiar with the problems faced by farm workers in South Africa, as these are commonly covered in newspaper and television reports (for example, the farm workers' strike at De Doorns in the Western Cape was well covered on television). Candidates taking this subject also come into regular contact with farm workers, and such experiences make it possible for them to know first-hand about the conditions which farm workers face on a daily basis.

Memorandum/Marking guidelines
(a) FOUR general problems related to labour as a production factor:
- Lack of experienced/untrained labourers. ✓
- Difficult to find labourers/scarcity of labourers. ✓
- Recruiting potential of workers for a farming enterprise is low. ✓
- Harsh weather conditions/conditions under which workers are working on the farm, e.g. unattractive. ✓
- Secondary industries offer better salaries/competition between city life/mines/industries and farm life. ✓
- Seasonal fluctuation of workload on most farms/seasonal labour. ✓
(Any 4) (4)
(b) FOUR ways to improve the working conditions of the farm workers:
- Better clothing. ✓
- Correct equipment. ✓
- Safe working conditions. ✓
- Good quality meals/balanced diet/nutritious meals. ✓
- Remuneration. ✓
- Certificates of appreciation/acknowledgement of good labour practices. ✓
(Any 4) (4)

Example 3: Question (4.2, 2010, March): Producer groups play a key role in shaping the future of agriculture and rural development. Crop and livestock producers in many countries participate in marketing assessment programmes. That allows them to benefit from various types of value-added activities by capturing or retaining more of the profits from their product. Name FIVE advantages that a producer group could offer to benefit their members.
(5)

Discussion: The action verb 'name' suggests that this is a low order question. Indeed, this is a basic recall question. Grade 12 candidates should have learnt about producer or commodity groups and their advantages. All they have to do here is remember what they have learnt in class. The source material also provides clues as to what the advantages are.

Memorandum/Marking guidelines
Advantages of producer groups:
- Address a range of constraints on agricultural production and marketing. ✓
- Provide better access to sources of production equipment, supplies and technology. ✓
- Promotion/advancement of agricultural products. ✓
- Assist farmers in obtaining financing for production. ✓
- Research in aspects that would enhance production. ✓
- Bargain for better prices on behalf of the farmers. ✓
(5)

TABLE 3: EXAMPLES OF QUESTIONS AT LEVEL 2: COMPREHENSION AND APPLICATION

Example 1: Question (2.8, 2013, March): Mechanisation is the use of machines instead of people to do the work. The photograph below shows mechanised farm equipment used on a farm.
a) Identify the self-propelled farm machine shown in the photograph above. (2)
b) Briefly explain the main benefits of using equipment such as that illustrated in the photograph above. (5)

Discussion: The mechanisation in the photograph provided as source material is commonly used in commercial farming, as well as at those schools which offer this particular subject. Thus, the type of machinery should be recognisable, and not new, to Grade 12 candidates. Question a) is classified as a 'remembering' and 'recognition' question. However, question b) makes medium level cognitive demands on candidates: it requires them to 'explain' the benefits of using the combine harvester or the equipment shown in the photograph. Question b) requires candidates to show their understanding of such equipment by expressing ideas in their own words.

Memorandum/Marking guidelines
(a) Identification of the farm implement:
- It is a combine harvester/harvester. ✓✓ (2)
(b) Main benefits:
- Harvesting in bulk, especially larger areas. ✓
- Saving on time/faster method of harvesting. ✓
- Saving on labour. ✓
- Not a large capital investment, because the farmer makes use of a contractor. ✓
- Can be more cost effective. ✓
- Make use of skilled workers/labourers to handle equipment. ✓
(Any 5) (5)

Example 2: Question (4.7, 2011, March): The following marketing channels exist in the formal and non-formal agricultural sectors for products:

MARKETING CHANNELS
Farmer as producer of agricultural products
→ Retailers | Cooperatives | Vendors/Hawkers | Fresh produce markets | Farm stalls | Export markets | Spaza shops | Flea markets
→ Consumers

a) Draw a schematic representation of a marketing chain for an agricultural product, representing FOUR channels. (4)
b) Differentiate between formal and informal marketing channels by giving ONE example of each. (2)

Discussion: The action verbs 'draw' and 'differentiate' suggest that both these questions are middle order questions. Both questions assess candidates' understanding and application of the formal and informal marketing channels. Part a) is not merely a recall question, since candidates are required to process the channels and make a schematic drawing to show their understanding of the concept. Differentiating between the two marketing channels (part b) requires a clear understanding of the operational side of the two marketing channels.
However, part b) provides candidates with a clue as to which market channels to concentrate on, so that deviation from the question asked is minimised. Therefore, both questions may be classified as middle order rather than higher order questions, owing to the pointers given towards the answer.

Memorandum/Marking guidelines
(a) Schematic representation of a market chain:
- Farmer/producer → any TWO applicable channels → consumer ✓ (4)
(b) ONE example of formal and informal marketing channels:
Formal:
- Retailers. ✓
- Cooperatives. ✓
- Fresh produce markets. ✓
- Export markets. ✓
(Any 1) (1)
Informal:
- Vendors/hawkers. ✓
- Farm stalls. ✓
- Spaza shops. ✓
- Flea markets. ✓
(Any 1) (1)

Example 3: Question (4.5, 2012, November): The following table represents the marketing of farm products.

PRICE OF THE PRODUCT (RAND) | QUANTITY DEMANDED PER WEEK (kg) | QUANTITY SUPPLIED PER WEEK (kg)
1 | 6 | 0
2 | 5 | 1
3 | 4 | 2
4 | 3 | 3
5 | 2 | 4

a) Draw a line graph indicating the supply, the demand and the point of market equilibrium. (4)
b) Explain how the price of a product in the market will affect the marketing strategy for your product. (2)

Discussion: To answer these questions, candidates need to interpret and understand the data provided and use the data in a new situation. In question a) they have to construct a graph; they have to be able to plot the graph on non-graph paper, which is more cognitively demanding for them. Plotting the graph requires candidates to indicate the point of market equilibrium and to have the correct x- and y-axes for the data or information provided. This task is thus an example of applying a complex procedure. For question b), candidates have to devise a marketing strategy based on their interpretation of the data on the graph. They have to show their understanding of the data provided in the table in order to design and explain the best marketing strategy in line with their interpretation of the information on the graph.

Memorandum/Marking guidelines
(a) RUBRIC:
- Correct demand graph. ✓
- Correct supply graph. ✓
- Labelling of graphs. ✓
- Market equilibrium label. ✓
(4)
(b) Effect of price on marketing strategy:
- The higher the price, the more product one will want to sell. ✓
- The lower the price, the less product one wants to sell. ✓
- A lower price will tend to keep product on the farm/from the market, to sell when the price is higher. ✓
- A low price leads to a search for different markets/opportunities. ✓
(Any 2) (2)
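The memorandum expects candidates to read the equilibrium off their graph. As a side note for readers checking the data, the short sketch below is our own illustration (not part of the question or memorandum, and all variable names are ours): it finds the equilibrium directly from the table as the price at which the quantity demanded equals the quantity supplied.

```python
# Illustrative sketch only: locating the market equilibrium in the
# question's table, i.e. the price at which demand equals supply.
prices = [1, 2, 3, 4, 5]   # price of the product (Rand)
demand = [6, 5, 4, 3, 2]   # quantity demanded per week (kg)
supply = [0, 1, 2, 3, 4]   # quantity supplied per week (kg)

for price, d, s in zip(prices, demand, supply):
    if d == s:  # the demand and supply curves intersect here
        print(f"Market equilibrium at R{price}: {d} kg per week")
```

Run against the table, this prints a single equilibrium at R4 and 3 kg per week: the point where the falling demand line crosses the rising supply line, which is what candidates are expected to label on their graph.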
TABLE 4: EXAMPLES OF HIGHER ORDER QUESTIONS: LEVEL 3 – ANALYSIS, SYNTHESIS AND EVALUATION

Example 1: Question (2.7, 2011, March): The map below represents a farm that has recently been surveyed to determine the potential of the soil in the camps and field. This farmer intends to use the above data to differentiate between the cultivation strategies on the different fields on the farm.
(a) Identify the process that was used to determine the different potential levels of the soils on this farm. (1)
(b) This farm is mainly divided into part A and part B, as indicated on the map above. Identify a section that will be utilised for EACH of the following types of agricultural production and give a reason for your answer:
i. Fodder production (2)
ii. Crop production (2)

Discussion: The two questions require candidates to comprehend, apply, analyse and synthesise the data provided in order to respond correctly. In part a) candidates have to identify the process that was used to determine the different potential levels of the soils. Doing this entails interpreting and analysing the diagram provided and synthesising the information in order to respond appropriately: although the verb 'identify' might suggest a low order question, this is not a mere recall question, and the demand of the question is higher. In part b), candidates are also expected to evaluate the information provided for proper utilisation of the land. They need to evaluate the information given on the map for sustainable usage of the land for agricultural purposes. Part b) involves reviewing all the evidence and then making appropriate statements and judgments about the value of the land, and identifying appropriate methods to use for particular purposes. Candidates have to combine separate ideas to form a new but coherent whole, which requires innovativeness and creativity. Note that, although candidates have to work at several cognitive levels to answer these questions, including understanding and applying, both questions are classified as higher order questions, as analysing, synthesising and evaluating are the highest levels of cognitive demand involved.

Memorandum/Marking guidelines
(a) The process to determine the potential of soil:
- Soil surveying. ✓ (1)
(b) Identify a section utilised for types of farming and give a reason:
i. A – mostly/dominantly low-potential land. ✓✓ (2)
ii. B – mostly/dominantly high-potential land. ✓✓ (2)

Example 2: Question (2.3, 2009, November): The graph below represents the different climatic factors on the farm "Klipheuwel" in the winter rainfall area of the country. The graph indicates the average climatic values taken during week-long periods.
a) Identify the week, from the data supplied in the graph above, that would be the most stressful to animals. Give a reason to support your answer. (2)
b) Compare the possible effect of humidity combined with other climatic factors on the animals during week 4 and week 7. (4)
c) Explain a measure that a farmer can apply to reduce the adverse effect of each of the following climatic conditions:
i. Very high temperatures.
ii. Very high wind speed.
iii. Very low temperatures. (3)
d) Deduce a possible reason for using more units of energy during week 4 compared to the other weeks indicated on the graph above. (2)

Discussion: The action verbs 'compare', 'explain' and 'deduce' all indicate that questions b) to d) are higher order questions. However, all four of the questions require interpretation and analysis of a complex graph with more than one variable in order to respond correctly. Candidates need to understand all the variables on the graph to make the correct comparisons for temperature, humidity and wind. They have to make appropriate deductions and provide reasons based on the units of energy utilised during week 4. In analysing the data, candidates are also required to evaluate the data, which involves reviewing or synthesising all the evidence, facts and ideas, and then making appropriate statements and judgments in relation to a specific week.

Memorandum/Marking guidelines
(a)
- Week 4 ✓ – cold temperature, high humidity, strong wind. ✓
- Week 7 ✓ – high temperature, high humidity. ✓
(2)
(b)
- Week 4 – raining (wet animals) ✓, wind (cools animals down) ✓, low temperature (cools animals down) ✓. (Any 2)
- Week 7 – raining (wet animals) ✓, low wind speed (animals less affected) ✓, higher temperature (animals not so cold) ✓. (Any 2)
(4)
(c)
i. Shelter ✓, shade trees/shade netting ✓, adapted breeds ✓, cooling facilities like ventilation fans ✓ and mist sprayers ✓. (Any 1) (1)
ii. Bring the animals indoors to avoid the effect of the wind. ✓ Erect a windbreak to reduce the effect of the wind. ✓ (Any 1) (1)
iii. Heating facilities ✓, housing (shelter) ✓, effective feeding ✓, adapted breeds ✓. (Any 1) (1)
(d) Lower temperature ✓ – requires more heating, thus more units ✓ of electricity are used. Animals need to be provided with more food to create their own energy to keep warm. (2)

Example 3: Question (3.7, 2012, March): The profit or loss of the farming enterprise is called the gross margin. To calculate the gross margin of a farming enterprise, the total expenses are subtracted from the total income of that enterprise. Knowing the gross margin helps the farmer to make decisions on the profitability of the farming enterprise. A farmer has the following income values for both the crop and livestock enterprises:
- Crop enterprise = R10 500,34
- Livestock enterprise = R12 300,15
The tables below reflect the expenses incurred by the farmer.

EXPENSES OF RUNNING A LIVESTOCK ENTERPRISE
Date | Expenses | Amount (R)
12/03/11 | Purchase of 8 x 50 kg starter pellets at R50.00 a bag | 400.00
25/03/11 | Purchase of dip, 2 x 5 litres at R120.00 each | 240.00
26/06/11 | Purchase of deworming medicines | 145.00
10/07/11 | Purchase of licks | 300.00
22/08/11 | Purchase of vaccines | 250.00
22/08/11 | Purchase of 8 x 50 kg grower pellets at R56.50 a bag | 452.00
17/10/11 | Purchase of 50 kg feed supplements | 2 300.00
23/12/11 | Purchase of Lucerne hay | 3 940.00
TOTAL | | 8 027.00

EXPENSES OF GROWING 3 ha OF CROPS
Date | Expenses | Amount (R)
08/09/11 | Ploughing of land at R210.00 a hectare | 630.00
12/09/11 | Harrowing of land at R180.00 a hectare | 540.00
15/09/11 | Fertilising of land at R100.00 a hectare | 300.00
15/09/11 | Purchase of 15 x 50 kg mixed fertilizers at R80.00 a bag | 1 200.00
16/09/11 | Runners for planting: 15 x 50 kg bags of runners at R50.00 a bag | 750.00
19/09/11 | Food for the planting workers | 200.00
15/10/11 | 12 x 50 kg LAN at R95.00 a bag | 1 140.00
15/11/11 | 12 x 50 kg at R95.00 a bag | 1 140.00
TOTAL | | 5 900.00

a) Calculate the gross margin of both enterprises using the information in the tables above. (4)
b) Deduce the most profitable enterprise. Substantiate your answer by giving a reason. (3)

Discussion: The two questions are classified as making higher order cognitive demands, since they require analysis and synthesis. To answer the questions, candidates have to understand and analyse both the data provided in the two tables for crop and livestock production and the two production enterprises. The two questions are closely linked in that, if a candidate does the calculation wrong in a), s/he will also forfeit marks on the follow-up question b). To perform the calculations of gross margin for the production enterprises (question a) and arrive at the correct answer requires deep understanding, analysis and synthesis of the income and expenditure of each production enterprise or commodity. After calculating the gross margins in a), candidates are required in b) to evaluate or make a judgment about the results, so that they are able to determine which production enterprise is the most profitable, and to provide a good reason for their response.
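Before turning to the marking guidelines, the arithmetic in this example can be checked mechanically. The sketch below is a minimal illustration of our own (the variable names are ours; the figures are those quoted in the question) of the gross margin calculation described above: total income minus total expenses for each enterprise.

```python
# Illustrative sketch only: gross margin = total income - total expenses,
# using the income values and expense-table totals from the question.
income = {"crop": 10500.34, "livestock": 12300.15}    # Rand
expenses = {"crop": 5900.00, "livestock": 8027.00}    # Rand (table totals)

gross_margin = {name: income[name] - expenses[name] for name in income}
for name, margin in gross_margin.items():
    print(f"{name} enterprise gross margin: R{margin:.2f}")

# The most profitable enterprise is the one with the larger gross margin.
print("Most profitable:", max(gross_margin, key=gross_margin.get))
```

This reproduces the memorandum values below: R4 600,34 for the crop enterprise and R4 273,15 for the livestock enterprise, making the crop enterprise the more profitable of the two.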
Memorandum/Marking guidelines
(a) Calculate gross margin of enterprises:
Crop enterprise's gross margin is: ✓ R10 500,34 – R5 900,00 ✓ = R4 600,34 ✓
Livestock enterprise's gross margin is: ✓ R12 300,15 – R8 027,00 ✓ = R4 273,15 ✓
(4)
(b) Deduce most profitable enterprise with reason:
The crop production enterprise, ✓ because the difference between its income and its expenses ✓ was greater than that of the livestock enterprise. ✓
(3)

To accomplish the goal of discriminating between high achievers, those performing very poorly, and all candidates in between, examiners need to vary the challenge of examination questions. Until recently, the assumption has been that 'alignment' with the allocated percentage of marks for questions at the required cognitive demand levels meant that sufficient examination questions were relatively easy, moderately challenging, and difficult for candidates to answer. However, research and candidate performance both indicate that a range of factors other than the type of cognitive demand contribute to the cognitive challenge of a question. Such factors include the level of content knowledge required, the language used in the question, and the complexity or number of concepts tested. In other words, cognitive demand levels on their own do not necessarily distinguish between degrees of difficulty of questions. This research helps, to some extent, to explain why, despite the fact that some NSC examination papers have complied with the specified cognitive demand weightings stipulated in the policy, they have not adequately distinguished between candidates with a range of academic abilities, in particular between higher ability candidates. As a result, examiners, moderators and evaluators are now required to assess the difficulty level of each examination question in addition to judging its cognitive demand. Section 7 below explains the new protocol introduced by Umalusi for analysing examination question difficulty.

7. ANALYSING THE LEVEL OF DIFFICULTY OF EXAMINATION QUESTIONS

When analysing the level of difficulty of each examination question, there are six important protocols to note. These are:

1. Question difficulty is assessed independently of the type and level of cognitive demand.
2. Question difficulty is assessed against four levels of difficulty.
3. Question difficulty is determined against the assumed capabilities of the ideal 'envisaged' Grade 12 Agricultural Management Practices NSC examination candidate.
4. Question difficulty is determined using a common framework for thinking about question difficulty.
5. Question difficulty entails distinguishing unintended sources of difficulty or ease from intended sources of difficulty or ease.
6. Question difficulty entails identifying differences in levels of difficulty within a single question.

Each of the above protocols is individually explained and discussed below.

7.1 Question difficulty is assessed independently of the type and level of cognitive demand

As emphasised earlier in this exemplar book, the revised Umalusi NSC examination evaluation instruments separate the analysis of the type of cognitive demand of a question from the analysis of the level of difficulty of each examination question. Cognitive demand describes the type of cognitive process that is required to answer a question, and this does not necessarily equate or align with the level of difficulty of other aspects of a question, such as the difficulty of the content knowledge that is being assessed.
For example, a recall question can ask a candidate to recall very complex and abstract scientific content. The question would be categorised as Level 1 in terms of the cognitive demand taxonomy, but may be rated as 'difficult' (Level 3 in Table 5 below).

Note: Cognitive demand is just one of the features of a question that can influence your comparative judgments of question difficulty. The type and level of cognitive process involved in answering a question does not necessarily determine how difficult the question would be for candidates. Not all evaluation/synthesis/analysis questions are more difficult than questions involving lower order processes such as comprehension or application.

7.2 Question difficulty is assessed at four levels of difficulty

The revised Umalusi NSC examination evaluation instruments require evaluators to exercise expert judgments about whether each examination question is 'easy', 'moderately challenging', 'difficult' or 'very difficult' for the envisaged Grade 12 learner to answer. Descriptions of these categories of difficulty are shown in Table 5.

TABLE 5: LEVELS OF DIFFICULTY OF EXAMINATION QUESTIONS

1 – Easy for the envisaged Grade 12 student to answer.
2 – Moderately challenging for the envisaged Grade 12 student to answer.
3 – Difficult for the envisaged Grade 12 student to answer.
4 – Very difficult for the envisaged Grade 12 student to answer. The skills and knowledge required to answer the question allow the top students (extremely high-achieving/ability students) to be discriminated from other high-achieving/ability students.

Note: The fourth level, 'very difficult', has been included in the levels of difficulty of examination questions to ensure that there are sufficient questions that discriminate well amongst higher ability candidates.

7.3 Question difficulty is determined against the assumed capabilities of the ideal 'envisaged' Grade 12 Agricultural Management Practices NSC examination candidate

The revised Umalusi NSC examination evaluation instruments require evaluators to exercise expert judgments about whether each examination question is 'easy', 'moderately challenging', 'difficult' or 'very difficult' for the 'envisaged' Grade 12 learner to answer (Table 5). In other words, assessment of question difficulty is linked to a particular target student within the population of NSC candidates, that is, the Grade 12 candidate of average intelligence or ability. The Grade 12 learners that you may have taught over the course of your career cannot be used as a benchmark of the 'envisaged' candidate, as we cannot know whether their abilities fall too high or too low on the entire spectrum of all Grade 12 Agricultural Management Practices candidates in South Africa. The revised Umalusi NSC examination evaluation instruments thus emphasise that, when rating the level of difficulty of a particular question, your conception of the 'envisaged' candidate needs to be representative of the entire population of candidates for all schools in the country, in other words, of the overall Grade 12 population. Most importantly, the conception of this 'envisaged' candidate is of a learner who has been taught the whole curriculum adequately, by a teacher who is qualified to teach the subject, in a functioning school. There are many disparities in the South African education system that can lead to very large differences in the implementation of the curriculum.
Thus this 'envisaged' learner is not a typical South African Grade 12 learner; it is an intellectual construct (an imagined person) whom you need to imagine when judging the level of difficulty of a question. This ideal 'envisaged' Grade 12 learner is an aspirational ideal of where we would like all Agricultural Management Practices learners in South Africa to be.

Note: The concept of the ideal envisaged Grade 12 candidate is that of an imaginary learner who:
a. is of average intelligence or ability;
b. has been taught by a competent teacher; and
c. has been exposed to the entire examinable curriculum.
This ideal learner represents an imaginary person who occupies the middle ground of ability and approaches questions having had all the necessary schooling.

7.4 Question difficulty is determined using a common framework for thinking about question difficulty

Examiners, moderators and evaluators in all subjects are now provided with a common framework for thinking about question difficulty to use when identifying sources of difficulty or ease in each question, and to provide their reasons for the level of difficulty they select for each examination question. The framework, described in detail below, identifies the main sources of difficulty or 'ease' inherent in questions. The four sources of difficulty which must be considered when thinking about the level of difficulty of examination questions in this framework are as follows:

1. 'Content difficulty' refers to the difficulty inherent in the subject matter and/or concept/s assessed.
2. 'Stimulus difficulty' refers to the difficulty that candidates confront when they attempt to read and understand the question and its source material. The demands of the reading required to answer a question thus form an important element of 'stimulus difficulty'.
3. 'Task difficulty' refers to the difficulty that candidates confront when they try to formulate or produce an answer. The level of cognitive demand of a question forms an element of 'task difficulty', as does the demand of the written text or representations that learners are required to produce for their response.
4. 'Expected response difficulty' refers to difficulty imposed by examiners in a marking guideline, scoring rubric or memorandum. For example, mark allocations affect the amount and level of answers students are expected to write.

This framework, derived from Leong (2006), was chosen because it allows the person making judgments about question difficulty to grapple with nuances and with making connections. The underlying assumption is that judgment of question difficulty is influenced by the interaction and overlap of different aspects of the four main sources of difficulty. Whilst one of the above four sources of difficulty may be more pronounced in a specific question, the other three sources may also be evident. Furthermore, not all four sources of difficulty need to be present for a question to be rated as difficult. The four-category conceptual framework is part of the required Umalusi examination evaluation instruments. Each category or source of difficulty in this framework is described and explained in detail below (Table 6). Please read the entire table very carefully.

TABLE 6: FRAMEWORK FOR THINKING ABOUT QUESTION DIFFICULTY

CONTENT/CONCEPT DIFFICULTY

Content/concept difficulty indexes the difficulty in the subject matter, topic or conceptual knowledge assessed or required.
In this judgment of the item/question, difficulty exists in the academic and conceptual demands that questions make and/or the grade level boundaries of the various 'elements' of domain/subject knowledge (topics, facts, concepts, principles and procedures associated with the subject). For example:

- Questions that assess 'advanced content', that is, subject knowledge that is considered to be in advance of the grade level curriculum, are likely to be difficult or very difficult for most candidates.
- Questions that assess subject knowledge which forms part of the core curriculum for the grade are likely to be moderately difficult for most candidates.
- Questions that assess 'basic content', or subject knowledge candidates would have learnt at lower grade levels and which would be familiar to them, are unlikely to pose too much of a challenge to most candidates.
- Questions that require general everyday knowledge or knowledge of 'real life' experiences are often easier than those that test more specialised school knowledge.
- Questions involving only concrete objects, phenomena or processes are usually easier than those that involve more abstract constructs, ideas, processes or modes.
- Questions which test learners' understanding of theoretical or de-contextualised issues or topics, rather than their knowledge of specific examples or contextualised topics or issues, tend to be more difficult.
- Questions involving familiar, contemporary/current contexts or events are usually easier than those that are more abstract or involve 'imagined' events (e.g. past/future events) or contexts that are distant from learners' experiences.

Content difficulty may also be varied by changing the number of knowledge elements or operations assessed. Generally, the difficulty of a question increases with the number of knowledge elements or operations assessed. Questions that assess learners on two or more knowledge elements or operations are usually (but not always) more difficult than those that assess a single knowledge element or operation. Assessing learners on a combination of knowledge elements or operations that are seldom combined usually increases the level of difficulty.

EXAMPLES OF INVALID OR UNINTENDED SOURCES OF CONTENT DIFFICULTY
- Testing obscure or unimportant concepts or facts that are not mentioned in the curriculum, or which are unimportant to the curriculum learning objectives.
- Testing very advanced concepts or operations that candidates are extremely unlikely to have had opportunities to learn.

STIMULUS DIFFICULTY

Stimulus difficulty refers to the difficulty of the linguistic features of the question (linguistic complexity) and the challenge that candidates face when they attempt to read, interpret and understand the words and phrases in the question AND when they attempt to read and understand the information or 'text' or source material (diagrams, tables and graphs, pictures, cartoons, passages, etc.) that accompanies the question. For example:

- Questions that contain words and phrases that require only simple and straightforward comprehension are usually easier than those that require the candidate to understand subject-specific phraseology and terminology (e.g. idiomatic or grammatical language not usually encountered in everyday language), or that require more technical comprehension and a specialised command of words and language (e.g. everyday words carrying different meanings within the context of the subject).
• Questions that contain information that is ‘tailored’ to an expected response, that is, questions that contain no irrelevant or distracting information, are generally easier than those that require candidates to select relevant and appropriate information, or to unpack a large amount of information, for their response.
• A question set in a very rich context can increase question difficulty: learners may find it difficult to select the correct operation when, for example, a mathematics or accountancy question is embedded in an information-rich context.
• Although the level of difficulty in examinations is usually revealed most clearly through the questions, text complexity – the degree of challenge or complexity in the written or graphic texts (such as a graph, table, picture, cartoon, etc.) that learners are required to read and interpret in order to respond – can also increase the level of difficulty.
• Questions that depend on reading and selecting content from a text can be more challenging than questions that do not depend on actually reading the accompanying text, because they test reading comprehension skills as well as subject knowledge. Questions that require candidates to read a lot can be more challenging than those that require limited reading. Questions that tell learners where in the text to look for relevant information are usually easier than those where learners are not told where to look.
• The level of difficulty may increase if the texts set, and the reading passages or other source material used, are challenging for the grade level and make high reading demands on learners at that level. Predictors of textual difficulty include:
– semantic content: for example, if the vocabulary and words used are typically outside the reading vocabulary of Grade 12 learners, ‘texts’ (passage, cartoon, diagram, table, etc.) are usually more difficult. ‘Texts’ are generally easier if words or images are made accessible by using semantic/context, syntactic/structural or graphophonic/visual cues.
– syntactic or organisational structure: for example, sentence structure and length. If learners are likely to be familiar with the structure of the ‘text’ or resource (for example, from reading newspapers or magazines), ‘texts’ are usually easier than when the structure is unfamiliar.
– literary techniques: for example, abstractness of ideas and imagery, and the background knowledge required (for example, to make sense of allusions).
– context: if the context is unfamiliar or remote, or if candidates do not have or are not provided with access to the context which informs a text (source material, passage, diagram, table, etc.) they are expected to read, and which informs the question they are supposed to answer and the answer they are expected to write, then constructing a response is likely to be more difficult than when the context is provided or familiar.
• Questions which require learners to cross-reference different sources are usually more difficult than those which deal with one source at a time.
• Another factor in stimulus difficulty is presentation and visual appearance. For example, typeface and size, the use of headings, and other types of textual organisers can aid ‘readability’ and make it easier for learners to interpret the meaning of a question.

EXAMPLES OF INVALID OR UNINTENDED SOURCES OF STIMULUS DIFFICULTY
• Meaning of words unclear or unknown.
• Difficult or impossible to work out what the question is asking.
• Questions which are ambiguous.
• Grammatical errors in the question that could cause misunderstanding.
• Inaccuracy or inconsistency of information or data given.
• Insufficient information provided.
• Unclear resource (badly drawn or printed diagram, inappropriate graph, unconventional table).
• Dense presentation (too many important points packed into one part of the stimulus).

TASK DIFFICULTY
Task difficulty refers to the difficulty that candidates confront when they try to formulate or produce an answer. For example:
• In most questions, to generate a response, candidates have to work through the steps of a solution. Generally, questions that require more steps in a solution are more difficult than those that require fewer steps. Questions involving only one or two steps in the solution are generally easier than those where several operations are required for a solution.
• Task difficulty may also be mediated by the amount of guidance present in the question. Although question format is not necessarily a factor and difficult questions can have a short or simple format, questions that provide guided steps or cues (e.g. a clear and detailed framework for answering) are generally easier than those that are more open-ended and require candidates to form or tailor their own response strategy or argument, work out the steps, and maintain the strategy for answering the question by themselves. A high degree of prompting (a high degree of prompted recall, for example) tends to reduce the difficulty level.
• Questions that test specific knowledge are usually less difficult than multi-step, multiple-concept or multiple-operation questions.
• A question that requires the candidate to use a high level of appropriate subject-specific, scientific or specialised terminology in their response tends to be more difficult than one which does not. A question requiring candidates to create a complex abstract (symbolic or graphic) representation is usually more challenging than a question requiring candidates to create a concrete representation.
• A response consisting of a one-word answer, a phrase or a simple sentence is often easier to produce than a response that requires more complex sentences, a paragraph, or a full essay or composition. Narrative or descriptive writing, for example where the focus is on recounting or ordering a sequence of events chronologically, is usually easier than writing discursively (argumentatively or analytically), where ideas need to be developed and ordered logically. Some questions reflect task difficulty simply by ‘creating the space’ for A-grade candidates to demonstrate genuine insight, original thought or good argumentation, and to write succinctly and coherently about their knowledge.
• Another element is the complexity in the structure of the required response. When simple connections between ideas or operations are expected in a response, the question is generally easier to answer than one in which the significance of the relations between the parts and the whole is expected to be discussed. In other words, a question in which an unstructured response is expected is generally easier than a question in which a relational response is required. A response which involves combining or linking a number of complex ideas or operations is usually more difficult than a response where there is no need to combine or link ideas or operations.
• On the other hand, questions which require continuous prose or extended writing may also be easier to answer correctly, or to get marks for, than questions that require no writing at all or a single-letter answer (such as multiple choice), or a brief response of one or two words or a short phrase, because the latter test very specific knowledge.
• The cognitive demand or thinking processes required form an aspect of task difficulty. Some questions test thinking ability and learners’ capacity to deal with ideas. Questions that assess inferential comprehension or application of knowledge, or that require learners to take ideas from one context and use them in another, for example, tend to be more difficult than questions that assess recognition or retrieval of basic information. On the other hand, questions requiring recall of knowledge are usually more difficult than questions that require simple recognition processes.
• When the resources for answering the question are included in the examination paper, the task is usually easier than when candidates have to select and use their own internal resources (for example, their own knowledge of the subject) or transform information to answer the question. Questions that require learners to take or transfer ideas, skills or knowledge from one context/subject area and use them in another tend to be more difficult.

EXAMPLES OF INVALID OR UNINTENDED SOURCES OF TASK DIFFICULTY
• Level of detail required in an answer is unclear.
• Context is unrelated to or uncharacteristic of the task that candidates have to do.
• Details of a context distract candidates from recalling or using the right bits of their knowledge.
• Question is unanswerable.
• Illogical order or sequence of the parts of the question.
• Interference from a previous question.
• Insufficient space (or time) allocated for responding.
• Question predictability or task familiarity. If the same question regularly appears in examination papers or has been provided to schools as an exemplar, learners are likely to have had prior exposure, and to have practised and rehearsed answers in class (for example, when the same language set works are prescribed each year).
• Questions which involve potential follow-on errors from answers to previous questions.

EXPECTED RESPONSE DIFFICULTY
Expected response difficulty refers to difficulty imposed by examiners in a mark scheme and memorandum. This source of difficulty is more applicable to ‘constructed’ response questions than to ‘selected’ response questions (such as multiple choice, matching and true/false). For example:
• When examiners expect few or no details in a response, the question is generally easier than one where the mark scheme implies that a lot of detail is expected.
• A further aspect of expected response difficulty is the clarity of the allocation of marks. Questions are generally easier when the allocation of marks is explicit, straightforward or logical (e.g. 3 marks for listing 3 points) than when the mark allocation is indeterminate or implicit (e.g. when candidates need all 3 points for one full mark, or 20 marks are given for a discussion of a concept without any indication of how much, and what, to write in a response). This aspect affects difficulty because candidates who are unclear about the mark expectations may not produce a sufficiently full response to earn the marks that befit their ability.
• Some questions are more difficult or easy to mark accurately than others:
Questions that are harder to mark and score objectively are generally more difficult for candidates than questions that require simple marking or scoring strategies on the part of markers. For example, recognition and recall questions are usually easier to test and mark objectively because they usually require the use of matching and/or simple scanning strategies on the part of markers. More complex questions requiring analysis (breaking down a passage or material into its component parts), evaluation (making judgments, for example, about the worth of material or text, or about solutions to a problem), synthesis (bringing together parts or elements to form a whole) and creativity (presenting own ideas or original thoughts) are generally harder to mark or score objectively. The best way to test for analysis, evaluation, synthesis and creativity is usually through extended writing. Such extended writing generally requires the use of more cognitively demanding marking strategies, such as interpreting and evaluating the logic of what the candidate has written.
• Questions where a wide range of alternative answers or responses is possible, or where the correct answer may be arrived at through different strategies, tend to be more difficult. On the other hand, questions may be so open-ended that learners will get marks even if they engage with the task very superficially.

EXAMPLES OF INVALID OR UNINTENDED SOURCES OF EXPECTED RESPONSE DIFFICULTY
• Mark allocation is unclear or illogical. The weighting of marks is important in questions that comprise more than one component when the components vary in levels of difficulty. Learners may be able to get the same marks for answering the easy component/s of the item as other learners are awarded for answering the more difficult components.
• Mark scheme and question are incongruent. For example, there is no clear correlation between the mark indicated on the question paper and the mark allocation of the memorandum.
• Question asked is not the one that examiners want candidates to answer. The memorandum spells out the expectations for a slightly different question, not the actual question.
• Impossible for the candidate to work out from the question what the answer should be (the answer is indeterminable).
• Wrong answer provided in the memorandum.
• Alternative correct answers beyond those spelt out in the memorandum are also plausible.
• The question is ‘open’ but the memorandum has a closed response. The memorandum allows no leeway for markers to interpret answers and give credit where due.

The framework described above does not provide you with explicit links between the different sources of difficulty, or show relationships and overlaps between the different categories and concepts in the framework. This is because it is impossible to set prescribed rules or pre-determined combinations of categories and concepts for making judgments about the source of difficulty in a particular examination question. The intention behind the framework is to allow you to exercise your sense of judgment as an expert. The complexity of your judgment lies in your ability as an expert to recognise subtle interactions and identify links between the different categories of a question’s difficulty or ease. For example, a question that tests specific knowledge of your subject can actually be more difficult than a multi-step question because it requires candidates to explain a highly abstract concept or very complex content.
In other words, although questions that test specific knowledge are usually less difficult than multiple-concept or multiple-operation questions, the level of difficulty of the content knowledge required to answer a question can make that question more difficult than a multi-step or multi-operation question. Not all one-word response questions can automatically be assumed to be easy. For example, multiple-choice questions are not automatically easy because a choice of responses is provided – some can be difficult. As an expert in your subject, you need to make these types of judgments about each question.

Note: It is very important that you become extremely familiar with the framework explained in Table 6, and with each category or source of difficulty provided (i.e. content difficulty, task difficulty, stimulus difficulty and expected response difficulty). You need to understand the examples of questions which illustrate each of the four levels (Table 7 to Table 10). This framework is intended to assist you in discussing and justifying your decisions regarding the difficulty-level ratings of questions. You are expected to refer to all four categories or sources of difficulty in justifying your decisions.

When considering question difficulty ask:
• How difficult is the knowledge (content, concepts or procedures) that is being assessed for the envisaged Grade 12 candidate? (Content difficulty)
• How difficult is it for the envisaged Grade 12 candidate to formulate the answer to the question? In considering this source of difficulty, you should take into account the type of cognitive demand made by the task. (Task difficulty)
• How difficult is it for the envisaged Grade 12 candidate to understand the question and the source material that needs to be read to answer the particular question? (Stimulus difficulty)
• What do the marking memorandum and mark scheme show about the difficulty of the question? (Expected response difficulty)

7.5 Question difficulty entails distinguishing unintended sources of difficulty or ease from intended sources of difficulty or ease

Close inspection of the framework for thinking about question difficulty (Section 7.4, Table 6) above shows that, for each general category or source of difficulty, the framework makes a distinction between ‘valid’ or intended, and ‘invalid’ or unintended, sources of question difficulty or ease. Therefore, defining question difficulty entails identifying whether sources of difficulty or ease in a question were intended or unintended by examiners. Included in Table 6 are examples of unintended sources of difficulty or ease for each of the four categories.

Valid difficulty or ‘easiness’ in a question has its source in the requirements of the question, and is intended by the examiner (Ahmed and Pollitt, 1999). Invalid sources of difficulty or ‘easiness’ refer to those features of question difficulty or ‘easiness’ that were not intended by the examiner. Such unintended ‘mistakes’ or omissions in questions can prevent the question from assessing what the examiner intended, are likely to prevent candidates from demonstrating their true ability or competence, and can result in a question being easier or more difficult than the examiner intended.
For example, grammatical errors in a question that could cause misunderstanding for candidates are an unintended source of question difficulty, because the difficulty in answering the question lies in the faulty formulation of the question (an unintended source of stimulus difficulty) rather than in the intrinsic difficulty of the question itself. Candidates “may misunderstand the question and therefore not be able to demonstrate what they know” (Ahmed and Pollitt, 1999, p.2). Another example is question predictability (when the same questions regularly appear in examination papers or textbooks), because familiarity can make a question which was intended to be difficult less challenging for examination candidates.

Detecting unintended sources of difficulty or ease in examinations is largely the task of moderators. Nevertheless, evaluators also need to be vigilant about detecting sources which could influence or alter the intended level of question difficulty that moderators may have overlooked.

Note: When judging question difficulty, you should distinguish unintended sources of question difficulty or ease from those sources that are intended, thus ensuring that examinations have a range of levels of difficulty. The framework for thinking about question difficulty allows you to systematically identify technical and other problems in each question. Examples of such problems are: unclear instructions; poor phrasing of questions; the provision of inaccurate or insufficient information; unclear or confusing visual sources or illustrations; incorrect use of terminology; inaccurate or inadequate answers in the marking memorandum; and question predictability. You should not rate a question as difficult or easy if the source of difficulty or ease lies in the ‘faultiness’ of the question or memorandum. Instead, as moderators and evaluators, you need to alert examiners to unintended sources of difficulty or ease so that they can improve questions and remedy errors or sources of confusion before candidates write the examination.

7.6 Question difficulty entails identifying differences in levels of difficulty within a single question

An examination question can incorporate more than one level of difficulty if it has subsections. It is important that the components of such questions are ‘broken down’ into their individual levels of difficulty.

Note: Each subsection of a question should be analysed separately so that the percentage of marks allocated at each level of difficulty, and the weighting for each level of difficulty, can be ascertained as accurately as possible for that question (a simple sketch of this tallying appears just before Table 7 below).

8. EXAMPLES OF QUESTIONS AT DIFFERENT LEVELS OF DIFFICULTY

This section provides at least three examples of questions from previous Agricultural Management Practices NSC examinations (Table 7 to Table 10), categorised at each of the four levels of difficulty described in Section 7 (Table 5) above. These examples were selected to represent the best and clearest examples of each level of difficulty that the Agricultural Management Practices experts could find. The discussion below each example question explains the reasoning behind the judgments made about the categorisation of the question at that particular level of difficulty.
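To illustrate the tallying described in the note under Section 7.6, the following is a minimal sketch in Python (not part of the official Umalusi instruments; the subsections, levels and mark allocations shown are entirely hypothetical) of how marks awarded at each difficulty level might be aggregated into the percentage weightings referred to above:

from collections import defaultdict

# Hypothetical analysis records: (question subsection, difficulty level, marks).
# Levels follow the four-level scale of Table 5: 1 = easy, 2 = moderate,
# and so on up to level 4.
ratings = [
    ("1.1", 1, 2),
    ("1.2", 2, 4),
    ("2.1", 2, 6),
    ("2.2", 3, 8),
    ("3",   4, 5),
]

# Tally the marks assigned to each difficulty level across all subsections.
marks_per_level = defaultdict(int)
for _, level, marks in ratings:
    marks_per_level[level] += marks

# Express each level's tally as a percentage of the total marks analysed.
total_marks = sum(marks_per_level.values())
for level in sorted(marks_per_level):
    share = 100 * marks_per_level[level] / total_marks
    print(f"Level {level}: {marks_per_level[level]} marks ({share:.1f}% of total)")

In an actual evaluation the records would come from the completed evaluation instrument rather than a hard-coded list, but the arithmetic of the weighting is the same.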
TABLE 7: EXAMPLES OF QUESTIONS AT DIFFICULTY LEVEL 1 – EASY

Example 1: Question (2.8, 2011, March):
The following composition of moist soil was measured on a cropping farm:
• Inorganic particles: 35%
• Water: 25%
• Air: 30%
• Organic material: 10%
Draw a pie graph to represent the composition of the above soil. (4)

Discussion: This question is classified as easy for the ideal Grade 12 candidate because:
• it assesses ‘basic content’ or subject knowledge that all Grade 12 Agricultural Management Practices candidates should have learnt at lower grade levels. Drawing a simple pie chart is a topic which is also covered in a number of other school subjects, and the concept of a pie chart is often applied in everyday sources such as newspaper reports. The concept should therefore be very familiar to candidates (content).
• the source material provides a clear indication of soil composition in percentages. It does not contain any superfluous or unnecessary detail which could distract candidates from understanding what is required (stimulus).
• the question is very clear; it requires the candidates to make a drawing of a pie chart indicating all the percentages provided. Candidates should have been exposed to numerous similar tasks and questions from lower grades (task).
• the mark allocation is straightforward. The 4 marks are allocated for completing the simple graph correctly: a correct heading for the chart, correct structure, correct % distribution and correct labelling (expected response).
This question is easy with regard to all four sources of difficulty.

Memorandum/Marking guidelines
Marking rubric:
• Heading of the chart. ✓
• Correct structure. ✓
• Correct % distribution. ✓
• Correct labelling. ✓ (4)

Example 2: Question (4.8, 2010, March):
A group of farmers has decided to build a plant where different agricultural products can be dehydrated or dried. The dried products will then be sent to special markets.
a) Explain the meaning of value-adding to an agricultural product. (2)
b) Drying is a method of value-adding. Indicate the benefits of drying products of animal and plant origin. (4)

Discussion: These questions are both classified as easy for the envisaged Grade 12 candidate to answer because:
• the topic of value-adding of agricultural products is very familiar to Grade 12 candidates, who should also have dealt with this issue practically. The topic does not require specialised knowledge of the actual procedures and principles of the methods of drying agricultural products (content).
• candidates do not have to read a lot of text; the photograph of meat drying that accompanied the question simply informs the questions they are supposed to answer and does not require any interpretation (stimulus).
• candidates can answer by recalling information on value-adding of agricultural products and on drying as a method of value-adding (task).
• two (2) marks are allocated for writing a simple explanation in a) and four (4) marks are allocated in b) for writing down four benefits of drying. The answers expected are easy to formulate and do not entail much writing (expected response).
This question is easy with regard to all four sources of difficulty.

Memorandum/Marking guidelines
(a) Value-adding
• Taking a raw material and processing it or adding something to it ✓ to change it into a more saleable item that will be purchased by a different group of customers. ✓ (2)
(b) Benefits of drying agricultural products
• Increases the potential and value of a product. ✓
• Less weight, thus easier to transport to the markets. ✓
• Protection against organisms that cause product decay. ✓
• Easier to package and store. ✓ (4)

Example 3: Question (3.1, 2012, November):
A progressive, privately owned, small-stock-producing farm recorded and classified its total cash sales for four periods as shown in the graph below.
[Bar graph: CASH SALES 2010–2011 – value of sales (R), from 0 to 120, plotted against the four periods of sale: Oct–Dec, Jan–Mar, Apr–Jun and Jul–Sep]
a) Name FOUR kinds of business activities where the farmer needs to issue a receipt. (4)
b) Identify the TWO periods with the highest cash sales. (2)
c) Suggest THREE possible factors that have an influence on the demand for products during the periods mentioned in QUESTION (b). (3)

Discussion: All three of these questions are regarded as easy questions.
• The questions test specific knowledge. Cash sales, kinds of business activity and demand for products are topics that are easy for the envisaged Grade 12 candidate to understand. In question (c) the expected response is easier still, since it is common knowledge that livestock sales, like purchases of other goods, usually increase during festive seasons (content).
• The graph provided is simple and easy to interpret as it only shows two variables, namely value of sales and period of sale. The data is very accessible and it should be easy for Grade 12 candidates to find the correct value of sales at the different periods of sale. The three questions themselves are all clear and unambiguous. They contain simple words and phrases rather than specialised subject-specific phraseology and terminology (stimulus).
• Question (a) requires candidates to name four kinds of business activities where a farmer needs to issue a receipt. The question is simple and straightforward, and the context channels candidates’ thinking towards agricultural business activities in which receipts are issued for transactions. Question (b) requires the candidates to identify the periods when sales are high. Question (c) requires candidates to suggest possible factors that influence the demand for the product during the festive period (task).
• The mark allocation is straightforward. Four (4) marks are allocated for four responses in question (a), two (2) marks for two responses in question (b) and three (3) marks for three responses in question (c) (expected response).
This question is easy with regard to all four sources of difficulty.

Memorandum/Marking guidelines
(a) Kinds of business activity for a receipt:
• When the farmer is receiving payment for sales/selling produce. ✓
• When payment is received for services rendered by the farmer/farm. ✓
• When payment from a debtor is received. ✓
• When contributions or donations are received. ✓
• Renting of equipment/tractor/harvester. ✓
• Any transaction whereby money/goods are received. ✓ (Any 4) (4)
(b) Two periods with the highest cash sales:
• October to December. ✓
• April to June. ✓ (2)
(c) Factors influencing demand:
• Price of the product/price expectation of the product. ✓
• Buying power of the consumer/income of consumers/bonuses. ✓
• Price of competitive/related/substitute products. ✓
• Consumer preferences/taste/fashion preferences. ✓
• Variety of products available. ✓
• Festive periods/holidays/festive season. ✓
• Breeding season/weaning season. ✓
• Complementary products. ✓
• Size of the market/number of consumers. ✓ (Any 3) (3)

TABLE 8: EXAMPLES OF QUESTIONS AT DIFFICULTY LEVEL 2 – MODERATE

Example 1: Question (2.3, 2012, November):
Veld or pasture management is the key to healthy and productive pastures. It is recommended that a farmer treats a specific veld according to its nature. Veld of the same grazing potential and plant palatability should be fenced together to utilise it optimally.
Explain FOUR factors to consider when dividing pasture into camp systems. (4)

Discussion: This question is regarded as moderate in terms of level of difficulty because:
• it tests very specific knowledge. The topic of ‘veld or pasture management’ and the concepts of ‘productive pastures’, ‘grazing potential’, ‘plant palatability’, ‘grazing systems’ and ‘camp systems’ are moderately difficult content for the envisaged Grade 12 candidate (content).
• the leading statement and a picture will assist candidates in making an appropriate response. Although the diagram does not elaborate on veld division into camp systems through the use of labels etc., the leading statement (text) makes up for this deficiency and contains information tailored towards the expected response. The question is not long or convoluted and the language used is relatively simple. The illustration does not require much interpretation but mainly serves as an aide-mémoire for candidates (stimulus).
• candidates are expected to explain the four factors and not just to mention them. The response is not one word but a thorough explanation of the factors (task).
• four (4) marks are allocated for the question. Furthermore, candidates are expected to give a full explanation of the four factors using their own words (expected response).
This question is moderate with regard to all four sources of difficulty.

Memorandum/Marking guidelines
Criteria for camps:
• Biotic composition/b
