ISTQB Certified Tester Advanced Level Technical Test Analyst (CTAL-TTA) Syllabus v4.0


Summary

This document is the v4.0 syllabus for the ISTQB Certified Tester Advanced Level Technical Test Analyst (CTAL-TTA) qualification. It covers key concepts and topics in software testing, including risk-based testing, white-box techniques, and quality characteristics. The syllabus was updated in 2021.

Full Transcript


Certified Tester Advanced Level Technical Test Analyst (CTAL-TTA) Syllabus v4.0
International Software Testing Qualifications Board
2021-06-30

Copyright Notice

© International Software Testing Qualifications Board (hereinafter called ISTQB®). ISTQB® is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2021, the authors for the 2021 update: Adam Roman, Armin Born, Christian Graf, Stuart Reid.
Copyright © 2019, the authors for the 2019 update: Graham Bath (vice-chair), Rex Black, Judy McKay, Kenji Onoshi, Mike Smith (chair), Erik van Veenendaal.

All rights reserved. The authors hereby transfer the copyright to the ISTQB®. The authors (as current copyright holders) and ISTQB® (as the future copyright holder) have agreed to the following conditions of use:
- Extracts, for non-commercial use, from this document may be copied if the source is acknowledged.
- Any Accredited Training Provider may use this syllabus as the basis for a training course if the authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus, and provided that any advertisement of such a training course may mention the syllabus only after official accreditation of the training materials has been received from an ISTQB®-recognized Member Board.
- Any individual or group of individuals may use this syllabus as the basis for articles and books, if the authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus.
- Any other use of this syllabus is prohibited without first obtaining the approval in writing of the ISTQB®.
- Any ISTQB®-recognized Member Board may translate this syllabus provided they reproduce the above-mentioned Copyright Notice in the translated version of the syllabus.

Revision History

Version          Date        Remarks
v4.0             2021/06/30  GA release for the v4.0 version
v4.0             2021/04/28  Draft updated based on feedback from the Beta Review.
2021 v4.0 Beta   2021/03/01  Draft updated based on feedback from the Alpha Review.
2021 v4.0 Alpha  2020/12/07  Draft for Alpha Review, updated to:
                             - Improve text throughout
                             - Remove subsection associated with K3 TTA-2.6.1 (2.6 Basis Path Testing) and remove LO
                             - Remove subsection associated with K2 TTA-3.2.4 (3.2.4 Call Graphs) and remove LO
                             - Rewrite subsection associated with TTA-3.2.2 (3.2.2 Data Flow Analysis) and make it a K3
                             - Rewrite section associated with TTA-4.4.1 and TTA-4.4.2 (4.4 Reliability Testing)
                             - Rewrite section associated with TTA-4.5.1 and TTA-4.5.2 (4.5 Performance Testing)
                             - Add section 4.9 on Operational Profiles
                             - Rewrite section associated with TTA-2.8.1 (2.7 Selecting White-Box Test Techniques section)
                             - Rewrite TTA-3.2.1 to include cyclomatic complexity (no impact on exam Qs)
                             - Rewrite TTA-2.4.1 (MC/DC) to make it consistent with other white-box LOs (no impact on exam Qs)
2019 v1.0        2019/10/18  GA release for the 2019 version
2012             2012/10/19  GA release for the 2012 version

Table of Contents

Copyright Notice
Revision History
Table of Contents
0. Introduction to this Syllabus
   0.1 Purpose of this Syllabus
   0.2 The Certified Tester Advanced Level in Software Testing
   0.3 Examinable Learning Objectives and Cognitive Levels of Knowledge
   0.4 Expectations of Experience
   0.5 The Advanced Level Technical Test Analyst Exam
   0.6 Entry Requirements for the Exam
   0.7 Accreditation of Courses
   0.8 Level of Syllabus Detail
   0.9 How this Syllabus is Organized
1. The Technical Test Analyst's Tasks in Risk-Based Testing - 30 mins
   1.1 Introduction
   1.2 Risk-based Testing Tasks
       1.2.1 Risk Identification
       1.2.2 Risk Assessment
       1.2.3 Risk Mitigation
2. White-Box Test Techniques - 300 mins
   2.1 Introduction
   2.2 Statement Testing
   2.3 Decision Testing
   2.4 Modified Condition/Decision Testing
   2.5 Multiple Condition Testing
   2.6 Basis Path Testing
   2.7 API Testing
   2.8 Selecting a White-Box Test Technique
       2.8.1 Non-Safety-Related Systems
       2.8.2 Safety-Related Systems
3. Static and Dynamic Analysis - 180 mins
   3.1 Introduction
   3.2 Static Analysis
       3.2.1 Control Flow Analysis
       3.2.2 Data Flow Analysis
       3.2.3 Using Static Analysis for Improving Maintainability
   3.3 Dynamic Analysis
       3.3.1 Overview
       3.3.2 Detecting Memory Leaks
       3.3.3 Detecting Wild Pointers
       3.3.4 Analysis of Performance Efficiency
4. Quality Characteristics for Technical Testing - 345 mins
   4.1 Introduction
   4.2 General Planning Issues
       4.2.1 Stakeholder Requirements
       4.2.2 Test Environment Requirements
       4.2.3 Required Tool Acquisition and Training
       4.2.4 Organizational Considerations
       4.2.5 Data Security and Data Protection
   4.3 Security Testing
       4.3.1 Reasons for Considering Security Testing
       4.3.2 Security Test Planning
       4.3.3 Security Test Specification
   4.4 Reliability Testing
       4.4.1 Introduction
       4.4.2 Testing for Maturity
       4.4.3 Testing for Availability
       4.4.4 Testing for Fault Tolerance
       4.4.5 Testing for Recoverability
       4.4.6 Reliability Test Planning
       4.4.7 Reliability Test Specification
   4.5 Performance Testing
       4.5.1 Introduction
       4.5.2 Testing for Time Behavior
       4.5.3 Testing for Resource Utilization
       4.5.4 Testing for Capacity
       4.5.5 Common Aspects of Performance Testing
       4.5.6 Types of Performance Testing
       4.5.7 Performance Test Planning
       4.5.8 Performance Test Specification
   4.6 Maintainability Testing
       4.6.1 Static and Dynamic Maintainability Testing
       4.6.2 Maintainability Sub-characteristics
   4.7 Portability Testing
       4.7.1 Introduction
       4.7.2 Installability Testing
       4.7.3 Adaptability Testing
       4.7.4 Replaceability Testing
   4.8 Compatibility Testing
       4.8.1 Introduction
       4.8.2 Coexistence Testing
   4.9 Operational Profiles
5. Reviews - 165 mins
   5.1 Technical Test Analyst Tasks in Reviews
   5.2 Using Checklists in Reviews
       5.2.1 Architectural Reviews
       5.2.2 Code Reviews
6. Test Tools and Automation - 180 mins
   6.1 Defining the Test Automation Project
       6.1.1 Selecting the Automation Approach
       6.1.2 Modeling Business Processes for Automation
   6.2 Specific Test Tools
       6.2.1 Fault Seeding Tools
       6.2.2 Fault Injection Tools
       6.2.3 Performance Testing Tools
       6.2.4 Tools for Testing Websites
       6.2.5 Tools to Support Model-Based Testing
       6.2.6 Component Testing and Build Tools
       6.2.7 Tools to Support Mobile Application Testing
7. References
   7.1 Standards
   7.2 ISTQB® Documents
   7.3 Books and Articles
   7.4 Other References
8. Appendix A: Quality Characteristics Overview
9. Index

Acknowledgements

The 2019 version of this document was produced by a core team from the International Software Testing Qualifications Board Advanced Level Working Group: Graham Bath (vice-chair), Rex Black, Judy McKay, Kenji Onoshi, Mike Smith (chair), Erik van Veenendaal.

The following persons participated in the reviewing, commenting, and balloting of the 2019 version of this syllabus: Dani Almog, Andrew Archer, Rex Black, Armin Born, Sudeep Chatterjee, Tibor Csöndes, Wim Decoutere, Klaudia Dusser-Zieger, Melinda Eckrich-Brájer, Peter Foldhazi, David Frei, Karol Frühauf, Jan Giesen, Attila Gyuri, Matthias Hamburg, Tamás Horváth, N. Khimanand, Jan te Kock, Attila Kovács, Claire Lohr, Rik Marselis, Marton Matyas, Judy McKay, Dénes Medzihradszky, Petr Neugebauer, Ingvar Nordström, Pálma Polyák, Meile Posthuma, Stuart Reid, Lloyd Roden, Adam Roman, Jan Sabak, Péter Sótér, Benjamin Timmermans, Stephanie van Dijck, Paul Weymouth.

This document was produced by a core team from the International Software Testing Qualifications Board Advanced Level Working Group: Armin Born, Adam Roman, Stuart Reid.

The updated v4.0 version of this document was produced by a core team from the International Software Testing Qualifications Board Advanced Level Working Group: Armin Born, Adam Roman, Christian Graf, Stuart Reid.

The following persons participated in the reviewing, commenting, and balloting of the updated v4.0 version of this syllabus: Adél Vécsey-Juhász, Jane Nash, Pálma Polyák, Ágota Horváth, Lloyd Roden, Paul Weymouth, Benjamin Timmermans, Matthias Hamburg, Péter Földházi Jr., Erwin Engelsma, Meile Posthuma, Rik Marselis, Gary Mogyorodi, Nishan Portoyan, Sebastian Małyska, Geng Chen, Joan Killeen, Tal Pe'er, Gergely Ágnecz, Ole Chr. Hansen, Wang Lijuan, Zuo Zhenlei.

The core team thanks the review team and the National Boards for their suggestions and input.

This document was formally released by the General Assembly of the ISTQB® on 30 June 2021.

0. Introduction to this Syllabus

0.1 Purpose of this Syllabus

This syllabus forms the basis for the International Software Testing Qualification at the Advanced Level for the Technical Test Analyst. The ISTQB® provides this syllabus as follows:
1. To National Boards, to translate into their local language and to accredit training providers. National Boards may adapt the syllabus to their particular language needs and modify the references to adapt to their local publications.
2. To Exam Boards, to derive examination questions in their local language adapted to the learning objectives for the syllabus.
3. To training providers, to produce courseware and determine appropriate teaching methods.
4. To certification candidates, to prepare for the exam (as part of a training course or independently).
5. To the international software and systems engineering community, to advance the profession of software and systems testing, and as a basis for books and articles.

The ISTQB® may allow other entities to use this syllabus for other purposes, provided they seek and obtain prior written permission.

0.2 The Certified Tester Advanced Level in Software Testing

The Advanced Level qualification comprises three separate syllabi relating to the following roles:
- Test Manager
- Test Analyst
- Technical Test Analyst

The ISTQB® Technical Test Analyst Advanced Level Overview is a separate document [CTAL_TTA_OVIEW] which includes the following information:
- Business Outcomes
- Matrix showing traceability between business outcomes and learning objectives
- Summary

0.3 Examinable Learning Objectives and Cognitive Levels of Knowledge

The Learning Objectives support the Business Outcomes and are used to create the examination for achieving the Advanced Technical Test Analyst Certification. The knowledge levels of the specific learning objectives at K2, K3 and K4 levels are shown at the beginning of each chapter and are classified as follows:
- K2: Understand
- K3: Apply
- K4: Analyze

0.4 Expectations of Experience

Some of the learning objectives for the Technical Test Analyst assume that basic experience is available in the following areas:
- General programming concepts
- General concepts of system architectures

0.5 The Advanced Level Technical Test Analyst Exam

The Advanced Level Technical Test Analyst exam will be based on this syllabus. Answers to exam questions may require the use of materials based on more than one section of this syllabus. All sections of the syllabus are examinable except for the introduction and the appendices. Standards, books and other ISTQB® syllabi are included as references, but their content is not examinable beyond what is summarized in this syllabus itself.

The format of the exam is multiple choice. There are 45 questions. To pass the exam, at least 65% of the total points must be earned. Exams may be taken as part of an accredited training course or taken independently (e.g., at an exam center or in a public exam). Completion of an accredited training course is not a pre-requisite for the exam.

0.6 Entry Requirements for the Exam

The Certified Tester Foundation Level certification shall be obtained before taking the Advanced Level Technical Test Analyst certification exam.

0.7 Accreditation of Courses

An ISTQB® Member Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the Member Board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus and is allowed to have an ISTQB® exam as part of the course.

0.8 Level of Syllabus Detail

The level of detail in this syllabus allows internationally consistent courses and exams.
To achieve this goal, the syllabus consists of:
- General instructional objectives describing the intention of the Advanced Level Technical Test Analyst
- A list of terms that students must be able to recall
- Learning objectives for each knowledge area, describing the cognitive learning outcome to be achieved
- A description of the key concepts, including references to sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area; it reflects the level of detail to be covered in Advanced Level training courses. It focuses on material that can apply to any software project, using any software development lifecycle. The syllabus does not contain any specific learning objectives relating to any particular software development model, but it does discuss how these concepts apply in Agile software development, in other types of iterative and incremental software development models, and in sequential software development models.

0.9 How this Syllabus is Organized

There are six chapters with examinable content. The top-level heading for each chapter specifies the minimum time for the chapter; timing is not provided below chapter level. For accredited training courses, the syllabus requires a minimum of 20 hours of instruction, distributed across the six chapters as follows:
- Chapter 1: The Technical Test Analyst's Tasks in Risk-Based Testing (30 minutes)
- Chapter 2: White-Box Test Techniques (300 minutes)
- Chapter 3: Static and Dynamic Analysis (180 minutes)
- Chapter 4: Quality Characteristics for Technical Testing (345 minutes)
- Chapter 5: Reviews (165 minutes)
- Chapter 6: Test Tools and Automation (180 minutes)

1. The Technical Test Analyst's Tasks in Risk-Based Testing - 30 mins.

Keywords: product risk, project risk, risk assessment, risk identification, risk mitigation, risk-based testing

Learning Objectives for The Technical Test Analyst's Tasks in Risk-Based Testing

1.2 Risk-based Testing Tasks
TTA-1.2.1 (K2) Summarize the generic risk factors that the Technical Test Analyst typically needs to consider
TTA-1.2.2 (K2) Summarize the activities of the Technical Test Analyst within a risk-based approach for testing activities

1.1 Introduction

The Test Manager has overall responsibility for establishing and managing a risk-based testing strategy. The Test Manager will usually request the involvement of the Technical Test Analyst to ensure the risk-based approach is implemented correctly.

Technical Test Analysts work within the risk-based testing framework established by the Test Manager for the project. They contribute their knowledge of the technical product risks that are inherent in the project, such as risks related to security, system reliability and performance. They should also contribute to the identification and treatment of project risks associated with test environments, such as the acquisition and set-up of test environments for performance, reliability, and security testing.
1.2 Risk-based Testing Tasks

Technical Test Analysts are actively involved in the following risk-based testing tasks:
- Risk identification
- Risk assessment
- Risk mitigation

These tasks are performed iteratively throughout the project to deal with emerging risks and changing priorities, and to regularly evaluate and communicate risk status.

1.2.1 Risk Identification

By calling on the broadest possible sample of stakeholders, the risk identification process is most likely to detect the largest possible number of significant risks. Because Technical Test Analysts possess unique technical skills, they are particularly well-suited for conducting expert interviews, brainstorming with co-workers, and analyzing their experiences to determine where the likely areas of product risk lie. In particular, Technical Test Analysts work closely with other stakeholders, such as developers, architects, operations engineers, product owners, local support offices, technical experts, and service desk technicians, to determine areas of technical risk impacting the product and project. Involving other stakeholders ensures that all views are considered; this is typically facilitated by Test Managers. Risks that might be identified by the Technical Test Analyst are typically based on the [ISO 25010] product quality characteristics listed in Chapter 4 of this syllabus.

1.2.2 Risk Assessment

While risk identification is about identifying as many pertinent risks as possible, risk assessment is the study of those identified risks in order to categorize each risk and determine the likelihood and impact associated with it. The likelihood of a product risk is usually interpreted as the probability of the occurrence of the failure in the system under test. The Technical Test Analyst contributes to understanding the probability of each technical product risk, whereas the Test Analyst contributes to understanding the potential business impact of the problem should it occur.

Project risks that become issues can impact the overall success of the project. Typically, the following generic project risk factors need to be considered:
- Conflict between stakeholders regarding technical requirements
- Communication problems resulting from the geographical distribution of the development organization
- Tools and technology (including relevant skills)
- Time, resource, and management pressure
- Lack of earlier quality assurance
- High change rates of technical requirements

Product risks that become issues may result in higher numbers of defects. Typically, the following generic product risk factors need to be considered:
- Complexity of technology
- Code complexity
- Amount of change in the source code (insertions, deletions, modifications)
- Large number of defects found relating to technical quality characteristics (defect history)
- Technical interface and integration issues

Given the available risk information, the Technical Test Analyst proposes an initial risk likelihood according to the guidelines established by the Test Manager. The initial value may be modified by the Test Manager when all stakeholder views have been considered. The risk impact is normally determined by the Test Analyst.
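How the two ratings combine into a single risk level is a project decision rather than something the syllabus prescribes. The following minimal Python sketch (all names and scales invented for illustration) shows one common scheme: multiplying ordinal likelihood and impact ratings to prioritize test effort.

```python
# Illustrative only: the scales and the weighting are project-specific
# choices, typically set by the Test Manager's guidelines.
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}   # proposed by the Technical Test Analyst
IMPACT = {"minor": 1, "major": 2, "critical": 3}  # proposed by the Test Analyst

def risk_level(likelihood: str, impact: str) -> int:
    """Combine the two ordinal ratings into a single priority score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# A reliability risk judged highly likely, with major business impact:
print(risk_level("high", "major"))  # 6 -> schedule deeper testing early
```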
1.2.3 Risk Mitigation

During the project, Technical Test Analysts influence how testing responds to the identified risks. This generally involves the following:
- Designing test cases for the identified risks, addressing high-risk areas, and helping to evaluate the residual risk
- Reducing risk by executing the designed test cases and by putting into action appropriate mitigation and contingency measures as stated in the test plan
- Evaluating risks based on additional information gathered as the project unfolds, and using that information to implement mitigation measures aimed at decreasing the likelihood of those risks

The Technical Test Analyst will often cooperate with specialists in areas such as security and performance to define risk mitigation measures and elements of the test strategy. Additional information can be obtained from ISTQB® Specialist syllabi, such as the Security Testing syllabus [CT_SEC_SYL] and the Performance Testing syllabus [CT_PT_SYL].

2. White-Box Test Techniques - 300 mins.

Keywords: API testing, atomic condition, control flow, decision testing, modified condition/decision testing, multiple condition testing, safety integrity level, statement testing, white-box test technique

Learning Objectives for White-Box Test Techniques

2.2 Statement Testing
TTA-2.2.1 (K3) Design test cases for a given test object by applying statement testing to achieve a defined level of coverage

2.3 Decision Testing
TTA-2.3.1 (K3) Design test cases for a given test object by applying the decision test technique to achieve a defined level of coverage

2.4 Modified Condition/Decision Testing
TTA-2.4.1 (K3) Design test cases for a given test object by applying the modified condition/decision test technique to achieve full modified condition/decision coverage (MC/DC)

2.5 Multiple Condition Testing
TTA-2.5.1 (K3) Design test cases for a given test object by applying the multiple condition test technique to achieve a defined level of coverage

2.6 Basis Path Testing
TTA-2.6.1 has been removed from version v4.0 of this syllabus.

2.7 API Testing
TTA-2.7.1 (K2) Understand the applicability of API testing and the kinds of defects it finds

2.8 Selecting a White-Box Test Technique
TTA-2.8.1 (K4) Select an appropriate white-box test technique according to a given project situation

2.1 Introduction

This chapter describes white-box test techniques. These techniques apply to code and other structures with a control flow, such as business process flow charts. Each technique enables test cases to be derived systematically and focuses on a particular aspect of the structure. The test cases generated by the techniques satisfy coverage criteria which are set as an objective and are measured against. Achieving full coverage (i.e., 100%) does not mean that the entire set of tests is complete, but rather that the technique being used no longer suggests any useful further tests for the structure under consideration.

Test inputs are generated to ensure a test case exercises a particular part of the code (e.g., a statement or decision outcome).
Determining the test inputs that will cause a particular part of the code to be exercised can be challenging, especially if that part lies at the end of a long control flow sub-path with several decisions on it. The expected results are identified from a source external to the structure, such as a requirements or design specification or another test basis.

The following techniques are considered in this syllabus:
- Statement testing
- Decision testing
- Modified condition/decision testing
- Multiple condition testing
- API testing

The Foundation Syllabus [CTFL_SYL] introduces statement testing and decision testing. Statement testing focuses on exercising the executable statements in the code, whereas decision testing exercises the decision outcomes. The modified condition/decision and multiple condition techniques listed above are based on decision predicates containing multiple conditions and find similar types of defects. No matter how complex a decision predicate may be, it will evaluate to either TRUE or FALSE, which determines the path taken through the code. A defect is detected when the intended path is not taken because a decision predicate does not evaluate as expected. Refer to [ISO 29119] for more details on the specification, and examples, of these techniques.

2.2 Statement Testing

Statement testing exercises the executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage.

Applicability
Achieving full statement coverage should be considered as a minimum for all code being tested, although this is not always possible in practice.

Limitations/Difficulties
Full statement coverage may be unattainable in practice due to constraints on the available time and/or effort. Even high percentages of statement coverage may not detect certain defects in the code's logic. In many cases achieving 100% statement coverage is not possible due to unreachable code. Although unreachable code is generally not considered good programming practice, it may occur, for instance, if a switch statement must have a default case, but all possible cases are handled explicitly.
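As a minimal illustration (a sketch, not an example from the syllabus), the Python fragment below reaches 100% statement coverage with a single test case while leaving one decision outcome unexercised, which is why statement coverage alone may miss defects in the code's logic:

```python
def apply_discount(price: int, is_member: bool) -> int:
    """Members get 10% off; prices are in cents to keep arithmetic exact."""
    discount_pct = 0
    if is_member:
        discount_pct = 10
    return price * (100 - discount_pct) // 100

# One test executes all four executable statements:
assert apply_discount(10000, True) == 9000   # 4/4 statements -> 100% statement coverage

# But the FALSE outcome of `if is_member` was never taken, so a defect on
# that path (e.g., a wrong or missing default for `discount_pct`) could go
# undetected until a second test exercises it:
assert apply_discount(10000, False) == 10000
```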
2.3 Decision Testing

Decision testing exercises the decision outcomes in the code. To do this, the test cases follow the control flows from a decision point (e.g., for an IF statement, there is one control flow for the true outcome and one for the false outcome; for a CASE statement, there may be several possible outcomes; for a LOOP statement, there is one control flow for the true outcome of the loop condition and one for the false outcome). Coverage is measured as the number of decision outcomes exercised by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage. Note that a single test case may exercise several decision outcomes. Compared to the modified condition/decision and multiple condition techniques described below, decision testing considers the entire decision as a whole and evaluates only the TRUE and FALSE outcomes, regardless of the complexity of its internal structure.

The term branch testing is often used interchangeably with decision testing, because covering all branches and covering all decision outcomes can be achieved with the same tests. Branch testing exercises the branches in the code, where a branch is normally considered to be an edge of the control flow graph. For a program with no decisions, the definition of decision coverage above results in a coverage of 0/0, which is undefined, no matter how many tests are run, while the single branch from entry point to exit point (assuming one entry and one exit point) results in 100% branch coverage being achieved. To address this difference between the two measures, ISO 29119-4 requires at least one test to be run on code with no decisions to achieve 100% decision coverage, thus making 100% decision coverage and 100% branch coverage equivalent for nearly all programs. Many test tools that provide coverage measures, including those used for testing safety-related systems, employ a similar approach.

Applicability
This level of coverage should be considered when the code being tested is important or even critical (see the tables in section 2.8.2 for safety-related systems). This technique can be used for code and for any model that involves decision points, such as business process models.

Limitations/Difficulties
Decision testing does not consider the details of how a decision with multiple conditions is made and may fail to detect defects caused by combinations of the condition outcomes.
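A minimal sketch (not from the syllabus) of decision testing in Python: two test cases exercise both outcomes of the single decision, giving 100% decision coverage, which at 100% also guarantees statement coverage (see the subsumes hierarchy in section 2.8).

```python
def grade(score: int) -> str:
    """Classify an exam score; the pass mark of 50 is an invented example."""
    if score >= 50:        # one decision with two outcomes
        result = "pass"
    else:
        result = "fail"
    return result

assert grade(75) == "pass"   # exercises the TRUE outcome
assert grade(20) == "fail"   # exercises the FALSE outcome
# 2 of 2 decision outcomes exercised -> 100% decision coverage.
# Note that neither test alone achieves 100% statement coverage here,
# since each takes only one of the two branches.
```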
2.4 Modified Condition/Decision Testing

Compared to decision testing, which considers the entire decision as a whole and evaluates the TRUE and FALSE outcomes, modified condition/decision testing considers how a decision is structured when it includes multiple conditions (where a decision is composed of only one atomic condition, it is simply decision testing). Each decision predicate is made up of one or more atomic conditions, each of which evaluates to a Boolean value. These are logically combined to determine the outcome of the decision. This technique checks that each of the atomic conditions independently and correctly affects the outcome of the overall decision.

This technique provides a stronger level of coverage than statement and decision coverage when there are decisions containing multiple conditions. Assuming N unique, mutually independent atomic conditions, MC/DC for a decision can usually be achieved by exercising the decision N+1 times. Modified condition/decision testing requires pairs of tests that show that a change of a single atomic condition outcome can independently affect the result of the decision. Note that a single test case may exercise several condition combinations, and therefore it is not always necessary to run N+1 separate test cases to achieve MC/DC.

Applicability
This technique is used in the aerospace and automotive industries, and in other industry sectors for safety-critical systems. It is used when testing software where a failure may cause a catastrophe. Modified condition/decision testing can be a reasonable middle ground between decision testing and multiple condition testing (with its large number of combinations to test). It is more rigorous than decision testing but requires far fewer test conditions to be exercised than multiple condition testing when there are several atomic conditions in the decision.

Limitations/Difficulties
Achieving MC/DC may be complicated when there are multiple occurrences of the same variable in a decision with multiple conditions; when this occurs, the conditions may be "coupled". Depending on the decision, it may not be possible to vary the value of one condition such that it alone causes the decision outcome to change. One approach to addressing this issue is to specify that only uncoupled atomic conditions are tested using modified condition/decision testing. The other approach is to analyze each decision in which coupling occurs.

Some compilers and/or interpreters are designed such that they exhibit short-circuiting behavior when evaluating a complex decision statement in the code. That is, the executing code may not evaluate an entire expression if the final outcome of the evaluation can be determined after evaluating only a portion of the expression. For example, if evaluating the decision "A and B", there is no reason to evaluate B if A has already been evaluated as FALSE. No value of B can change the final result, so the code may save execution time by not evaluating B. Short-circuiting may affect the ability to attain MC/DC, since some required tests may not be achievable. Usually it is possible to configure the compiler to switch off the short-circuiting optimization for the testing, but this may not be allowed for safety-critical applications, where the tested code and the delivered code must be identical.
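To make the N+1 rule concrete, here is a hedged sketch (the decision and values are invented for illustration) of an MC/DC test set for a decision with N = 3 atomic conditions. Full multiple condition testing of the same decision (section 2.5) would require 2^3 = 8 combinations; MC/DC needs only 4, chosen so that flipping one condition at a time flips the decision outcome.

```python
def decision(a: bool, b: bool, c: bool) -> bool:
    # Note: Python's `and`/`or` short-circuit, so with a == False the
    # conditions b and c are never evaluated -- the interpreter behavior
    # discussed under Limitations/Difficulties above.
    return a and (b or c)

mcdc_tests = [
    # (a,     b,     c)    -> expected outcome
    (True,  True,  False, True),   # baseline
    (False, True,  False, False),  # differs from baseline only in a: a shown independent
    (True,  False, False, False),  # differs from baseline only in b: b shown independent
    (True,  False, True,  True),   # pairs with the row above, differing only in c
]

for a, b, c, expected in mcdc_tests:
    assert decision(a, b, c) is expected
```

Each of the three pairs differs in exactly one atomic condition and produces different decision outcomes, which is precisely the independence demonstration MC/DC requires.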
2.5 Multiple Condition Testing

In rare instances, it might be required to test all possible combinations of atomic conditions that a decision may contain. This level of testing is called multiple condition testing. Assuming N unique, mutually independent atomic conditions, full multiple condition coverage for a decision can be achieved by exercising it 2^N times. Note that a single test case may exercise several condition combinations, and therefore it is not always necessary to run 2^N separate test cases to achieve 100% multiple condition coverage. Coverage is measured as the number of exercised atomic condition combinations over all decisions in the test object, normally expressed as a percentage.

Applicability
This technique is used to test high-risk software and embedded software which is expected to run reliably without crashing for long periods of time.

Limitations/Difficulties
Because the number of test cases can be derived directly from a truth table containing all the atomic conditions, this level of coverage can easily be determined. However, the sheer number of test cases required for multiple condition testing makes modified condition/decision testing more feasible for situations where there are several atomic conditions in a decision. If the compiler uses short-circuiting, the number of condition combinations that can actually be exercised will often be reduced, depending on the order and grouping of logical operations that are performed on the atomic conditions.

2.6 Basis Path Testing

This section has been removed from version v4.0 of this syllabus.

2.7 API Testing

An application programming interface (API) is a defined interface that enables a program to call another software system, which provides it with a service, such as access to a remote resource. Typical services include web services, enterprise service buses, databases, mainframes, and Web UIs.

API testing is a type of testing rather than a technique. In certain respects, API testing is quite similar to testing a graphical user interface (GUI). The focus is on the evaluation of input values and returned data.

Negative testing is often crucial when dealing with APIs. Programmers who use APIs to access services external to their own code may try to use API interfaces in ways for which they were not intended. That means that robust error handling is essential to avoid incorrect operation. Combinatorial testing of many interfaces may be required because APIs are often used in conjunction with other APIs, and because a single interface may contain several parameters, whose values may be combined in many ways.

APIs frequently are loosely coupled, resulting in the very real possibility of lost transactions or timing glitches. This necessitates thorough testing of the recovery and retry mechanisms. An organization that provides an API must ensure that all services have a very high availability; this often requires strict reliability testing by the API publisher as well as infrastructure support.

Applicability
API testing is becoming more important for testing systems of systems as the individual systems become distributed or use remote processing as a way of off-loading some work to other processors. Examples include:
- Operating system calls
- Service-oriented architectures (SOA)
- Remote procedure calls (RPC)
- Web services

Software containerization results in the division of a software program into several containers which communicate with each other using mechanisms such as those listed above. API testing should also target these interfaces.

Limitations/Difficulties
Testing an API directly usually requires a Technical Test Analyst to use specialized tools. Because there is typically no direct graphical interface associated with an API, tools may be required to set up the initial environment, marshal the data, invoke the API, and determine the result.

Coverage
API testing is a description of a type of testing; it does not denote any specific level of coverage. At a minimum, an API test should include making calls to the API with both realistic input values and unexpected inputs for checking exception handling. More thorough API tests may ensure that callable entities are exercised at least once, or that all possible functions are called at least once.

Representational State Transfer (REST) is an architectural style. RESTful Web services allow requesting systems to access Web resources using a uniform set of stateless operations. Several coverage criteria exist for RESTful web APIs, the de facto standard for software integration [Web-7]. They can be divided into two groups: input coverage criteria and output coverage criteria. Among others, input criteria may require the execution of all possible API operations, the use of all possible API parameters, and the coverage of sequences of API operations. Among others, output criteria may require the generation of all correct and erroneous status codes, and the generation of responses containing resources exhibiting all properties (or all property types).

Types of Defects
The types of defects that can be found by testing APIs are quite disparate. Interface issues are common, as are data handling issues, timing problems, loss of transactions, duplication of transactions, and issues in exception handling.
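A hedged sketch (not from the syllabus) of what such tests can look like in practice: pytest-style test functions that drive a RESTful API with both realistic and invalid inputs and check status codes and returned data. The endpoint URL and its parameters are hypothetical, and the third-party `requests` package is assumed to be installed.

```python
import requests

BASE = "https://api.example.com/v1"  # hypothetical service under test

def test_get_account_valid():
    resp = requests.get(f"{BASE}/accounts/1042", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 1042          # returned data matches the request

def test_get_account_unknown_id():
    # Negative test: a non-existent resource should produce a clean 404,
    # not a crash or a 500 -- robust error handling is essential.
    resp = requests.get(f"{BASE}/accounts/999999", timeout=5)
    assert resp.status_code == 404

def test_get_account_malformed_id():
    # Negative test: input the interface was never intended to receive.
    resp = requests.get(f"{BASE}/accounts/not-a-number", timeout=5)
    assert resp.status_code in (400, 404)
```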
2.8 Selecting a White-Box Test Technique

The selected white-box test technique is normally specified in terms of a required coverage level, which is achieved by applying the test technique. For instance, a requirement to achieve 100% statement coverage would typically lead to the use of statement testing. However, black-box test techniques are normally applied first; coverage is then measured, and the white-box test technique is only used if the required white-box coverage level has not been achieved. In some situations, white-box testing may be used less formally to provide an indication of where coverage may need to be increased (e.g., creating additional tests where white-box coverage levels are particularly low). Statement testing would normally be sufficient for such informal coverage measurement.

When specifying a required white-box coverage level, it is good practice to only specify it at 100%. The reason is that if lower levels of coverage are required, the parts of the code that are not exercised by testing are typically the parts that are the most difficult to test, and these parts are normally also the most complex and error-prone. So, by requesting and achieving, for example, 80% coverage, the code that includes the majority of the detectable defects may be left untested. For this reason, when white-box coverage criteria are specified in standards, they are nearly always specified at 100%. Strict definitions of coverage levels have sometimes made this level of coverage impracticable; however, the definitions given in ISO 29119-4 allow infeasible coverage items to be discounted from the calculations, thus making 100% coverage an achievable goal.

When specifying the required white-box coverage for a test object, it is also only necessary to specify a single coverage criterion (e.g., it is not necessary to require both 100% statement coverage and 100% MC/DC). With exit criteria at 100% it is possible to relate some exit criteria in a subsumes hierarchy, where coverage criteria are shown to subsume other coverage criteria. One coverage criterion is said to subsume another if, for all components and their specifications, every set of test cases that satisfies the first criterion also satisfies the second. For example, branch coverage subsumes statement coverage because if branch coverage is achieved (to 100%), then 100% statement coverage is guaranteed. For the white-box test techniques covered in this syllabus, branch and decision coverage subsume statement coverage, MC/DC subsumes decision and branch coverage, and multiple condition coverage subsumes MC/DC (if we consider branch and decision coverage to be the same at 100%, then we can say that they subsume each other).

Note that when determining the white-box coverage levels to be achieved for a system, it is quite normal to define different levels for different parts of the system, because different parts of a system contribute differently to risk. For instance, in an avionics system, the subsystems associated with in-flight entertainment would be assigned a lower level of risk than those associated with flight control. Testing interfaces is common for all types of systems and is normally required at all integrity levels for safety-related systems (see section 2.8.2 for more on integrity levels).
The level of required coverage for API testing will normally increase based on the associated risk (e.g., the higher level of risk associated with a public interface may require more rigorous API testing).

The selection of which white-box test technique to use is generally based on the nature of the test object and its perceived risks. If the test object is considered to be safety-related (i.e., failure could cause harm to people or the environment), then regulatory standards are applicable and will define the required white-box coverage levels (see section 2.8.2). If the test object is not safety-related, then the choice of which white-box coverage levels to achieve is more subjective, but should still be largely based on perceived risks, as described in section 2.8.1.

2.8.1 Non-Safety-Related Systems

The following factors (in no particular order of priority) are typically considered when selecting white-box test techniques for non-safety-related systems:
- Contract – If the contract requires a particular level of coverage to be achieved, then not achieving this coverage level will potentially result in a breach of contract.
- Customer – If the customer requests a particular level of coverage, for instance as part of the test planning, then not achieving this coverage level may cause problems with the customer.
- Regulatory standard – For some industry sectors (e.g., financial), a regulatory standard that defines required white-box coverage criteria applies for mission-critical systems. See section 2.8.2 for coverage of regulatory standards for safety-related systems.
- Test strategy – If the organization's test strategy specifies requirements for white-box code coverage, then not aligning with the organizational strategy may risk censure from higher management.
- Coding style – If the code is written with no multiple conditions within decisions, then it would be wasteful to require white-box coverage levels such as MC/DC and multiple condition coverage.
- Historical defect information – If historical data on the effectiveness of achieving a particular coverage level suggests that it would be appropriate for this test object, it would be risky to ignore the available data. Note that such data may be available within the project, organization or industry.
- Skills and experience – If the testers available to perform the testing are not sufficiently experienced and skilled in a particular white-box technique, it may be misunderstood, and selecting it may introduce unnecessary risk.
- Tools – White-box coverage can only be measured in practice by using coverage tools. If no available tool supports a given coverage measure, then selecting that measure to be achieved would introduce a high level of risk (see the sketch at the end of this section).

For non-safety-related systems, the Technical Test Analyst has more freedom to recommend the appropriate white-box coverage than for safety-related systems. Such choices are typically a compromise between the perceived risks and the cost, resources and time required to treat these risks through white-box testing. In some situations, other treatments, which might be implemented by other software testing approaches or otherwise (e.g., different development approaches), may be more appropriate.
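As a sketch of the tool support mentioned under "Tools" above (assuming the third-party Python package coverage.py, installed with `pip install coverage`; any comparable coverage tool serves the same purpose), statement and branch coverage can be measured programmatically:

```python
import coverage

cov = coverage.Coverage(branch=True)  # also record decision/branch outcomes
cov.start()

import my_module          # hypothetical test object
my_module.grade(75)       # run the tests (direct calls here for brevity;
my_module.grade(20)       # normally a test runner would execute them)

cov.stop()
cov.report(show_missing=True)  # per-file statement/branch coverage, gaps listed
```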
2.8.2 Safety-Related Systems

Where the software being tested is part of a safety-related system, a regulatory standard which defines the required coverage levels to be achieved will normally have to be used. Such standards typically require a hazard analysis to be performed for the system, and the resultant risks are used to assign integrity levels to different parts of the system. Levels of required coverage are defined for each of the integrity levels.

IEC 61508 (functional safety of programmable, electronic, safety-related systems [IEC 61508]) is an umbrella international standard used for such purposes. In theory it could be used for any safety-related system; however, some industries have created specific variants (e.g., ISO 26262 [ISO 26262] applies to automotive systems) and some industries have created their own standards (e.g., DO-178C [DO-178C] for airborne software). Additional information regarding ISO 26262 is provided in the ISTQB® Automotive Software Tester syllabus [CT_AuT_SYL].

IEC 61508 defines four safety integrity levels (SILs), each of which is defined as a relative level of risk reduction provided by a safety function, correlated to the frequency and severity of perceived hazards. When a test object performs a function related to safety, a higher risk of failure means that the test object should have higher reliability. The following table shows the reliability levels associated with the SILs. Note that the reliability level for SIL 4 in the continuous operation case is extremely high, since it corresponds to a mean time between failures (MTBF) greater than 10,000 years.

IEC 61508 SIL   Continuous operation               On demand
                (probability of a dangerous        (probability of failure
                failure per hour)                  on demand)
1               10^-6 to 10^-5                     10^-2 to 10^-1
2               10^-7 to 10^-6                     10^-3 to 10^-2
3               10^-8 to 10^-7                     10^-4 to 10^-3
4               10^-9 to 10^-8                     10^-5 to 10^-4
