



Ch 03 Testing — Component Level (INFS4202)
Content note: this is Chapter 19 in your textbook.

Introduction
A software component testing strategy considers testing of individual components and integrating them into a working system. Testing begins "in the small" and progresses "to the large." By this we mean that early testing focuses on a single component or on a small group of related components and applies tests to uncover errors in the data and processing logic that have been encapsulated by the component(s). After components are tested, they must be integrated until the complete system is constructed.

Testing Strategy Characteristics
To perform effective testing, you should conduct technical reviews. By doing this, many errors will be eliminated before testing commences.
Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.
Different testing techniques are appropriate for different software engineering approaches and at different points in time.
Testing is conducted by the developer of the software and (for large projects) an independent test group.
Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

Verification and Validation (V&V)
Verification refers to the set of tasks that ensure that software correctly implements a specific function.
Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.
Verification and validation include a wide array of SQA activities: technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, acceptance testing, and installation testing.

Quality Is a Culture
Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they say, "You can't test in quality. If it's not there before you begin testing, it won't be there when you're finished testing." Quality is incorporated into software throughout the process of software engineering, and testing cannot be applied as a fix at the end of the process. Proper application of methods and tools, effective technical reviews, and solid management and measurement all lead to quality that is confirmed during testing. Testing is part of the software process.

Software Testing Steps — Criteria of "Done"
A classic question arises every time software testing is discussed: "When are we done testing? How do we know that we've tested enough?" Sadly, there is no definitive answer to this question, but there are a few pragmatic responses and early attempts at empirical guidance.

Role of Scaffolding
Because a component is not a stand-alone program, some type of scaffolding is required to create a testing framework. As part of this framework, driver and/or stub software must often be developed for each unit test. A driver is nothing more than a "main program" that accepts test-case data, passes such data to the component to be tested, and prints relevant results. Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing. A sketch of a driver and a stub follows.
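The following is a minimal scaffolding sketch, not code from the slides: the component name compute_average and the subordinate module fetch_scores are invented for illustration. The stub replaces the missing subordinate module, and the driver acts as the "main program" that feeds test data to the component and prints the results.

    // Hypothetical scaffolding for unit-testing a component that is not a stand-alone program.
    #include <cstdio>
    #include <vector>

    // Stub: stands in for the real, subordinate "score repository" module. It uses the same
    // interface, does minimal data manipulation, prints verification of entry, and returns
    // control to the caller.
    std::vector<double> fetch_scores(int student_id) {
        std::printf("stub fetch_scores entered (student_id=%d)\n", student_id);
        return {70.0, 80.0, 90.0};          // canned data for the test
    }

    // Component under test: depends on the subordinate module above.
    double compute_average(int student_id) {
        std::vector<double> scores = fetch_scores(student_id);
        if (scores.empty()) return 0.0;
        double sum = 0.0;
        for (double s : scores) sum += s;
        return sum / scores.size();
    }

    // Driver: a "main program" that accepts test-case data, passes it to the component
    // under test, and prints the relevant results.
    int main() {
        int test_ids[] = {1, 2};
        for (int id : test_ids) {
            std::printf("compute_average(%d) = %.2f (expected 80.00)\n",
                        id, compute_average(id));
        }
        return 0;
    }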
White Box Testing
White box testing is also known as structural, clear-box, and open-box testing. It is a software testing technique in which explicit knowledge of the internal workings of the item being tested is used to select the test data. Unlike black box testing, which uses the program specification to examine outputs, white box testing is based on specific knowledge of the source code to define the test cases and to examine outputs.
Using white-box testing methods, you can derive test cases that:
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

White Box Testing: Basis Path Testing
Basis path testing is a white-box testing technique. The basis path method enables the test-case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing. Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph), must be introduced.

Structural Testing (White-box Testing)
Statement Testing (Algebraic Testing): test single statements (choice of operators in polynomials, etc.).
Loop Testing:
○ Cause execution of the loop to be skipped completely (exception: repeat loops).
○ Cause the loop to be executed exactly once.
○ Cause the loop to be executed more than once.
Path Testing:
○ Make sure all paths in the program are executed.
Branch Testing (Conditional Testing): make sure that each possible outcome from a condition is tested at least once.
    if (i == TRUE) printf("YES\n"); else printf("NO\n");
    Test cases: 1) i = TRUE; 2) i = FALSE

Code Coverage
Statement coverage:
○ Elementary statements: assignment, I/O, call.
○ Select a test set T such that, by executing P for every case in T, each statement of P is executed at least once.
○ Example program P:
    read(x); read(y);
    if x > 0 then write("1"); else write("2");
    if y > 0 then write("3"); else write("4");
○ A test set achieving statement coverage, e.g.: T = {<x = 1, y = 1>, <x = -1, y = -1>}
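Below is a self-contained sketch of the statement-coverage example above, rewritten as an ordinary function. The name classify() is invented; the two test cases in main() together execute every statement of the function at least once.

    #include <cstdio>

    void classify(int x, int y) {
        if (x > 0) std::printf("1"); else std::printf("2");
        if (y > 0) std::printf("3"); else std::printf("4");
        std::printf("\n");
    }

    int main() {
        classify(1, 1);    // executes the writes "1" and "3"
        classify(-1, -1);  // executes the writes "2" and "4"
        return 0;          // together: every statement covered at least once
    }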
White-box Testing: Determining the Paths
    FindMean(FILE ScoreFile) {
        float SumOfScores = 0.0;
        int NumberOfScores = 0;                      /* node 1 */
        float Mean = 0.0;
        float Score;
        Read(ScoreFile, Score);
        while (!EOF(ScoreFile)) {                    /* node 2 */
            if (Score > 0.0) {                       /* node 3 */
                SumOfScores = SumOfScores + Score;   /* node 4 */
                NumberOfScores++;
            }                                        /* node 5 */
            Read(ScoreFile, Score);                  /* node 6 */
        }
        if (NumberOfScores > 0) {                    /* node 7 */
            Mean = SumOfScores / NumberOfScores;
            printf("The mean score is %f\n", Mean);  /* node 8 */
        } else
            printf("No scores found in file\n");     /* node 9 */
    }

White Box Testing: Basis Path Testing
Figure 4: (a) flowchart; (b) flow graph.
In the figure, (a) a flowchart is used to depict program control structure and (b) maps the flowchart into a corresponding flow graph. Each circle, called a flow graph node, represents one or more procedural statements; a sequence of process boxes and a decision diamond can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows.
An independent path is any path through the program that introduces at least one new set of processing statements or a new condition. When stated in terms of a flow graph, an independent path must move along at least one edge that has not been traversed before.
A set of independent paths for the flow graph illustrated above is:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge. The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
Paths 1 through 4 constitute a basis set for the flow graph in Figure 19.5b. That is, if you can design tests to force execution of these paths (a basis set), every statement in the program is guaranteed to be executed at least one time and every condition will have been executed on its true and false sides.

Basic Flow Graphs
(Figure: the flow graphs of three basic constructs and their alternative path test cases.)
○ if-then-else: alternative test-case paths 1,2,4 / 1,3,4
○ loop-while: alternative test-case path (1,2)+, 3
○ case-of: alternative test-case paths 1,2,6 / 1,3,6 / 1,4,6 / 1,5,6
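The sketch below applies basis path testing to an invented analogue of the FindMean routine above (mean_of_positives is not the slides' code). Each of the four test cases forces a different independent path through the loop and the decisions.

    #include <cassert>
    #include <vector>

    double mean_of_positives(const std::vector<double>& scores) {
        double sum = 0.0;
        int count = 0;
        for (double s : scores) {      // decision: loop body taken or not
            if (s > 0.0) {             // decision: score positive or not
                sum += s;
                ++count;
            }
        }
        if (count > 0)                 // decision: any positive scores found?
            return sum / count;
        return 0.0;                    // "no scores found" path
    }

    int main() {
        assert(mean_of_positives({}) == 0.0);               // loop skipped, else-path taken
        assert(mean_of_positives({4.0}) == 4.0);            // loop once, score > 0
        assert(mean_of_positives({-1.0}) == 0.0);           // loop once, score <= 0
        assert(mean_of_positives({2.0, -1.0, 4.0}) == 3.0); // loop many times, both branches
        return 0;
    }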
Black Box Testing
Black-box testing, also called behavioral testing or functional testing, focuses on the functional requirements of the software. That is, black-box testing techniques enable you to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
Black-box testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. behavior or performance errors,
5. initialization and termination errors.
Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during later stages of testing. Because black-box testing purposely disregards control structure, attention is focused on the information domain.
Black-box tests are designed to answer the following questions:
○ How is functional validity tested?
○ How are system behavior and performance tested?
○ What classes of input will make good test cases?
○ Is the system particularly sensitive to certain input values?
○ How are the boundaries of a data class isolated?
○ What data rates and data volume can the system tolerate?
○ What effect will specific combinations of data have on system operation?

Black Box Testing: Interface Testing
Interface testing is used to check that the program component accepts information passed to it in the proper order and data types and returns information in the proper order and data format. Interface testing is often considered part of integration testing. Because most components are not stand-alone programs, it is important to make sure that, when the component is integrated into the evolving program, it will not break the build. This is where the use of stubs and drivers becomes important to component testers.

Functional Testing (Black-box Testing)
Focus: I/O behavior. If, for any given input, we can predict the output, then the module passes the test.
○ It is almost always impossible to generate all possible inputs ("test cases").
Goal: reduce the number of test cases by equivalence partitioning:
○ Divide the input conditions into equivalence classes.
○ Choose test cases for each equivalence class. (Example: if an object is supposed to accept a negative number, testing one negative number is enough.)

Boundary Value Analysis
Any program can be considered to be a function:
○ the program inputs form its domain;
○ the program outputs form its range.
Boundary value analysis is the best-known functional testing technique. The objective of functional testing is to use knowledge of the functional nature of a program to identify test cases. Historically, functional testing has focused on the input domain, but it is a good supplement to consider test cases based on the range as well.
Boundary value analysis focuses on the boundary of the input space to identify test cases. The rationale behind boundary value analysis is that errors tend to occur near the extreme values of an input variable. In our discussion we will assume a program P accepting two inputs y1 and y2 such that a ≤ y1 ≤ b and c ≤ y2 ≤ d.

Valid Input for Program P
Consider the function f(y1, y2), where a ≤ y1 ≤ b and c ≤ y2 ≤ d. The boundary inequalities of n input variables define an n-dimensional input space. (Figure: the valid input region for P is the rectangle a ≤ y1 ≤ b, c ≤ y2 ≤ d in the y1–y2 plane.)

Value Selection in Boundary Value Analysis
The basic idea in boundary value analysis is to select input variable values at their:
○ minimum,
○ just above the minimum,
○ a nominal value,
○ just below the maximum,
○ maximum.

Single Fault Assumption
Boundary value analysis is also augmented by the single fault assumption principle: "Failures occur rarely as the result of the simultaneous occurrence of two (or more) faults." In this respect, boundary value analysis test cases can be obtained by holding the values of all but one variable at their nominal values and letting that variable assume its extreme values.

Boundary Value Analysis for Program P
(Figure: the boundary value test points plotted in the y1–y2 input space.)
T = { <y1nom, y2min>, <y1nom, y2min+>, <y1nom, y2nom>, <y1nom, y2max->, <y1nom, y2max>,
      <y1min, y2nom>, <y1min+, y2nom>, <y1max-, y2nom>, <y1max, y2nom> }
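The sketch below enumerates the 4n + 1 = 9 boundary value test cases for the two-variable program P above under the single fault assumption: one variable at a time takes its min, min+, max- and max values while the other stays at its nominal value. The bounds and the "just above / just below" step are illustrative placeholders, not values from the slides.

    #include <cstdio>
    #include <vector>

    struct Range { double min, max; };

    int main() {
        std::vector<Range> ranges = { {1.0, 200.0},    // a <= y1 <= b (assumed bounds)
                                      {1.0, 200.0} };  // c <= y2 <= d (assumed bounds)
        const double eps = 1.0;  // min+/max- step, assuming integer-valued inputs

        std::vector<double> nominal;
        for (const Range& r : ranges) nominal.push_back((r.min + r.max) / 2.0);

        std::vector<std::vector<double>> cases;
        cases.push_back(nominal);                       // the single all-nominal case
        for (size_t i = 0; i < ranges.size(); ++i) {
            double extremes[] = { ranges[i].min, ranges[i].min + eps,
                                  ranges[i].max - eps, ranges[i].max };
            for (double v : extremes) {
                std::vector<double> tc = nominal;       // hold the other variables at nominal
                tc[i] = v;                              // vary one variable to an extreme value
                cases.push_back(tc);
            }
        }

        for (const auto& tc : cases)
            std::printf("y1 = %6.1f, y2 = %6.1f\n", tc[0], tc[1]);
        std::printf("%zu test cases (4n + 1 with n = %zu)\n", cases.size(), ranges.size());
        return 0;
    }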
Example Test Cases Using Boundary Value Analysis — Testing a Triangle (1 ≤ side ≤ 200)
    Case #    a     b     c    Expected Output
      1      100   100     1   Isosceles
      2      100   100     2   Isosceles
      3      100   100   100   Equilateral
      4      100   100   199   Isosceles
      5      100   100   200   Not a Triangle
      6      100     1   100   Isosceles
      7      100     2   100   Isosceles
      8      100   100   100   Equilateral
      9      100   199   100   Isosceles
     10      100   200   100   Not a Triangle
     11        1   100   100   Isosceles
     12        2   100   100   Isosceles
     13      100   100   100   Equilateral
     14      199   100   100   Isosceles
     15      200   100   100   Not a Triangle

Generalizing Boundary Value Analysis
The basic boundary value analysis can be generalized in two ways:
○ by the number of variables: (4n + 1) test cases for n variables;
○ by the kinds of ranges of the variables (programming-language dependent): bounded discrete, unbounded discrete (no upper or lower bound clearly defined), and logical variables.

Limitations of Boundary Value Analysis
Boundary value analysis works well when the program to be tested is a function of several independent variables that represent bounded physical quantities. Boundary value analysis selects test data with no consideration of the function of the program, nor of the semantic meaning of the variables. We can also distinguish between physical and logical types of variables (e.g., temperature, pressure, or speed versus PIN numbers, telephone numbers, etc.).

Independence Assumption and Efficacy of BVT
Boundary value testing assumes that the input variables are independent of one another, i.e., that particular combinations of input variable values have no special significance. If this basic assumption is not true, then BVT may overlook important test requirements. BVT is an instance of more general techniques such as equivalence class testing or domain testing. BVT tends to generate more test cases with poorer test coverage (under the independence assumption) than domain or equivalence testing, but, due to its simplicity, BVT test-case generation can be easily automated.

Robustness Testing
Robustness testing is a simple extension of boundary value analysis. In addition to the five boundary value analysis values of a variable, we add a value slightly greater than the maximum (max+) and a value slightly less than the minimum (min-). The main value of robustness testing is to force attention on exception handling. In some strongly typed languages, values beyond the predefined range will cause a run-time error. It is a choice between using a weakly typed language with exception handling or a strongly typed language with explicit logic to handle out-of-range values.

Robustness Test Cases for Program P
(Figure: the robustness test points, including the min- and max+ values, plotted in the y1–y2 input space.)

Worst Case Testing
In worst case testing we reject the single fault assumption and are interested in what happens when more than one variable has an extreme value. Considering that we have five different values per variable in boundary value analysis, we now take the Cartesian product of these possible values for 2, 3, …, n variables. In this respect we can have 5^n test cases for n input variables. The best application of worst case testing is where physical variables have numerous interactions and failure of the program is costly. Worst case testing can be further augmented by considering robust worst case testing (i.e., also adding the slightly out-of-bounds values to the five already considered).

Worst Case Testing for Program P
(Figure: the 5^2 = 25 worst-case test points — the Cartesian product of the five boundary values of y1 and y2 — plotted in the y1–y2 input space.)

Robust Worst Case Testing for Program P
(Figure: the robust worst-case test points — the Cartesian product of the seven values min-, min, min+, nom, max-, max, max+ of y1 and y2 — plotted in the y1–y2 input space.)
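The sketch below illustrates worst case testing for the two-variable program P: the single fault assumption is dropped and the Cartesian product of the five boundary values of each variable is taken, giving 5^n (here 5^2 = 25) test cases. The bounds and the min+/max- step are again assumed placeholders.

    #include <cstdio>
    #include <vector>

    int main() {
        const double a = 1.0, b = 200.0;   // a <= y1 <= b (assumed bounds)
        const double c = 1.0, d = 200.0;   // c <= y2 <= d (assumed bounds)
        const double eps = 1.0;            // "just above"/"just below" step

        std::vector<double> y1_values = { a, a + eps, (a + b) / 2.0, b - eps, b };
        std::vector<double> y2_values = { c, c + eps, (c + d) / 2.0, d - eps, d };

        int count = 0;
        for (double y1 : y1_values)          // Cartesian product: every combination of
            for (double y2 : y2_values) {    // extreme and nominal values is tested
                std::printf("y1 = %6.1f, y2 = %6.1f\n", y1, y2);
                ++count;
            }
        std::printf("%d worst-case test cases (5^n with n = 2)\n", count);
        return 0;
    }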
Special Value Testing
Special value testing is probably the most widely practiced form of functional testing, the most intuitive, and the least uniform. It utilizes domain knowledge and engineering judgment about a program's "soft spots" to devise test cases. Even though special value testing is very subjective in how the test cases are generated, it is often more effective at revealing program faults.

Guidelines for Boundary Value Testing
With the exception of special value testing, the test methods based on the boundary values of a program are the most rudimentary. Issues in producing satisfactory test cases using boundary value testing:
○ truly independent variables versus variables that are not independent;
○ normal versus robust values;
○ single fault versus multiple fault assumption.
Boundary value analysis can also be applied to the output range of a program (e.g., error messages) and to internal variables (e.g., loop control variables, indices, and pointers).

Object-Oriented Testing
When object-oriented software is considered, the concept of the unit changes. Encapsulation drives the definition of classes and objects. This means that each class and each instance of a class packages attributes (data) and the operations that manipulate these data. An encapsulated class is usually the focus of unit testing. However, operations (methods) within the class are the smallest testable units. Because a class can contain a number of different operations, and a particular operation may exist as part of a number of different classes, the tactics applied to unit testing must change.
You can no longer test a single operation in isolation (the conventional view of unit testing) but rather as part of a class. To illustrate, consider a class hierarchy in which an operation X is defined for the superclass and is inherited by a number of subclasses. Each subclass uses operation X, but it is applied within the context of the private attributes and operations that have been defined for the subclass. Because the context in which operation X is used varies in subtle ways, it is necessary to test operation X in the context of each of the subclasses. This means that testing operation X in a stand-alone fashion is usually ineffective in the object-oriented context.

Class Testing
Class testing for OO software is driven by the operations encapsulated by the class and the state behavior of the class. To provide brief illustrations of these methods, consider a banking application in which an Account class has the following operations: open(), setup(), deposit(), withdraw(), balance(), summarize(), creditLimit(), and close(). Each of these operations may be applied for Account, but certain constraints (e.g., the account must be opened before other operations can be applied and closed after all operations are completed) are implied by the nature of the problem.
The minimum behavioral life history of an instance of Account includes the following operations:
    open → setup → deposit → withdraw → close
This sequence represents the minimum test sequence for Account. However, a wide variety of other behaviors may occur within this sequence:
    open → setup → deposit → [deposit | withdraw | balance | summarize | creditLimit] → withdraw → close
A variety of different operation sequences can be generated randomly. For example:
    Test case r1: open → setup → deposit → deposit → balance → summarize → withdraw → close
    Test case r2: open → setup → deposit → withdraw → deposit → balance → creditLimit → withdraw → close
These and other random-order tests are conducted to exercise different class instance life histories. The use of test equivalence partitioning can reduce the number of test cases required.
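Below is a minimal sketch of class testing for the Account example. The Account class here is invented (it is not the slides' implementation and omits some operations); the test drives it through the random operation sequence r1 above and asserts the expected balance along the way.

    #include <cassert>
    #include <cstdio>

    class Account {
        bool open_ = false;
        double balance_ = 0.0;
    public:
        void open()  { open_ = true; }                  // must precede all other operations
        void setup() { assert(open_); }
        void deposit(double amt)  { assert(open_); balance_ += amt; }
        void withdraw(double amt) { assert(open_ && amt <= balance_); balance_ -= amt; }
        double balance() const { return balance_; }
        void summarize() const { std::printf("balance = %.2f\n", balance_); }
        void close() { open_ = false; }                 // applied after all other operations
    };

    int main() {
        // Test case r1: open, setup, deposit, deposit, balance, summarize, withdraw, close
        Account acct;
        acct.open();
        acct.setup();
        acct.deposit(100.0);
        acct.deposit(50.0);
        assert(acct.balance() == 150.0);
        acct.summarize();
        acct.withdraw(30.0);
        assert(acct.balance() == 120.0);
        acct.close();
        return 0;
    }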
Class Behavioral Testing
A state diagram can be used as a model that represents the dynamic behavior of a class. The state diagram for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of the class (and of the classes that collaborate with it).

State Diagram
(Figure: a state diagram of the Account class discussed above, with the states empty acct, setup acct, working acct, nonworking acct, and dead acct.)

Behavioral Testing
Referring to the figure above, the initial transitions move through the empty acct and setup acct states. The majority of all behavior for instances of the class occurs while in the working acct state. A final withdrawal and account closure cause the Account class to make transitions to the nonworking acct and dead acct states, respectively. The tests to be designed should achieve coverage of every state; that is, the operation sequences should cause the Account class to transition through all allowable states.
In situations in which the class behavior results in a collaboration with one or more classes, multiple state diagrams are used to track the behavioral flow of the system. The state model can be traversed in a "breadth-first" manner. In this context, breadth-first implies that a test case exercises a single transition and that, when a new transition is to be tested, only previously tested transitions are used.
Consider a CreditCard object that is part of the banking system. The initial state of CreditCard is undefined (i.e., no credit card number has been provided). Upon reading the credit card during a sale, the object takes on a defined state; that is, the attributes card number and expiration date, along with bank-specific identifiers, are defined. The credit card is submitted when it is sent for authorization, and it is approved when authorization is received. The transition of CreditCard from one state to another can be tested by deriving test cases that cause the transition to occur. A breadth-first approach to this type of testing would not exercise submitted before it exercised undefined and defined. If it did, it would make use of transitions that had not been previously tested and would therefore violate the breadth-first criterion.
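The sketch below illustrates state-based (behavioral) testing of the CreditCard example. The enum, class, and transition methods are invented placeholders; the test exercises the transitions in breadth-first order, so undefined → defined is tested before defined → submitted, which is tested before approval.

    #include <cassert>
    #include <string>

    enum class CardState { Undefined, Defined, Submitted, Approved };

    class CreditCard {
        CardState state_ = CardState::Undefined;
        std::string number_, expiry_;
    public:
        CardState state() const { return state_; }
        void read(const std::string& number, const std::string& expiry) {
            number_ = number;                  // card number and expiration date now known
            expiry_ = expiry;
            state_ = CardState::Defined;
        }
        void submit()  { assert(state_ == CardState::Defined);   state_ = CardState::Submitted; }
        void approve() { assert(state_ == CardState::Submitted); state_ = CardState::Approved; }
    };

    int main() {
        CreditCard card;
        assert(card.state() == CardState::Undefined);  // initial state

        card.read("4111111111111111", "12/29");        // test transition: undefined -> defined
        assert(card.state() == CardState::Defined);

        card.submit();                                 // uses only previously tested transitions
        assert(card.state() == CardState::Submitted);

        card.approve();                                // undefined and defined already exercised
        assert(card.state() == CardState::Approved);
        return 0;
    }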
Testing Activities
(Figure: the sequence of testing activities performed by the developer — subsystem code passes the unit test, checked against the requirements analysis and system design documents, to give tested subsystems; tested subsystems pass the integration test to give integrated subsystems; the functional test, run against the user manual, gives a functioning system. All of these tests are performed by the developer.)
(Figure, continued: the functioning system passes the performance test, checked against the client's global understanding of the requirements, to become a validated system; the acceptance test, checked against the user's requirements, gives an accepted system (tests by the client); the installation test, run in the user environment, gives a usable system, which finally becomes the system in use (tests, possibly, by the user).)

Unit Testing
Objective: find differences between the specified units and their implementations.
Unit: a component (module, function, class, object, …).
Unit test environment: a driver feeds the test cases to the unit under test and collects the test results, while stubs (dummy modules) stand in for the modules the unit calls.
Effectiveness? Partitioning and code coverage guide the selection of test cases.

Integration Testing
Integration testing is a division of software testing that tests the interfaces between different software components. Any software module may work well individually, but when it is integrated with another module there is a chance that the software will not behave as intended. Integration testing is performed to ensure that the software works smoothly without any issues. (Watch the video "What is Integration Testing".)

Why Integration Testing
○ It is extremely difficult to find and fix defects in integrated components; performing integration tests helps in such cases.
○ With integration testing, you can find and fix bugs at the very start of development.
○ These tests run faster than end-to-end tests.
○ This testing helps you find system issues such as cache integration and a corrupted database schema.
○ With integration testing, you reduce the possibility of software failure.
○ Performing this testing helps you check the structural changes when a user moves from one module to the next.
○ By performing integration testing, you can cover multiple modules, thus providing broader testing capability.

Integration Testing Objectives
○ To expose problems arising from the combination of components.
○ To quickly obtain a working solution from components.
Problem areas:
○ Internal (between components): invocation (call/message passing/…), parameters (type, number, order, value), invocation return (identity — who? — type, sequence).
○ External: interrupts (wrong handler?), I/O timing.
○ Interaction.

Types of Integration Testing
Integration testing is approached by combining different functional units and testing them to examine the results. Integration testing is divided into two categories: structural integration testing, whose types are shown in the image, and behavioral integration testing, which comes later.

Incremental Integration Testing
Incremental integration testing is performed by combining two or more logically related modules. Modules are added one by one to the testing unit until the testers have covered the whole system. With this approach, you can test the system for defects at an early stage, in a smaller unit, when it is reasonably easy to identify the cause. This type of testing intends to pass feedback to the developers at the very start so they can fix the bugs, and bugs found with this testing can be fixed without disturbing the other modules. This method generally uses stubs and drivers to set up the communication.

Stubs and Drivers
Stubs and drivers are the dummy programs used in integration testing to facilitate the software testing activity. These programs act as substitutes for the missing modules in the testing. They do not implement the entire programming logic of the software module, but they simulate data communication with the calling module while testing.
○ Stub: is called by the module under test.
○ Driver: calls the module to be tested.

Incremental Integration Testing Approaches
1. Bottom-up Integration Testing
Here the testing starts from the lowest module in the architecture, and the testing control flow moves upward from the bottom. This method is used whenever the top modules are still under development, and it uses drivers to stand in for the modules that are missing. This approach has a high success ratio and is an efficient way to test and develop a product.
It is faster than the other traditional methods of testing.

Integration Testing: Bottom-up
(Figure: the bottom-up testing sequence — test drivers exercise the level-N modules first; the drivers are then replaced and testing moves up to level N-1, and so on.)
(Figure: bottom-up integration of a module hierarchy A–G — drivers are replaced one at a time, "depth first"; worker modules are grouped into builds (clusters) and integrated.)

1. Bottom-up Integration Testing
Advantages:
○ Fault localization is easier.
○ No time is wasted waiting for all modules to be developed, unlike the big-bang approach.
Disadvantages:
○ Critical modules (at the top level of the software architecture), which control the flow of the application, are tested last and may be prone to defects.
○ An early prototype is not possible.

2. Top-down Integration Testing
In this approach, testing is performed from the top-most module in the architecture, and the testing control flow moves from the top to the bottom. This method uses stubs as duplicate programs to stand in for the modules that are missing. It is comparatively easier than the bottom-up approach because stubs are generally easier to write than drivers. With this approach, you can find interface errors with ease because of its incremental nature.

Top-Down Integration
(Figure: top-down integration of a module hierarchy A–G — the top module A is tested with stubs, the stubs are replaced one at a time "depth first," and as new modules are integrated some subset of tests is re-run.)

2. Top-down Integration Testing
Advantages:
○ Fault localization is easier.
○ It is possible to obtain an early prototype.
○ Critical modules are tested first, so major design flaws can be found and fixed first.
Disadvantages:
○ It needs many stubs.
○ Modules at the lower levels are tested inadequately.

3. Sandwich Integration Testing
This is a combination of the bottom-up and top-down approaches. In this approach the bottom modules are tested with the top modules and, at the same time, the top modules are tested with the lower modules. The goal is to reach the middle modules by testing both the top and the bottom modules simultaneously. This approach uses both stubs and drivers.

Big Bang Integration Testing
This type of testing is usually performed only after all the modules have been developed. Once developed, all modules are coupled to form a single software system, and then the testing is performed. This sort of testing generally suits smaller systems. Although every module is developed before the integration testing even starts, the biggest disadvantage is that some of your resources will be unproductive while they wait for all the modules to be developed before the testing process can start, making it costly and time-consuming.

Example of Integration Test Cases
An integration test case differs from other test cases in the sense that it focuses mainly on the interfaces and the flow of data/information between the modules. Priority is given to the integrating links rather than to the unit functions, which have already been tested.
Sample integration test cases for the following scenario: an application has three modules, say 'Login Page', 'Mailbox' and 'Delete Emails', and each of them is integrated logically. Here, do not concentrate much on testing the Login Page, as that has already been done in unit testing; instead, check how it is linked to the Mailbox page. Similarly for the Mailbox: check its integration with the Delete Emails module.
Test case 1 — Objective: check the interface link between the Login and Mailbox modules. Description: enter login credentials and click on the Login button. Expected result: the user is directed to the Mailbox.
Test case 2 — Objective: check the interface link between the Mailbox and Delete Emails modules. Description: from the Mailbox, select an email and click a delete button. Expected result: the selected email appears in the Deleted/Trash folder.
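The sketch below automates integration test case 1 above. The LoginModule and Mailbox interfaces are invented placeholders; the point is that the test exercises the link between the two modules rather than their internal logic, which the unit tests already cover.

    #include <cassert>
    #include <string>

    struct Mailbox {
        bool opened = false;
        std::string owner;
        void openFor(const std::string& user) { opened = true; owner = user; }
    };

    struct LoginModule {
        Mailbox& mailbox;
        explicit LoginModule(Mailbox& mb) : mailbox(mb) {}
        // On successful login, control is handed to the Mailbox module.
        bool login(const std::string& user, const std::string& password) {
            if (password.empty()) return false;
            mailbox.openFor(user);        // the integration link under test
            return true;
        }
    };

    int main() {
        Mailbox mailbox;
        LoginModule login(mailbox);

        // Test case 1: entering credentials and logging in should direct the user to the Mailbox.
        assert(login.login("alice", "secret"));
        assert(mailbox.opened && mailbox.owner == "alice");
        return 0;
    }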
How to do Integration Testing?
The integration test procedure, irrespective of the software testing strategies discussed above:
1. Prepare the integration test plan.
2. Design the test scenarios, cases, and scripts.
3. Execute the test cases and report the defects.
4. Track and re-test the defects.
5. Repeat steps 3 and 4 until the integration is completed successfully.

Entry and Exit Criteria of Integration Testing
Entry criteria:
○ Unit-tested components/modules.
○ All high-prioritized bugs fixed and closed.
○ All modules code-completed and integrated successfully.
○ Integration test plan, test cases, and scenarios signed off and documented.
○ The required test environment set up for integration testing.
Exit criteria:
○ Successful testing of the integrated application.
○ Executed test cases are documented.
○ All high-prioritized bugs fixed and closed.
○ Technical documents submitted, followed by release notes.

Integration Testing (Behavioral: Path-Based)
(Figure: MM-paths crossing three modules A, B, and C.)
○ MM-path: an interleaved sequence of module execution paths and messages.
○ Module execution path: an entry–exit path in the same module.
○ Atomic System Function (ASF): port input, …, {MM-paths}, …, port output.
○ Test cases: exercise ASFs.

System Testing
Stress testing: push the system to its limit and beyond — volume, number of users, application (system) response rate, and resources (physical and logical).

Acceptance Testing
Purpose: ensure that end users are satisfied.
Basis: user expectations (documented or not).
Environment: real.
Performed: for and by end users (commissioned projects).
Test cases:
○ may be reused from the system test;
○ designed by end users.

Regression Testing
Whenever a system is modified (fixing a bug, adding functionality, etc.), the entire test suite needs to be rerun:
○ make sure that features that already worked are not affected by the change.
Automatic re-testing is performed before checking changes into a code repository; incremental testing strategies are used for big systems.
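Below is a minimal sketch of a regression suite: every previously passing test is kept and re-run after each change, so features that already worked are shown to be unaffected. The tested function and the cases are invented for illustration.

    #include <cassert>
    #include <cstdio>

    int clamp_percentage(int p) {            // example unit under test
        if (p < 0) return 0;
        if (p > 100) return 100;
        return p;
    }

    // Existing tests: re-run unchanged after every modification of the code.
    void regression_tests() {
        assert(clamp_percentage(50) == 50);
        assert(clamp_percentage(-5) == 0);
        assert(clamp_percentage(150) == 100);
    }

    // New test added along with the change (e.g., a bug fix at the boundary).
    void new_tests() {
        assert(clamp_percentage(100) == 100);
    }

    int main() {
        regression_tests();   // run before checking the change into the repository
        new_tests();
        std::printf("all tests passed\n");
        return 0;
    }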
Test Stopping Criteria
○ Meeting the deadline, exhausting the budget, … → a management decision.
○ Achieving the desired coverage.
○ Achieving the desired level of failure intensity.

Testing Activities
○ Identify: the test conditions (the "what") — an item or event to be verified.
○ Design: how the "what" can be tested (realization).
○ Build: the test cases (implementation — scripts, data).
○ Execute: run the system.
○ Compare: the test case outcome with the expected outcome, giving the test result.

Goodness of Test Cases
Execution of a test case against a program P:
○ covers certain requirements of P;
○ covers certain parts of P's functionality;
○ covers certain parts of P's internal logic.
➔ The idea of coverage guides test-case selection.

Decision Tables - General
Decision tables are a precise yet compact way to model complicated logic. Decision tables, like if-then-else and switch-case statements, associate conditions with actions to perform. But, unlike the control structures found in traditional programming languages, decision tables can associate many independent conditions with several actions in an elegant way. (http://en.wikipedia.org/wiki/Decision_table)

Decision Tables - Usage
Decision tables make it easy to observe that all possible conditions are accounted for. Decision tables can be used for:
○ specifying complex program logic;
○ generating test cases (also known as logic-based testing).
Logic-based testing is considered:
○ structural testing when applied to structure (i.e., the control flow graph of an implementation);
○ functional testing when applied to a specification.

Decision Tables - Structure
Conditions (condition stub) | Condition alternatives (condition entry)
Actions (action stub) | Action entries
Each condition corresponds to a variable, relation, or predicate. The possible values for the conditions are listed among the condition alternatives: Boolean values (true/false) give a limited-entry decision table, several values give an extended-entry decision table, and a "don't care" value is also possible. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed.

Decision Table - Example: Printer Troubleshooting
Conditions (rules 1–8):
    Printer does not print:      Y Y Y Y N N N N
    A red light is flashing:     Y Y N N Y Y N N
    Printer is unrecognized:     Y N Y N Y N Y N
Actions (marked with X in the rules where they apply):
    Check the power cable:                 rule 3
    Check the printer-computer cable:      rules 1, 3
    Ensure printer software is installed:  rules 1, 3, 5, 7
    Check/replace ink:                     rules 1, 2, 5, 6
    Check for paper jam:                   rules 2, 4
(Figure: a further decision table example.)

Decision Table Development Methodology
1. Determine the conditions and their values.
2. Determine the maximum number of rules.
3. Determine the actions.
4. Encode the possible rules.
5. Encode the appropriate actions for each rule.
6. Verify the policy.
7. Simplify the rules (reduce the number of columns if possible).

Decision Tables - Usage
The decision-table model is applicable when:
○ the specification is given or can be converted to a decision table;
○ the order in which the predicates are evaluated does not affect the interpretation of the rules or the resulting action;
○ the order of rule evaluation has no effect on the resulting action;
○ once a rule is satisfied and the action selected, no other rule need be examined;
○ the order of executing the actions in a satisfied rule is of no consequence.
These restrictions do not in reality eliminate many potential applications. In most applications, the order in which the predicates are evaluated is immaterial; some specific ordering may be more efficient than another, but in general the ordering is not inherent in the program's logic.
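The sketch below shows how a decision table drives test-case generation: each column (rule) of the printer-troubleshooting table above becomes one test case, and the expected actions are the X entries of that column. The encoding of the logic and the mapping of actions to rules reflect my reading of the table and are an invented illustration, not code from the slides.

    #include <cassert>
    #include <bitset>

    enum Action { CheckPower, CheckCable, CheckSoftware, CheckInk, CheckPaperJam, NumActions };

    // Condition entries for one rule.
    struct Rule { bool notPrinting, redLight, unrecognized; };

    // The "program" under test: maps conditions to the set of recommended actions.
    std::bitset<NumActions> troubleshoot(const Rule& r) {
        std::bitset<NumActions> actions;
        if (r.notPrinting && r.unrecognized && !r.redLight) actions.set(CheckPower);
        if (r.notPrinting && r.unrecognized)                actions.set(CheckCable);
        if (r.unrecognized)                                 actions.set(CheckSoftware);
        if (r.redLight)                                     actions.set(CheckInk);
        if (r.notPrinting && !r.unrecognized)               actions.set(CheckPaperJam);
        return actions;
    }

    int main() {
        // One test case per decision-table rule; expected actions are taken from the table.
        // Rule 1: does not print, red light flashing, unrecognized.
        std::bitset<NumActions> expected1;
        expected1.set(CheckCable).set(CheckSoftware).set(CheckInk);
        assert(troubleshoot({true, true, true}) == expected1);

        // Rule 3: does not print, no red light, unrecognized.
        std::bitset<NumActions> expected3;
        expected3.set(CheckPower).set(CheckCable).set(CheckSoftware);
        assert(troubleshoot({true, false, true}) == expected3);

        // Rule 8: prints, no red light, recognized -> no action.
        assert(troubleshoot({false, false, false}).none());
        return 0;
    }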
Acknowledgements
These slides are based on:
○ Lecture slides by Ian Sommerville, see http://www.comp.lancs.ac.uk/computing/resources/ser/
○ Lecture slides by Sagar Naik (ECE355, University of Waterloo)
○ Lecture slides by Kostas Kontogiannis (SE465, University of Waterloo)
○ Lecture notes from Bernd Bruegge and Allen H. Dutoit, "Object-Oriented Software Engineering – Using UML, Patterns and Java"
