Document Details


2021

Dr. Noha Adly

Tags

software testing, software engineering, program testing, computer science

Summary

These lecture notes cover software testing, including program testing goals, validation testing and defect testing. The notes were created on 12/20/2021.

Full Transcript


Program testing (CSE 322 - Software Testing, Dr. Noha Adly)

- Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use.
- When you test software, you execute a program using artificial data. You check the results of the test run for errors, anomalies, or information about the program's non-functional attributes.
- Testing can reveal the presence of errors, NOT their absence (Dijkstra, 1972).
- Testing is part of a more general verification and validation process, which also includes static validation techniques.
- Topics covered: development testing, test-driven development, release testing, user testing.

Program testing goals

- To demonstrate to the developer and the customer that the software meets its requirements.
  - For custom software, this means that there should be at least one test for every requirement in the requirements document.
  - For generic software products, it means that there should be tests for all of the system features, plus combinations of these features, that will be incorporated in the product release.
- To discover situations in which the behavior of the software is incorrect, undesirable, or does not conform to its specification.
  - Defect testing is concerned with rooting out undesirable system behavior such as system crashes, unwanted interactions with other systems, incorrect computations, and data corruption.

Testing process goals

- Validation testing
  - To demonstrate to the developer and the system customer that the software meets its requirements.
  - A successful test shows that the system operates as intended.
- Defect testing
  - To discover faults or defects in the software where its behavior is incorrect or not in conformance with its specification.
  - A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system.

Validation and defect testing

- The first goal leads to validation testing: you expect the system to perform correctly using a given set of test cases that reflect the system's expected use.
- The second goal leads to defect testing: the test cases are designed to expose defects. They can be deliberately obscure and need not reflect how the system is normally used.
- There is no definite boundary between these two approaches:
  - during validation testing, you will find defects in the system;
  - during defect testing, some of the tests will show whether or not the program meets its requirements.

An input-output model of program testing

- If Ie denotes the set of inputs that cause erroneous outputs, validation testing involves testing with correct inputs that are outside Ie, while defect testing tries to find the inputs that are in the set Ie.

Verification vs Validation

- Testing is part of a broader process of software verification and validation (V & V).
- Verification: "Are we building the product right?"
  - The software should conform to its specification; it meets its functional and non-functional requirements.
- Validation: "Are we building the right product?"
  - The software should do what the user expects it to do.
- Validation is more general; it goes beyond checking conformance to the specification.

V & V confidence

- The aim of V & V is to establish confidence that the system is 'fit for purpose'.
- The level of required confidence depends on:
  - Software purpose: how critical the software is to an organisation.
  - User expectations: users may have low expectations of certain kinds of software.
  - Marketing environment: getting a product to market early may be more important than finding defects in the program.
Inspections and testing

- Software inspections are concerned with analysis of the static system representation to discover problems (static verification):
  - People examine the source representation with the aim of discovering anomalies and defects.
  - They analyze and check the system requirements, design models, the program source code, and proposed system tests.
  - Inspections do not need to execute the software.
  - They have been shown to be an effective technique for discovering program errors (60-90%).
  - They may be supplemented by tool-based document and code analysis.
- Software testing is concerned with exercising and observing product behaviour (dynamic verification):
  - The system is executed with test data and its operational behaviour is observed.

Examples of inspection checks

- Data faults
  - Are all program variables initialized before their values are used?
  - Have all constants been named?
  - Is there any possibility of buffer overflow?
- Control faults
  - For each conditional statement, is the condition correct?
  - Is each loop certain to terminate?
  - Are compound statements correctly bracketed?
  - In case statements, are all possible cases accounted for?
- Input/output faults
  - Are all input variables used?
  - Are all output variables assigned a value before they are output?
  - Can unexpected inputs cause corruption?
- Interface faults
  - Do all function and method calls have the correct number of parameters?
  - Do formal and actual parameter types match?
  - Are the parameters in the right order?
  - If components access shared memory, do they have the same model of the shared memory structure?
- Storage management faults
  - If a linked structure is modified, have all links been correctly reassigned?
  - If dynamic storage is used, has space been allocated correctly?
  - Is space explicitly de-allocated after it is no longer required?
- Exception management faults
  - Have all possible error conditions been taken into account?

Advantages of inspections over testing

- During testing, errors can mask (hide) other errors. Because inspection is a static process, you don't have to be concerned with interactions between errors.
- Incomplete versions of a system can be inspected without additional costs. If a program is incomplete, you need to develop specialized test harnesses to test the parts that are available.
- As well as searching for program defects, an inspection can also consider broader quality attributes of a program, such as compliance with standards, inappropriate algorithms, and poor programming style leading to problems in maintenance.

Inspections and testing are complementary

- Inspections and testing are complementary, not opposing, verification techniques. Both should be used during the V & V process.
- Inspections can check conformance with a specification but not conformance with the customer's real requirements.
- Inspections cannot check non-functional characteristics such as performance, usability, etc.
- Inspections cannot discover defects caused by unexpected interactions of different components or by timing problems.

A model of the software testing process [figure]

Stages of testing

- Development testing: the system is tested during development to discover bugs and defects. System designers and programmers are likely to be involved in the testing process.
- Release testing: a separate testing team tests a complete version of the system before it is released to users. The aim is to check that the system meets the requirements of the system stakeholders.
- User testing: users or potential users of a system test the system in their own environment. Acceptance testing is one type of user testing, where the customer formally tests a system to decide if it should be accepted from the system supplier.

Development testing

- Development testing includes all testing activities carried out by the team developing the system.
- There are three stages of development testing:
  - Unit testing: individual program units or object classes are tested. The focus is on testing the functionality of objects or methods.
  - Component testing: several individual units are integrated to create composite components. The focus is on testing the component interfaces that provide access to the component functions.
  - System testing: some or all of the components in a system are integrated and the system is tested as a whole. The focus is on testing component interactions.

Unit testing

- Unit testing is the process of testing individual components in isolation. It is a defect testing process.
- Units may be:
  - individual functions or methods within an object;
  - object classes with several attributes and methods;
  - composite components with defined interfaces used to access their functionality.
- Your tests should be calls to these routines with different input parameters.

Object class testing

- Complete test coverage of a class involves:
  - testing all operations associated with an object;
  - setting and checking the value of all of the object's attributes;
  - exercising the object in all possible states, simulating all events that cause a state change.
- Inheritance makes it more difficult to design object class tests, as the information to be tested is not localised:
  - The operation that is inherited may make assumptions about other operations and attributes.
  - These assumptions may not be valid in some subclasses that inherit the operation.
  - You therefore have to test the inherited operation everywhere that it is used.

The weather station object interface and state diagram

- You need to define test cases for reportWeather(), reportStatus(), restart(), shutdown(), etc.
- Using the state model, you identify sequences of state transitions that have to be tested and define event sequences to force these transitions.
- You should also test sequences of operations, e.g. restart followed by shutdown.

Weather station testing

- Using the state model, identify sequences of state transitions to be tested and the event sequences that cause these transitions. For example:
  - Shutdown → Running → Shutdown
  - Configuring → Running → Testing → Transmitting → Running
  - Running → Collecting → Running → Summarizing → Transmitting → Running

Automated testing

- Whenever possible, unit testing should be automated so that tests are run and checked without manual intervention.
- In automated unit testing, you make use of a test automation framework (such as JUnit) to write and run your program tests.
- Unit testing frameworks provide generic test classes that you extend to create specific test cases. They can then run all of the tests that you have implemented and report, often through a GUI, on the success or otherwise of the tests.
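The notes name JUnit as an example automation framework. As a hedged sketch, the same xUnit pattern in Python's built-in unittest module might look like this; the Calculator class is a made-up example for illustration, not something from the lecture:

```python
import unittest

class Calculator:
    """Toy class under test (hypothetical, for illustration only)."""
    def add(self, a, b):
        return a + b

class CalculatorTest(unittest.TestCase):
    """A specific test case built by extending the framework's generic TestCase."""
    def test_add_positive_numbers(self):
        calc = Calculator()          # setup: initialize the object under test
        result = calc.add(2, 3)      # call: invoke the method being tested
        self.assertEqual(result, 5)  # assertion: compare with the expected result
```

Running `python -m unittest` discovers and executes every `test_*` method and reports each success or failure, which is the automated, no-manual-intervention workflow the slide describes.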
Automated test components

- An automated test has three parts:
  - A setup part, where you initialize the system with the test case, namely the inputs and expected outputs.
  - A call part, where you call the object or method to be tested.
  - An assertion part, where you compare the result of the call with the expected result. If the assertion evaluates to true, the test has been successful; if false, it has failed.
- When the objects you are testing have dependencies on other objects that are not yet implemented, you use mock objects:
  - e.g. simulating a database if the object executes a database call;
  - if the system intends to take action at certain times, the mock object can return those times.

Choosing unit test cases

- Test cases are effective when:
  - they show that, when used as expected, the component that you are testing does what it is supposed to do;
  - if there are defects in the component, these are revealed by the test cases.
- So, you should design two types of unit test case:
  - The first should reflect normal operation of a program and should show that the component works as expected.
  - The other kind should be based on testing experience of where common problems arise. It should use abnormal inputs to check that these are properly processed and do not crash the component.

Testing strategies

- Two strategies that can be effective in helping you choose test cases are:
  - Partition testing, where you identify groups of inputs that have common characteristics and should be processed in the same way. You should choose tests from within each of these groups.
  - Guideline-based testing, where you use testing guidelines to choose test cases. These guidelines reflect previous experience of the kinds of errors that programmers often make when developing components.

Partition testing

- Input data and output results often fall into sets with common characteristics, e.g. positive numbers, menu selections, etc.
- Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member.
- Test cases should be chosen from each partition.
- A good rule of thumb for test-case selection is to choose test cases on the boundaries of the partitions, plus cases close to the midpoint of each partition.

Equivalence partitioning

- You identify partitions by using the program specification. Example: a program specification states that the program accepts four to ten inputs, which are five-digit integers greater than 10,000. You use this information to identify the input partitions and possible test input values.

Testing guidelines (sequences)

- Guidelines encapsulate knowledge of what kinds of test cases are effective for discovering errors. For example, when you are testing programs with sequences, arrays, or lists, guidelines that could help reveal defects include:
  - Test software with sequences that have only a single value.
  - Use sequences of different sizes in different tests.
  - Derive tests so that the first, middle, and last elements of the sequence are accessed.
  - Test with sequences of zero length.

General testing guidelines

- Choose inputs that force the system to generate all error messages.
- Design inputs that cause input buffers to overflow.
- Repeat the same input or series of inputs numerous times.
- Force invalid outputs to be generated.
- Force computation results to be too large or too small.
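A minimal sketch of how the equivalence-partitioning example above could drive test selection. The validate_inputs function is invented for illustration; the partitions and boundary values follow the spec quoted in the notes (four to ten inputs, each a five-digit integer greater than 10,000):

```python
def validate_inputs(values):
    """Hypothetical component under test: accepts 4 to 10 inputs,
    each a five-digit integer greater than 10,000."""
    if not 4 <= len(values) <= 10:
        return False
    return all(isinstance(v, int) and 10000 < v <= 99999 for v in values)

# Test values chosen per the notes' rule of thumb: partition boundaries
# plus cases near the midpoint of each partition.
valid_mid = [50000] * 7       # mid-count, mid-value: the "normal" partition
valid_edges = [10001] * 4     # lower boundaries: smallest count, smallest value
too_few = [50000] * 3         # invalid count partition (fewer than 4)
too_many = [50000] * 11       # invalid count partition (more than 10)
out_of_range = [9999] * 5     # invalid value partition (not five digits > 10,000)

assert validate_inputs(valid_mid)
assert validate_inputs(valid_edges)
assert not validate_inputs(too_few)
assert not validate_inputs(too_many)
assert not validate_inputs(out_of_range)
```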
Component testing

- Software components are often composite components that are made up of several interacting objects. For example, in the weather station system, the reconfiguration component includes objects that deal with each aspect of the reconfiguration.
- You access the functionality of these objects through the defined component interface.
- Testing composite components should therefore focus on showing that the component interface behaves according to its specification. You can assume that unit tests on the individual objects within the component have been completed.

Interface testing

- The objectives are to detect faults due to interface errors or invalid assumptions about interfaces.
- Different types of interface can be involved:
  - Parameter interfaces: data passed from one method or procedure to another, e.g. methods in an object.
  - Shared memory interfaces: a block of memory is shared between procedures or functions, e.g. in embedded systems.
  - Procedural interfaces: one component encapsulates a set of procedures to be called by other components, e.g. objects and reusable components.
  - Message passing interfaces: one component requests services from other components by passing a message, e.g. client-server systems.

Interface errors

- Interface misuse: a calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order.
- Interface misunderstanding: a calling component embeds incorrect assumptions about the behaviour of the called component, e.g. a binary search method called with an unordered array.
- Timing errors: the called and the calling component operate at different speeds and out-of-date information is accessed, e.g. producer-consumer problems.
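The binary-search example of interface misunderstanding above can be made concrete. This sketch uses Python's bisect module (chosen here as an illustration, not something from the notes) to show how the search contract silently breaks when the caller's "the array is sorted" assumption does not hold:

```python
import bisect

def contains(sorted_list, value):
    """Binary search: correct ONLY if sorted_list is actually sorted.
    That precondition is the interface assumption a caller can get wrong."""
    i = bisect.bisect_left(sorted_list, value)
    return i < len(sorted_list) and sorted_list[i] == value

data = [2, 9, 4, 7, 1]  # caller "forgot" the sorted precondition

assert contains(sorted(data), 7)   # interface respected: 7 is found
assert not contains(data, 7)       # interface misunderstanding: 7 is silently missed
```

Note that the faulty call does not crash; it just returns a wrong answer, which is exactly why interface tests should probe the assumptions a caller makes about the called component.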
Interface testing guidelines

- Design tests so that parameters to a called procedure are at the extreme ends of their ranges.
- Always test pointer parameters with null pointers.
- Design tests which cause the component to fail.
- Use stress testing in message passing systems.
- In shared memory systems, vary the order in which components are activated.

System testing

- System testing during development involves integrating components to create a version of the system and then testing the integrated system.
- The focus in system testing is testing the interactions between components.
- System testing checks that components are compatible, interact correctly, and transfer the right data at the right time across their interfaces.
- System testing tests the emergent behavior of a system.

Use-case testing

- The use cases developed to identify system interactions are an effective basis for system testing.
- Each use case usually involves several system components, so testing the use case forces these interactions to occur.
- The sequence diagrams associated with the use case document the components and interactions that are being tested. You can use the diagrams to identify the operations to be tested and to help design the test cases to execute the tests.

Collect weather data sequence chart

- Issuing a request for a report results in the execution of the following thread:
  SatComms:request → WeatherStation:reportWeather → Commslink:Get(summary) → WeatherData:summarize
Test cases derived from the sequence diagram

- Sequence diagrams help in designing specific test cases, as they show the inputs required and the outputs created:
  - An input of a request for a report should have an associated acknowledgement, and a report should ultimately be returned from the request. During testing, you should create summarized data that can be used to check that the report is correctly organized.
  - An input request for a report to WeatherStation results in a summarized report being generated. This can be tested by creating raw data corresponding to the summary that you have prepared for the test of SatComms and checking that the WeatherStation object correctly produces this summary. This raw data is also used to test the WeatherData object.
  - The diagram does not show exceptions. A complete use case/scenario test must take these exceptions into account and ensure that they are correctly handled.

Testing policies

- Exhaustive system testing is impossible, so testing policies which define the required system test coverage may be developed.
- Examples of testing policies:
  - All system functions that are accessed through menus should be tested.
  - Combinations of functions (e.g. text formatting) that are accessed through the same menu must be tested. For example, using footnotes with a multicolumn layout may cause incorrect layout of the text.
  - Where user input is provided, all functions must be tested with both correct and incorrect input.

Test-driven development

- Test-driven development (TDD) is an approach to program development in which you interleave testing and code development.
- Tests are written before code, and 'passing' the tests is the critical driver of development.
- You develop code incrementally, along with a test for that increment. You don't move on to the next increment until the code that you have developed passes its test.
- TDD was introduced as part of agile methods such as Extreme Programming. However, it can also be used in plan-driven development processes.

TDD process activities

- Start by identifying the increment of functionality that is required. This should normally be small and implementable in a few lines of code.
- Write a test for this functionality and implement it as an automated test.
- Run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality, so the new test will fail.
- Implement the functionality and re-run the test.
- Once all tests run successfully, you move on to implementing the next chunk of functionality.

Benefits of test-driven development

- Code coverage: every code segment that you write has at least one associated test.
- Regression testing: a regression test suite is developed incrementally as a program is developed, so it checks that new changes have not introduced new bugs.
- Simplified debugging: when a test fails, it should be obvious where the problem lies; the newly written code needs to be checked and modified.
- System documentation: the tests themselves are a form of documentation that describe what the code should be doing.

Regression testing

- Regression testing is testing the system to check that changes have not 'broken' previously working code.
- In a manual testing process, regression testing is expensive; with automated testing, it is simple and straightforward. All tests are rerun every time a change is made to the program.
- Tests must run 'successfully' before the change is committed.
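The TDD activities above can be sketched as one red-green cycle. The leap-year increment is invented for illustration; in real TDD the failing test would be written and run before the implementation exists, which a linear listing cannot fully show:

```python
# Steps 1-2: identify a small increment and write its automated test first.
def test_leap_year():
    assert is_leap_year(2024)        # divisible by 4
    assert not is_leap_year(1900)    # century year not divisible by 400
    assert is_leap_year(2000)        # century year divisible by 400

# Step 3: running the test at this point would fail, because
# is_leap_year does not exist yet. That failure is expected.

# Step 4: implement just enough functionality to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 5: re-run the whole suite (it doubles as a regression suite)
# before moving on to the next increment.
test_leap_year()
```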
Release testing

- Release testing is the process of testing a particular release of a system that is intended for use outside of the development team:
  - by customers and users;
  - by other teams that are developing related systems (in complex projects);
  - for software products, the release could go to product management, preparing it for sale.
- The primary goal of the release testing process is to convince the supplier of the system that it is good enough for use. Release testing therefore has to show that the system delivers its specified functionality, performance, and dependability, and that it does not fail during normal use.

Release testing versus system testing

- Release testing is a form of system testing, with important differences:
  - A separate team that has not been involved in the system development should be responsible for release testing.
  - System testing by the development team should focus on discovering bugs in the system (defect testing); the objective of release testing is to check that the system meets its requirements and is good enough for external use (validation testing).
- Release testing is usually a black-box testing process:
  - Tests are derived only from the system specification.
  - The system's behavior can only be determined by studying its inputs and outputs.
  - It is also called functional testing, because the tester is only concerned with the functionality and not the implementation.

Release testing techniques

- Release testing involves:
  - requirements-based testing;
  - scenario testing;
  - performance testing.

Requirements-based testing

- Requirements-based testing involves examining each requirement and developing a test or tests for it. It is validation rather than defect testing.
- Example: Mentcare system requirements:
  - If a patient is known to be allergic to any particular medication, then prescription of that medication shall result in a warning message being issued to the system user.
  - If a prescriber chooses to ignore an allergy warning, they shall provide a reason why this has been ignored.

Requirements-based testing: example tests

1. Set up a patient record with no known allergies. Prescribe medication for allergies that are known to exist. Check that a warning message is not issued by the system.
2. Set up a patient record with a known allergy. Prescribe the medication that the patient is allergic to, and check that the warning is issued by the system.
3. Set up a patient record in which allergies to two or more drugs are recorded. Prescribe both of these drugs separately and check that the correct warning for each drug is issued.
4. Prescribe two drugs that the patient is allergic to. Check that two warnings are correctly issued.
5. Prescribe a drug that issues a warning and overrule that warning. Check that the system requires the user to provide information explaining why the warning was overruled.

Scenario testing

- Scenario testing is an approach to release testing where you use scenarios to develop test cases for the system.
- A scenario is a story that describes one way in which the system might be used.
- Scenarios should be realistic, and real system users should be able to relate to them.
- If you have used scenarios or user stories as part of the requirements engineering process (Ch 4), you may be able to reuse them as testing scenarios.
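The first two Mentcare requirement tests could be automated roughly as follows. Everything here is a hypothetical, minimal model written for illustration; none of the class or method names come from the real Mentcare system:

```python
# Hypothetical model of the allergy-warning requirement (illustration only).
class PatientRecord:
    def __init__(self, allergies=()):
        self.allergies = set(allergies)

class PrescriptionSystem:
    def prescribe(self, record, drug):
        """Return a warning message if the patient is allergic, else None."""
        if drug in record.allergies:
            return f"WARNING: patient is allergic to {drug}"
        return None

system = PrescriptionSystem()

# Requirement test 1: record with no known allergies -> no warning issued.
assert system.prescribe(PatientRecord(), "aspirin") is None

# Requirement test 2: record with a known allergy -> warning is issued.
record = PatientRecord(allergies=["aspirin"])
assert system.prescribe(record, "aspirin") is not None
```

The remaining tests (multiple allergies, multiple warnings, overruling with a reason) would extend the same pattern, each tracing directly back to a sentence in the requirement.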
A usage scenario for the Mentcare system

George is a nurse who specializes in mental healthcare. One of his responsibilities is to visit patients at home to check that their treatment is effective and that they are not suffering from medication side effects.

On a day for home visits, George logs into the Mentcare system and uses it to print his schedule of home visits for that day, along with summary information about the patients to be visited. He requests that the records for these patients be downloaded to his laptop. He is prompted for his key phrase to encrypt the records on the laptop.

One of the patients that he visits is Jim, who is being treated with medication for depression. Jim feels that the medication is helping him but believes that it has the side effect of keeping him awake at night. George looks up Jim's record and is prompted for his key phrase to decrypt the record. He checks the drug prescribed and queries its side effects. Sleeplessness is a known side effect, so he notes the problem in Jim's record and suggests that he visit the clinic to have his medication changed. Jim agrees, so George enters a prompt to call him when he gets back to the clinic to make an appointment with a physician. George ends the consultation and the system re-encrypts Jim's record.

After finishing his consultations, George returns to the clinic and uploads the records of the patients visited to the database. The system generates a call list for George of those patients whom he has to contact for follow-up information and to make clinic appointments.

Scenario-based testing

- Features tested by this scenario:
  - authentication by logging on to the system;
  - downloading and uploading of specified patient records to a laptop;
  - home visit scheduling;
  - encryption and decryption of patient records on a mobile device;
  - record retrieval and modification;
  - links with the drugs database that maintains side-effect information;
  - the system for call prompting.
- As a tester, you should run through the scenario:
  - observing how the system behaves in response to different inputs;
  - making deliberate mistakes and checking the response of the system to errors;
  - noting performance problems, e.g. encryption;
  - testing several requirements within the same scenario.

Performance testing

- Part of release testing may involve testing the emergent properties of a system, such as performance and reliability.
- Performance tests have to be designed to ensure that the system can process its intended load.
- Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.
- As with other types of testing, performance testing is concerned both with demonstrating that the system meets its requirements and with discovering problems and defects in the system.

Performance testing: operational profile

- To test whether performance requirements are being achieved, you need to construct an operational profile: a set of tests that reflect the actual mix of work that will be handled by the system.
- If 90% of the transactions in a system are of type A, 5% of type B, and the remainder of types C, D, and E, then you have to design the operational profile so that the vast majority of tests are of type A. Otherwise, you will not get an accurate test of the operational performance of the system.
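The operational-profile idea above can be sketched as a deterministic allocation of a test budget. The 90% and 5% figures come from the notes; the exact split of the remaining 5% across types C, D, and E is an assumption made here for illustration:

```python
# Operational profile: fraction of real-world transactions per type.
# A and B follow the notes; the C/D/E split is an illustrative assumption.
profile = {"A": 0.90, "B": 0.05, "C": 0.02, "D": 0.02, "E": 0.01}

def allocate_tests(profile, total):
    """Assign performance-test counts proportional to the operational profile,
    so the test mix mirrors the actual mix of work the system will handle."""
    return {t: round(p * total) for t, p in profile.items()}

counts = allocate_tests(profile, 1000)
# The vast majority of the 1000 tests are type A, as the profile requires.
```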
Noha Adly CSE 322 - Software Testing 59 58 59 Performance testing – Stress test  Stress testing is a form of performance testing where the system is deliberately overloaded to test its failure behavior - design tests around the limits of the system  If you are testing a transaction processing system that is designed to process up to 300 transactions /sec. You start by testing this system with fewer than 300 trans/ sec. You then gradually increase the load on User Testing the system beyond 300 trans/ sec until it is well beyond the maximum design load of the system and the system fails.  Stress testing helps you do two things:  Test the failure behavior of the system. It should not cause data corruption or unexpected loss of user services causes it to “fail-soft”  Reveal defects that only show up when the system is fully loaded  Stress testing is relevant to distributed systems which exhibit severe degradation when they are heavily loaded. Dr. Noha Adly CSE 322 - Software Testing 60 Dr. Noha Adly CSE 322 - Software Testing 61 60 61 12/20/2021 User testing Types of user testing  User or customer testing is a stage in the testing process  Alpha testing in which users or customers provide input and advice on  Users of the software work with the development team to test system testing. early releases of the software at the developer’s site.  User testing is essential, even when comprehensive  Beta testing system and release testing have been carried out.  A release of the software is made available to users to allow  The reason for this is that influences from the user’s working them to experiment and to raise problems that they discover with environment have a major effect on the reliability, performance, the system developers. usability and robustness of a system. These cannot be replicated  Acceptance testing in a testing environment. 
- Acceptance testing: customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. User acceptance testing (UAT) involves commissioning and payments.

The acceptance testing process

There are six stages in the acceptance testing process:

1. Define acceptance criteria.
   - The acceptance criteria should be part of the system contract.
   - This is difficult, as detailed requirements may not be available at that point.
2. Plan acceptance testing.
   - Decide on the resources, time, and budget for acceptance testing and establish a testing schedule.
   - The plan should discuss the required coverage of the requirements and the order in which system features are tested.
   - It should define risks to the testing process, such as system crashes and inadequate performance, and discuss how these risks can be mitigated.
3. Derive acceptance tests.
   - Tests have to be designed to check whether or not a system is acceptable.
   - Acceptance tests should aim to test both the functional and non-functional characteristics (e.g. performance) of the system.
4. Run acceptance tests.
   - The agreed acceptance tests are executed on the system.
   - A user testing environment needs to be set up to run these tests.
5. Negotiate test results.
   - It is very unlikely that all of the defined acceptance tests will pass, so the developer and the customer have to negotiate to decide if the system is good enough to be used.
   - They must also agree on how the developer will fix the identified problems.
6. Reject/accept the system.
   - Developers and customer decide whether or not the system should be accepted. If not, further development is required to fix the identified problems; once complete, the acceptance testing phase is repeated.
   - The outcome of negotiations may be conditional acceptance of the system.

Agile methods and acceptance testing

- In agile methods, the user/customer is part of the development team and is responsible for making decisions on the acceptability of the system (an alpha tester).
- The user provides the system requirements in terms of user stories.
- Tests are defined by the user/customer (checking whether the software supports the user stories) and are integrated with other tests in that they are run automatically when changes are made. There is no separate acceptance testing process.
- The main problem here is whether or not the embedded user is 'typical' and can represent the interests of all system stakeholders:
  - it is difficult to find such users, so the acceptance tests may not be a reflection of how the system is actually used;
  - automated testing limits interactive testing.
- Many companies therefore use a mix of agile and more traditional testing: the system may be developed using agile techniques, but separate acceptance testing is used for major releases.
