Software Quality Assurance
Summary
This document provides an overview of software quality assurance (SQA). It covers the definition of software quality and the factors that can affect meeting a client's true requirements, and discusses quality from both the user's and the developer's viewpoints.
Full Transcript
**MODULE 8 The Software Quality Assurance**

**Software Quality**

The first part of the definition comes from the perspective of Crosby, which reassures the software engineer with its strictness: "If I deliver all that is specified in the requirements document, then I will have delivered quality software." The second part comes from the quality perspective of Juran, which specifies that one must satisfy the client's needs, wants, and expectations, even those not necessarily described in the requirements documentation. These two points of view force the software engineer to establish an agreement that describes the client's requirements and attempts to faithfully reflect the client's needs, wants, and expectations. Explicit functional characteristics need to be described, of course, but so do the implicit characteristics expected of any professionally developed piece of software.

Software quality is perceived differently depending on each perspective, including those of clients, maintainers, and users. Sometimes it is necessary to differentiate between the client, who is responsible for acquiring the software, and the users, who will ultimately use it. Users seek, among other things, functionality, performance, efficiency, accurate results, reliability, and usability. Clients typically focus more on costs and deadlines, with a view to the best solution at the best price. This can be considered an external point of view on quality. To draw a parallel with the automobile industry, the user (driver) will go to the garage that provides fast service, quality work, and a good price; he has a non-technical point of view.
**Factors that can Affect Meeting the True Requirements of the Client**

| Type of requirement | Origin of the expression | Main causes of difference |
|---|---|---|
| **True** | Mind of the stakeholders | Unfamiliarity with true requirements; instability of requirements; different viewpoints of ordering party and users |
| **Expressed** | User requirements | Incomplete specification; lack of standards; inadequate or difficult communication with the ordering party; insufficient quality control |
| **Specified** | Software Specification Document | Inappropriate use of management and production methods, techniques, and tools |
| **Achieved** | Documents and product code | Insufficient tests; insufficient quality control techniques |

**Software Quality Assurance**

A set of activities that **define** and **assess** the adequacy of software processes to provide evidence that establishes confidence that the software processes are appropriate for, and produce software products of suitable quality for, their intended purposes. A key attribute of SQA is the objectivity of the SQA function with respect to the project. The SQA function may also be organizationally independent of the project; that is, free from technical, managerial, and financial pressures from the project.
The term "software quality assurance" can be a bit misleading. The implementation of software engineering practices can only "assure" the quality of a project, since the term "assurance" refers to "grounds for justified confidence that a claim has been or will be achieved." In fact, QA is implemented to reduce the risk of developing software that does not meet the wants, needs, and expectations of stakeholders within budget and schedule.

**This perspective of QA, in terms of software development, involves the following elements:**

- the need to plan the quality aspects of a product or service;
- systematic activities that tell us, throughout the software life cycle, whether certain corrections are required;
- a quality system that is a complete system and must, in the context of quality management, allow for the setting up of a quality policy and continuous improvement;
- QA techniques that demonstrate the level of quality reached so as to instill confidence in users; and lastly,
- demonstrating that the quality requirements defined for the project, for the change, or by the software department have been met.

**Software Quality Assurance through Prototyping**

Prototyping affords both the engineer and the user a chance to "test drive" software to ensure that it is, in fact, what the user needs. Engineers also improve their understanding of the technical demands upon, and the consequent feasibility of, a proposed system. Prototyping is the process of developing a trial version of a system (a prototype) or of its components or characteristics in order to clarify the requirements of the system or to reveal critical design considerations. The use of prototyping can be an effective technique for correcting weaknesses of the traditional "waterfall" software development life cycle by educating the engineers and users.
The basic idea here is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed from the currently known requirements. By using the prototype, the client can get an "actual feel" of the system, since interacting with it enables the client to better understand the requirements of the desired system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. Prototypes are usually not complete systems, and many details are not built into them; the goal is to provide a system with the overall functionality.

**Advantages of the Prototype model:**

- Users are actively involved in the development.
- Since a working model of the system is provided, the users get a better understanding of the system being developed.
- Errors can be detected much earlier.
- Quicker user feedback is available, leading to better solutions.
- Missing functionality can be identified easily.
- Confusing or difficult functions can be identified.
- Requirements are validated through the quick implementation of an incomplete but functional application.

**Disadvantages of the Prototype model:**

- Encourages an "implement and then repair" way of building systems.
- In practice, this methodology may increase the complexity of the system, as its scope may expand beyond the original plans.
- An incomplete application may cause the application not to be used as the full system was designed.
- Incomplete or inadequate problem analysis.

**When to use the Prototype model:**

The Prototype model should be used when the desired system needs to have a lot of interaction with the end users. Online systems and web interfaces, which typically have a very high amount of interaction with end users, are best suited for the Prototype model.
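To make the idea of a throwaway prototype concrete, here is a minimal, invented sketch: a shipping-cost feature is faked with a hard-coded lookup table rather than a real carrier service, so the client can try the overall flow and react before any detail is actually built. The names `FAKE_RATES` and `quote_shipping` are illustrative assumptions, not part of any real system.

```python
# Throwaway prototype: hard-coded rates stand in for a real carrier API.
FAKE_RATES = {"standard": 4.99, "express": 12.99}

def quote_shipping(option):
    """Prototype stub: returns a canned rate so users can try the flow."""
    if option not in FAKE_RATES:
        return None  # the real system would validate input and suggest options
    return FAKE_RATES[option]

print(quote_shipping("express"))    # -> 12.99
print(quote_shipping("overnight"))  # -> None (detail not built into the prototype)
```

The point is not the code itself but the feedback it elicits: the client's reaction ("what about overnight shipping?") refines the requirements before the prototype is thrown away.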
It might take a while for a system to be built that is easy to use and needs minimal training for the end user. Prototyping ensures that end users constantly work with the system and provide feedback, which is incorporated into the prototype, resulting in a usable system. Prototypes are excellent for designing good human-computer interface systems.

**MODULE 9 Models of Prototyping**

**Models of Prototyping and Tools**

**Prototyping Model**

The prototyping model is a systems development method in which a prototype is built, tested, and then reworked as necessary until an acceptable outcome is achieved, from which the complete system or product can be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users.

**This model has the following six SDLC phases:**

**Step 1: Requirements Gathering and Analysis.** A prototyping model starts with requirements analysis. In this phase, the requirements of the system are defined in detail. During the process, the users of the system are interviewed to learn what their expectations of the system are.

**Step 2: Quick Design.** The second phase is a preliminary design, or quick design. In this stage, a simple design of the system is created. It is not a complete design, but it gives the user a brief idea of the system. The quick design helps in developing the prototype.

**Step 3: Build a Prototype.** In this phase, an actual prototype is designed based on the information gathered from the quick design. It is a small working model of the required system.

**Step 4: Initial User Evaluation.** In this stage, the proposed system is presented to the client for an initial evaluation. This helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.
**Step 5: Refining the Prototype.** If the user is not happy with the current prototype, it must be refined according to the user's feedback and suggestions. This phase does not end until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.

**Step 6: Implement Product and Maintain.** Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to production. The system then undergoes routine maintenance to minimize downtime and prevent large-scale failures.

**Types of Prototype Models**

There are a few types of prototype models that development teams can implement based on their needs:

**Rapid throwaway** - This method involves exploring ideas by quickly developing a prototype based on preliminary requirements, which is then revised through customer feedback. The name refers to the fact that each prototype is completely discarded and may not be a part of the final product.

**Evolutionary** - This approach uses a continuous, working prototype that is refined after each iteration of customer feedback. Because each prototype is not started from scratch, this method saves time and effort.

**Incremental** - This technique breaks the concept for the final product into smaller pieces, and prototypes are created for each one. In the end, these prototypes are merged into the final product.

**Extreme** - This prototype model is used specifically for web development. All web prototypes are built in HTML with a services layer and are then integrated into the final product.

**Advantages of the Prototyping Model**

- Users are actively involved in development; therefore, errors can be detected in the initial stage of the software development process.
- Missing functionality can be identified easily, which helps to reduce the risk of failure; prototyping is also considered a risk-reduction activity.
- It helps team members to communicate effectively.
- Customer satisfaction is high because the customer can feel the product at a very early stage.

**Disadvantages of the Prototyping Model**

- Prototyping is a slow and time-consuming process.
- The cost of developing a prototype can be wasted, since the prototype is ultimately thrown away.
- Prototyping may encourage excessive change requests.
- Customers may not be willing to participate in the iteration cycle for a long duration.
- There may be far too many variations in the software requirements when the prototype is evaluated by the customer each time.

**MODULE 10 Overview of Software Review and Inspection**

**Software Review**

In software engineering, a software review is the examination of a work product by trained people, who inspect the software to find its positive and negative aspects. It is a complete process that results in carefully examining a software product in a meeting or other event. Software review is an important part of the SDLC that assists software engineers in validating the quality, functionality, and other vital features and components of the software. It involves examining the software product and ensuring that it meets the requirements stated by the client. It is a systematic examination of a document by one or more individuals who work together to find and resolve errors and defects in the software during the early stages of the Software Development Life Cycle (SDLC). Usually performed manually, software review is used to verify various documents, such as requirements, system designs, code, test plans, and test cases.

**Objectives of Software Review:** The objectives of software review are: 1. To improve the productivity of the development team. 2.
To make the testing process time- and cost-effective. 3. To produce the final software with fewer defects. 4. To eliminate inadequacies.

**Process of Software Review:**

1. Entry Evaluation
2. Management Preparation
3. Review Planning
4. Preparation
5. Examination and Exit Evaluation

**Types of Software Reviews:**

There are mainly three types of software reviews:

**1. Software Peer Review**

Peer review is the process of assessing the technical content and quality of the product, and it is usually conducted by the author of the work product along with other developers. Peer review is performed in order to examine or resolve defects in the software, whose quality is also checked by other members of the team.

**Peer review has the following types:**

I. **Code Review**: Computer source code is examined in a systematic way.
II. **Pair Programming**: A continuous form of code review in which two developers develop code together at the same workstation.
III. **Walkthrough**: Members of the development team are guided by the author and other interested parties, and the participants ask questions and make comments about defects.
IV. **Technical Review**: A team of highly qualified individuals examines the software product for its suitability for the client's use and identifies technical defects against specifications and standards.
V. **Inspection**: The reviewers follow a well-defined process to find defects.

**2. Software Management Review**

Software Management Review evaluates the work status. In this review, decisions regarding downstream activities are taken.

**3.
Software Audit Review**

Software Audit Review is a type of external review in which one or more critics who are not part of the development team organize an independent inspection of the software product and its processes to assess their compliance with stated specifications and standards. It is carried out by people at the managerial level.

**Advantages of Software Review:**

- Defects can be identified at an earlier stage of development (especially in formal reviews).
- Earlier inspection also reduces the maintenance cost of the software.
- It can be used to train technical authors.
- It can be used to remove process inadequacies that encourage defects.

**Software Inspection**

Software inspection was developed at IBM in the early 1970s, when it was noticed that testing alone was not sufficient to attain high-quality software for large applications. Inspection is used to find defects in the code and remove them efficiently; this prevents defects from propagating and improves software quality. The software inspection method has achieved a very high level of efficiency in removing defects.

**Software Inspection Process**

The inspection process was developed in the mid-1970s and later extended and revised. The process must have entry criteria that determine whether the inspection is ready to begin; this prevents incomplete products from entering the inspection process. Entry criteria can include items such as a spell-check of the document. The stages of the software inspection process are:

1. **Planning** -- The moderator plans the inspection.
2. **Overview Meeting** -- The background of the work product is described by the author.
3. **Preparation** -- The work product is examined by each inspector to identify possible defects.
4. **Inspection Meeting** -- The reader reads the work product part by part during this meeting, and the inspectors point out the faults of each part.
5.
**Rework** -- After the inspection meeting, the author changes the work product according to the rework plans.
6. **Follow-Up** -- The changes made by the author are checked to make sure that everything is correct.

**MODULE 11 Code Reviewing and Software Inspection Techniques**

**Code Reviews and Inspection**

"Our objective with Inspections is to reduce the Cost of Quality by finding and removing defects earlier and at a lower cost. While some testing will always be necessary, we can reduce the costs of test by reducing the volume of defects propagated to test." - Ron Radice (2002)

"When you catch bugs early, you also get fewer compound bugs. Compound bugs are two separate bugs that interact: you trip going downstairs, and when you reach for the handrail it comes off in your hand." - Paul Graham (2001)

The main objective in software development is to get a working program that meets all the requirements and has no defects; ideally, the code is perfect, with no errors, when you deliver it. Software quality assurance is all about trying to get as close to that perfection as possible within the given time and budget. Software quality is usually discussed from two different perspectives: the user's and the developer's.

From the user's perspective, quality has a number of characteristics, things that your program must do in order to be accepted by the user, among which are the following:

**Correctness**: The software has to work, period.
**Usability**: It has to be easy to learn and easy to use.
**Reliability**: It has to stay up and be available when you need it.
**Security**: The software has to prevent unauthorized access and protect your data.
**Adaptability**: It should be easy to add new features.

From the developer's perspective, things are a bit different. The developer wants to see the following:

**Maintainability**: It has to be easy to make changes to the software.
**Portability**: It has to be easy to move the software to a different platform.
**Readability**: Many developers won't admit this, but you do need to be able to read the code.
**Understandability**: The code needs to be designed in such a way that a new developer can understand how it all hangs together.
**Testability**: Well, at least the testers think your code should be easy to test. Code that's created in a modular fashion, with short functions that do only one thing, is much easier to understand and test than code that is all one big main() function.

Software Quality Assurance (SQA) has three legs to it:

**Testing**: Finding the errors that surface while your program is executing, also known as dynamic analysis.
**Debugging**: Getting all the obvious errors out of your code, the ones that are found by testing it.
**Reviews**: Finding the errors that are inherently in your code as it sits there, also known as static analysis.

Many developers think that testing is the way to quality, but testing is limited. It can't explore every code path or test every possible data combination, and some of the tests themselves are flawed. As Edsger Dijkstra famously said, "... program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence." Reviewing your code, reading it and looking for errors on the page, provides another mechanism for making sure you've implemented the user's requirements and the resulting design correctly. In fact, most development organizations that use a plan-driven methodology will not only review code, they'll also review the requirements document, architecture, design specification, test plan, the tests themselves, and user documentation: in short, all the work products produced by the software development organization.

**Walkthroughs, Reviews, and Inspections**

Testing alone is not enough; it typically finds only about 50% of the errors in a program.
Adding some type of code review to the testing regimen can bring that percentage up to 93-99%. Three types of reviews are typically done: walkthroughs, code reviews, and inspections. These three work their way up from very informal techniques to very formal methodologies.

**Walkthroughs**

Walkthroughs, also known as desk checks or code reads, are the least formal type of review. Walkthroughs are normally used to confirm small changes to code, say a line or two, that you've just made to fix an error. If you've just added a new method to a class, or you've changed more than about 25 or 30 lines of code, don't do a walkthrough; do a code review instead. Walkthroughs involve two, or at most three, people: the author of the code and the reviewer. The author's job in a walkthrough is to explain to the reviewer what the change is supposed to do and to point out where the change was made. The reviewer's job is to understand the change and then read the code. Once the reviewer reads the code, they make one of two judgments: either they agree that the change is correct, or they don't. If not, the author has to go back, fix the code again, and then do another walkthrough. If the reviewer thinks the change is correct, the author can integrate the changed code back into the code base for integration testing.

**Code Reviews**

A code review is somewhat more formal than a walkthrough. Code reviews are what most software developers do. You should always do a code review if you've changed a substantial amount of code, or if you've added more than just a few lines of new code to an existing program. Code reviews are real meetings, with usually between three and five attendees, each of whom should bring a different perspective to the meeting. The moderator of the code review is usually the author.
There should be one or more developers at the meeting, someone who's working on the same project as the author. There should be a tester at the code review. Finally, there should be an experienced developer present who's not on the same project as the author. Managers are not allowed at code reviews: the presence of a manager changes the dynamics of the meeting and makes the code review less effective. People who might be willing to honestly critique a piece of code among peers will clam up in the presence of a manager, and that doesn't help find errors. No managers, please.

**Code Inspections**

Code inspections are the most formal type of review meeting. The sole purpose of an inspection is to find defects in a work product. Inspections can be used to review planning documents, requirements, designs, or code: in short, any work product that a development team produces. Code inspections have specific rules regarding how many lines of code to review at once, how long the review meeting must be, and how much preparation each member of the review team should do, among other things. Inspections are typically used by larger organizations because they take more training, time, and effort than walkthroughs or code reviews. They're also used for mission- and safety-critical software where defects can cause harm to users. Code inspections have several very important criteria, including the following:

- Inspections use checklists of common error types to focus the inspectors.
- The focus of the inspection meeting is solely on finding errors; no solutions are permitted.
- Reviewers are required to prepare beforehand; the inspection meeting will be canceled if everyone isn't ready.
- Each participant in the inspection has a distinct role.
- All participants have had inspection training.

**Inspection Roles**

The following roles are used in code inspections:

**Moderator**: The moderator gets all the materials from the author, decides who the other participants in the inspection should be, and is responsible for sending out all the inspection materials and for scheduling and coordinating the meeting.

**Author**: The author distributes the inspection materials to the moderator. If an Overview meeting is required, the author chairs it and explains the overall design to the reviewers. Overview meetings are discouraged in code inspections because they can "taint the evidence" by injecting the author's opinions about the code and the design before the inspection meeting.

**Reader**: The reader's role is to read the code. Actually, the reader is supposed to paraphrase the code, not read it verbatim. Paraphrasing implies that the reader has a good understanding of the project, its design, and the code in question.

**Reviewers**: The reviewers do the heavy lifting in the inspection. A reviewer can be anyone with an interest in the code who is not the author. Normally, reviewers are other developers from the same project. As in code reviews, it's usually a good idea to have a senior person who's not on the project also be a reviewer.

**Recorder**: Every inspection meeting has a recorder. The recorder is one of the reviewers and is the one who takes notes at the inspection meeting. The recorder merges the defect lists of the reviewers and classifies and records errors found during the meeting.

**Managers**: As with code reviews, managers aren't invited to code inspections.

**Inspection Phases and Procedures**

Fagan inspections have seven phases that must be followed for each inspection:

**Planning**: In the Planning phase, the moderator organizes and schedules the meeting and picks the participants.
The moderator and the author get together to discuss the scope of the inspection materials; for code inspections, typically 200-500 uncommented lines of code will be reviewed. The author then distributes the code to be inspected to the participants.

**The Overview Meeting**: An Overview meeting is necessary if several of the participants are unfamiliar with the project or its design and need to come up to speed before they can effectively read the code. If an Overview meeting is necessary, the author calls it and runs the meeting. The meeting itself is mostly a presentation by the author of the project architecture and design. As mentioned, Overview meetings are discouraged because they have a tendency to taint the evidence. Like the Inspection meeting itself, Overview meetings should last no longer than two hours.

**Preparation**: In the Preparation phase, each reviewer reads the work to be inspected. Preparation should take no more than two or three hours. The amount of work to be inspected should be between 200-500 uncommented lines of code or 30-80 pages of text. A number of studies have shown that reviewers can typically review about 125-200 lines of code per hour. In Fagan inspections, the Preparation phase is required; the Inspection meeting can be canceled if the reviewers haven't done their preparation. The amount of time each reviewer spent in preparation is one of the metrics gathered at the Inspection meeting.

**The Inspection Meeting**: The moderator is in charge of the Inspection meeting. Their job during the meeting is to keep the meeting on track and focused. The Inspection meeting should last no more than two hours; if any material has not been inspected at the end of that time, a new meeting is scheduled. At the beginning of the meeting, the reviewers turn in their lists of previously discovered errors to the recorder.
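The preparation figures above lend themselves to a quick capacity check. As an illustrative sketch (the function name and the 150 LOC/hour default are our own choices, not part of the Fagan process definition), a moderator could estimate preparation time from the review rate:

```python
def prep_hours(loc, rate_per_hour=150):
    """Estimated preparation time for an inspection package.

    Studies cited above put review rates at roughly 125-200
    uncommented lines of code per hour; 150 is a middle value.
    """
    return loc / rate_per_hour

# A 400-line package sits inside the recommended 200-500 LOC scope
# and needs roughly 2.7 hours of preparation, near the 2-3 hour budget.
print(round(prep_hours(400), 1))  # -> 2.7
```

At the slow end of the range (125 LOC/hour), the same 400-line package needs about 3.2 hours, which would be a signal to split the material across two inspections.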
**The Inspection Report**: Within a day of the meeting, the recorder distributes the Inspection report to all participants. The central part of the report is the list of defects that were found in the code at the meeting.

**Rework and Follow-Up**: The author fixes all the severity 1 through 3 defects found during the meeting. If enough defects were found, or if enough refactoring or code changes had to occur, then another Inspection is scheduled. How much is enough? Amounts vary: McConnell says 5% of the code, but this author has typically used 10% of the code inspected. So, if you inspected 200 lines of code and had to change 20 or more of them in the rework, then you should hold another Inspection meeting. If it's less than 10%, the author and the moderator can do a walkthrough. Regardless of how much code is changed, the moderator must check all the changes as part of the follow-up.

**MODULE 12 Modern Code Review**

**Introduction**

Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and in open-source software (OSS) systems. Research on MCR aims to increase our understanding of the practical benefits that the MCR process produces on reviewed source code. To anyone who thinks of code reviews with a cringe and a shudder, recalling the way they used to be done years ago, the prospect of introducing such a system into your fast-paced Agile workplace can seem like cruel and unusual punishment.
Beginning back in 1976, when IBM's Michael Fagan published his groundbreaking paper, "Design and Code Inspections to Reduce Errors in Program Development," the idea of a formal, systematic code review caught on quickly (with earlier versions of peer review tending to be less structured). It generally consisted of a group of people sitting together around a table in a stuffy room, poring over dot-matrix printouts of computer code, red pens in hand, until they were bleary-eyed and brain-dead. But just because something is painful doesn't mean it isn't worth the effort.

**Common Code Review Approaches**

**The Email Thread**

As soon as a given piece of code is ready for review, the file is sent around to the appropriate colleagues via email for each of them to review as soon as their workflow permits. While this approach can certainly be more flexible and adaptive than more traditional techniques, such as getting five people together in a room for a code-inspection meeting, an email thread of suggestions and differing opinions tends to get complicated fast, leaving the original coder on her own to sort through it all.

**Pair Programming**

As one of the hallmarks of Extreme Programming (XP), this approach to writing software puts developers side by side (at least figuratively), working on the same code together and thereby checking each other's work as they go. It's a good way for senior developers to mentor junior colleagues, and it seems to bake code review directly into the programming process. Yet because authors and even co-authors tend to be too close to their own work, other methods of code review may provide more objectivity. Pair programming can also use more resources, in terms of time and personnel, than other methods.

**Over-the-Shoulder**

More comfortable for most developers than XP's pair programming, the old over-the-shoulder technique is the easiest and most intuitive way to engage in peer code review.
Once your code is ready, just find a qualified colleague to sit down at your workstation (or go to theirs) and review your code for you, as you explain why you wrote it the way you did. This informal approach is certainly "lightweight," but it can be a little too light if it lacks methods of tracking or documentation. (Hint: bring a notepad.) **Tool-Assisted** We saved our personal favorite for last, as there is arguably no simpler and more efficient way to review code than through software-based code review tools, some of which are browser-based or integrate seamlessly with a variety of standard IDE and SCM development frameworks. Software tools solve many of the limitations of the preceding approaches: they track colleagues' comments and proposed solutions to defects in a clear and coherent sequence (similar to tracking changes in MS Word), enable reviews to happen asynchronously and non-locally, issue notifications to the original coder when new reviews come in, and keep the whole process moving efficiently, with no meetings and no one having to leave their desk to contribute. Some tools also allow requirements documents to be reviewed and revised and, significantly, can generate key usage statistics, providing the audit trails and review metrics needed for process improvement and compliance reporting. **MODULE 13 Objectives of Software Testing** **Objectives of Software Testing** Software testing is the process of evaluating a software system and its components to verify and validate that the software or application is free of defects and meets its technical requirements. Software Testing has different goals and objectives. A major objective of software testing is finding defects that the programmer may have introduced while developing the software. Other objectives are gaining confidence in the software and providing information about its level of quality.
Another objective is to deliver software that has no defects and to make sure that the end result meets the business and user requirements. A further objective is to ensure that the software satisfies its requirement specification and system requirement specification. Finding faults in the existing software is one goal of software testing; another is finding measures to improve the software in terms of efficiency, accuracy, and usability. It mainly aims at measuring the specification, functionality, and performance of a software program or application. **Software testing can be divided into two steps:** 1. **Verification**: This is a set of tasks that ensure that the software correctly implements a specific function. The question posed here is: are we building the product right? 2. **Validation**: A different set of tasks that ensure that the software that has been built is traceable to customer requirements. The question here is: are we building the right product? Different Types of Software Testing **Manual Testing** In this type of testing, no automated tools or scripts are used. The tester takes on the role of an end-user and tests the software to identify any unexpected behavior or bugs. This testing has different stages, such as unit testing, integration testing, system testing, and user acceptance testing. Testers use test plans, test cases, or test scenarios to test the software and ensure the completeness of testing. Exploratory testing is also included, as the testers explore the software to identify errors in it. **Automation Testing** This testing is also known as Test Automation: the tester writes scripts and uses separate software to test the product. It involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were originally performed manually. Automation testing is also used to test the application from the load, performance, and stress points of view.
It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing. **Dynamic Testing** Software is developed in units called subroutines or programs. These units, in turn, are combined to form larger systems. One approach to quality assurance is to test the code of a completed unit of software by actually entering test data and comparing the results with the expected results, in a process called dynamic testing. There are two forms of dynamic testing: **Black-box:** This testing involves viewing the software unit as a device that has expected input and output behaviors but whose internal workings are unknown (a black box). If the unit demonstrates the expected behaviors for all the input data in the test suite, it passes the test. Black-box testing takes place without the tester having any knowledge of the structure or nature of the actual code. For this reason, it is often done by someone other than the person who wrote the code. **White-box:** A test that treats the software unit as a device that has expected input and output behaviors but whose internal workings, unlike those of the unit in black-box testing, are known. White-box testing involves testing all possible logic paths through the software unit with thorough knowledge of its logic. The test data must be carefully constructed so that each program statement executes at least once. For example, if a developer creates a program to calculate an employee's gross pay, the tester would develop test data for cases in which the employee worked less than 40 hours, exactly 40 hours, and more than 40 hours (to check the calculation of overtime pay). Other Types of Software Testing **Static testing**: Special software programs called static analyzers are run against new code. Rather than reviewing input and output, the static analyzer looks for suspicious patterns in programs that might indicate a defect.
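The gross-pay example described under white-box testing can be sketched as a small test. The `gross_pay` function and its time-and-a-half overtime policy are illustrative assumptions, not from the original text; the point is that the three test cases together force every branch of the unit to execute at least once:

```python
def gross_pay(hours, rate):
    """Hypothetical unit under test: hours beyond 40 are paid at 1.5x (assumed policy)."""
    if hours <= 40:
        return hours * rate
    return 40 * rate + (hours - 40) * rate * 1.5

# White-box test data: one case per logic path through the unit.
assert gross_pay(30, 10) == 300   # fewer than 40 hours: straight-time branch
assert gross_pay(40, 10) == 400   # exactly 40 hours: boundary of the branch condition
assert gross_pay(45, 10) == 475   # more than 40 hours: overtime branch
```

A black-box tester would pick the same three inputs purely from the pay-policy specification, without ever seeing the `if` statement.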
**Integration testing:** After successful unit testing, the software units are combined into an integrated subsystem that undergoes rigorous testing to ensure that the linkages among the various subsystems work successfully. **System testing:** After successful integration testing, the various subsystems are combined to test the entire system as a complete entity. **User acceptance testing:** Independent testing is performed by trained end users to ensure that the system operates as they expect. **Unit Testing:** This focuses on the smallest unit of software design. Here, we test an individual unit or a group of interrelated units. It is often done by the programmer, using sample input and observing the corresponding outputs. **Regression Testing:** Every time a new module is added, the program changes. This type of testing makes sure that the whole program still works properly even after components have been added to it. **Smoke Testing:** This test is done to make sure that the software under test is ready or stable enough for further testing. It is called a smoke test because an initial pass checks whether the system "catches fire" (produces smoke) when first switched on. **Alpha Testing:** This is a type of validation testing. It is a type of acceptance testing which is done before the product is released to customers. It is typically done by QA people. **Beta Testing:** The beta test is conducted at one or more customer sites by the end-users of the software. This version is released to a limited number of users for testing in a real-time environment. **Stress Testing:** In this, we subject the system to unfavorable conditions and check how it performs under them. **Performance Testing:** This is designed to test the run-time performance of software within the context of an integrated system. It is used to test the speed and effectiveness of the program. It is also called load testing.
In it, we check the performance of the system under a given load. **Object-Oriented Testing:** This testing is a combination of various testing techniques that help to verify and validate object-oriented software. **This testing is done in the following manner:**

- Testing of Requirements
- Design and Analysis of Testing
- Testing of Code
- Integration Testing
- System Testing
- User Testing

**MODULE 14 Testing of a Software** **Test Process in Software Testing** The process of testing is *not just a single activity*. It must be planned and requires discipline to act upon. The effectiveness and quality of software testing are mainly determined by the quality of the test processes used. **The following are the basic steps:** 1. Planning and Control Test planning involves creating a document that contains the overall approach and test objectives. It involves reviewing the test basis, identifying the test conditions based on analysis of the test items, writing test cases, and designing the test environment. Completion or exit criteria must be specified so that we know when testing (at any stage) is complete. Control is the activity of comparing actual progress against the plan and reporting the status, including deviations from the plan. It involves taking the actions necessary to meet the mission and objectives of the project. 2. Analysis and Design This stage has major tasks such as reviewing the test basis. The test basis is the information on which test cases are based, including the requirements, design specifications, product risk analysis, architecture, and interfaces. Other tasks here are identifying test conditions, designing the test environment set-up, and identifying the required infrastructure and tools. 3. Implementation and Execution Test execution involves actually running the specified tests on a computer system, either manually or by using an automated test tool. It is the part of the fundamental test process in which the actual work is done.
One of the major tasks here is to create test suites from the test cases for efficient test execution. A test suite is a collection of test cases that are used to test a software program. 4. Evaluating Exit Criteria and Reporting Evaluating exit criteria is the process of defining when to stop testing. It depends on coverage of code, functionality, or risk. It also depends on business risk, cost, and time, and varies from project to project. Exit criteria come into the picture when most test cases have been executed with a certain pass percentage, the bug rate has fallen below a certain level, and the deadlines have been met. Evaluating exit criteria involves major tasks such as assessing whether more tests are needed or whether the specified exit criteria should be changed, and writing a test summary report for stakeholders. 5. Test Closure Activities Test closure activities are done when the software is ready to be delivered. Testing can also be closed for other reasons, such as when a project is cancelled, when a target is achieved, or when a maintenance release or update is done. Test closure activities include major tasks such as checking which planned deliverables were actually delivered, finalizing and archiving testware such as scripts, and evaluating how the testing went so that lessons can be learned for future releases and projects. **MODULE 15 Software Development and Deployment Tools** **Unit Testing and Test-Driven Development** **Unit Testing** is a **software testing technique** by means of which **individual units of software**, i.e., groups of **computer program modules**, usage procedures, and operating procedures, are tested to determine whether they are suitable for use. It is a **testing method** by which **every independent module** is tested by the developer himself to determine whether there are any issues. It is concerned with the functional correctness of the independent modules. **Objective of Unit Testing:** The objective of Unit Testing is: 1.
**To isolate a section of code.** 2. **To verify the correctness of code.** 3. **To test every function and procedure.** 4. **To fix bugs early in the development cycle and to save costs.** 5. **To help the developers understand the code base and enable them to make changes quickly.** 6. **To help with code reuse.**

The levels of testing, from highest to lowest, are:

- **Acceptance Testing**
- **System Testing**
- **Integration Testing**
- **Unit Testing**

There are two types of unit testing, as mentioned above: **manual** and **automated**. The workflow of unit testing begins as follows:

**Create Test Cases** → **Review**

**Advantages of Unit Testing:**

- Unit testing allows developers to learn what functionality is provided by a unit and how to use it, gaining a basic understanding of the unit's API.
- Unit testing allows the programmer to refine code and make sure the module works properly.
- Unit testing makes it possible to test parts of the project without waiting for others to be completed.

**Test-Driven Development** (TDD) is the **process** in which **test cases are written before the code that validates those cases**. It depends on the repetition of a very short development cycle. Test-Driven Development is a technique in which automated unit tests are used to drive the design and force the decoupling of dependencies. The following sequence of steps is generally followed: 1. Add a test - Write a test case that describes the function completely. In order to create the test cases, the developer must understand the features and requirements, using user stories and use cases. 2. Run all the test cases and make sure that the new test case fails, while all the existing ones pass. 3. Write the code that passes the new test case. 4. Run the test cases. 5.
Refactor code - This is done to remove duplication of code. 6. Repeat the above-mentioned steps again and again. **Red** - Create a **test case and make it fail**. **Green** - Make the **test case pass by any means**. **Refactor** - **Change the code to remove duplication/redundancy**. Benefits:

- Unit tests provide constant feedback about the functions.
- The quality of the design increases, which further helps in proper maintenance.
- Test-Driven Development acts as a safety net against bugs.
- TDD ensures that your application actually meets the requirements defined for it.
- TDD has a very short development cycle.

**Software Deployment and Deployment Tools** **Software deployment** includes all of the steps, processes, and activities that are required to make a software system or update available to its intended users. Today, most IT organizations and software developers deploy software updates, patches, and new applications with a combination of manual and automated processes. Some of the most common activities of software deployment include software release, installation, testing, deployment, and performance monitoring. **Software Release** The software release cycle refers to the stages of development for a piece of computer software, whether it is released as a piece of physical media, online, or as a web-based application. When a software development team prepares a new software release, it typically includes a specific version of the code and associated resources, which are assigned a version number. When the code is updated or modified with bug fixes, a new version of the code may be packaged with supporting resources and assigned a new release number. Versioning new software releases in this way helps to differentiate between versions and identify the most up-to-date software release.
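Version numbers only identify the most up-to-date release if they are compared numerically, field by field, rather than as plain strings. A minimal sketch (the dotted `major.minor.patch` scheme is an assumption; real release numbering may differ):

```python
def parse_version(release):
    """Convert a dotted release string such as '1.10.2' into a tuple of ints."""
    return tuple(int(part) for part in release.split("."))

releases = ["1.9.0", "1.10.2", "2.0.0"]

# String comparison would wrongly rank "1.9.0" above "1.10.2";
# numeric field-by-field comparison gets the ordering right.
assert "1.9.0" > "1.10.2"                                # lexicographic: misleading
assert parse_version("1.10.2") > parse_version("1.9.0")  # numeric: correct
assert max(releases, key=parse_version) == "2.0.0"       # most up-to-date release
```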
**Preparation** In the preparation stage, developers must gather all of the code that will be deployed, along with any other libraries, configuration files, or resources needed for the application to function. Together, these items can be packaged as a single software release. Developers should also verify that the host server is correctly configured and running smoothly. **Testing** Before an update can be pushed to the live environment, it should be deployed to a test server where it can be subjected to a pre-configured set of automated tests. Developers should review the results and correct any bugs or errors before deploying the update to the live environment. **Deployment** Once an update has been fully tested, it can be deployed to the live environment. Developers may run a set of scripts to update relevant databases before the changes go live. The final step is to check for bugs or errors that occur on the live server, to ensure the best possible experience for users interacting with the new update. **Software deployment tools** also enable developers to track progress on their projects and manage changes. Continuous integration and deployment may be used to deploy software as changes are made, giving seamless updates to end-users. Choosing the best software deployment tool is difficult because what might be best for one development team may not meet another team's requirements. The following are some of the leading software deployment tools available in the market:

- Bamboo
- TeamCity
- AWS CodeDeploy
- Octopus Deploy

Deployment is not the last stage of the software development life cycle, because there is simply no way to catch all bugs and flaws during testing. Maintenance is the final step of the life cycle, and this is when remaining fixes are delivered. It is also when additional features or functions might be introduced and updates to the software made.
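The preparation → testing → deployment flow above can be sketched as a small pipeline. Everything here is illustrative: the step names are invented for the example, and `run` stands in for whatever mechanism actually executes each step (a shell script, a CI job, or a deployment tool):

```python
def deploy_update(package, run):
    """Walk an update through the stages described above.
    `run(step)` executes one named step and returns True on success."""
    log = [f"preparing release {package}"]   # gather code, libraries, config
    if not run("test-on-test-server"):       # automated tests must pass first
        log.append("aborted: fix defects before going live")
        return log
    for step in ("update-databases", "release-to-live", "check-live-for-errors"):
        run(step)                            # migrations, go-live, follow-up checks
        log.append(step)
    return log

# Stub runner for illustration; a real one would invoke scripts or a CI system.
print(deploy_update("app-2.0.0", run=lambda step: True))
```

The key property the sketch encodes is the one the text insists on: the release never reaches the live environment unless the test-server stage succeeds.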