INFS4202 SW Testing & Quality Assurance PDF

Summary

This document is a complete set of chapters for an introductory course on software testing and quality assurance. It covers software testing types, techniques, and tools, along with key terminology related to software testing.

Full Transcript


INFS4202 SW Testing & Quality Assurance

Contents

Chapter 00 – Introduction
- What is Software Testing?
- Skills required to become a Software Tester
- SDLC Software Development Life Cycle
- The stages of the testing process
Chapter 1 – Introduction to Software Testing
- Software Testing
- Bugs a.k.a.
- Sources of Problems
- Some Software Testing Objectives
- Correctness of Software
- Adverse Effects of Faulty Software
- Testing Team
- Testability
- Some Terminology
- A software defect occurs when at least one of these rules is true
- Testing in Software Lifecycle
- When to Start Testing in SDLC Testing Cycle
- Static and dynamic V&V
- Key Differences
- Testing stages
- System & Performance Testing
- Alpha Testing & Beta Testing
Chapter 2 – Test Scenario and Test Cases
- What is Test Scenario?
- What is Scenario Testing?
- Best Practices for Writing Test Scenarios
- What is a Test case?
- Test Case Attributes
Chapter 3 – Testing-Component Level
- Introduction
- Testing Strategy Characteristics
- Verification and Validation V&V
- Role of Scaffolding
- White-Box Testing
- Black-Box Testing
- Summary – Chapter 3
Chapter 4 – Object-Oriented Testing and Metrics
- Object-Oriented Testing – Guidelines
- OOT Strategy (Object-Oriented Testing)
- Object-Oriented Test Case Design
- Object-Oriented Test Methods
- Object-Oriented Metrics
Chapter 5 – Performance Testing
- Performance Testing
- Why Performance Testing?
- Types of Performance Testing
- Load Testing
- Stress Testing
- Endurance Testing
- Spike Testing
- Volume Testing
- Performance Testing Process
  1. Performance Test Scenarios
  2. User Distribution
  3. Scripting
  4. Dry Run
  5. Running the Test and Analyzing the Results
- Performance Testing Tools Examples
Chapter 6 – Software Maintenance and Evolution
- Software Maintenance
- Software Evolution
- Program Comprehension
- Reverse Engineering
- Key to Maintenance is in Development
- Software Maintenance Activities
- Program Evolution
- Software Maintenance Types
- Program Comprehension Strategies
- Two Main Levels of Reverse Engineering
- Reverse Engineering Objectives
- Source Code Reverse Engineering Techniques
- Tools for Code Maintenance
- Dependency Graph
- Clone detector
- Package Decomposition
- Finding Dependencies
- Class Abstraction
Chapter 7 – Software Quality Assurance
- What is Software Quality Assurance (SQA)?
- Quality Control (QC)
- Quality Assurance (QA)
- Teams Involved in SQA
- Software Reviews
- Guidelines for FTRs
- Inspections vs. Walkthroughs
- Why Inspections are Important
- Inspection Process
- Roles and Data Usage
- Impact of Inspections
- Statistical Quality Assurance (SQA)
- ISO 9000 Standards
- Capability Maturity Model (CMM)
- Six Sigma
- McCall’s Quality Factors
- ISO 9001, ISO 9002, ISO 9003 Standards
- ISO 9001 Requirements (Grouped under 20 Categories)
- Software vs. Other Industries
- Salient Features of ISO 9001
- Shortcomings of ISO 9001 Certification
- SEI Capability Maturity Model (CMM)
- CMM Levels Explained
- Comparison of ISO 9001 and CMM
- Personal Software Process (PSP)
- Six Sigma and Quality Attributes
Chapter 8 – Test Management & Control
- What to Estimate?
- Estimation Key Techniques
- Test estimation best practices
- Test Plan

Chapter 00 – Introduction

What is Software Testing?

Software Testing is the process of verifying a computer system/program to decide whether it meets the specified requirements and produces the desired results. In doing so, you identify bugs in the software product/project.

Skills required to become a Software Tester

Non-Technical Skills
- Analytical skills
- Communication skills
- Time management and organization skills
- A great attitude
- Passion

Technical Skills
- Basic knowledge of databases/SQL
- Basic knowledge of Linux commands
- Knowledge and hands-on experience of a test management tool
- Knowledge and hands-on experience of a defect tracking tool
- Knowledge and hands-on experience of an automation tool

SDLC Software Development Life Cycle

The Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates. A testing process should be part of this life cycle.

The stages of the testing process

Chapter 1 – Introduction to Software Testing

Software Testing

Software testing is a process used to identify the correctness, completeness, and quality of developed computer software. It is the process of executing a program/application under positive and negative conditions, by manual or automated means.
It checks:
- Specification
- Functionality
- Software performance

Software testing is like giving a computer program some tasks to do and checking whether it does them correctly. The idea is to find any problems (bugs) in the software before it is used by real people. For example, imagine you are testing a calculator app. You give it the task of adding 2 + 2, and you expect it to give you 4. If it does, that’s great! But if it gives you 5, there’s a problem that needs fixing.

Bugs a.k.a.
- Defect
- Fault
- Problem
- Error
- Incident
- Anomaly
- Variance
- Failure
- Inconsistency
- Product Anomaly
- Product Incident
- Feature

Sources of Problems
- Requirements definition: erroneous, incomplete, or inconsistent requirements.
- Design: fundamental design flaws in the software.
- Implementation: mistakes in chip fabrication, wiring, programming faults, malicious code.
- Support systems: poor programming languages, faulty compilers and debuggers, misleading development tools.
- Inadequate testing of software: incomplete testing, poor verification, mistakes in debugging.
- Evolution: sloppy redevelopment or maintenance, introduction of new flaws while fixing old flaws, incremental escalation to inordinate complexity.

Some Software Testing Objectives
- Find as many defects as possible.
- Find important problems fast.
- Assess perceived quality risks.
- Advise about perceived project risks.
- Advise about perceived quality.
- Certify to a given standard.
- Assess conformance to a specification (requirements, design, or product claims).

Correctness of Software

When we test a software program, we can find errors, but we can never be 100% sure that there are no more errors left, no matter how many tests we run. Imagine testing a car. You drive it on a bumpy road, and if it breaks down, you know there’s a problem. But even if it doesn’t break down, you can’t be completely certain that it won’t break down in the future; you’ve only proven that it works for that specific road. The same is true for software: just because it works in some situations doesn’t mean it will work in all possible situations. So software testing helps us find mistakes, but it cannot prove that there are no mistakes at all. To fully guarantee that software is correct, you would need a more rigorous, theoretical process called formal verification, but even that has its limits if there is any mistake in the verification itself.

Adverse Effects of Faulty Software

Faulty software can lead to serious problems in many areas of life:
1. Communications: Imagine sending an important email, but due to software bugs the email gets corrupted or never arrives. This can cause misunderstandings or missed opportunities.
2. Space applications: If a rocket’s software has bugs, the rocket may fail, resulting in huge losses of time, money, or even lives. A bug in a spacecraft’s navigation system could lead to crashes or mission failures.
3. Defense and warfare: In military systems, faulty software might misidentify a friendly aircraft as an enemy, leading to accidents or attacks on allies.
4. Transportation: A bug in the software controlling a car’s brakes could prevent the car from stopping, leading to accidents. Similarly, software errors in airplanes or trains could cause delays or even deadly crashes.
5. Safety-critical applications: Some systems, such as those used in hospitals, control life-saving machines. If their software fails, it could lead to injury or death.
6. Electric power: Faulty software controlling the power grid could lead to widespread power outages or even dangerous accidents like radiation leaks in nuclear plants.
7. Money management: Banking software bugs could allow fraud, cause privacy violations, or shut down entire financial systems such as stock markets or banks, leading to financial chaos.
8. Elections: A software bug in voting machines could report the wrong winner in an election, either by mistake or on purpose, leading to political instability.
9. Law enforcement: Bugs in software used by police or courts might lead to wrongful arrests or even jail time for innocent people.

In all these cases, faulty software can harm people, property, and institutions. This shows how important it is for software to work correctly, especially in critical systems.

Testing Team
1. Program Manager: The "project leader" who plans and makes important decisions. For example, they decide what features an app should have, how long it will take to build, and how to solve problems that come up. Their goal is to finish the project on time without too many risks.
2. QA (Quality Assurance) Lead: A "coach" for the testing team. They help improve how the team checks the software and make sure it follows best practices. For example, they teach others how to use testing tools and work with other departments, such as the developers, to improve the product.
3. Test Analyst Lead: In charge of creating and managing test plans. For example, they decide which parts of a mobile app need testing, create the tests (both manual and automated), and gather the data needed to run them. After each test, they review the results for problems.
4. Test Engineer: The "hands-on testers" who actually run the tests. They write test cases (step-by-step instructions for testing an app) and report any bugs they find. For example, when testing a shopping app, they check whether adding items to the cart works and report anything that goes wrong. They also work out the best way to test every part of the app so nothing is missed.

Testability

Testability refers to how easy or difficult it is to test software and find any issues:
1. Operability: The software works smoothly without causing many problems. For example, a mobile app under test should run without crashing or freezing, making it easier to test.
2. Observability: It is easy to see what happens after you run a test. For instance, if you ask a calculator app to add 2 + 2, the result (4) should be clearly shown; if something goes wrong, you can easily tell.
3. Controllability: You can automate the testing process and make it efficient. For example, instead of manually testing every button in an app, you can set up a script that clicks all the buttons for you.
4. Decomposability: You can break the software into smaller parts and test them separately. For example, in a shopping app you can test the login system, the product search, and the checkout process independently, which makes troubleshooting easier.
5. Simplicity: The software should be designed simply so it is easier to test. A game with many complicated rules is harder to test; a straightforward one, like "tap the screen to jump", is much simpler to check for errors.
6. Stability: No major changes should be made during the testing process. If developers keep changing a feature while you are testing it, it becomes harder to finish the tests.
7. Understandability: The design and code should be easy to understand. If the code is clearly written and the design is logical, testers can figure out how to test it more easily.

Some Terminology
- Error: a human action that produces an incorrect result, which leads to a fault.
- Bug: the presence of errors at the time of execution of the software.
- Fault: a state of the software caused by an error.
- Failure: a deviation of the software from its expected result. A failure is an event: it is said to occur whenever external behavior does not conform to the system specification.
A software defect occurs when at least one of these rules is true
- The software does not do something that the specification says it should do.
- The software does something that the specification says it should not do.
- The software does something that the specification does not mention.
- The software does not do something that the product specification does not mention but should.
- The software is difficult to understand, hard to use, or slow.

Testing in Software Lifecycle
- Requirements phase
- Analysis phase
- Design phase (system and object)
- Implementation phase
- Testing phase
- Integration phase
- Maintenance phase

When to Start Testing in SDLC Testing Cycle

Although testing varies between organizations, there is a cycle to testing:
1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle (SDLC).
2. Design Analysis: During the design phase, testers work with developers to determine which aspects of a design are testable and under what parameters the tests will operate.
3. Test Planning: test strategy, test plan(s), test bed creation.
4. Test Development: test procedures, test scenarios, test cases, and test scripts to use in testing the software.
5. Test Execution: Testers execute the software based on the plans and tests, and report any errors found to the development team.
6. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
7. Retesting the Defects

Static and dynamic V&V

Static and dynamic testing are two different ways to check whether software is working correctly.

1. Static Testing (no code execution)

This type of testing checks the software without running the code: you review the plans, documents, and code to find mistakes early. Think of it like proofreading an essay; you look for spelling or grammar errors before printing it out.

Example: Before launching a website, you check the design documents, requirements, and code to ensure everything is correct. If you find something wrong, such as a missing requirement or a typo in the code, you fix it before even running the website.

2. Dynamic Testing (code execution)

In this type of testing, you run the software to see whether it behaves as expected. It checks whether the program actually works by testing its functionality and performance. This is like reading the printed essay aloud to see if it makes sense.

Example: After launching the website, you try all the features, such as logging in, searching for products, or checking out, to see whether they work as expected. You also check that the website is fast and doesn’t use too much memory or slow down.

Key Differences

Execution:
- Static testing: no need to run the program; you review documents, code, or design for mistakes.
  Example: You check a written recipe for missing steps without cooking the dish.
- Dynamic testing: the program is actually run to check how it behaves.
  Example: You follow the recipe and cook the dish to see if it tastes good.

What is checked:
- Static testing: you look at the code, requirements, and design to spot errors.
  Example: You check a website’s code or design plan before the website is built.
- Dynamic testing: you test the functionality, performance, and resource usage after running the software.
  Example: You check whether a website loads quickly and lets users log in without errors.

Purpose:
- Static testing focuses on preventing defects early on.
  Example: You review a project plan to prevent mistakes before the project begins.
- Dynamic testing focuses on finding and fixing defects while using the software.
  Example: You test a finished project to find any issues that still need fixing.

Verification vs. validation:
- Static testing is part of verification: making sure the product is being built correctly (according to the plan).
  Example: You check whether the blueprint for a house is correct.
- Dynamic testing is part of validation: making sure the product works as expected when used.
  Example: You walk through the finished house to see that everything works, like the doors and windows.

When it’s done:
- Static testing is done before compiling or running the software.
  Example: Reviewing the building plan before construction begins.
- Dynamic testing is done after the code has been compiled, meaning after the software has been built.
  Example: Testing the built house by turning on the lights and water to see that everything works.

Testing stages
1. Unit Testing: Individual parts of the software, called units or components, are tested on their own to make sure they work correctly.
2. Integration Testing: After the individual parts are tested, you combine them to see whether they work well together. This stage checks for issues that arise when different parts of the software interact.
3. System Testing: The entire system is tested as a whole to make sure all the features work together. This happens before the software is delivered to the user.
4. Acceptance Testing: The final stage, where real users test the software to make sure it meets their needs and requirements. It is sometimes called alpha testing when done by the company’s own testers before release.

System & Performance Testing
1. Usability Testing: Checks how easy and user-friendly the software is for people to use.
   Example: Imagine you’re testing a new app. You check whether the buttons are easy to find, the screens are simple to understand, and the design looks nice. You also see whether tasks are quick to complete (e.g., how fast you can buy something on an online shopping app).
2. Load Testing: Checks whether the software works well when many people are using it at the same time.
Example: You test a shopping website to see if it can handle thousands of people using it during a sale without slowing down or crashing.
3. Stress Testing: This is like load testing but taken to the extreme to find the breaking point. Example: You keep adding users to the shopping website until it stops working, to find out the maximum number of users the website can handle at once.
4. Data Volume Testing: This checks how the software handles large amounts of data. Example: You test a banking app to see if it can store millions of transactions without slowing down or losing any data.
5. Security Testing: This checks how well the software protects against unauthorized access (hackers) or other harmful activities. Example: You test an online banking app to see if someone can break into accounts, steal information, or damage the system.
Alpha Testing & Beta Testing
Alpha Testing – The application is tested by testers who are not part of the development team (and may be unfamiliar with the application). – Done at the developer's site under controlled conditions. – Performed under the supervision of the developers.
Beta Testing – This testing is done before the final release of the software to end-users. – The software is given to a selected group of users who perform their own testing under no controlled conditions, so they are free to use the system however they like in order to find errors.
Chapter 2 – Test Scenario and Test Cases
What is a Test Scenario? A test scenario is like a story about how a person would use a software application in real life. It describes a situation or a specific feature to test, focusing on how the software should behave in that context. Test scenarios can serve as the basis for lower-level test case creation. A single test scenario can cover one or more test cases; therefore, a test scenario has a one-to-many relationship with its test cases.
Test Scenario Example As an example, consider the test scenario – "Verify that the user is not able to log in with incorrect credentials". 
Now, this test scenario can be further broken down into multiple test cases like 1. Checking that a user with the correct username and incorrect password should not be allowed to log in. 2. Checking that a user with an incorrect username and correct password should not be allowed to log in. What is Scenario Testing? Scenario Testing is a method of testing software by using realistic scenarios to see how the application behaves. It’s like acting out real-life situations to ensure everything works as expected. Here’s a simple explanation: Characteristics of Scenario Testing: 1. Coherent: o The test scenario should make sense and reflect a logical use of the software. o Example: Testing a travel booking app with a scenario like "A user searches for a flight, books it, and receives a confirmation email" is coherent because it follows a natural sequence of actions. 2. Credible: o The scenarios should be realistic and reflect what could actually happen in the real world. o Example: Testing how an online store handles a situation where a user applies a discount code at checkout is credible because it’s a common real-world scenario. 3. Motivating: o Scenarios should highlight important issues that motivate stakeholders to fix problems if they occur. o Example: If a scenario shows that users are unable to complete a purchase due to a bug, it motivates developers to address the issue quickly. 4. Complex: o Scenarios often test more complex features or workflows, not just simple actions. o Example: Testing an app’s workflow for a user who books a hotel room, modifies the booking, and then cancels it involves multiple steps and interactions, making it a complex scenario. 5. Easy to Evaluate: o Even though scenarios involve complex logic, the results should be straightforward to check. o Example: After testing the flight booking scenario, it’s easy to evaluate if the user received a confirmation email and if the booking details are correct. 
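The two login test cases above can be written directly as executable checks. Below is a minimal sketch; the `login` function and the credentials are hypothetical stand-ins for the application under test:

```python
# Hypothetical system under test: login succeeds only when both
# the username and the password match the stored credentials.
VALID_USERNAME = "alice"
VALID_PASSWORD = "s3cret"

def login(username, password):
    return username == VALID_USERNAME and password == VALID_PASSWORD

# Test case 1: correct username, incorrect password -> login must be rejected.
def test_correct_username_incorrect_password():
    assert login("alice", "wrong") is False

# Test case 2: incorrect username, correct password -> login must be rejected.
def test_incorrect_username_correct_password():
    assert login("bob", "s3cret") is False

test_correct_username_incorrect_password()
test_incorrect_username_correct_password()
```

Both test cases trace back to the single scenario "the user cannot log in with incorrect credentials", illustrating the one-to-many relationship between a scenario and its test cases.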
Best Practices for Writing Test Scenarios Should be easy to understand. Easily executable. Should be accurate. Traceable or mapped with the requirements. Should not have any ambiguity. What is a Test case? A test case is like a detailed checklist for testing a specific part of a software to see if it works as expected. It includes everything you need to test a feature thoroughly, including the steps to follow, what you need to test, and what the results should be. A test case has pre-requisites, input values, and expected results in a documented form that cover the different test scenarios. Once the test cases are created from the requirements, it is the job of the testers to execute those test cases. The testers read all the details in the test case, perform the test steps, and then based on the expected and actual result, mark the test case as Pass or Fail. Test Case Attributes 1. TestCaseId 2. Test Summary 3. Description 4. Prerequisite or pre-condition 5. Test Steps 6. Test Data 7. Expected result 8. Actual result 9. Automation Status 10. Date 11. Executed by Chapter 3 – Testing-Component Level Introduction A software component testing strategy considers testing individual components and integrating them into a working system. Testing begins “in the small” and progresses “to the large.” By this we mean that early testing focuses on a single component or on a small group of related components and applies tests to uncover errors in the data and processing logic that have been encapsulated by the component(s). After components are tested, they must be integrated until the complete system is constructed. Testing Strategy Characteristics To perform effective testing, o you should conduct technical reviews. By doing this, many errors will be eliminated before testing commences. Testing begins at the component level and works “outward” toward the integration of the entire computer-based system. 
Different testing techniques are appropriate for different software engineering approaches and at different points in time Testing is conducted by the developer of the software and (for large projects) an independent test group. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy. Verification and Validation V&V Verification refers to the set of tasks that ensure that software correctly implements a specific function Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. Verification and Validation SQA activities o Technical reviews o Quality and configuration audits o Performance monitoring o Simulation o Feasibility study o Documentation review o Database review o Algorithm analysis o Development testing o Usability testing o Acceptance testing o Installation testing Role of Scaffolding Scaffolding in programming is like setting up a temporary structure to support something while it's being built. In the context of testing components (pieces of a program), scaffolding means creating the necessary tools and setup to effectively test these components. Here's how it works: 1. Component Testing: When you test a component (a small part of a program), it often relies on other components to function correctly. Testing one component in isolation (on its own) can be challenging because it might need to interact with these other components. 2. Driver and Stub Software: o Driver: Think of a driver as a tool that calls or uses the component you’re testing. It simulates how other parts of the program will interact with it. o Stub: A stub is a simple version of another component that the one you're testing depends on. It provides the basic responses needed for testing but doesn’t have all the full functionality. 3. Scaffolding Setup: To test a component effectively, you set up this "scaffolding" by creating drivers and stubs. 
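A minimal driver/stub pair might look like this in Python; the `DiscountCalculator` component and its price-service dependency are hypothetical examples, not part of any real library:

```python
# Component under test: depends on a price service to compute a discounted total.
class DiscountCalculator:
    def __init__(self, price_service):
        self.price_service = price_service

    def total(self, item, discount):
        return self.price_service.price_of(item) * (1 - discount)

# Stub: a simplified stand-in for the real price service the component needs.
class PriceServiceStub:
    def price_of(self, item):
        return 100.0  # canned response, no real lookup

# Driver: calls the component the way the rest of the program eventually would.
def run_driver():
    calc = DiscountCalculator(PriceServiceStub())
    result = calc.total("book", 0.2)
    assert abs(result - 80.0) < 1e-9  # 100.0 with a 20% discount
    return result

run_driver()
```

Here the stub supplies a canned price so the calculator can be exercised without the real service, and the driver plays the role of the calling code that does not exist yet.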
This setup helps to mimic the real environment in which the component will operate, and it ensures that you can exercise the component in isolation before it is integrated with the rest of the system.
White-Box Testing
Imagine you have a complex machine, and you want to test how well it works. White-box testing is like opening up the machine and examining its internal parts to make sure everything functions correctly. Instead of just checking how the machine behaves from the outside (like black-box testing), you're looking inside to see how it's all put together. Here's how white-box testing works:
1. Knowledge of Internal Workings: You use what you know about the inside of the machine (or the code) to create tests. This means you're looking at the actual code and its structure.
2. Types of Tests in White-Box Testing:
o Path Testing: Ensure that every possible route or path through the code is tested. Imagine a road map where you need to drive down every road at least once.
o Decision Testing: Test every decision point in the code to make sure it works both ways. For example, if there's a decision like "If it's raining, bring an umbrella; otherwise, wear sunglasses," you'd test both scenarios.
o Loop Testing: Check the loops (repeated sections of code) to make sure they work properly. For example, ensure the loop doesn't run forever and that it handles cases where it runs only once or multiple times correctly.
o Data Structure Testing: Make sure the internal data used by the code (like arrays or lists) is handled correctly.
Basis Path Testing:
o Basis Path Testing is a specific white-box testing method that helps make sure you cover all the important paths in the code.
o Flow Graph: To use this method, you first create a diagram (a flow graph) that shows all possible paths the code can take.
o Basis Set: You then create test cases to cover this set of paths. This ensures that every line of code gets executed at least once.
Structural Testing Techniques:
o Statement Testing: Test each individual statement in the code to ensure it works as expected. 
o Loop Testing: Check how loops behave in different scenarios—whether they are skipped, run once, or run multiple times. o Path Testing: Make sure every possible route through the code is tested. o Branch Testing: Ensure every possible outcome of decisions (like if statements) is tested. Black-Box Testing Black-Box Testing is like testing a machine without looking inside it. You don’t care how it works internally; you just want to check if it behaves the way it’s supposed to. You’re focusing on what the program does, not how it does it. Here’s how black-box testing works: 1. Focus on Inputs and Outputs: In black-box testing, you give the program a set of inputs and see if it produces the correct outputs. You don’t need to know how the code processes the inputs. You just want to make sure it works as expected based on what it’s supposed to do. 2. Functional Testing: This kind of testing focuses on whether the program’s functions work correctly. You test different features and see if they perform as required. 3. Error Types: Black-box testing helps you find different kinds of errors: o Incorrect or Missing Functions: Some features might not work correctly, or there may be features that are missing. o Interface Errors: There might be issues with how the program interacts with other programs, systems, or users. o Data Errors: Problems with data structures or external databases that the program uses. o Behavior or Performance Errors: The program may not behave as expected, or it may perform too slowly. o Initialization and Termination Errors: Problems with how the program starts or ends. 4. Later Stages of Testing: Black-box testing is typically used in the later stages of testing when you’re making sure everything works as expected based on the requirements. In summary, black-box testing focuses on testing the functionality of the program from an external perspective. It’s all about checking if the program works correctly without worrying about how the code is written. 
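The input/output focus of black-box testing can be sketched with a hypothetical grading function: the checks below only feed in inputs and compare the outputs against the specification, never reading the implementation. The inputs are deliberately chosen at and just below the specified boundaries, where errors are most likely to hide:

```python
# The system under test, treated as an opaque box; its body is shown
# here only so the example runs.
def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

# Black-box checks against the specification's boundaries:
assert grade(90) == "A"   # lower edge of the A range
assert grade(89) == "B"   # just below the A range
assert grade(70) == "C"   # lower edge of the C range
assert grade(69) == "F"   # just below the C range
```

If any assertion fails, the function violates its specification, and the tester can report the defect without ever knowing how `grade` is written.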
Summary – Chapter 3 1. Introduction to Software Testing Testing starts with checking small parts (components) of the software and gradually combining them into a full system. It's easier to find errors in small sections before testing the entire system. 2. Testing Strategy Testing begins small and moves outward, meaning you first test individual components before combining them. It's important to conduct technical reviews to remove errors early. Testing finds errors, while debugging fixes them. 3. Verification & Validation (V&V) Verification checks if the software functions as required, while Validation ensures the software meets the user’s needs. V&V activities include technical reviews, feasibility studies, and usability testing. 4. White Box Testing White-box testing uses knowledge of the code's structure to create tests. It involves testing paths, logical decisions, loops, and data structures. 5. Black Box Testing Black-box testing focuses on the software’s behavior based on input and output, without looking at the code. It helps find issues like missing functions, interface errors, and data handling problems. 6. Boundary Value Analysis (BVA) This technique tests the boundaries of input values, as errors are likely to happen at extreme values. 7. Object-Oriented Testing In object-oriented testing, you test classes and their interactions instead of individual functions. 8. Class Testing Class testing focuses on the methods within a class and how they interact. For instance, in an Account class, methods like deposit and withdraw are tested together. 9. Integration Testing Integration testing checks if different modules work well together. This is done after unit testing. Types include Bottom-up, Top-down, and Big Bang integration testing. 10. Regression Testing After making changes to the software, you re-run previous tests to ensure no existing features were broken. 11. 
System Testing In system testing, the entire system is tested to make sure it meets all the specified requirements.
12. Acceptance Testing Acceptance testing is the final stage, where users test the system to ensure it meets their needs and expectations.
13. Test Stopping Criteria Testing stops when deadlines or budgets are reached, or when the desired software quality is achieved.
Chapter 4 – Object-Oriented Testing and Metrics
Object-Oriented Testing – Guidelines
1. Evaluate Correctness and Consistency Before testing the actual code, check if the design and analysis make sense. For object-oriented systems, this means looking at diagrams or models (like UML diagrams) to ensure that the objects, their relationships, and behaviors are correct and logical. Example: Imagine you're designing a library system. You have classes like Book, Member, and Loan. Check if: i. Each Book can be loaned to only one Member at a time. ii. The Loan class correctly tracks when a book is issued and returned.
2. The Testing Strategy Changes Testing in object-oriented systems is a bit different because of encapsulation (the way objects hide their internal details).
a. The Concept of 'Unit' Broadens o In procedural programming, a "unit" might be a single function. o In object-oriented programming, a "unit" could be an entire class or even a group of related classes working together. o Example: Instead of testing a single method (e.g., calculateFine()), you may test the whole Loan class, which includes methods for issuing, returning, and calculating fines.
b. Integration Testing Focuses on Threads or Scenarios o Integration testing looks at how classes work together in specific situations or workflows. o Instead of testing random connections, focus on realistic scenarios where objects interact. o Example: In the library system, test the process of: 1. A Member borrowing a Book. 2. Creating a Loan object. 3. Checking if the Book's availability updates correctly.
c. 
Validation Uses Black-Box Methods o Validation ensures the software works as expected by testing it like a user would (without worrying about the internal code). o Example: Test if the library system prevents a member from borrowing more than 5 books, regardless of how this limit is implemented. 3. Test Case Design Test cases use conventional methods like: Black-box testing: Focuses on inputs and expected outputs. White-box testing: Focuses on the internal logic and paths. a. Special Features in Object-Oriented Testing ▪ Test things unique to object-oriented systems, such as: ▪ Inheritance: Ensure child classes correctly extend or override parent classes. ▪ Polymorphism: Check if objects behave correctly depending on their actual type. ▪ Encapsulation: Ensure private data isn't modified unexpectedly. Example: o Inheritance: If PremiumMember is a child of Member, ensure it adds extra features (like higher borrowing limits) without breaking Member behavior. o Polymorphism: If a Member can be either a RegularMember or a PremiumMember, check that the system correctly identifies the type and applies the right rules. OOT Strategy (Object-Oriented Testing) 1. Encapsulation and Inheritance Make Testing More Complicated. Object-oriented programming (OOP) introduces new challenges compared to conventional programming due to two major features: a. Encapsulation -Encapsulation hides the internal data of a class, meaning the tester can’t directly see or manipulate the internal state of an object. -Instead, you must test it through the class’s public methods (getters and setters, for example). Example: -In a library system, the Book class has a private variable isAvailable (true if the book is available, false otherwise). -You can’t directly change isAvailable. Instead, you must test it indirectly using methods like borrowBook() or returnBook() to see if they update isAvailable correctly. b. 
Inheritance (and Polymorphism) - Inheritance allows classes to extend or override the behavior of parent classes. - Testing becomes harder because the behavior of methods may change based on the specific subclass (polymorphism). - You need to create new test cases for each derived (child) class. Example: The library system has: o A parent class Member with a method borrowLimit(). o A subclass PremiumMember that overrides borrowLimit() to allow borrowing more books. You must test borrowLimit() separately for both Member and PremiumMember to ensure they work correctly in their contexts. Complication with Multiple Inheritance: o If PremiumMember also inherits from another class, say VIPBenefits, you have to check how both parent classes influence PremiumMember.
2. Adapting Conventional Testing Techniques for OO
a. Class Testing (Unit Testing in OO) Instead of testing functions, you test entire classes. Check all methods of the class and ensure the object's state behaves as expected. Example: Test the Book class: 1. Does borrowBook() change isAvailable to false? 2. Does returnBook() change isAvailable back to true? 3. What happens if you call borrowBook() on a book that's already borrowed?
b. Integration Testing in OO Integration testing checks how multiple classes work together in a scenario. OO testing requires three strategies:
1. Thread-Based Testing o Focuses on how a series of classes respond to a single input or event. o Useful for workflows. Example: Test the process of borrowing a book: 1. A Member requests a Book. 2. The Loan class creates a record of the transaction. 3. Check that the book's isAvailable status updates correctly.
2. Use-Based Testing o Tests classes needed for one specific use case (real-world scenario). Example: Use case: A member views their borrowing history. o Test the interaction of Member, Loan, and Book classes to display all borrowed books for that member.
3. Cluster Testing o Tests a group of classes that collaborate for a single feature or behavior. 
Example: Test the "Overdue Notifications" feature: o The Member, Loan, and Notification classes work together. o Check if overdue books trigger notifications correctly. Object-Oriented Test Case Design 1. Unique Identification Each test case should have a unique name or ID to distinguish it from others. This helps you know exactly which class and functionality the test is for. 2. Purpose of the Test Clearly state why the test is being performed. Example: "This test ensures the borrowBook() method updates the isAvailable status correctly." 3. Testing Steps For each test, you outline what will be done. This includes: a. Specified States List the initial conditions or "state" of the object before testing. Example: A Book object starts with isAvailable = true. b. Messages and Operations Specify which methods (operations) or interactions (messages) will be tested. Example: Call the borrowBook() method. c. Exceptions List errors or unusual behaviors you expect and want to test. Example: Test if borrowBook() throws an error when the book is already borrowed. d. External Conditions Mention anything outside the software that must be in place for the test. Example: The Member requesting the book must have an active membership. e. Supplementary Information Add any extra details to help understand or execute the test. Example: Include sample input data, expected output, or notes on the testing tool being used. Object-Oriented Test Methods Different methods help in testing classes and their interactions. 1. Random Testing: Randomly generate test sequences for class methods. o Example: Randomly test combinations of borrowBook() and returnBook(). 2. Partition Testing: o State-Based: Test methods that change the object's state. o Attribute-Based: Test methods using specific attributes. o Category-Based: Group and test methods based on their functions. Example: For a Loan class: o State-Based: Test if startLoan() changes status to "active." 
o Attribute-Based: Test methods that use dueDate.
3. Inter-Class Testing: Test interactions between client and server classes. o Example: When Member interacts with Loan to borrow a book, ensure proper message passing and method calls.
Object-Oriented Metrics Metrics are used to measure and improve the design and the testing effort of OO systems:
Project Metrics: 1. Number of Scenario Scripts: Count the steps for user interactions. 2. Number of Key Classes: Identify critical classes like Book and Loan. 3. Number of Support Classes: Identify helper classes like Database or Notifications.
Project Estimation: E = w × (k + s), where
E is the estimated project effort,
w is the average number of person-days per class (typically 15–20 person-days),
k is the number of key classes (estimated from the analysis phase),
s is the number of support classes (estimated as m × k, where m is a multiplier taken from the following table):
Interface Type              m
No GUI                      2.0
Text-based user interface   2.25
GUI                         2.5
Complex GUI                 3.0
For example, with k = 10 key classes and a GUI (m = 2.5), s = 2.5 × 10 = 25 support classes, so with w = 18 the estimate is E = 18 × (10 + 25) = 630 person-days.
Design Metrics 1. Weighted Methods per Class (WMC): Measures the complexity of a class. Example: If a Book class has many methods, its WMC is high, indicating complexity. 2. Depth of Inheritance Tree (DIT): Measures the number of inheritance levels. Example: If EBook extends Book, and AudioBook extends EBook, the DIT is 2. 3. Coupling Between Object Classes (CBO): Measures class dependencies. Example: If Loan depends on Book and Member, its CBO is 2.
Chapter 5 – Performance Testing
Performance Testing Performance testing checks how well a software application works in different conditions. It measures how fast, stable, and reliable the app is, and whether it can handle many users or tasks at the same time without problems. The goal is to ensure that the app performs well even when under heavy use, like during peak times when lots of people are using it. 
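At its simplest, one of these measurements, response time, can be collected by timing an operation repeatedly. Below is a minimal sketch; the `operation` function is a hypothetical stand-in for real application work, and real tools repeat this across many simulated users:

```python
import statistics
import time

# Hypothetical operation being measured; sleeps briefly to simulate work.
def operation():
    time.sleep(0.001)

# Collect 50 response-time samples for the operation.
samples = []
for _ in range(50):
    start = time.perf_counter()
    operation()
    samples.append(time.perf_counter() - start)

avg = statistics.mean(samples)   # typical response time
worst = max(samples)             # worst observed response time
```

A performance test then compares numbers like `avg` and `worst` against the targets set for the application.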
In performance testing, we don’t only measure the response time of the application but also several other quality attributes like – stability, reliability, robustness, scalability, resource utilization, etc. Why Performance Testing? Performance testing is important because it ensures an app can handle different situations smoothly. 1. Checking Reliability What it means: Ensuring that the app consistently works correctly, even when used a lot. Example: Imagine using a weather app to check the temperature every day. If the app shows accurate temperatures each time, it’s reliable. Performance testing helps make sure the app can keep providing the correct data, whether one person or thousands are using it. 2. Identifying Performance Bottlenecks What it means: Finding parts of the app that slow things down. Example: Think of a restaurant with many customers but only one chef. Orders would get delayed because the chef is a bottleneck. Similarly, performance tests can show which parts of an app (like a slow database) need improvement to make everything run faster. 3. Evaluating Scalability: What it means: Checking if the app can handle more users or tasks as needed. Example: Let’s say a social media app works great for 100 users. But what if 10,000 people start using it? Performance testing helps see if the app can "scale up" and still perform well, or if it needs a better server or more memory to handle the extra users. 4. Checking Robustness What it means: Seeing how well the app handles really tough situations, like more users than expected. Example: Imagine a theme park that can comfortably hold 10,000 visitors, but on a holiday, 20,000 people show up. The park would get crowded, and some rides might shut down. In the same way, "stress testing" checks if an app can handle heavy loads (like lots of users) without crashing. Performance testing helps find and fix potential problems before real users experience them. 
This way, the app stays fast and reliable, and it can grow smoothly as more people start using it. It's like tuning a car's engine to make sure it runs well, even on steep hills or at high speeds!
Types of Performance Testing
Load Testing
Load testing is a way to check how well an app performs when many people are using it at the same time. It simulates real-world scenarios to see if the app can handle the expected number of users without slowing down, crashing, or showing errors. In load testing, we simulate different situations where multiple users are accessing the app and measure metrics such as response time, throughput, and error rate.
How is Load Testing Done? Load testing tools are used to imitate real users by sending requests to the app's server. You can set up:
Number of Virtual Users: how many simulated users will access the app.
Duration: how long the test runs.
Performance Metrics: the things you want to measure, such as speed or error rates.
Graphs & Reports: these show how the app performed during the test.
Examples
E-commerce Website During Sales: Imagine a shopping website that usually has 500 users at a time. During a big sale, there might be 5,000 users. Load testing helps check if the website can handle this extra traffic without slowing down or crashing.
Online Game Launch: Before launching a new online game, a company would run a load test to see if the game servers can support thousands of players logging in simultaneously.
Advantages
Identifies Bottlenecks: It helps find areas that slow down the app (like a slow database), so they can be fixed before the app goes live.
Optimizes Infrastructure: It ensures that the app runs smoothly without needing extra servers, which saves costs.
Reduces Risk of Downtime: By identifying weak points, it lowers the chances of the app crashing when it gets busy.
Builds Confidence: If the app performs well under load testing, it gives developers and users confidence that it will handle real-world use. 
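The core mechanics of a load test, many virtual users issuing requests concurrently while responses and timings are recorded, can be sketched with nothing more than threads. The `handle_request` function here is a hypothetical server, not a real one:

```python
import threading
import time

# Hypothetical server function: sleeps briefly to simulate handling a request.
def handle_request():
    time.sleep(0.01)
    return "ok"

results = []
lock = threading.Lock()

# Each virtual user issues one request and records (response, elapsed time).
def virtual_user():
    start = time.perf_counter()
    response = handle_request()
    elapsed = time.perf_counter() - start
    with lock:
        results.append((response, elapsed))

# Simulate 20 virtual users hitting the server at the same time.
threads = [threading.Thread(target=virtual_user) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Summarize: how often did requests fail under this load?
error_rate = sum(1 for r, _ in results if r != "ok") / len(results)
```

Dedicated tools such as JMeter do exactly this at much larger scale, with ramp-up schedules, think time, and reporting built in.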
Stress Testing Stress testing is a way to check how an app performs when it's pushed beyond its normal limits. It’s like seeing how much weight a chair can hold before it breaks. The goal is to find out when the app starts to have problems, such as slowing down, crashing, or showing errors, and see how it behaves in extreme conditions. What happens in Stress Testing? In stress testing, we simulate situations where the number of users or the amount of data is much higher than expected: Breaking Point: At what point does the app start failing or behaving weirdly? Error Rate: How many errors occur when the app is under heavy stress? Crashes and Recovery: If the app crashes, how quickly does it recover? Memory Issues: Are there memory problems where the app uses up too much memory or doesn’t release it when done? Data Integrity: Does the app keep data safe and correct even during high stress? Tools like Apache JMeter can simulate many users using the app at the same time. If an app is designed for 100 users, stress testing might add 120 or more users to see if the app can still perform well. The testing tool imitates real users performing various tasks to check how the app handles the extra load. Example: Online Ticket Sales: For a popular concert, stress testing ensures the website won’t crash when thousands of fans try to buy tickets at the same time, even though it’s designed to handle fewer users. Advantages Measures Robustness: It shows how well the app can handle extreme situations without breaking down. Improves Recovery Handling: It tests how quickly the app can bounce back after a crash. Finds Security Risks: Heavy loads might reveal security issues that aren’t obvious under normal conditions. Detects Memory Leaks: It can identify if the app keeps using memory that should have been released, which could slow down the system over time. Ensures Data Safety: Even during high stress, the app should maintain data accuracy and prevent data corruption. 
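The "breaking point" idea at the heart of stress testing can be sketched as a loop that keeps raising the simulated load until the system fails. The `serve` function and its capacity of 100 users are made up for illustration:

```python
# Hypothetical server with a hard capacity: it fails outright when the
# number of concurrent users exceeds CAPACITY.
CAPACITY = 100

def serve(concurrent_users):
    if concurrent_users > CAPACITY:
        raise RuntimeError("server overloaded")
    return "ok"

# Ramp the load up in steps of 10 until the first failure is observed.
breaking_point = None
load = 10
while breaking_point is None:
    try:
        serve(load)
    except RuntimeError:
        breaking_point = load   # first load level at which the app fails
    load += 10

print(breaking_point)  # -> 110
```

A real stress test records not just the breaking point but also how the system behaves there: error messages, data integrity, and how quickly it recovers once the load drops.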
Endurance Testing
Endurance testing, also known as soak testing, checks how an app performs over a long period of time under continuous use. It's like leaving a car engine running for several days to see if any problems appear that wouldn't show up in a short test drive. During endurance testing, the app is kept running for an extended period, such as 2-3 days, with a steady load, to observe behavior under continuous load and to expose memory leaks and other long-term performance issues.
Example: Social Media App: Imagine an app like Instagram being used continuously for a few days, with users posting photos, watching videos, and sending messages. Endurance testing checks if the app stays fast and stable over a long time.
Advantages
Finds Long-Term Issues: It helps uncover problems that don't show up in short tests, such as gradual slowdowns or crashes after a long time.
Detects Memory Leaks: If the app doesn't release memory properly, it could start using too much memory over time, leading to performance problems.
Builds Confidence: It reassures developers and clients that the app can run smoothly for long periods without needing maintenance.
Improves Customer Satisfaction: Users are less likely to experience performance issues if these problems are found and fixed before launch.
Reduces Maintenance Costs: Catching and fixing long-term issues early prevents the need for expensive repairs later on.
Spike Testing
Spike testing is a type of performance testing where an app is suddenly hit with a large number of users all at once to see how it handles the sudden increase in load. It checks if the app can recover quickly and keep working normally after the spike in users.
Example: Breaking News on a News Website: When big news breaks, many people might visit a news website at once. Spike testing helps ensure the site can handle this surge in traffic without crashing. 
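The classic long-running defect that endurance (soak) testing targets, a memory leak, can be sketched as a cache that grows on every request and is never cleared. The cache and request loop below are hypothetical:

```python
# Hypothetical leaky server state: every request adds a cache entry,
# and nothing ever evicts old entries.
cache = {}

def handle_request(request_id):
    cache[request_id] = "response data"  # leak: entries are never removed
    return "ok"

# Pretend each outer loop iteration is an hour of steady traffic.
sizes = []
for hour in range(3):
    for i in range(1000):
        handle_request((hour, i))
    sizes.append(len(cache))

# Under continuous load the cache grows without bound: this is the kind of
# gradual degradation a short test would never surface.
assert sizes == [1000, 2000, 3000]
```

In a short functional test this server looks perfectly healthy; only sustained load over time reveals that memory usage keeps climbing.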
Volume Testing Volume testing is a type of performance testing where an app is tested by feeding it a large amount of data to see how well it handles it. The goal is to find out if the app can process, store, and manage large data volumes without problems like slowdowns, crashes, or errors. Example: Banking App: A banking app might be tested by loading it with millions of transaction records to see if it can still display account history quickly. How Is Volume Testing Done? Determine the Amount of Data: Decide how much data to test with based on future growth predictions. Understand the Database: Know the type of database being used and how it handles large amounts of data. Simulate Real-World Data: Create test data that mimics what users would actually do with the app. Set Up the Testing Environment: Prepare the hardware and software configurations that will be used in the test. Choose the Right Tools: Decide on testing tools that can automate the process of adding large amounts of data and checking the app's behavior. Performance Testing Process The performance testing process helps check if an app or website runs smoothly under different conditions, such as many users using it at the same time. It involves several steps, from planning what to test to analyzing the results. 1. Performance Test Scenarios First, we decide which parts of the app to test. We don't test everything—just the areas that get a lot of traffic or are important for many users. For example, on an e-commerce website, we might choose to test: Logging in: Checking how long it takes for users to log in. Browsing products: Testing how well the site handles users scrolling through product pages. Adding items to the cart: Seeing if the "Add to Cart" button works well when many people use it. The test scenarios include "Think Time," which is the time users take between actions (like filling out a form or reading a page). 
This simulates real user behavior since people don't click buttons instantly—they take a few seconds to read or type. 2. User Distribution Once the scenarios are selected, we decide how many users will perform each scenario. For example, if testing an email app: Reading an unread email: 50% of users could be doing this. Composing an email: 30% of users might be sending emails. Deleting an email: 18% of users could be deleting emails. Registering a new account: 2% of users might be new users signing up. This helps make the test more realistic by mimicking what users actually do. 3. Scripting The scenarios are then scripted using testing tools like JMeter, LoadRunner, or Silk Performer. These tools simulate what real users do by running the scripts and mimicking user actions. For instance, the script might log in as a user, browse the homepage, and log out. 4. Dry Run Before starting the actual test, a "dry run" is done with just 1 or 2 users to ensure everything works properly. This helps spot any issues in the scripts or app before running the full test. 5. Running the Test and Analyzing the Results After the dry run, the real test begins with the planned number of users. The test can last for a set time (e.g., 30 minutes) or a certain number of actions per user (e.g., each user logs in and out 10 times). When the test is done, the results are analyzed. This might include checking: Response Time: How quickly the app responds to user actions. Error Rate: How often the app fails to do what it’s supposed to. Resource Usage: How much memory, CPU, or other resources are used. If the testing tool doesn’t give clear results, extra plugins or tools may be used to create graphs or detailed reports. Performance Testing Tools Examples 1. Apache JMeter Type: Free, open-source tool. Good for: Testing websites, APIs, and web services. Pros: It's free, and widely used in different companies. 
Cons: It has a bit of a learning curve, so it may take time to learn how to use it effectively. Example: JMeter can be used to simulate thousands of users browsing a website to measure response times and detect errors. 2. LoadRunner Type: Paid tool with a free community version (up to 50 users). Good for: Comprehensive performance testing for large projects. Pros: Provides detailed analysis and supports various protocols. Cons: Can be expensive if you need to test with many users. Example: A banking app can be tested with LoadRunner to simulate users transferring money, paying bills, and checking balances. 3. WebLOAD Type: Paid tool with a free edition (up to 50 users). Good for: Testing large enterprise applications. Pros: Allows integration with various development tools for better analysis. Cons: Paid version is needed for larger tests. Example: Testing a news website to see how it handles a spike in traffic during a major event. 4. LoadNinja Type: Paid tool. Good for: Quick testing without complex scripting. Pros: Works with real browsers on the cloud, making it easy and fast to set up. Cons: Requires a subscription. Example: LoadNinja could be used to simulate real users booking tickets on an airline's website. 5. Locust Type: Free, open-source tool that uses Python for scripting. Good for: Distributed testing across multiple machines. Pros: Supports large-scale distributed testing. Cons: Requires knowledge of Python for scripting. Example: Locust could be used to test an online multiplayer game by simulating many players logging in and playing at the same time. 6. NeoLoad Type: Paid tool with drag-and-drop features. Good for: Reducing scripting time and complexity. Pros: Supports integration with test automation tools like Selenium. Cons: Subscription-based pricing. Example: NeoLoad can be used to test an e-commerce website where users search for products, add them to the cart, and make purchases. 
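The five-step process described earlier (scenarios, user distribution, scripting, dry run, analysis) can be sketched tool-agnostically in plain Python. Everything here is hypothetical: the scenario names and percentages are the email-app figures from step 2, and `simulate_request` stands in for the real HTTP call a tool like JMeter or Locust would make.

```python
import random

# Hypothetical scenario weights from the email-app example (step 2).
SCENARIOS = {
    "read_unread_email": 50,
    "compose_email": 30,
    "delete_email": 18,
    "register_account": 2,
}

def simulate_request(scenario, rng):
    """Stand-in for a real user action: returns (response_time_ms, ok)."""
    response_time = rng.uniform(50, 300)   # pretend latency in milliseconds
    ok = rng.random() > 0.02               # pretend a 2% error rate
    return response_time, ok

def run_test(num_users, seed=42):
    """Assign each simulated user a scenario by weight, 'run' it,
    and report the two headline results: response time and error rate."""
    rng = random.Random(seed)
    names = list(SCENARIOS)
    weights = [SCENARIOS[n] for n in names]
    times, errors = [], 0
    for _ in range(num_users):
        scenario = rng.choices(names, weights=weights)[0]  # user distribution
        think_time = rng.uniform(1, 5)  # think time a real tool would sleep
        elapsed, ok = simulate_request(scenario, rng)
        times.append(elapsed)
        errors += 0 if ok else 1
    return {
        "avg_response_ms": sum(times) / len(times),
        "error_rate": errors / num_users,
    }
```

A real tool would actually sleep for the think time and hit a live server; here both are only simulated so the flow of the process is visible in one place.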
Chapter 6 - Software Maintenance and Evolution Software Maintenance “The modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a changed environment” Software Maintenance is like fixing or improving an app after it’s already been released. Imagine you have a mobile app that people use every day. Once it's out, you might need to make changes to keep it working well. This could mean fixing bugs (problems users report), adding new features (like making it faster or adding a dark mode), or updating it to work with new phone software versions. Software Evolution Software Evolution is the idea that software, just like living things, needs to change and grow over time to stay useful. Instead of just fixing problems (like in maintenance), software evolution focuses on continuously improving and adapting the software so that it doesn’t become outdated. Lehman’s Laws of Software Evolution Explained Simply: 1. Continuing Change: What it means: If you don’t keep updating and adapting your software, it will become less useful over time. Example: Imagine a social media app that hasn’t been updated in years. It doesn’t have the latest features, so people stop using it in favor of newer apps. 2. Increasing Complexity: What it means: As you add more features and updates, the software gets more complex. If you don’t actively simplify it, it can become hard to manage. Example: Adding new features to a shopping app without organizing the code can make it slow and buggy. To keep it efficient, you need to periodically clean up the code. 3. Conservation of Familiarity: What it means: Over the software’s life, each update tends to bring about a similar amount of change. This helps users stay familiar with the software. Example: Your favorite chat app doesn’t change its look completely in each update but adds small improvements, like better emojis or faster loading, so it still feels familiar to users. 4. 
Continuing Growth: What it means: Users expect the software to keep getting better and more capable over time. If it doesn’t grow, users may lose interest. Example: A photo-editing app that only has basic filters needs to add more advanced tools (like AI background removal) to keep users happy. 5. Declining Quality: What it means: If you don’t adapt your software to new devices or changes in technology, it will start to feel old and low-quality. Example: An app that was great on older phones may crash on new ones if it’s not updated for the latest operating systems. 6. Feedback System: What it means: Evolving software is a complex process involving many people (developers, users, managers). Feedback loops help decide what changes to make next. Example: An app company collects feedback from users, tests different updates internally, and releases new versions based on that feedback to improve user satisfaction.
Program Comprehension
Program Comprehension is all about understanding how software engineers read and understand code, especially when dealing with large and complex software systems. The goal is to create tools and methods that help developers quickly grasp how a program works, so they can fix issues, add features, or improve it more easily.
Reverse Engineering
Reverse engineering is like taking something apart to see how it works so you can understand or rebuild it. In software, it involves analyzing a system to figure out its parts and how they interact, often to create a higher-level understanding or documentation. Example: Imagine you have a game but lost the original code. You can reverse engineer the game to figure out how it was built so you can recreate or improve it.
Reverse Engineering vs. Re-Engineering vs. Restructuring
- Reverse Engineering: understanding existing software without changing it, often to recover lost information. Includes Re-documentation (same level of abstraction) and Design Recovery (higher levels of abstraction).
- Re-Engineering: making actual changes to improve the software. Real changes are made to the code, usually done as a round trip: design recovery … design improvement … re-implementation.
- Restructuring: cleaning up the code without changing what it does. Includes Refactoring (no changes to functionality) and Revamping (only the user interface is changed).
Example: You might reverse engineer a website to understand its design, re-engineer it to improve the code, and restructure it to make the code cleaner without changing its look.
Key to Maintenance is in Development
Higher quality => less (corrective) maintenance
Anticipating changes => less (adaptive and perfective) maintenance
Better tuning to user needs => less (perfective) maintenance
Less code => less maintenance
Software Maintenance Activities
The process of maintaining software involves several steps:
Step 1. Understand the Existing System: Study the code and any documentation that exists about the system to be modified. Use tools to recover the high-level design models of the system.
Step 2. Define Maintenance Objectives: Set goals for what needs to change.
Step 3. Design, Implement, and Test: Make changes and test to ensure nothing breaks. Revalidate: run regression tests to make sure that the unchanged code still works and is not adversely affected by the new changes.
Step 4. Train and Release: Inform users of updates and release a new version.
Program Evolution
- S-type Programs (“Specifiable”): Simple programs with clear rules that don’t evolve. Explanation: These are simple programs with fixed rules and clear specifications. Once they are created, they do not need to change because the problem they solve is well-defined. Example: A calculator app that performs basic arithmetic (addition, subtraction, multiplication, division). It’s based on simple, unchanging mathematical rules, so once it's built, there’s no need to update it unless there’s a bug.
- P-type Programs (“Problem-solving”): Programs that solve real-world problems and need regular updates.
Explanation: These programs are built to solve real-world problems where the requirements might not be completely clear at first. They often need updates and improvements as the understanding of the problem improves or as new needs arise. Example: A navigation app (like Google Maps). It needs regular updates to add new roads, adjust routes, and improve traffic predictions based on real-world changes and user feedback. - E-type Programs (“Embedded”): Systems that are deeply integrated into their environment and must change as the world changes. They are constantly evolving because both the software and its environment influence each other. Example: A weather forecasting system. It needs constant updates to stay accurate as weather patterns change, technology improves, and user expectations grow. Software Maintenance Types Corrective Maintenance: Fixing errors. Adaptive Maintenance: Update software to work with new hardware or systems. Perfective Maintenance: Enhancing the software to make it better. Preventive Maintenance: Preventing issues before they happen. Example: Adding a new feature to a messaging app is perfective maintenance, while fixing a crash is corrective. Program Comprehension Strategies - Bottom-Up Model: Understanding the code by starting with details and building up to a bigger picture. - Top-Down Model: Starting with a general idea and then drilling down into the details. - Integrated Model: Switching between both approaches as needed. Example: A developer trying to understand an unfamiliar piece of software might first look at the code line by line (bottom-up) or start with the overall structure (top-down). Two Main Levels of Reverse Engineering 1. Binary Reverse Engineering Explanation: This involves analyzing the binary executable (the program you can run on your computer) to recover the original source code or understand how the program works. This is usually needed when the original source code is lost or not available. 
Example: o Recovering lost code: Imagine a company that has an old software program but lost the original source code. By using binary reverse engineering, they can extract the code from the program and make updates or improvements. 2. Source Code Reverse Engineering Explanation: In this case, you already have access to the source code, but you want to understand the higher-level design and structure of the software. This helps developers understand the system better so they can maintain or improve it. Example: o Understanding complex systems: If a company takes over a project written by another team, they might use reverse engineering to create diagrams that explain how different parts of the software work together. This makes it easier for new developers to understand and work on the project. Reverse Engineering Objectives Reverse engineering helps in: Understanding Complex Systems: Breaking down large systems to understand them better. Generate alternative views: Reverse engineering tools should provide different views of the systems so as to enable the designers to analyze the system from different angles. Recovering Lost Information: Retrieving details that were not documented. Detecting Side Effects: Finding hidden problems before they cause failures. Facilitating Reuse: Identifying parts of the software that can be reused in other projects. Example: If an old program’s documentation is missing, reverse engineering can help recreate it so developers can continue making updates Source Code Reverse Engineering Techniques Program Slicing: Analyzing the code to see how it affects specific variables or functions. Design Recovery: Understanding how a program is structured to recreate its design. Architecture Recovery: Recovering the high-level structure of a system. Example: If you want to optimize a slow feature in an app, you might use program slicing to understand which parts of the code affect its performance. 
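Program slicing, as described above, can be illustrated with a deliberately naive backward slice over straight-line code. This is a sketch under strong simplifying assumptions (no branches, loops, or aliasing); each statement is modeled as the variable it assigns plus the set of variables it reads.

```python
def backward_slice(statements, target_var):
    """Naive backward slice: keep only the assignments that can
    affect `target_var`, scanning from the last statement backwards.
    Each statement is a pair (assigned_var, set_of_vars_read)."""
    relevant = {target_var}
    kept = []
    for lhs, reads in reversed(statements):
        if lhs in relevant:
            kept.append((lhs, reads))
            relevant |= reads  # whatever this statement reads now matters too
    return list(reversed(kept))

# A toy straight-line program:
#   a = 5; b = a + 1; c = 10; d = b * 2
program = [("a", set()), ("b", {"a"}), ("c", set()), ("d", {"b"})]
```

Slicing `program` on `"d"` keeps `a`, `b`, and `d` and drops the irrelevant `c`; this narrowing is exactly what helps when you want to focus on the code that affects one slow feature.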
Tools for Code Maintenance - Reformatters / Documentation Generators: These tools help make code easier to read by automatically adding comments or organizing the code. - Improve Code Browsing: Tools that let you visualize how parts of the code depend on each other, making it easier to navigate complex codebases. - Simple Code Transformation: Tools that help refactor code (like renaming variables) or detect duplicated code to prevent bugs. - Design Recovery: build a basic class diagram. Use program traces to build sequence diagrams. Example: Using a reformatter to automatically tidy up code before sharing it with a team makes it more understandable. Dependency Graph A dependency graph is a visual representation showing how different parts of a system rely on each other. It uses arrows to show which parts depend on others. Example: In a web application, a dependency graph can help you see which modules rely on the database and which are independent. Clone detector “Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning. Software clones may lead to bug propagation and serious maintenance problems” - Code Cloning: When developers copy-paste code from one place to another, creating 'clones' of the original code. - Clone Detectors: Tools that find these clones to help reduce bugs and simplify maintenance. Example: If the same code exists in multiple places, a bug in one place might also exist in the other. A clone detector helps find and fix this. Package Decomposition Package decomposition involves breaking down a system into smaller, manageable pieces (packages). This helps in organizing the code better and understanding the system's structure. Example: Splitting a large app into separate modules for user management, payments, and notifications to make it easier to maintain. 
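A clone detector of the kind quoted above can be approximated in a few lines: normalize the source lines, index every sliding window of lines, and report windows that occur in more than one location. Real clone detectors work on tokens or syntax trees and tolerate renamed variables; this is only a sketch of the idea.

```python
from collections import defaultdict

def find_clones(files, window=3):
    """Toy clone detector: index every `window`-line chunk of
    normalized (stripped, non-blank) source across all files and
    report chunks that appear in more than one location."""
    index = defaultdict(list)
    for name, source in files.items():
        lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            chunk = "\n".join(lines[i:i + window])
            index[chunk].append((name, i))
    # Keep only chunks found at two or more locations: the clones.
    return {chunk: locs for chunk, locs in index.items() if len(locs) > 1}
```

A chunk copy-pasted into two files shows up as one entry with two locations, flagging exactly the spot where a bug fix would otherwise have to be applied twice.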
Finding Dependencies Finding dependencies means identifying how different parts of a system rely on each other. This helps in understanding which parts need to be updated together. Example: Before updating a database module, checking its dependencies ensures that other parts of the app won't break. Class Abstraction Class abstraction is the process of identifying the essential features of a class while hiding unnecessary details. This helps simplify complex systems by focusing on high-level design. Example: In an e-commerce system, you might abstract a 'User' class to include only relevant attributes like username, email, and order history, ignoring internal details. Chapter 7 – Software Quality Assurance What is Software Quality Assurance (SQA)? SQA involves activities to ensure software meets its requirements and works as expected. It is applied throughout the software process. Components of SQA Quality Management Engineering tools and methods Reviews and testing strategies Documentation control Compliance with standards Measurement and reporting mechanisms Example: Imagine you’re baking a cake. SQA ensures you follow the recipe, measure ingredients accurately, check the taste midway, and package the cake properly for delivery. Quality Control (QC) Uses inspections, reviews, and tests to ensure the software meets requirements. Detects problems (e.g., testing a product for defects). Example: QC is like tasting a dish to see if it's good; SQA ensures you followed the recipe to prevent mistakes. Measures, Metrics, and Indicators Measure: A single quantitative value (e.g., the number of errors in one program module). Metric: A derived value (e.g., average errors per module). Indicator: Insights from metrics (e.g., identifying which phase produces most defects). Example: In exams, a measure could be marks for one question, a metric is the total marks, and an indicator tells you which subject needs improvement. 
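The measure/metric/indicator distinction above can be made concrete with a tiny sketch: the raw error counts per module are the measures, the computed average is a metric, and the worst module is an indicator pointing at where to act. The function name and data here are hypothetical.

```python
def error_metrics(errors_per_module):
    """Turn measures into a metric and an indicator.
    measures  : raw error counts per module (the input values)
    metric    : average errors per module (a derived value)
    indicator : the module with the most errors (where to act)"""
    total = sum(errors_per_module.values())
    return {
        "total_errors": total,
        "avg_per_module": total / len(errors_per_module),
        "worst_module": max(errors_per_module, key=errors_per_module.get),
    }
```

For example, `error_metrics({"auth": 3, "payments": 9, "ui": 0})` reports an average of 4.0 errors per module and singles out `payments` as the module that needs attention first.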
Quality Assurance (QA) Ensures problems don’t occur in the first place by improving processes. Example: QA is like creating a checklist for baking steps. QC is tasting the cake to ensure it’s good. Auditing and Reporting: o QA aims to provide management with information about product quality. o Quality = Conformance to requirements, standards, and expectations. Example: QA is like checking if the blueprint of a house matches the final construction. Teams Involved in SQA Two Key Groups: 1. Software Engineering Team: Conduct reviews, use metrics, and test software. 2. SQA Group: Independent auditors ensuring compliance with quality standards. Example: Engineers build software, while auditors verify its quality. Software Reviews Purpose: Detect errors only. Types of Reviews: o Informal Discussions: ▪ What is it? A casual review where people talk about a product, document, or code without following strict rules or procedures. ▪ How it works: Team members (like developers or designers) get together, look at the work, and share feedback. There's no official meeting or record-keeping. ▪ Example: A developer asks a colleague to quickly check their code before submitting it. ▪ When to use: When you need quick feedback or input and don’t want to spend too much time on formalities. o Formal Technical Reviews (FTRs): ▪ What is it? A structured, official review process where a team examines work (like software code or documents) in detail, following a predefined procedure. ▪ How it works: A formal meeting is held, roles are assigned (like moderator, reviewer, etc.), and the review focuses on finding defects or improvements. Everything is documented, and a report is created. ▪ Example: A team gathers to formally review a software requirements document, identifying missing or unclear parts. ▪ When to use: When you need thorough and documented feedback, especially for critical or complex work. FTR Objectives: o Find errors in logic and implementation. 
o Ensure requirements are met. o Develop uniformly structured software. Guidelines for FTRs Participants: 3-5 people (e.g., producer, leader, reviewers). Review Meeting Steps: 1. Prepare in advance. 2. Keep meetings short (
