Software Testing - Notes PDF
Summary
These notes cover the principles, types, and responsibilities involved in software testing, the importance of testing in the software development lifecycle, and the various aspects of the testing process.
Software testing is an important process in the software development lifecycle. It involves verifying and validating that a software application is free of bugs, meets the technical requirements set by its design and development, and satisfies user requirements efficiently and effectively. This process ensures that the application can handle exceptional and boundary cases, providing a robust and reliable user experience. By systematically identifying and fixing issues, software testing helps deliver high-quality software that performs as expected in various scenarios.

Principles of Software Testing

1. Testing Shows the Presence of Defects: The goal of software testing is to make the software fail. Testing can show that defects are present, but it can never prove that the software is defect-free. Even multiple rounds of testing cannot ensure that software is 100% bug-free; testing reduces the number of defects but cannot remove all of them.

2. Exhaustive Testing Is Not Possible: Exhaustive testing means testing the functionality of the software with all possible inputs (valid or invalid) and preconditions. This is impossible in practice: the software can never be tested with every input, so testers select a subset of test cases and assume the software will behave correctly for the rest. Attempting to test every case would take impractical amounts of time, cost, and effort.

3. Early Testing: Test activities should start as early as possible, because defects detected in the early phases of the SDLC are far less expensive to fix. Ideally, testing begins at the requirement analysis phase.

4. Defect Clustering: In a project, a small number of modules usually contain most of the defects. The Pareto Principle applied to software testing states that roughly 80% of software defects come from 20% of the modules.

5. Pesticide Paradox: Repeating the same test cases again and again will not find new bugs. Test cases must be reviewed, updated, and extended regularly to keep finding new defects.

6. Testing Is Context-Dependent: The testing approach depends on the context of the software being developed; different types of software need different types of testing. For example, testing an e-commerce site differs from testing an Android application.

7. Absence of Errors Fallacy: Software that is 99% bug-free is still unusable if it does not meet user requirements. It is not enough for software to be nearly bug-free; it must also fulfill all of the customer's requirements.
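The first two principles can be made concrete with a small sketch. The following pytest example (the `is_valid_age` validator is hypothetical) samples only a handful of representative inputs out of the entire input space; the one failing case demonstrates the presence of a defect, while the passing cases prove nothing about the absence of others:

```python
import pytest

def is_valid_age(age):
    # Hypothetical unit under test: should accept ages 18 to 60 inclusive.
    # Deliberate defect: the upper bound uses < instead of <=, so 60 is rejected.
    return 18 <= age < 60

# Exhaustive testing is impossible, so we sample representative values.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the valid range
    (18, True),   # lower boundary
    (40, True),   # typical valid value
    (60, True),   # upper boundary -- this case FAILS, exposing the defect
    (61, False),  # just above the valid range
])
def test_is_valid_age(age, expected):
    assert is_valid_age(age) == expected
```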
Testing as an Engineering Activity

Testing as an engineering activity is the systematic process of evaluating a system or component to verify that it functions as intended. It is a crucial phase in the development lifecycle that ensures quality, reliability, and performance. Key aspects of testing as an engineering activity include:

1. Verification and Validation: Testing checks whether the product meets design specifications (verification) and fulfills user needs (validation).
2. Detection of Defects: Engineers use testing to identify and fix bugs, errors, or deviations from expected behavior.
3. Automated and Manual Testing: Depending on the project, testing can be automated (using scripts or tools) or manual (carried out by human testers).
4. Test Design and Execution: Test cases are designed from requirements and then executed to observe system behavior.
5. Types of Testing: Unit, integration, system, performance, and acceptance testing, among others, cover different aspects of the software's functionality.

The ultimate goal of testing is to ensure that the system or product is reliable, functional, and meets specified quality standards before being released to users.
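As a small illustration of aspects 3 and 4, here is a hedged sketch of an automated test using Python's built-in unittest runner (the `add` function is a hypothetical unit under test):

```python
import unittest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    # Test design: each method encodes an input and an expected result.
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # test execution: the runner reports pass/fail automatically
```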
The Role of a Tester

In a software development organization, a tester plays a critical role in ensuring that the software product meets the desired quality standards before it is delivered to users. The tester's primary focus is to identify defects, ensure that the software works as expected, and improve the overall quality of the application.

Key Responsibilities of a Tester:

1. Understanding Requirements:
- The tester analyzes and reviews project requirements, user stories, or functional specifications to understand what needs to be tested.
- They collaborate with developers, business analysts, and stakeholders to clarify any ambiguities in requirements.

2. Test Planning and Strategy:
- Based on the requirements, the tester develops a comprehensive test plan that outlines the scope, objectives, resources, schedule, and deliverables for the testing phase.
- The strategy includes deciding the types of tests to run, the tools to be used, and the environments for testing.

3. Designing Test Cases:
- Testers create detailed test cases that define the inputs, execution steps, and expected results.
- They design both positive and negative test scenarios to cover various aspects of functionality, performance, security, and more.

4. Test Execution:
- Testers execute the test cases in different environments, recording the results and identifying any deviations or bugs.
- They perform various levels of testing, such as unit, integration, system, regression, and acceptance testing.
- They may also run automated tests using tools when required.

5. Defect Reporting and Tracking:
- When issues are found, testers log defects with detailed steps to reproduce, severity, and priority, using bug-tracking tools.
- They work closely with developers to help identify the root cause of defects and assist in reproducing issues.

6. Regression Testing:
- After bugs are fixed, testers perform regression testing to ensure that the fixes haven't introduced new issues in other parts of the application.
- This ensures the overall stability of the software after changes.

7. Collaboration with the Development Team:
- Testers collaborate closely with developers during the development phase, providing early feedback, reviewing unit tests, and validating fixes.
- They act as the quality gatekeepers before the product is released.

8. Test Automation:
- Depending on the project, testers may design and implement automated tests to speed up regression testing and improve test coverage.
- Automation helps with continuous integration and delivery (CI/CD) pipelines.

9. Performance and Security Testing:
- In some projects, testers are responsible for running performance and load tests to ensure the software can handle the required load.
- They may also conduct security testing to identify potential vulnerabilities.

10. User Acceptance Testing (UAT):
- Testers often support or lead user acceptance testing, assisting end users in verifying that the software meets their needs in real-world scenarios.

Importance of a Tester's Role:

- Improving Quality: Testers are the final checkpoint for ensuring that a product is reliable, efficient, and user-friendly.
- Risk Reduction: They help identify potential issues early, reducing the risk of failures or defects in production.
- Customer Satisfaction: By catching bugs early and ensuring the product works as intended, testers help deliver a high-quality product that meets customer expectations.

Testers play a vital role in balancing quality with speed and cost in the software development process, helping to create a better product.

Defect Classes in Software Testing

In software testing, defect classes categorize the types of issues or bugs found during testing based on their nature, source, and impact on the software system. Classifying defects helps testers, developers, and project teams understand the root cause of issues, prioritize fixes, and improve the quality of the software.

Common Defect Classes in Software Testing:

1. Functional Defects:
- Definition: These occur when the software does not perform according to the specified functional requirements.
- Example: A login function that fails to authenticate a valid user.
- Impact: Affects the core functionality of the system, leading to failures in delivering expected outputs or behavior.

2. Performance Defects:
- Definition: These relate to issues with the speed, responsiveness, and stability of the application under certain conditions.
- Example: The software slows down or crashes when multiple users log in at the same time.
- Impact: Affects user experience, especially under heavy usage, making the system inefficient or unusable.

3. Usability Defects:
- Definition: These defects affect the ease of use, accessibility, or overall user experience of the software.
- Example: Confusing navigation menus or poorly labeled buttons that make the software difficult to use.
- Impact: Decreases user satisfaction, leading to frustration or inability to use certain features effectively.

4. Security Defects:
- Definition: These occur when the software is vulnerable to attacks, unauthorized access, or data breaches.
- Example: A flaw that allows an attacker to bypass authentication and access sensitive user data.
- Impact: Poses serious risks such as data loss, privacy violations, and reputational damage.

5. Compatibility Defects:
- Definition: These are issues that arise when the software does not work properly across different environments, such as operating systems, browsers, or devices.
- Example: A web application that works on Chrome but fails to render correctly on Firefox.
- Impact: Limits the software's usability on various platforms and devices, reducing its reach.

6. Interface Defects:
- Definition: These involve problems with how different system components or external systems (APIs, databases, etc.) interact with each other.
- Example: An API call that returns incorrect data or fails to connect to a third-party service.
- Impact: Prevents the software from functioning correctly due to communication failures between components.

7. Data Defects:
- Definition: These occur when the software processes, stores, or handles data incorrectly.
- Example: Incorrect calculations, data corruption, or loss of data during transactions.
- Impact: Leads to incorrect outputs, loss of important information, or inconsistency in data across the system.
8. Boundary-Related Defects:
- Definition: These defects occur when the system fails to handle boundary conditions correctly, such as minimum and maximum input values.
- Example: A field that should accept a maximum of 100 characters but crashes when 101 characters are entered.
- Impact: Causes the system to behave unpredictably at input extremes, leading to crashes or data errors.

9. Logic Defects:
- Definition: These are issues that arise from incorrect logic in the implementation of algorithms or decision-making processes.
- Example: A discount calculation function that gives the wrong discount amount due to a flawed algorithm.
- Impact: Affects the correctness of results, leading to faulty outputs or actions.

10. Configuration Defects:
- Definition: These defects occur when the software is not configured properly for a given environment, or settings are incorrect.
- Example: A software application that fails to work in a specific region because of incorrect regional settings.
- Impact: Prevents the system from operating correctly in certain setups or conditions.

11. Installation Defects:
- Definition: These occur during the installation or setup of the software on a target machine or environment.
- Example: The installation wizard crashes halfway through the setup process.
- Impact: Prevents users from successfully installing or deploying the software, leading to unusability.

12. Recovery Defects:
- Definition: These defects arise when the system fails to recover from unexpected failures or crashes.
- Example: After a system crash, data entered before the crash is lost or corrupted.
- Impact: Reduces the software's reliability and its ability to handle errors gracefully.

13. Concurrency Defects:
- Definition: These defects arise in multi-threaded environments when processes or threads do not synchronize correctly.
- Example: A race condition in which two processes modify the same data simultaneously, causing inconsistencies.
- Impact: Can cause serious errors, data corruption, or application crashes, especially in high-performance systems.

Importance of Defect Classification:

- Prioritization: Helps prioritize the resolution of critical defects, such as security or functional bugs, over less severe ones like usability defects.
- Root Cause Analysis: Aids in diagnosing the origin of defects, allowing the development team to improve design and coding practices.
- Risk Management: Identifying the classes of defects helps in assessing potential risks and planning mitigation strategies.

By classifying defects, testers can communicate the nature and severity of issues more effectively, leading to better coordination between testers, developers, and stakeholders.
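To show what a boundary-related defect (class 8 above) can look like in code, here is a hedged sketch; the `save_comment` function and the 100-character limit are hypothetical, echoing the example given for that class:

```python
MAX_COMMENT_LENGTH = 100

def save_comment(text):
    # Boundary-related defect: off-by-one in the length check
    # (uses MAX_COMMENT_LENGTH + 1), so a 101-character comment slips through.
    if len(text) > MAX_COMMENT_LENGTH + 1:
        raise ValueError("comment too long")
    return text

# A test at exactly one past the limit exposes the defect:
try:
    save_comment("x" * 101)  # should raise ValueError, but does not
    print("Defect found: a 101-character comment was accepted")
except ValueError:
    print("Boundary handled correctly")
```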
Defect Repository in Software Testing

A defect repository is a centralized database or system used to record, track, and manage defects (also called bugs) found during software testing. It provides a structured approach for testers and developers to collaborate on identifying, fixing, and resolving defects in the software.

Key Components of a Defect Repository:

1. Defect ID: A unique identifier assigned to each defect for easy tracking.
2. Summary/Title: A brief description of the defect that gives a quick idea of the problem.
3. Detailed Description: A clear and detailed explanation of the defect, including steps to reproduce, expected vs. actual results, and any relevant context.
4. Severity and Priority:
- Severity: Indicates the impact of the defect on the system (e.g., critical, major, minor).
- Priority: Refers to the urgency with which the defect should be fixed (e.g., high, medium, low).
5. Status: The current state of the defect (e.g., New, In Progress, Resolved, Closed, Reopened).
6. Reporter: The person who found and reported the defect.
7. Assignee: The developer or team responsible for fixing the defect.
8. Date Reported: The date the defect was logged into the system.
9. Environment: Information about the environment in which the defect was encountered (e.g., operating system, browser version).
10. Attachments: Screenshots, logs, or files that provide additional information for diagnosing the issue.
11. Comments/History: A log of actions, updates, or discussions related to the defect as it moves through its lifecycle.

Benefits of a Defect Repository:

- Centralized Tracking: All defects are tracked in one place, making it easier to manage and monitor their status.
- Improved Collaboration: Testers, developers, and project managers can communicate about defects and their resolutions.
- Metrics and Reporting: Helps in generating defect reports and metrics, such as the number of open defects, defect trends, and resolution time.
- Prioritization: Helps teams focus on critical issues by categorizing and prioritizing defects.

Popular Defect Repository Tools: JIRA, Bugzilla, Redmine, MantisBT, Azure DevOps
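A hedged sketch of how the key components above might map onto a defect record in code; the field names are illustrative and not tied to any particular tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Defect:
    defect_id: str                  # unique identifier, e.g. "BUG-1042"
    summary: str                    # brief title of the problem
    description: str                # steps to reproduce, expected vs. actual results
    severity: str                   # impact: "critical", "major", "minor"
    priority: str                   # urgency: "high", "medium", "low"
    status: str = "New"             # New, In Progress, Resolved, Closed, Reopened
    reporter: str = ""              # who found and reported the defect
    assignee: str = ""              # who is responsible for the fix
    date_reported: date = field(default_factory=date.today)
    environment: str = ""           # e.g. "Windows 11, Chrome 126"
    attachments: list = field(default_factory=list)  # screenshots, logs, files
    history: list = field(default_factory=list)      # comments and status changes

bug = Defect(
    defect_id="BUG-1042",
    summary="Login fails for a valid user",
    description="1) Open /login 2) Enter valid credentials 3) Observe error page",
    severity="critical",
    priority="high",
    reporter="tester_a",
)
bug.status = "In Progress"          # updated as the defect moves through its lifecycle
bug.history.append("Assigned to dev_b for root-cause analysis")
```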
Test Case Design in Software Testing

Test case design refers to the process of creating detailed test cases that specify how a system or application should be tested. A well-designed test case ensures that the software behaves as expected under various conditions. Test case design is an essential part of the overall testing strategy, as it guides the testing process to uncover defects.

Key Components of a Test Case:

1. Test Case ID: A unique identifier for each test case.
2. Title/Summary: A brief description of the purpose of the test case.
3. Preconditions: Any setup, configurations, or conditions that must be in place before executing the test.
4. Test Data: The specific data values or inputs used during the test (e.g., user credentials, search terms).
5. Test Steps: Detailed, step-by-step instructions on how to perform the test.
6. Expected Result: The expected outcome or behavior of the system based on the input or test condition.
7. Actual Result: The actual behavior observed during the execution of the test case.
8. Postconditions: Any actions or conditions that need to be reset or cleaned up after the test.
9. Status: Indicates whether the test case passed or failed.
10. Priority: The importance of this test case relative to other cases, particularly for regression or critical functionality.

(A worked example of a filled-in test case is shown below, after the technique overview.)

Types of Test Case Design Techniques:

1. Black-Box Testing Techniques:
- Focus on testing the functionality of the software without knowledge of its internal code structure.
- Equivalence Partitioning: Divides input data into equivalence partitions where all inputs in a partition should produce similar results. Example: If a field accepts values between 1 and 100, you can test one value from each partition (e.g., 0 for invalid, 50 for valid, 101 for invalid).
- Boundary Value Analysis: Tests the boundaries of input ranges, because defects often occur at the edges. Example: For a range of 1 to 100, you would test values 0, 1, 100, and 101.
- Decision Table Testing: Tests combinations of inputs based on a decision table, especially when different actions occur depending on various input conditions.
- State Transition Testing: Validates the system's behavior when transitioning from one state to another. Example: Testing the different states of an ATM, such as Idle, Enter PIN, and Account Selection.

2. White-Box Testing Techniques:
- Involve testing the internal structure or code of the application.
- Statement Coverage: Ensures every line of code is executed at least once during testing.
- Branch/Decision Coverage: Ensures that every possible branch (e.g., if/else conditions) is tested.
- Path Coverage: Tests all possible paths through the code to verify that all combinations of branches are executed.

3. Experience-Based Testing:
- Exploratory Testing: Testers design and execute test cases on the fly, relying on their experience and understanding of the system.
- Error Guessing: Based on prior knowledge or intuition, testers guess potential areas where defects are likely to be found.

Importance of Effective Test Case Design:

- Ensures Coverage: Well-designed test cases ensure that all functional, performance, and security aspects are tested.
- Early Defect Detection: Good test case design helps identify defects early in the testing process, saving time and costs.
- Repeatability: Test cases can be reused during future test cycles, especially in regression testing.
- Clarity and Consistency: Well-written test cases provide clear instructions, enabling consistent execution by different testers.

Test Case Design Tools: TestRail, qTest, Zephyr, HP ALM/Quality Center
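To make the test case components concrete, here is a hedged sketch of a single test case expressed as structured data; the login scenario and all field values are hypothetical:

```python
test_case = {
    "id": "TC-001",
    "title": "Valid login with correct credentials",
    "preconditions": "User account 'demo_user' exists and is active",
    "test_data": {"username": "demo_user", "password": "S3cret!"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the 'Sign in' button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,   # recorded during execution
    "postconditions": "Log the user out and clear the session",
    "status": None,          # "Pass" or "Fail" after execution
    "priority": "High",
}
```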
In conclusion, the defect repository helps track and manage defects efficiently, while test case design ensures that the testing process is thorough and structured, helping to identify defects and validate that the software works as intended.

Roles of Testers and Developers in the Defect Repository

In the development and maintenance of a defect repository, both developers and testers play crucial roles. Their collaboration ensures that defects are logged, tracked, fixed, and closed efficiently, contributing to the overall quality of the software product.

Tester's Role in Developing a Defect Repository

Testers are the primary users of the defect repository during the software testing process. They are responsible for identifying, documenting, and tracking defects found during the various testing phases. Their contributions include:

1. Logging Defects
- Accurate Defect Reporting: Testers log defects in the repository when they encounter bugs, providing detailed and accurate information so developers can understand the issue.
- Defect Details: Testers ensure the defect report includes key details such as:
  - Steps to Reproduce: Clear, step-by-step instructions on how the defect can be recreated.
  - Expected and Actual Results: The expected behavior versus the actual behavior observed.
  - Environment Information: Relevant details about the operating system, browser, device, or any other environmental factors that may influence the defect.
  - Attachments: Screenshots, log files, or videos that further help explain the issue.

2. Categorizing and Prioritizing Defects
- Severity and Priority: Testers categorize a defect's severity based on its impact on the system (e.g., critical, major, minor) and suggest a priority (e.g., high, medium, low) to guide developers on which defects need urgent fixes.
- Defect Classification: Testers classify each defect by type (e.g., functional, performance, security) so it can be addressed appropriately by developers.

3. Verifying Defects
- Defect Validation: After the developer fixes a defect, testers retest the system to confirm whether the defect is resolved. If fixed, the defect is marked as "Closed"; otherwise, it is "Reopened" for further investigation.
- Regression Testing: Testers perform regression testing to ensure that the fix has not impacted other parts of the application, logging any new defects that may arise.

4. Monitoring and Updating the Defect Lifecycle
- Status Updates: Testers keep track of the defect's lifecycle by updating its status (e.g., New, Assigned, In Progress, Resolved, Closed) as it moves through different stages.
- Adding Comments: They provide updates, communicate with developers, and add new information to help resolve the defect effectively.

5. Metrics and Reporting
- Defect Reports: Testers generate defect reports and metrics from the repository to assess the software's quality and the effectiveness of the testing process. These reports help track the number of defects, their severity, the time taken for resolution, and other quality metrics. A minimal sketch of such a metric calculation follows below.
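A hedged sketch of a simple defect metric, reusing the hypothetical Defect records from the earlier sketch (the field names are illustrative):

```python
from collections import Counter

def defect_summary(defects):
    # Summarize a list of Defect records for a quality report.
    open_statuses = {"New", "Assigned", "In Progress", "Reopened"}
    return {
        "total": len(defects),
        "open": sum(1 for d in defects if d.status in open_statuses),
        "by_severity": dict(Counter(d.severity for d in defects)),
    }

# Example: defect_summary([bug]) might return
# {"total": 1, "open": 1, "by_severity": {"critical": 1}}
```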
Developer's Role in Developing a Defect Repository

Developers primarily focus on fixing defects reported by testers and ensuring that issues are resolved efficiently. Their support of the defect repository involves:

1. Reviewing and Analyzing Defects
- Defect Analysis: Developers review the defects reported by testers to understand the issue in detail. They examine the steps to reproduce, logs, and additional information to determine the root cause of the problem.
- Reproducibility: Developers attempt to replicate the issue in their local or development environment to verify the defect before proceeding with the fix.

2. Assigning and Prioritizing Defects
- Defect Assignment: Once a defect is reviewed, developers or project managers assign it to the appropriate developer based on the type of defect, expertise, or availability.
- Priority Confirmation: Developers may re-evaluate the priority and severity assigned by testers to align them with development goals and timelines, especially when multiple defects are being handled simultaneously.

3. Fixing Defects
- Code Changes: Developers implement the necessary code changes to fix the reported defects, using the details in the defect report (steps to reproduce, environment, etc.) to ensure the problem is addressed effectively.
- Root Cause Documentation: Developers may add comments in the defect repository explaining the root cause of the issue and the changes they made to resolve it. This helps testers and other team members understand the fix.

4. Updating Defect Status
- Status Updates: After fixing a defect, developers update its status in the repository from "In Progress" to "Resolved". They may also add notes about the fix and any additional testing they conducted to validate the solution.
- Linking to Code Changes: In some cases, developers link the defect to specific code commits or pull requests, providing traceability between the fix and the defect.

5. Collaborating with Testers
- Clarifying Defects: If developers need more information about a defect or find it hard to reproduce, they collaborate with testers to gain additional details, clarify ambiguity, or discuss the defect's impact.
- Handling Reopened Defects: If testers reopen a defect after retesting, developers investigate further and work on additional fixes if needed.

6. Monitoring Metrics and Trends
- Quality Metrics: Developers, along with testers, may analyze metrics related to defects, such as defect density, resolution time, and trends. This helps them identify patterns, improve code quality, and refine future development efforts.
- Root Cause Analysis: Developers may use information from the repository to perform root cause analysis, helping them identify systemic issues or recurring problems in the code.

Collaboration Between Developers and Testers in Managing the Defect Repository

1. Continuous Feedback Loop: Both developers and testers work in an iterative loop in which testers report issues, developers fix them, and testers validate the fixes. This feedback loop is crucial for refining the software and ensuring it meets quality standards.
2. Defect Triage: Developers and testers often participate in defect triage meetings to evaluate the severity, priority, and assignment of defects. This helps ensure that critical issues are resolved promptly.
3. Shared Responsibility for Quality: Both roles contribute to the overall quality of the software. Testers identify issues early, while developers resolve them, both contributing to a defect-free product.
4. Improving Process Over Time: Over time, both testers and developers analyze data from the defect repository to refine processes. For example, patterns in the defects may reveal certain code modules that need more attention or suggest improvements to testing strategies.

Benefits of Developer and Tester Collaboration in the Defect Repository:

- Improved Communication: Developers and testers communicate effectively through the repository, reducing misunderstandings and resolving defects faster.
- Efficient Defect Resolution: With clear defect reporting from testers and timely fixes from developers, defects are resolved more efficiently.
- Better Traceability: Linking defect fixes to code commits or specific releases ensures traceability and accountability for each defect.
- Continuous Quality Improvement: Both teams use the repository's data to monitor and improve software quality, reducing future defect occurrences.

In summary, developers and testers work hand in hand to manage a defect repository effectively. Testers identify and document defects, while developers analyze, fix, and update them, ensuring a smooth defect lifecycle and improving the quality of the software product.

UNIT II

Test Case Design in Software Testing

Test case design is the process of creating test cases that specify how a system or application should be tested to ensure it meets the desired requirements. A well-structured test case includes key components such as preconditions, test data, test steps, expected and actual results, and postconditions. The goal is to validate that the software behaves as expected under various conditions.

Test Case Design Strategies:

1. Black-Box Testing Strategies (Functional Testing):
- Equivalence Partitioning: Divides input data into partitions where all inputs should yield the same result.
- Boundary Value Analysis (BVA): Tests values at the boundaries of input ranges, where errors often occur.
- Decision Table Testing: Uses a table to test combinations of inputs and outputs.
- State Transition Testing: Focuses on testing the software's behavior as it moves between states.
- Use Case Testing: Tests user scenarios to validate real-world use cases.

2. White-Box Testing Strategies (Structural Testing):
- Statement Coverage: Ensures every line of code is executed at least once.
- Branch/Decision Coverage: Ensures all possible decision branches are tested.
- Path Coverage: Tests all possible execution paths in the code.
- Condition Coverage: Ensures all logical conditions are tested for both true and false outcomes. (A coverage sketch follows below.)

3. Experience-Based Testing:
- Exploratory Testing: Testers design and execute test cases on the fly, learning as they test.
- Error Guessing: Testers use experience to guess where defects are likely to be found.
- Ad-Hoc Testing: Informal testing without structured planning.

Importance: Test case design ensures comprehensive testing, early defect detection, and structured validation of the software's functionality and behavior, helping to ensure high-quality software.
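A hedged sketch of what the white-box coverage criteria mean in practice, using a hypothetical `classify` function. The two tests together achieve full statement and branch coverage of `classify`; a tool such as coverage.py can measure this (for example, `coverage run -m pytest` followed by `coverage report`):

```python
def classify(score):
    # Two branches: the if-branch and the fall-through return.
    if score >= 50:
        return "pass"
    return "fail"

def test_pass_branch():
    # Exercises the if-branch.
    assert classify(75) == "pass"

def test_fail_branch():
    # Exercises the fall-through branch; together with the test above,
    # every statement and every branch of classify() is executed.
    assert classify(20) == "fail"
```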
Black-Box Testing Approach to Test Case Design

Black-box testing focuses on testing the functionality of the software without any knowledge of its internal code or structure. The tester interacts with the system by providing inputs and observing outputs, ensuring that the software behaves as expected based on the requirements.

Key Techniques in Black-Box Testing:

1. Equivalence Partitioning:
- Divides input data into logical partitions where each partition should exhibit similar behavior.
- Only one test case is needed from each partition to represent the whole group.
- Example: For a form accepting numbers between 1 and 100, the partitions could be: values less than 1 (invalid), values between 1 and 100 (valid), and values greater than 100 (invalid).

2. Boundary Value Analysis (BVA):
- Focuses on testing the boundaries of input ranges, as defects often occur at these limits.
- Tests include values just below, at, and just above each boundary.
- Example: For an age input field allowing 18 to 60, test cases would include 17, 18, 60, and 61.

3. Decision Table Testing:
- Tests combinations of inputs by creating a table that maps input conditions to expected outcomes.
- Ensures all relevant input combinations are tested.
- Example: For a login form, the table might include combinations of valid/invalid usernames and passwords.

4. State Transition Testing:
- Tests how the system transitions between different states based on user actions or events.
- Ensures that the system behaves correctly as it moves from one state to another.
- Example: Testing an ATM's transitions between states like "Insert Card," "Enter PIN," and "Select Transaction."

5. Use Case Testing:
- Tests real-world scenarios or user interactions with the system to ensure the software meets functional requirements.
- Example: A use case for an e-commerce website might test the entire process of selecting a product, adding it to the cart, and completing the checkout.

Benefits of Black-Box Testing:

- Focuses on user experience and functionality.
- Helps catch issues related to missing or incorrect features.
- Does not require knowledge of the code, making it ideal for testers who focus on the system's behavior.

This approach is ideal for functional testing, ensuring that the system meets user requirements and behaves correctly across various scenarios.

Random Testing

Random testing is a software testing technique that involves generating random inputs to test a program or system. The idea is to uncover defects by exploring a wide range of scenarios that might not be covered by structured test cases. Here is a breakdown of how it works, along with its advantages and limitations (a short code sketch follows this section):

Key Concepts:
- Input Generation: Random inputs can be generated using random number generators, selecting from a predefined set of valid and invalid values. The inputs can vary in type, length, and structure, depending on the software being tested.
- Execution: The system is executed with these randomly generated inputs to observe its behavior and output. Both functional and non-functional aspects can be tested, such as performance and stress handling.
- Monitoring: During execution, the system's response is monitored for unexpected behavior, crashes, or incorrect outputs. Logs and error messages are captured for analysis.

Advantages:
- Broad Coverage: Random testing can reveal edge cases that structured testing might miss, providing broader test coverage.
- Simplicity: It is relatively easy to implement, since it does not require extensive test case design.
- Automation Friendly: It can be easily automated, making it suitable for continuous integration/continuous deployment (CI/CD) environments.

Limitations:
- Lack of Direction: Without specific goals, it can be less efficient than targeted testing approaches.
- Reproducibility: Random inputs can make it hard to reproduce specific issues, as the same test may not yield the same input again.
- Limited Insight: It may not provide as detailed an understanding of the software's behavior as systematic testing.
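A minimal random-testing sketch, assuming a hypothetical `parse_quantity` function under test. Logging a fixed seed keeps the run reproducible, which mitigates the reproducibility limitation noted above:

```python
import random

def parse_quantity(text):
    # Hypothetical unit under test: should return an int for "0".."999"
    # and raise ValueError for anything else.
    value = int(text)
    if not 0 <= value <= 999:
        raise ValueError("out of range")
    return value

seed = 12345                 # fixed seed so any failure can be reproduced
rng = random.Random(seed)
print(f"random test run, seed={seed}")

for _ in range(1000):
    # Generate inputs of varying length, mixing digits and junk characters.
    text = "".join(rng.choice("0123456789-x ") for _ in range(rng.randint(0, 6)))
    try:
        parse_quantity(text)
    except ValueError:
        pass                 # rejecting invalid input is the expected behavior
    except Exception as exc: # any other exception signals a defect
        print(f"defect found for input {text!r}: {exc!r}")
```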
Boundary Value Analysis (BVA)

Boundary Value Analysis (BVA) is a software testing technique used to identify errors at the boundaries of input domains rather than within the range itself. The principle behind BVA is that defects are more likely to occur at the extremes of input values. It is particularly useful when input data is defined by a range of values, such as numeric ranges or data with limits.

Explanation with an Example

Consider a simple example: a software application that accepts marks as input, ranging from 0 to 100. We want to test the input field to ensure that it handles the boundary values correctly.

Steps for BVA:

1. Identify the boundaries. For the input range 0 to 100:
- Lower boundary: 0
- Upper boundary: 100

2. Create test cases for the boundary values, testing just below, on, and just above each boundary.
- Lower boundary test cases: -1 (invalid), 0 (valid), 1 (valid)
- Upper boundary test cases: 99 (valid), 100 (valid), 101 (invalid)

Diagram for Boundary Value Analysis

Picture the input range on a number line, with the valid region between the two boundaries:

  -∞ ...... -1 | 0   1 ...... 99   100 | 101 ...... +∞
    invalid    |         valid         |   invalid

Example Test Cases

Test Case   Description                              Expected Result
TC1         Input = -1 (below lower boundary)        Error message
TC2         Input = 0 (on lower boundary)            Accepted
TC3         Input = 1 (just above lower boundary)    Accepted
TC4         Input = 99 (just below upper boundary)   Accepted
TC5         Input = 100 (on upper boundary)          Accepted
TC6         Input = 101 (above upper boundary)       Error message

This approach ensures that the system behaves as expected at the critical points of the input range.

Summary

Boundary Value Analysis helps uncover edge-case defects that might not be caught by testing with random or average values. By focusing on boundaries, it provides a systematic and efficient way to ensure that input constraints are handled correctly in the software.
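The test cases above translate directly into a parametrized pytest sketch (the `accept_marks` validator for the 0-100 range is hypothetical):

```python
import pytest

def accept_marks(marks):
    # Hypothetical validator for the 0-100 marks field.
    if 0 <= marks <= 100:
        return "Accepted"
    raise ValueError("Error message")

@pytest.mark.parametrize("marks, is_valid", [
    (-1, False),   # TC1: below lower boundary
    (0, True),     # TC2: on lower boundary
    (1, True),     # TC3: just above lower boundary
    (99, True),    # TC4: just below upper boundary
    (100, True),   # TC5: on upper boundary
    (101, False),  # TC6: above upper boundary
])
def test_marks_boundaries(marks, is_valid):
    if is_valid:
        assert accept_marks(marks) == "Accepted"
    else:
        with pytest.raises(ValueError):
            accept_marks(marks)
```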