Software Testing Lecture Notes
University of Science and Technology
Dania Mohamed Ahmed
Summary
These lecture notes cover program testing and its goals, validation and defect testing, inspections, the stages of testing, development testing, and testing strategies such as partition testing and guideline-based testing.
Software Testing, Part (1), Lecture (5)

Program testing
Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use. When you test software, you execute a program using artificial data and check the results of the test run for errors, anomalies, or information about the program's non-functional attributes. Testing can reveal the presence of errors, NOT their absence. Testing is part of a more general verification and validation (V&V) process, which also includes static validation techniques.

Program testing goals
1. Demonstrate requirement compliance:
   Custom software: ensure that every requirement in the requirements document is tested.
   Generic software products: verify that all system features and their combinations are tested to confirm they work as intended.
2. Discover defects:
   Identify issues: detect incorrect, undesirable, or non-conforming behavior in the software.
   Defect testing: focus on finding system crashes, unwanted interactions, incorrect computations, and data corruption.

Validation and defect testing
Validation testing:
   Purpose: to confirm that the system performs correctly and meets its requirements when subjected to test cases that represent typical use scenarios.
   Focus: ensures the software behaves as expected in real-world conditions and aligns with the specified requirements.
   Example: testing a banking application with standard transactions, such as deposits and withdrawals, to ensure it processes these actions correctly.
Defect testing:
   Purpose: to identify defects by designing test cases that aim to uncover hidden issues, often through unconventional or extreme scenarios.
   Focus: detects incorrect, undesirable, or non-conforming behavior that might not be evident in normal usage.
   Example: deliberately inputting unexpected data, or using the application in unintended ways, to see whether it handles such situations gracefully or fails.

Testing process goals
1. Validation testing:
   Objective: confirm that the software meets its requirements and operates as intended.
   Success criteria: the test is successful if it demonstrates that the system performs correctly and aligns with the specified requirements.
2. Defect testing:
   Objective: identify faults or defects in the software where its behavior is incorrect or deviates from its specification.
   Success criteria: the test is successful if it exposes a defect by causing the system to perform incorrectly or fail to meet its specification.

An input-output model of program testing (figure)
Verification: "Are we building the product right?" The software should conform to its specification.
Validation: "Are we building the right product?" The software should do what the user really requires.

V&V confidence
Verification and validation (V&V) are crucial processes in software development, aiming to ensure that a system meets its specified requirements and is fit for its intended purpose. The primary goal of V&V is to establish confidence that the software system or product meets its requirements and is suitable for its intended use. This confidence is built through rigorous testing and evaluation.
Factors influencing the level of confidence in V&V:
1. Software purpose: the required confidence level depends on how critical the software is to an organization.
2. User expectations: users may have low expectations of certain kinds of software.
3. Marketing environment: getting a product to market early may be more important than finding all the defects in the program.
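To make the validation/defect distinction concrete, here is a minimal JUnit 5 sketch in Java (the notes introduce JUnit later as a unit testing framework). The Account class, its method names, and its rules are hypothetical, invented here for illustration; they are not part of the lecture material.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical Account class, used only to illustrate the two kinds of test.
class Account {
    private long balanceCents = 0;

    void deposit(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceCents += cents;
    }

    void withdraw(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("withdrawal must be positive");
        if (cents > balanceCents) throw new IllegalStateException("insufficient funds");
        balanceCents -= cents;
    }

    long balance() { return balanceCents; }
}

class AccountTest {
    // Validation test: a typical use scenario drawn from the requirements.
    @Test
    void depositThenWithdrawLeavesCorrectBalance() {
        Account a = new Account();
        a.deposit(10_00);
        a.withdraw(3_00);
        assertEquals(7_00, a.balance());
    }

    // Defect test: deliberately abnormal input, checking for graceful failure.
    @Test
    void negativeDepositIsRejected() {
        Account a = new Account();
        assertThrows(IllegalArgumentException.class, () -> a.deposit(-5_00));
        assertEquals(0, a.balance()); // balance must be unchanged after the rejected call
    }
}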
Inspections and testing
Both inspections and testing are fundamental to ensuring the quality and reliability of software, but they focus on different aspects of the verification and validation process.
Software inspections: a static verification technique in which the system's documentation, design, or code is reviewed and analyzed without executing the software. The goal is to identify defects, inconsistencies, or potential improvements through detailed examination.
Software testing: a dynamic verification technique that involves executing the software with test data to observe and analyze its behavior. The goal is to ensure that the software behaves as expected and meets its requirements.

Software inspections
Inspections involve people examining the source representation with the aim of discovering anomalies and defects. Inspections do not require the execution of a system, so they may be used before implementation. They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.), and they have been shown to be an effective technique for discovering program errors.

Advantages of inspections
1. During testing, errors can mask (hide) other errors. Because inspection is a static process, you do not have to be concerned with interactions between errors.
2. Incomplete versions of a system can be inspected without additional costs. If a program is incomplete, testing requires you to develop specialized test harnesses for the parts that are available.
3. As well as searching for program defects, an inspection can also consider broader quality attributes of a program, such as compliance with standards, portability, and maintainability.

Inspections and testing are complementary, not opposing, verification techniques, and both should be used during the V&V process. Inspections can check conformance with a specification, but not conformance with the customer's real requirements; nor can they check non-functional characteristics such as performance and usability.

A model of the software testing process (figure)

Stages of testing
The three primary stages of testing each serve a distinct purpose in ensuring the software is robust, reliable, and ready for end-users:
1. Development testing: the testing activities conducted during the software development phase to identify and correct defects and to ensure that individual components and integrated systems work as intended. It is an iterative, ongoing process integrated with development activities.
2. Release testing: the phase in which a complete version of the software is tested by a separate testing team, distinct from the development team, to verify that the software is stable, reliable, and ready for release to end-users.
3. User testing: actual or potential users test the software in their own environment to give feedback and to evaluate how well the software meets their needs and expectations.

Development testing
Development testing encompasses all testing activities performed by the development team throughout the software creation process. It is crucial for identifying and fixing defects early in the development cycle. The main types of development testing are:
1. Unit testing:
   Focus: testing individual program units or object classes.
   Objective: to verify the functionality of specific objects or methods in isolation.
2. Component testing:
   Focus: testing composite components formed by integrating several individual units.
   Objective: to validate the interactions and interfaces between integrated components.
3. System testing:
   Focus: testing the complete, integrated system, including some or all components.
   Objective: to ensure that the system as a whole functions correctly, with a focus on verifying interactions between components.
Each level of testing builds upon the previous one, from individual units to integrated components and finally to the entire system, ensuring comprehensive validation throughout the development process.

Unit testing
Unit testing is the process of testing individual components in isolation. It is a defect-testing process. Units may be individual functions or methods within an object; object classes with several attributes and methods; or composite components with defined interfaces that are used to access their functionality.

Object class testing
When testing object classes, ensure comprehensive coverage by:
1. Testing all features: verify every operation that the object supports.
2. Attributes: set and check all attribute values to confirm correct behavior.
3. State changes: put the object into all possible states and simulate the events that trigger state changes, to ensure it behaves as expected in each scenario.
Inheritance makes it more difficult to design object class tests, as the information to be tested is not localized.
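The following is a small sketch of object class testing with JUnit 5. The Thermostat class, its two states, and its attribute are hypothetical, invented for illustration; the point is that the tests exercise an operation, set and check an attribute value, and drive the object through its state changes, as the checklist above requires.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical Thermostat: two states (OFF, HEATING) and one attribute (target temperature).
class Thermostat {
    enum State { OFF, HEATING }
    private State state = State.OFF;
    private int targetTemp = 20;

    void setTargetTemp(int celsius) { targetTemp = celsius; }
    int getTargetTemp() { return targetTemp; }
    State getState() { return state; }

    // Event: a temperature reading arrives and may trigger a state change.
    void onReading(int celsius) {
        state = (celsius < targetTemp) ? State.HEATING : State.OFF;
    }
}

class ThermostatTest {
    @Test
    void attributeCanBeSetAndRead() {          // set and check an attribute value
        Thermostat t = new Thermostat();
        t.setTargetTemp(25);
        assertEquals(25, t.getTargetTemp());
    }

    @Test
    void coldReadingTriggersHeatingState() {   // simulate an event that changes state
        Thermostat t = new Thermostat();
        t.onReading(15);
        assertEquals(Thermostat.State.HEATING, t.getState());
    }

    @Test
    void warmReadingReturnsToOffState() {      // cover the transition back to OFF
        Thermostat t = new Thermostat();
        t.onReading(15);                       // enters HEATING
        t.onReading(25);                       // returns to OFF
        assertEquals(Thermostat.State.OFF, t.getState());
    }
}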
Weather station testing
When testing a weather station, it is crucial to create comprehensive test cases for its various functions, ensuring that all possible states and transitions are covered.
Key function testing:
1. reportWeather: ensure the weather report is accurate based on current data.
2. calibrate: verify that the calibration process correctly adjusts sensors or instruments.
3. test: check the functionality of the system's self-test features.
4. startup: confirm proper system initialization and state transition.
5. shutdown: test that the system shuts down correctly, including cleanup and state saving.

State model testing:
1. Identify state transitions: document the different states and transitions of the system.
2. Event sequences: simulate events to test how the system responds to state changes.
Example state transition sequences:
1. Shutdown -> Running -> Shutdown: test whether the system can transition from Shutdown to Running and back to Shutdown correctly.
2. Configuring -> Running -> Testing -> Transmitting -> Running: test the sequence from Configuring to Running, then Testing, Transmitting, and back to Running, to ensure proper functionality in each state.
3. Running -> Collecting -> Running -> Summarizing -> Transmitting -> Running: verify the transitions from Running to Collecting data, back to Running, then to Summarizing, Transmitting, and back to Running.

The weather station object interface (figure)
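Here is a sketch of state model testing for the first transition sequence above, again in Java with JUnit 5. The WeatherStation class below is a minimal stand-in assumed from the operations listed (startup, shutdown, test); the real case-study class has more states and behavior.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Minimal WeatherStation model assumed from the operations above, for illustration only.
class WeatherStation {
    enum State { SHUTDOWN, RUNNING, TESTING }
    private State state = State.SHUTDOWN;

    State state() { return state; }

    void startup() {
        if (state != State.SHUTDOWN) throw new IllegalStateException("already started");
        state = State.RUNNING;
    }

    void shutdown() {
        if (state == State.SHUTDOWN) throw new IllegalStateException("already shut down");
        state = State.SHUTDOWN;
    }

    void test() {                              // self-test runs from RUNNING and returns there
        if (state != State.RUNNING) throw new IllegalStateException("must be running");
        state = State.TESTING;
        // ... run self-checks here ...
        state = State.RUNNING;
    }
}

class WeatherStationStateTest {
    // Sequence 1 from the notes: Shutdown -> Running -> Shutdown.
    @Test
    void startupAndShutdownRoundTrip() {
        WeatherStation ws = new WeatherStation();
        ws.startup();
        assertEquals(WeatherStation.State.RUNNING, ws.state());
        ws.shutdown();
        assertEquals(WeatherStation.State.SHUTDOWN, ws.state());
    }

    // An illegal transition should be rejected rather than silently accepted.
    @Test
    void cannotStartTwice() {
        WeatherStation ws = new WeatherStation();
        ws.startup();
        assertThrows(IllegalStateException.class, ws::startup);
    }
}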
Automated testing
Automated unit testing involves using tools and frameworks to run and check tests without manual intervention.
Automation: unit tests should be automated so that they run consistently and efficiently without manual effort.
Frameworks: use test automation frameworks (e.g., JUnit) to write and execute tests. These frameworks provide generic test classes that you extend to create specific test cases.
Test execution: frameworks can run all implemented tests and report their outcomes, often through a graphical user interface (GUI).

Automated test components
1. Setup: initialize the system with the inputs and expected outputs for the test case.
2. Call: execute the object or method under test.
3. Assertion: compare the actual result of the test with the expected result. If the assertion is true, the test passes; if false, it fails.

Unit test effectiveness
Unit test effectiveness is measured by how well the test cases verify that a component performs its intended functions and how well they help identify defects. Test cases should confirm that the component behaves as expected under normal conditions and reveal defects where they exist.
Types of unit test cases:
1. Normal operation tests:
   Objective: ensure the component works correctly with standard, expected inputs and scenarios.
   Focus: validate that the component meets its specification and performs its intended functions.
2. Edge case and error handling tests:
   Objective: test the component with abnormal or unexpected inputs to ensure it handles them gracefully without crashing.
   Focus: identify common issues and defects by simulating problematic conditions and ensuring robust error handling.
By incorporating both types of test cases, you ensure that the component is reliable both under typical usage and in the face of potential issues.

Testing strategies
1. Partition testing:
   Concept: divide inputs into groups with common characteristics.
   Approach: select test cases from each group to ensure all types of input are processed correctly.
2. Guideline-based testing:
   Concept: use established testing guidelines to select test cases.
   Approach: apply insights from previous experience to target common programming errors and improve test effectiveness.

Partition testing
Inputs and outputs can be grouped into classes, known as equivalence partitions, where each class represents a set of data that the program treats in the same way. Equivalence partitions are domains in which the program behaves consistently for all members of the same class. The approach is to select test cases from each partition, so that the program is shown to handle all relevant scenarios within each class correctly.

Equivalence partitioning; equivalence partitions (figures)

Testing guidelines for sequences
1. Single-value sequences: use sequences that contain only one value, to ensure the software correctly handles the smallest possible non-empty sequence.
2. Varied sequence sizes: use sequences of different sizes in different tests, to verify that the software can handle sequences of various lengths.
3. Element access: derive tests that access the first, the middle, and the last element of a sequence, to ensure the software properly handles elements at different positions.
4. Zero-length sequences: test with sequences of zero length, to confirm that the software handles empty sequences gracefully without errors.
These guidelines help ensure that your tests cover a range of scenarios, including edge cases, and that the software handles sequences effectively; a sketch combining them appears below.
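The sketch below applies sequence guidelines 1, 3, and 4 to a hypothetical max function (invented for illustration). The empty, single-element, and multi-element inputs are also equivalence partitions in the sense described above, and each test follows the setup, call, assertion structure of an automated test.

import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical function under test: returns the largest element of a sequence.
class Sequences {
    static int max(List<Integer> xs) {
        if (xs.isEmpty()) throw new IllegalArgumentException("empty sequence");
        int best = xs.get(0);
        for (int x : xs) if (x > best) best = x;
        return best;
    }
}

class SequencesTest {
    @Test
    void singleValueSequence() {                 // guideline 1: one-element sequence
        assertEquals(7, Sequences.max(List.of(7)));
    }

    @Test
    void maximumAtLastPosition() {
        List<Integer> input = List.of(1, 2, 9);  // setup: prepare the test input
        int result = Sequences.max(input);       // call: execute the method under test
        assertEquals(9, result);                 // assertion: compare with the expected value
    }

    @Test
    void maximumAtFirstAndMiddlePositions() {    // guideline 3: other element positions
        assertEquals(9, Sequences.max(List.of(9, 1, 2)));
        assertEquals(9, Sequences.max(List.of(1, 9, 2)));
    }

    @Test
    void zeroLengthSequenceIsRejected() {        // guideline 4: empty sequence handled gracefully
        assertThrows(IllegalArgumentException.class, () -> Sequences.max(List.of()));
    }
}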
General testing guidelines
1. Generate all error messages: choose inputs that trigger every possible error message, to verify that the system identifies and handles each error correctly.
2. Input buffer overflow: design inputs that overflow input buffers, to assess the system's robustness and error handling with overly large inputs.
3. Repeated inputs: use the same input, or series of inputs, repeatedly, to check for consistent handling and for issues such as memory leaks or state corruption.
4. Invalid outputs: force the generation of invalid outputs, to test the system's ability to handle and recover from incorrect results.
5. Extreme computation results: generate results that are too large or too small, to confirm that the system can handle extreme values without overflow or underflow.
These guidelines are designed to uncover weaknesses and to verify the system's robustness across a variety of conditions; the sketch below illustrates two of them.
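As a final sketch, the tests below illustrate guidelines 3 and 5. They use the standard JDK method Math.addExact, which throws ArithmeticException on integer overflow rather than wrapping silently; the test class and its scenarios are invented for illustration.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class ExtremeValuesTest {
    // Guideline 5: force a result that is too large for the representation.
    @Test
    void additionOverflowIsDetectedNotSilent() {
        // Plain int addition would wrap around silently; the exact variant fails loudly,
        // which is the behavior a robust system should surface rather than hide.
        assertThrows(ArithmeticException.class,
                     () -> Math.addExact(Integer.MAX_VALUE, 1));
    }

    // Guideline 3: repeat the same input many times and check for consistent results.
    @Test
    void repeatedInputsGiveConsistentResults() {
        for (int i = 0; i < 1_000; i++) {
            assertEquals(4, Math.addExact(2, 2));
        }
    }
}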