Summary

This document provides an overview of software testing levels and methodologies, including unit, integration, system, and acceptance testing, along with test case design, defect reporting, and manual versus automated testing. It also discusses the importance of testing in software development and offers examples and explanations.

Full Transcript

CLASS#7 – TESTING

WHAT IS SW TESTING?

Software testing is the process of evaluating a system or its components to determine whether it satisfies the specified requirements. It ensures the product is defect-free and meets user needs. "It's not just about finding bugs; it's about improving the overall quality of the product."
1. You need to test software to know that it works.
2. Testing leads to a system that is more robust and resilient to failure.

WHY DEVELOPERS DON'T TEST?

"I know it works!"
"I don't write broken code."
"I have no time."

4 LEVELS OF SW TESTING

From the highest level to the lowest:
1. Acceptance Testing
2. System Testing
3. Integration Testing
4. Unit Testing

UNIT TESTING

Unit testing exercises a single component or unit of the software and is performed by the developer. It is the first level of functional testing. A unit is the smallest testable portion of the system or application. The main aim is to verify that each component or unit correctly fulfills its requirements and desired functionality.

TEST COVERAGE

Test coverage is the percentage of lines of code that are executed during all of the tests.
- High coverage gives high confidence.
- Test coverage reports can reveal which lines of code were not tested.

UNIT TESTING EXAMPLE (DISCUSSION)

What unit tests are needed for software that controls the temperature of a room? Main functionality:
- Keeps the temperature at 25 degrees.
- Turns on for 30 minutes when it is 2 pm.
- Turns on "Quiet mode" when it is 10 pm.
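For the discussion above, here is a minimal sketch of what such unit tests could look like in Python (pytest style). The Thermostat class, its field and method names, and the simplified scheduling rules are assumptions made for illustration only; they are not part of the slides, and the 30-minute run at 2 pm is reduced to a single on/off check.

# Hypothetical thermostat, defined only so the tests below are runnable.
class Thermostat:
    TARGET = 25  # target room temperature in degrees

    def __init__(self):
        self.heater_on = False
        self.quiet_mode = False

    def update(self, current_temp, hour):
        # Keep the room at the 25-degree set point.
        self.heater_on = current_temp < self.TARGET
        # Scheduled run at 2 pm (14:00); the 30-minute duration is not modelled here.
        if hour == 14:
            self.heater_on = True
        # Quiet mode at 10 pm (22:00).
        self.quiet_mode = (hour == 22)


def test_heats_when_below_target():
    t = Thermostat()
    t.update(current_temp=20, hour=10)
    assert t.heater_on

def test_idle_when_at_target():
    t = Thermostat()
    t.update(current_temp=25, hour=10)
    assert not t.heater_on

def test_scheduled_run_at_2pm():
    t = Thermostat()
    t.update(current_temp=25, hour=14)
    assert t.heater_on

def test_quiet_mode_at_10pm():
    t = Thermostat()
    t.update(current_temp=25, hour=22)
    assert t.quiet_mode

A coverage report over this small suite would show which branches of update() were never executed.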
INTEGRATION TESTING

Integration testing is software testing in which modules are logically integrated and tested as one combined system. It aims to expose any defects, bugs, or errors in the interaction between these software modules, with emphasis on the data communication between them. It is therefore also known as "I & T" (Integration and Testing).

INTEGRATION TEST EXAMPLE

Before beginning integration testing, ensure each component works well on its own through unit testing. Once these individual validations are complete, you can proceed with integration testing. Here are some example test cases:
1. Ensure items, quantities, prices, and discounts transfer correctly from cart to checkout.
2. Confirm billing and shipping info populates accurately for logged-in users, and that new user info saves properly.
3. Verify checkout redirects to the correct payment gateway with a secure order amount and ID.
4. For successful payments, display a "Payment Successful" message; for failures, show retry prompts.
5. Confirm the order confirmation message and email include items, order ID, and shipping details.
6. Ensure inventory updates in real time after order confirmation to reflect items sold.
7. Check that payment status (e.g., "Pending", "Completed") updates correctly in the user's order history.

TYPES OF INTEGRATION TESTING

1. Big Bang approach
2. Incremental approach

BIG BANG

This is the simplest integration testing approach: all the modules are combined and the functionality is verified after the completion of individual module testing. In simple words, all the modules of the system are simply put together and tested. While big-bang integration testing can be useful in some situations, it can also be a high-risk approach, as the complexity of the system and the number of interactions between components can make it difficult to identify and diagnose problems.

INCREMENTAL APPROACH

1. Top Down
2. Bottom Up
3. Sandwich

EXAMPLE

[Diagram slide: an e-commerce system with the modules Website Login (L), Order (O), Order Summary (OS), Payment (P), Cash Payment (CP), Debit/Credit Payment (DP), Wallet Payment (WP), and Reporting (R). Several modules (e.g., DP, WP, R) are not yet developed, so the test simulates their responses, or simulates the UI that would call them.]

PROCESS OF SANDWICH INTEGRATION TESTING:

1. Divide the system into layers:
   - Identify top-level modules (e.g., UI, main controllers) and low-level modules (e.g., data storage, backend logic).
   - Example (online learning platform): top level is the "Product Selection" UI; low level is "UserAuthentication" and "DatabaseAccess".
2. Test top-down:
   - Start testing from the top-level modules.
   - Use stubs to simulate the behavior of middle or lower layers.
   - Example: test the "Product Selection" UI using a stub for "ProductDetailsService".
3. Test bottom-up:
   - Simultaneously, test low-level modules independently and integrate them upward.
   - Use drivers to mimic calls from higher layers (see the stub/driver sketch after this list).
   - Example: test the "UserAuthentication" and "DatabaseAccess" modules with mock inputs.
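To make the stub/driver distinction concrete, here is a minimal Python sketch loosely based on the module diagram above. The class shapes and method names (OrderModule, CashPayment, WalletPaymentStub, charge, place_order) are invented for illustration; the stub stands in for a lower-level module that is not yet developed, while the test function itself acts as the driver for a low-level module.

# Low-level payment module that is already developed.
class CashPayment:
    def charge(self, amount):
        return {"status": "completed", "amount": amount}

# Stub replacing the not-yet-developed Wallet payment module so the
# top-level Order module can be integration-tested top-down.
class WalletPaymentStub:
    def charge(self, amount):
        return {"status": "completed", "amount": amount}  # canned response

# Top-level module under test.
class OrderModule:
    def __init__(self, payment):
        self.payment = payment

    def place_order(self, amount):
        return self.payment.charge(amount)["status"] == "completed"

# Top-down: the real OrderModule wired to the stubbed lower layer.
def test_order_with_wallet_stub():
    assert OrderModule(WalletPaymentStub()).place_order(49.99)

# Bottom-up: the test acts as a driver, calling the low-level module
# directly in place of the higher layers that would normally invoke it.
def test_cash_payment_driver():
    assert CashPayment().charge(10)["status"] == "completed"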
TOP-DOWN VS BOTTOM-UP VS SANDWICH INTEGRATION TESTING

Starting point:
- Top-Down: begins at the top-level modules (e.g., UI or main controller).
- Bottom-Up: begins at the lowest-level modules (e.g., utility or backend components).
- Sandwich: simultaneously begins at the top-level and bottom-level modules.

Integration direction:
- Top-Down: from top to bottom, integrating lower-level modules gradually.
- Bottom-Up: from bottom to top, integrating higher-level modules gradually.
- Sandwich: combines both top-down and bottom-up, meeting in the middle.

Use of stubs/drivers:
- Top-Down: stubs are used to simulate lower-level modules that are not yet developed.
- Bottom-Up: drivers are used to mimic higher-level modules that are not yet developed.
- Sandwich: requires both stubs (for top modules) and drivers (for bottom modules).

Focus area:
- Top-Down: high-level functionality early (e.g., workflows, UI).
- Bottom-Up: foundational functionality early (e.g., database, utilities).
- Sandwich: critical features at both top and bottom levels simultaneously.

Use case:
- Top-Down: ideal for applications with critical user interfaces or workflows.
- Bottom-Up: best for systems where low-level functionality is complex or critical.
- Sandwich: suitable for large, layered systems with well-defined top and bottom modules.

Examples:
- Top-Down: testing a banking app where UI workflows are critical; begin with login and dashboard, using stubs for backend modules.
- Bottom-Up: testing an API-based service; begin with individual backend services, using drivers for UI requests.
- Sandwich: testing an e-commerce system; test the UI for order placement (top) and the backend payment/stock modules (bottom).

Pros:
- Early testing: Top-Down tests UI and high-level workflows early; Bottom-Up tests foundational functionality early, ensuring a solid base; Sandwich tests both layers at once, giving early detection of critical issues in both.
- Fault localization: Top-Down makes defects in high-level modules easier to detect initially; Bottom-Up lets issues in low-level modules be isolated and fixed quickly; Sandwich detects faults in both top and bottom layers simultaneously.
- Realistic workflow: Top-Down mimics real-world workflows as top modules interact with lower modules; Bottom-Up rigorously tests foundational components before higher-level workflows are integrated; Sandwich provides broader system coverage, ensuring both UI and backend are stable.
- Testing focus: Top-Down prioritizes high-level user experience and functionality; Bottom-Up prioritizes low-level modules, which are crucial for system stability; Sandwich balances high-level and low-level functionality.
- Scalability: Top-Down works well for systems with simple or layered architectures; Bottom-Up works well for systems with independent backend modules; Sandwich is ideal for large, complex systems with distinct layers.
- Defect management: Top-Down finds defects early in top modules, preventing cascading issues; Bottom-Up finds defects early in foundational modules, reducing downstream issues; Sandwich combines early defect detection at both levels.

Cons:
- Delayed testing: in Top-Down, low-level modules are tested late in the process; in Bottom-Up, UI and workflows are tested late, which may delay feedback on user experience; in Sandwich, middle-layer modules may not get dedicated testing, leading to gaps.
- Stub/driver effort: Top-Down requires significant effort to create stubs for lower modules; Bottom-Up requires significant effort to create drivers for higher modules; Sandwich requires both stubs and drivers, increasing complexity.
- Fault localization: in Top-Down, issues in low-level modules may go unnoticed until later stages; in Bottom-Up, issues in high-level workflows are harder to detect until integration with the UI; in Sandwich, fault isolation can be more challenging due to simultaneous testing.
- Complexity: Top-Down and Bottom-Up are of moderate complexity, each limited to one direction; Sandwich is highly complex due to dual-direction testing and integration.
- Coverage gaps: Top-Down may initially miss foundational issues if stubs do not behave accurately; Bottom-Up may initially miss user-experience flaws; Sandwich risks incomplete testing if focus is not balanced equally across layers.
- Use case limitation: Top-Down may not be ideal for systems with critical backend components; Bottom-Up may not be ideal for systems heavily reliant on UI workflows; Sandwich requires a well-defined system architecture for effective implementation.

SYSTEM TESTING

System testing is a level of software testing where the complete, integrated application is tested as a whole. It verifies that the system meets its specified requirements. The focus is on the overall behavior, functionality, and compliance of the system rather than on individual modules or components.

Objectives of system testing:
1. Validation: ensure the system behaves as expected under real-world conditions.
2. End-to-end testing: validate the interaction of the various modules and components when combined.
3. Non-functional testing: evaluate performance, usability, reliability, and security.
4. Compliance: ensure the system adheres to regulatory and business standards.

TEST CASE CREATION

- Think of the happy scenario, the sad scenario, and edge cases (a small sketch follows this list).
- Write down the steps to simulate the scenario.
- Write down the expected results.
- Write down the pass/fail criteria.
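As an illustration of covering happy, sad, and edge cases in one place, here is a minimal parameterized pytest sketch for a hypothetical cart pricing rule. The cart_total function, the 1-99 quantity limit, and the sample values are assumptions for illustration, echoing the cart example and the 99-item edge case used in the slides below.

import pytest

# Hypothetical cart rule, defined only so the tests are runnable:
# quantity must be between 1 and 99 and the price must be positive.
def cart_total(price, quantity):
    if not 1 <= quantity <= 99:
        raise ValueError("quantity out of range")
    if price <= 0:
        raise ValueError("invalid price")
    return round(price * quantity, 2)

@pytest.mark.parametrize("price, quantity, expected", [
    (49.99, 2, 99.98),     # happy scenario (matches the TC02 data below)
    (49.99, 99, 4949.01),  # edge case: maximum allowable quantity
])
def test_cart_total_accepts_valid_input(price, quantity, expected):
    assert cart_total(price, quantity) == pytest.approx(expected)

@pytest.mark.parametrize("price, quantity", [
    (49.99, 0),    # sad scenario: zero quantity
    (49.99, 100),  # sad scenario: above the maximum quantity
    (-1.00, 1),    # sad scenario: negative price
])
def test_cart_total_rejects_invalid_input(price, quantity):
    with pytest.raises(ValueError):
        cart_total(price, quantity)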
TEST EXAMPLES

Test case ID / test scenario / expected result:
- TC01: Search for a product by name. Expected: the system displays relevant products matching the search term.
- TC02: Add a product to the cart and verify the cart contents. Expected: the product appears in the cart with the correct price and quantity.
- TC03: Proceed to checkout with an invalid payment method. Expected: the system displays an error message for invalid payment details.
- TC04: Place an order successfully and check whether an order confirmation email is sent. Expected: the order is processed, and an email with the order details is received.
- TC05: Access the platform using mobile and desktop browsers. Expected: the platform renders correctly and is fully functional on both mobile and desktop browsers.

TC02 IN DETAIL: Add a product to the cart and verify the cart contents.

Test data:
- User ID: 123456
- Username: john_doe
- User type: Registered
- Product ID: P98765
- Product name: Wireless Headphones
- Product price: $49.99
- Product availability: In Stock
- Cart ID: CART12345
- Quantity to add: 2
- Promo code: SAVE10
- Discount amount: $10
- Expected total: $89.98 (after discount)

Test steps:
1. Log in as a registered user (User ID: 123456, Username: john_doe).
2. Search for the product with the keyword "Wireless Headphones".
3. Verify the product details (ID: P98765, Price: $49.99, Availability: In Stock).
4. Add the product to the cart (Quantity: 2).
5. Validate the cart (Cart ID: CART12345, item added: Wireless Headphones, Quantity: 2, Subtotal: $99.98).
6. Apply the promo code SAVE10.
7. Verify the discount (Discount applied: $10, Expected total: $89.98).
8. Complete the order: proceed to checkout and validate the order confirmation.

VARIATIONS & EDGE CASES

To ensure comprehensive testing, vary the test data:
1. Out-of-stock product: use a product with Availability: Out of Stock and verify error handling.
2. Guest user: use User Type: Guest to test the cart functionality without login.
3. Invalid promo code: use an expired or invalid promo code to verify error messages.
4. Edge cases: add the maximum allowable quantity to the cart (e.g., 99 items).

HOW TO REPORT A DEFECT?

- Write down the steps to reproduce.
- Write down the environment status (stage/preprod/prod).
- Write down the preconditions.
- Write down the expected results and the actual results.
- Add screenshots.
- Add logs and timestamps if possible.
- Add the version # / build #.

Example:
- Expected result: the promo code should reduce the total by $10; the total price after applying the promo code should be $89.98.
- Actual result: the promo code is accepted, but the total price is calculated incorrectly as $99.98, without applying the discount.
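Once a defect like the promo-code example above has been reported and fixed, a small automated check supports re-testing and later regression testing. Here is a minimal sketch; the apply_promo function and its behaviour are assumptions for illustration, using only the values quoted in the report.

import pytest

# Hypothetical implementation of the fix: SAVE10 takes $10 off the subtotal.
def apply_promo(subtotal, code):
    if code == "SAVE10":
        return round(subtotal - 10.00, 2)
    return subtotal

# Regression test pinning the reported behaviour down:
# a $99.98 subtotal with SAVE10 must come out as $89.98, not $99.98.
def test_save10_reduces_total_by_ten_dollars():
    assert apply_promo(99.98, "SAVE10") == pytest.approx(89.98)

# An unknown code should leave the total unchanged rather than fail silently.
def test_unknown_code_leaves_total_unchanged():
    assert apply_promo(99.98, "BOGUS") == pytest.approx(99.98)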
SOME DEFECT REPORTING TOOLS

- JIRA
- HP ALM
- others

TEST REPORT

The test summary report summarizes the completion of the test activities and deliverables, with supporting documentation that provides documented evidence that the tests were executed and that each test case was evaluated. Prepared at the end, after the completion of the testing process, this report contains all the necessary information about the software testing process and the results it delivered. It is prepared by the testing team, approved by the test manager, and provided to the different stakeholders.

UAT (USER ACCEPTANCE TESTING)

User Acceptance Testing (UAT) is the final phase of the software testing lifecycle, conducted to ensure that the system meets the business requirements and is ready for deployment. UAT is typically performed by the end users, business stakeholders, or clients in a real-world environment to validate whether the software is fit for use. Be careful: stakeholders will ask for new requirements!

Purpose of UAT:
- Verify the software works as intended from the user's perspective.
- Ensure the system satisfies business needs and use cases.
- Identify issues missed during earlier testing phases.
- Provide confidence to stakeholders before the product goes live.

EXAMPLE SCENARIO FOR UAT

System: e-commerce website.
Test case: validate that users can successfully search for a product, add it to the cart, and complete a payment.
Acceptance criteria:
- Product search results are accurate.
- The cart reflects the correct product details and prices.
- The payment gateway processes the transaction successfully.
- An order confirmation email is sent.

OTHER TYPES OF TESTING

- Exploratory testing: the tester actively explores the software without predefined scripts.
- Regression testing: verifies that new changes do not adversely affect existing functionality.
- Re-testing: confirms that previously identified defects have been fixed.
- Smoke testing: a quick check to ensure major functionalities work after a build or fix.
- Sanity testing: a narrow, focused test to verify specific fixes or features.
- Penetration testing (pen testing): simulates attacks to identify security weaknesses.
- A/B testing: compares two versions of the software to determine which performs better.

ALPHA, BETA, AND GAMMA TESTING

- Alpha testing: finds bugs and issues in a controlled environment; performed by the internal development or QA team; done after development, before the beta release; focuses on functionality and internal issues.
- Beta testing: tests in real-world scenarios and gathers feedback; performed by external users (beta testers); done after alpha testing, before general release; focuses on usability, performance, and user feedback.
- Gamma testing: final validation; performed by select users or final testers; done after beta testing, just before the full release; focuses on stability and final issues.

NON-FUNCTIONAL TESTING

Tests the system's non-functional aspects, such as performance, reliability, and usability.
- Performance testing: measures system responsiveness and stability under load.
- Load testing: tests system behavior under the expected load.
- Stress testing: determines system limits under extreme conditions.
- Spike testing: observes system behavior during sudden surges in load.
- Scalability testing: assesses the system's ability to scale up with demand.
- Usability testing: assesses the user-friendliness of the system.

AUTOMATED TESTING

- Faster, especially when a lot of regression testing is needed.
- Used for performance, load, and stress testing.
- Cost-effective: saves costs over time for repetitive testing tasks.
- Improved coverage: tests thousands of scenarios across platforms.
- Enhanced accuracy: reduces human error in repetitive tasks.

SOME TESTING TOOLS

1. Functional testing
   - Selenium: web application automation (cross-browser support).
   - Appium: mobile app automation for Android and iOS.
   - TestComplete: UI testing for desktop, web, and mobile apps.
2. Performance testing
   - Apache JMeter: load and performance testing for web and APIs.
   - LoadRunner: enterprise-level performance testing for various protocols.
   - Gatling: load testing with real-time monitoring.
3. API testing
   - Postman: manual and automated API testing (REST APIs).
   - SoapUI: functional and performance testing for SOAP/REST APIs.
4. Continuous testing
   - Jenkins: CI/CD tool integrating with automation tools.
   - CircleCI: automates testing in DevOps pipelines.
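To give a taste of automation with one of the tools above, here is a minimal Selenium sketch (Python bindings) for the product-search scenario (TC01). The URL and the element locators are placeholder assumptions rather than values from the slides, and a locally installed Chrome with a matching driver is assumed.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()   # assumes Chrome and a matching driver are available
driver.implicitly_wait(5)     # wait up to 5 s for elements to appear
try:
    driver.get("https://shop.example.com")       # placeholder URL
    search = driver.find_element(By.NAME, "q")   # placeholder locator
    search.send_keys("Wireless Headphones")
    search.submit()
    # The check itself: at least one product card matches the search term.
    results = driver.find_elements(By.CSS_SELECTOR, ".product-card")
    assert results, "expected at least one search result"
finally:
    driver.quit()

In practice a script like this would live inside a test framework and be triggered from a CI pipeline (e.g., Jenkins or CircleCI) as part of the regression suite.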
WHEN TO USE MANUAL VS. AUTOMATED TESTING

- Exploratory testing: manual testing.
- Regression testing: automated testing.
- Usability testing: manual testing.
- Load/performance testing: automated testing.
- Short-term projects: manual testing.
- Long-term projects: automated testing.

STAGES: WHITE BOX VS BLACK BOX

- Black box: acceptance testing and system testing.
- White box: integration testing and unit testing.

BDD & TDD

- Behavior-Driven Development (BDD) asks "Are we doing the right thing?" and maps to acceptance testing and system testing.
- Test-Driven Development (TDD) asks "Are we doing things right?" and maps to integration testing and unit testing.
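To close, here is a minimal sketch of the TDD cycle at the unit level. The subtotal function and its values are invented for illustration; the point is only the order of work: write a failing test first (red), then add the smallest implementation that makes it pass (green), then refactor while keeping the test green.

# Step 1 (red): the test is written first. Run before subtotal() exists,
# it fails with a NameError, which is the expected "red" state.
def test_subtotal_applies_quantity():
    assert subtotal(price=49.99, quantity=2) == 99.98

# Step 2 (green): the smallest implementation that makes the test pass.
def subtotal(price, quantity):
    return round(price * quantity, 2)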
