Interview Reviewer
Automation Frameworks or Testing Frameworks

Data-Driven Testing Framework
Meaning: This framework separates the test data from the test scripts, allowing the same test script to be executed with different sets of input data. It helps to validate the application with multiple data sets.
Example: A test script is written to validate login functionality. Instead of hardcoding the username and password, the script reads these values from an external data source like an Excel sheet or CSV file (see the code sketch at the end of this section):

Username        Password        ExpectedResult
validUser1      validPass1      Login successful
invalidUser     validPass2      Login failed
validUser2      invalidPass     Login failed

Keyword-Driven Testing Framework
Meaning: This framework uses keywords to represent actions to be performed on the application. Testers define keywords (e.g., "Click", "EnterText") that map to specific functions or methods in the code. The test script is then written using these keywords.
Example: Keywords like "OpenBrowser", "NavigateTo", "ClickButton" are defined, and a test script is written as a sequence of these keywords.

Hybrid Testing Framework
Meaning: Combines two or more of the above frameworks to leverage the strengths of each. Typically, it involves a mix of data-driven and keyword-driven frameworks, allowing for both flexibility in test data and ease of script creation.
Example: A test framework that uses data-driven principles to drive tests with different data sets and keyword-driven principles to create easy-to-read test scripts.

Modular Testing Framework
Meaning: This framework involves creating small, independent test scripts (modules) that represent different parts of the application. These modules can be combined to form larger test cases.
Example: A login script, a search script, and a checkout script are created as separate modules. These modules are then combined to form a complete end-to-end test for an e-commerce site.

Behavior-Driven Development (BDD) Framework
Meaning: This framework is designed to encourage collaboration between developers, testers, and non-technical stakeholders by writing tests in plain language. BDD frameworks like Cucumber use Gherkin syntax, which is a simple, structured language to define test scenarios.
Example:

Scenario: User logs in successfully
  Given the user is on the login page
  When the user enters valid credentials
  Then the user should be redirected to the homepage

Scenario: Unsuccessful Login with Incorrect Password
  Given I am on the login page,
  When I enter a valid username and an incorrect password,
  And I click the "Sign In" button,
  Then I should remain on the login page,
  And I should see an error message indicating that the username or password is incorrect.

Summary:
A Testing Framework is a set of guidelines, tools, and practices that help create, execute, and report automated tests. It organizes test cases, manages test data, and provides tools to interact with the application. The main goal is to ensure that tests are consistent, reusable, and maintainable, improving testing efficiency and effectiveness.
- Data-Driven: Tests are driven by data inputs.
- Keyword-Driven: Tests are driven by keywords representing actions.
- Behavior-Driven Development: Tests are written in a natural language that describes the desired behavior of the system from the user's perspective.
- Hybrid: A combination of data-driven and keyword-driven.

Bug Life Cycle
New → Assigned → Open → Fixed → Retest → Verified → Closed (or Reopen if not fixed).
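As referenced in the Data-Driven example above, a minimal sketch of that login table driven through TestNG's @DataProvider. The attemptLogin helper is a hypothetical stand-in for the real application call; in practice the rows would be read from an Excel sheet or CSV file.

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Data-driven login test: the same test method runs once per data row.
public class LoginDataDrivenTest {

    // Rows mirror the table above; in practice they could come from Excel/CSV.
    @DataProvider(name = "loginData")
    public Object[][] loginData() {
        return new Object[][] {
            {"validUser1", "validPass1", "Login successful"},
            {"invalidUser", "validPass2", "Login failed"},
            {"validUser2", "invalidPass", "Login failed"}
        };
    }

    @Test(dataProvider = "loginData")
    public void verifyLogin(String username, String password, String expectedResult) {
        String actualResult = attemptLogin(username, password); // hypothetical application call
        Assert.assertEquals(actualResult, expectedResult);
    }

    // Stand-in for the real login action (e.g., a Selenium page object method).
    private String attemptLogin(String username, String password) {
        boolean valid = username.startsWith("validUser") && password.startsWith("validPass");
        return valid ? "Login successful" : "Login failed";
    }
}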
Severity vs. Priority
In software testing, severity and priority are two key concepts used to classify and manage defects or bugs. Though they are related, they serve different purposes:
○ Severity: Refers to the impact of a defect on the application's functionality. It reflects how serious the bug is and how it affects the application's performance.
○ Priority: Refers to the urgency of fixing a defect. It indicates how quickly the defect should be resolved based on its impact on the project or business needs.

Severity Levels
Critical Severity
Meaning: The defect causes a complete failure of the application or a critical part of it. The system is unusable, and no workaround is available.
Example: The application crashes every time the user tries to log in, preventing any access to the system.
Major Severity
Meaning: The defect severely impacts the functionality of the application, but some parts may still be usable. A workaround might be available, but it significantly affects the user experience.
Example: The application fails to process payments correctly, but users can still browse products and add them to the cart.
Moderate Severity
Meaning: The defect causes some non-critical functionality to fail or behave incorrectly. It doesn't prevent the use of the application but might cause inconvenience to users.
Example: The "Forgot Password" functionality sends the reset email after a long delay instead of instantly.
Minor Severity
Meaning: The defect causes a minor issue with the application's functionality. It may involve some UI inconsistencies or minor bugs that don't affect the core functionality.
Example: A misalignment of text in a dialog box that doesn't affect usability.
Trivial Severity
Meaning: The defect has little to no impact on the application. These are cosmetic issues or very minor bugs.
Example: A typo in a tooltip text or a small visual inconsistency that does not affect functionality.

Priority Levels
High Priority
Meaning: The defect must be fixed immediately. It has a direct impact on the business, customers, or critical functionality, making it a top priority for resolution.
Example: A critical bug that prevents users from completing purchases on an e-commerce website.
Medium Priority
Meaning: The defect should be fixed soon, but it's not urgent. It impacts the application, but there might be workarounds or it affects less critical areas.
Example: A bug in the admin reporting tool that generates incorrect reports, but the reports are only needed monthly.
Low Priority
Meaning: The defect can be fixed at a later time. It has minimal impact on the business or functionality and can be deferred to a future release.
Example: A minor UI inconsistency on a settings page that is rarely used by users.

Examples of Severity vs. Priority
1. High Severity, Low Priority
Example: The application crashes when a rarely used feature is accessed, such as an admin reporting tool that is used once a year for generating annual reports.
Explanation: The defect is severe because it crashes the application (high severity), but since the feature is rarely used and the next report generation is several months away, the fix is not urgent (low priority).
2. High Severity, High Priority
Example: The application crashes when the user tries to make a payment during checkout.
Explanation: The defect is both severe and urgent because it affects a critical functionality (high severity) and impacts the users' ability to complete a purchase, which directly affects the business (high priority).
3. Low Severity, High Priority
Example: A typo in the company's name on the homepage of a live website.
Explanation: The defect is minor because it doesn't affect functionality (low severity), but it's a high priority to fix because it impacts the company's reputation and could be embarrassing (high priority).
4. Low Severity, Low Priority
Example: A minor misalignment of a button on a settings page that is rarely accessed.
Explanation: The defect is cosmetic and does not affect functionality (low severity), and since the page is rarely accessed, it's not urgent to fix (low priority).

Test Documentation
Test documentation is an essential part of the software testing process. It involves creating and maintaining various documents that outline the testing strategy, plan, procedures, and results. Proper documentation helps ensure that testing is thorough, organized, and reproducible. Here's an overview of the key types of test documentation:

Step 1: Create a Test Plan
Why: The Test Plan sets the foundation for all testing activities. It defines the scope, objectives, resources, schedule, and strategy for the testing process.
What to Include:
○ Objectives
○ Scope
○ Resources
○ Schedule
○ Risks
○ Test Strategy
When: At the beginning of the testing phase, before any test design or execution begins.

Step 2: Develop the Traceability Matrix
Why: To ensure that every requirement is covered by test cases, providing full traceability between requirements, test cases, and defects.
What to Include:
○ Requirement ID and Description
○ Test Case ID and Description
○ Status of test cases
When: After requirements have been finalized and before creating detailed test cases.

Step 3: Write Test Cases
Why: Test cases provide detailed steps to verify specific functionalities and ensure that the application behaves as expected.
What to Include:
○ Test Case ID and Description
○ Preconditions
○ Test Steps
○ Expected Results
○ Actual Results
○ Status
When: After the Traceability Matrix is prepared and before the test execution phase.

Step 4: Create Test Data
Why: Test Data is required to execute test cases and simulate real-world scenarios.
What to Include:
○ Data Sets
○ Data Sources
○ Data Formats
When: Alongside the creation of test cases, especially for tests that require specific input data.
Step 5: Develop Test Scripts (For Automation)
Why: Test scripts automate the execution of test cases, ensuring efficiency and repeatability.
What to Include:
○ Script ID and Description
○ Script Code
○ Test Data
○ Expected Results
○ Execution Instructions
When: After test cases are finalized, particularly for automation projects.

Step 6: Execute Test Cases
Why: To verify that the application meets the requirements and behaves as expected.
What to Include:
○ Execution Summary
○ Test Case Execution Details
○ Issues Encountered
When: Once test cases and test data are prepared.

Step 7: Document Defects (Defect Report)
Why: To track and manage any issues or defects found during testing.
What to Include:
○ Defect ID and Description
○ Steps to Reproduce
○ Severity and Status
○ Attachments (e.g., screenshots)
When: As defects are identified during test execution.

Step 8: Generate the Test Results Report
Why: To summarize the outcomes of the test execution and provide an overview of the system's quality.
What to Include:
○ Summary of Test Cases
○ Defects Summary
○ Test Coverage
○ Recommendations
○ Conclusion
When: After test execution is completed.

Step 9: Create the Test Summary Report
Why: To provide a high-level summary of the testing process, outcomes, and overall quality status.
What to Include:
○ Test Objectives
○ Test Execution Summary
○ Defects Summary
○ Test Coverage
○ Test Environment
○ Risks and Issues
○ Recommendations
○ Conclusion
When: At the end of the testing cycle, just before the final project review or release.

Step 10: Compile the Test Closure Report
Why: To formally close the testing phase, summarizing key findings and lessons learned.
What to Include:
○ Summary of Testing
○ Final Test Results
○ Lessons Learned
○ Recommendations
When: After all testing activities are complete and just before or after the release.

Summary of the Steps:
1. Test Plan
2. Traceability Matrix
3. Test Cases
4. Test Data
5. Test Scripts (for automation)
6. Test Execution
7. Defect Report
8. Test Results Report
9. Test Summary Report
10. Test Closure Report

Test Case Templates
Test Case ID: A unique identifier for the test case.
Test Title: A brief description of what the test case will verify.
Preconditions: Any setup required before executing the test case.
Test Steps: Step-by-step instructions on how to execute the test.
Test Data: Any data inputs required for the test case.
Expected Result: The expected outcome of the test.
Actual Result: The actual outcome after executing the test (filled in after testing).
Status: Pass/Fail (filled in after testing).
Remarks: Any additional notes or observations.

Excel Test Case Template:

Test Case ID: TC_001
Test Title: Verify successful login with valid credentials
Preconditions: User is on the login page. Valid username and password are available.
Test Steps: 1. Navigate to the login page. 2. Enter valid username. 3. Enter valid password. 4. Click "Login" button.
Test Data: Username: validUser@example.com; Password: Password123
Expected Result: User is redirected to the dashboard, and the profile is displayed.
Actual Result: [To be filled after execution]
Status: [To be filled after execution]
Remarks: Ensure the dashboard loads within 3 seconds.

Test Case ID: TC_002
Test Title: Verify error message for invalid login
Preconditions: User is on the login page.
Test Steps: 1. Navigate to the login page. 2. Enter invalid username. 3. Enter invalid password. 4. Click "Login" button.
Test Data: Username: invalidUser@example.com; Password: WrongPassword
Expected Result: Error message "Invalid username or password." is displayed, and the user remains on the login page.
Actual Result: [To be filled after execution]
Status: [To be filled after execution]
Remarks: Check that the error message is displayed in red text.

Defect Life Cycle
Understand the defect lifecycle, from detection to closure:
○ New: The defect is reported.
○ Assigned: The defect is assigned to a developer.
○ Open: The developer is working on the defect.
○ Fixed: The defect is resolved.
○ Retest: The tester verifies the fix.
○ Closed: The defect is fixed and verified.
○ Reopen: If the defect persists, it is reopened.
○ Deferred: The defect is postponed for a future release.
Functional Testing
Functional testing focuses on verifying that the software's features and functions work as expected according to the requirements. It deals with what the system does.
1. Unit Testing
○ Meaning: Testing individual components or units of code to ensure they perform as expected.
○ Example: Testing a function that calculates the total price of items in a shopping cart.
2. Integration Testing
○ Meaning: Testing the interaction between integrated components or units to ensure they work together as intended.
○ Example: Testing the interaction between the login module and the user dashboard to ensure a user is redirected correctly after logging in.
3. System Testing
○ Meaning: Testing the entire integrated system to verify that it meets the specified requirements.
○ Example: Running tests on an e-commerce platform to ensure all components (search, cart, checkout) work together seamlessly.
4. Acceptance Testing
○ Meaning: Testing to determine whether the system satisfies the business requirements and is ready for deployment.
○ Example: A client testing a CRM system to ensure it meets their needs before accepting it for production use.
5. Smoke Testing
○ Meaning: A preliminary test to check the basic functionality of the application, ensuring that the critical features work.
○ Example: After a new build, checking if the application launches successfully and if key features like login and navigation work.
6. Sanity Testing
○ Meaning: A quick test to verify that a specific function or bug fix works as expected after minor changes.
○ Example: Testing a specific bug fix related to the checkout process in an online store.
7. Regression Testing
○ Meaning: Testing existing functionality to ensure that recent code changes haven't introduced new defects.
○ Example: After updating the payment gateway in an app, regression testing ensures that the entire checkout process still functions correctly.
8. User Interface (UI) Testing
○ Meaning: Testing the graphical user interface to ensure it meets design specifications and provides a good user experience.
○ Example: Verifying that all buttons, text fields, and labels are correctly aligned on a web page.
9. Exploratory Testing
○ Meaning: Testing where the tester actively explores the application without predefined test cases to discover defects.
○ Example: A tester navigates through an application trying various user actions to uncover unexpected issues.
10. End-to-End Testing
○ Meaning: Testing the entire application flow from start to finish, including interactions with external systems.
○ Example: Testing the process of placing an order on an e-commerce site, from browsing to payment and confirmation.
11. Ad-hoc Testing
○ Meaning: Informal testing where the tester seeks to find defects without a structured approach or documentation.
○ Example: Randomly interacting with the app to try and uncover any unexpected issues.
12. Localization Testing
○ Meaning: Testing to ensure the software behaves correctly in different locales, languages, and regions.
○ Example: Verifying that date formats, currency, and text are properly displayed in the French version of an application.
13. Globalization Testing
○ Meaning: Ensuring that the software can function globally, supporting various languages, regions, and cultural conventions.
○ Example: Testing an application to ensure it supports different languages without breaking layout or functionality.
14. Interoperability Testing
○ Meaning: Testing to ensure the software can interact with other systems or software effectively.
○ Example: Verifying that a web application correctly interacts with different browsers or other third-party tools.

Non-Functional Testing
Non-functional testing focuses on evaluating the system's performance, reliability, usability, and other non-functional aspects. It deals with how the system performs.
1. Performance Testing
○ Meaning: Testing how the system performs under various conditions, such as load, stress, and scalability.
○ Example: Running a load test on a web application to measure response times when 1,000 users are accessing it simultaneously.
2. Load Testing
○ Meaning: Testing the system's performance under expected load conditions to ensure it can handle user traffic or data processing.
○ Example: Simulating thousands of users logging into an online banking system simultaneously to check how the system handles the load.
3. Stress Testing
○ Meaning: Testing the system's behavior under extreme conditions, often beyond its capacity, to see how it handles high-stress scenarios.
○ Example: Increasing the number of transactions on an e-commerce site to the point where the system starts to degrade, to identify its breaking point.
4. Usability Testing
○ Meaning: Testing to evaluate the software's ease of use, user interface, and overall user experience.
○ Example: Observing users as they try to navigate a new mobile app and identifying areas where they struggle or get confused.
5. Security Testing
○ Meaning: Testing to ensure that the software is secure from vulnerabilities, threats, and attacks.
○ Example: Conducting penetration testing on a web application to find and fix potential security loopholes.
6. Compatibility Testing
○ Meaning: Testing to ensure the software works correctly across different environments, such as browsers, devices, and operating systems.
○ Example: Verifying that a web application functions correctly on Chrome, Firefox, Safari, and Edge browsers.
7. Recovery Testing
○ Meaning: Testing how well the software recovers from crashes, hardware failures, or other catastrophic events.
○ Example: Simulating a server crash and verifying that the application can recover and restore data without loss.
8. Localization Testing
○ Meaning: Testing to ensure that the software is correctly adapted for a specific locale or region, including language, cultural nuances, and local regulations.
○ Example: Verifying that date formats, currencies, and translations are correctly implemented for a Spanish version of the application.
9. Compliance Testing
○ Meaning: Testing to ensure that the software complies with relevant laws, regulations, and guidelines.
○ Example: Checking a financial application to ensure it adheres to the Payment Card Industry Data Security Standard (PCI DSS).
10. Scalability Testing
○ Meaning: Testing to determine the system's ability to scale up or down in response to changes in user load.
○ Example: Evaluating how an application performs when the number of users increases from 100 to 10,000.
11. Reliability Testing
○ Meaning: Testing to ensure that the software consistently performs as expected under specific conditions over time.
○ Example: Running a banking application continuously for a week to check if it can handle transactions without failing.
1. Testing Techniques
Definition: Testing techniques refer to specific methods or strategies used to design and execute tests. These techniques help testers create effective test cases and identify potential defects in the software.
Focus: The approach or method used to perform testing.
Examples:
○ Black-Box Testing: Focuses on testing the software's functionality without knowing the internal code.
○ White-Box Testing: Involves testing the internal structures or workings of an application, such as code coverage.
○ Exploratory Testing: Involves actively exploring the application without predefined test cases to find defects.
○ Boundary Value Analysis: A technique within black-box testing that focuses on the values at the boundaries of input domains (a short code sketch appears at the end of this section).
Usage: Testing techniques are used to design test cases that effectively find defects, and they can be applied at any stage of the testing process.

2. Testing Types
Definition: Testing types refer to the different categories or objectives of testing, each designed to validate specific aspects of the software, such as functionality, performance, or security.
Focus: The purpose or aspect of the software being tested.
Examples:
○ Functional Testing: Validates that the software performs according to the specified requirements.
○ Non-Functional Testing: Tests aspects like performance, usability, and reliability, which are not related to specific functions.
○ Regression Testing: Ensures that recent changes have not negatively impacted existing functionality.
○ Security Testing: Identifies vulnerabilities to protect the software from security threats.
Usage: Testing types help categorize and focus testing efforts on different aspects of the software, ensuring comprehensive evaluation.

3. Testing Levels
Definition: Testing levels refer to the different stages in the software development lifecycle where testing is performed. Each level focuses on a different scope of the application, from individual units to the entire system.
Focus: The stage or scope of the development process where testing occurs.
Examples:
○ Unit Testing: Tests individual components or units of code, typically performed by developers.
○ Integration Testing: Focuses on interactions between integrated components or modules.
○ System Testing: Involves testing the complete and integrated software system to ensure it meets the specified requirements.
○ Acceptance Testing: Validates the software against user requirements and determines whether it is ready for release.
Usage: Testing levels help structure the testing process throughout the development lifecycle, ensuring issues are caught early and the software is validated before release.

Key Differences:
Scope vs. Method vs. Purpose:
○ Testing Techniques: Focus on the methods used to design and execute tests, applicable across different types and levels.
○ Testing Types: Focus on the purpose or aspect of the software being tested, such as functionality, performance, or security.
○ Testing Levels: Focus on the scope or stage of testing within the development lifecycle, from unit testing to acceptance testing.
Examples:
○ A black-box testing technique can be applied at the system testing level to perform functional testing.
○ White-box testing might be used at the unit testing level to ensure that each line of code is executed properly.
Application:
○ Testing Techniques are applied during the creation and execution of tests, guiding how tests are designed and conducted.
○ Testing Types categorize the overall objectives of testing efforts, ensuring all necessary aspects of the software are validated.
○ Testing Levels define when and where in the development process testing should occur, ensuring a structured and staged approach.

Summary
Testing Techniques: How you test (methods).
Testing Types: What you test (categories/objectives).
Testing Levels: When/where you test (stages/scope in the lifecycle).

These three concepts are interrelated and often overlap, but they each serve a distinct purpose in the overall testing strategy. Understanding them helps in planning and executing a comprehensive testing process that covers all necessary aspects of the software.
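As referenced under Testing Techniques, a minimal Boundary Value Analysis sketch using TestNG. The rule under test is hypothetical (an input that accepts ages 18 to 60); the point is to exercise the values at and just around each boundary.

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

// Boundary Value Analysis: test values at and around the edges of the valid range.
public class AgeBoundaryTest {

    // Hypothetical rule under test: age is valid when 18 <= age <= 60.
    private boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    @DataProvider(name = "boundaryAges")
    public Object[][] boundaryAges() {
        return new Object[][] {
            {17, false}, // just below lower boundary
            {18, true},  // lower boundary
            {19, true},  // just above lower boundary
            {59, true},  // just below upper boundary
            {60, true},  // upper boundary
            {61, false}  // just above upper boundary
        };
    }

    @Test(dataProvider = "boundaryAges")
    public void verifyAgeBoundaries(int age, boolean expectedValid) {
        Assert.assertEquals(isValidAge(age), expectedValid);
    }
}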
Software Testing Life Cycle (STLC)
Meaning: STLC is a sequence of phases in the testing process to ensure that software is tested thoroughly and meets quality standards before release.
Phases:
1. Requirement Analysis: Understand what needs to be tested.
2. Test Planning: Define testing strategy, schedule, and resources.
3. Test Case Development: Create detailed test cases and test data.
4. Test Environment Setup: Prepare the environment for testing.
5. Test Execution: Run test cases, log defects, and perform regression testing.
6. Test Cycle Closure: Analyze results, document findings, and wrap up testing.
Example: In an e-commerce project, STLC involves analyzing the checkout process requirements, planning the test cases, setting up the test environment, running tests on different payment methods, and finally closing the test cycle after all issues are resolved.

Software Development Life Cycle (SDLC)
Meaning: SDLC is a series of steps followed during software development to design, develop, test, and deploy software.
Phases:
1. Planning: Define the project scope and objectives.
2. Requirements Analysis: Gather and document user requirements.
3. Design: Create the software architecture and design.
4. Implementation: Write the code and build the application.
5. Testing: Test the software to find and fix defects (STLC occurs here).
6. Deployment: Release the software to production.
7. Maintenance: Provide ongoing support and updates.
Example: In the same e-commerce project, SDLC includes planning the online store, gathering requirements for features like product listing, designing the user interface, coding the website, testing it (using STLC), deploying it online, and maintaining the site post-launch.

○ TestNG Overview: TestNG is a testing framework inspired by JUnit and NUnit, designed to simplify testing in Java. It supports test configuration, parallel execution, data-driven testing, and more.
○ Common Annotations
Annotations: Annotations in TestNG define how methods should be treated during the testing process.
- @Test: Marks a method as a test method.
- @BeforeMethod and @AfterMethod: Methods annotated with these will run before and after each test method respectively.
- @BeforeClass and @AfterClass: Run before and after all the methods in the current class.
- @BeforeSuite and @AfterSuite: Execute before and after all the tests in a suite.
- @BeforeTest and @AfterTest: Run before and after the test methods belonging to a <test> tag in the TestNG XML file.
○ Test Configuration:
- Test Prioritization: Using priority to control the order in which tests run.
- Grouping Tests: Organizing tests into groups with the groups attribute in @Test. This allows for selective execution of specific test groups.
- Disabling Tests: You can skip tests by setting the enabled attribute to false in the @Test annotation.
- Timeout: Setting a maximum duration for test execution using the timeOut attribute in @Test.
○ TestNG Assertions
- Assertions: The Assert class in TestNG is used to validate test results. Common assertions include assertEquals, assertTrue, assertFalse, etc.
- Soft Assertions: Unlike regular assertions, soft assertions allow the test to continue after an assertion failure, collecting all errors before reporting.
○ Data-Driven Testing
- @DataProvider: As discussed earlier, this is used to run a test method multiple times with different sets of data.
- Parameterization: Passing parameters to test methods via TestNG XML files using the @Parameters annotation.
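A minimal sketch pulling the TestNG features above together (annotations, priority, groups, enabled, timeOut, and hard vs. soft assertions); class names, method names, and values are illustrative.

import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

// Illustrates common TestNG annotations, configuration attributes, and assertions.
public class TestNgFeaturesDemo {

    @BeforeClass
    public void setUpClass() {
        // Runs once before all test methods in this class (e.g., start a browser).
    }

    @BeforeMethod
    public void setUpMethod() {
        // Runs before each test method (e.g., navigate to the start page).
    }

    // priority controls execution order; groups allow selective runs from testng.xml.
    @Test(priority = 1, groups = {"smoke"})
    public void verifyHomepageTitle() {
        String actualTitle = "Home"; // illustrative value
        Assert.assertEquals(actualTitle, "Home");
    }

    // timeOut fails the test if it runs longer than 2000 ms.
    @Test(priority = 2, groups = {"regression"}, timeOut = 2000)
    public void verifySearch() {
        Assert.assertTrue("results".contains("result"));
    }

    // enabled = false skips this test without deleting it.
    @Test(enabled = false)
    public void skippedTest() {
    }

    // Soft assertions collect failures and report them together at assertAll().
    @Test(priority = 3)
    public void verifyMultipleFieldsSoftly() {
        SoftAssert soft = new SoftAssert();
        soft.assertEquals(2 + 2, 4, "math check");
        soft.assertTrue(true, "flag check");
        soft.assertAll(); // reports all collected failures, if any
    }

    @AfterClass
    public void tearDownClass() {
        // Runs once after all test methods in this class (e.g., quit the browser).
    }
}

Groups such as "smoke" can then be selected for execution through the groups section of a testng.xml suite file.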
○ Reports and Logs
- Test Reports: TestNG generates detailed HTML and XML reports for test execution, which provide insights into test results.
- Logging: TestNG integrates with various logging frameworks to capture detailed logs during test execution.
○ Dependency Management
- Test Dependencies: You can define dependencies between test methods using the dependsOnMethods or dependsOnGroups attributes in the @Test annotation. This ensures that some tests only run after others have successfully completed.
○ Listeners and Reporters:
- TestNG Listeners: Implementing listeners like ITestListener or ISuiteListener allows for custom actions during the test execution lifecycle, such as logging or reporting.
- Custom Reporters: TestNG allows you to create custom reports by implementing IReporter or using third-party plugins.
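A minimal listener sketch along these lines, assuming TestNG 7+ (where ITestListener methods have default implementations, so only the hooks of interest need overriding); the screenshot step is indicated only as a comment because how the WebDriver instance reaches the listener is framework-specific.

import org.testng.ITestListener;
import org.testng.ITestResult;

// Reacts to test outcomes, e.g., to log details or capture a screenshot on failure.
public class FailureListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("FAILED: " + result.getName());
        System.out.println("Reason: " + result.getThrowable());
        // A screenshot could be captured here via the WebDriver instance, e.g.
        // ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE),
        // if the driver is exposed to the listener by the framework.
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        System.out.println("PASSED: " + result.getName());
    }
}

The listener is registered either with @Listeners(FailureListener.class) on a test class or through the <listeners> element in testng.xml.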
Questions for Automation
1. Describe a challenging testing project you've worked on and how you overcame obstacles to achieve success.
○ At Market Innovators Inc., I led a project to automate the testing of a complex web application with dynamic content.
○ The main challenge was handling frequent changes in the application's UI, which affected test stability.
○ To overcome this, I implemented robust locator strategies using Selenium, incorporated flexible waits, and worked closely with developers to understand upcoming changes. These steps improved test reliability and ensured successful project completion.
2. Can you explain how you would use Selenium with TestNG for a data-driven test? What are the advantages of using TestNG over JUnit in certain scenarios?
○ In Selenium, I would use TestNG's @DataProvider to supply multiple sets of data to a single test method.
○ This allows for testing various input combinations without writing multiple test cases.
○ TestNG offers more flexibility than JUnit with features like parallel test execution, better annotations, and support for dependency testing, making it more suitable for complex test scenarios.
3. Can you describe how you've implemented the Page Object Model in your automation framework? What are the key benefits of using POM?
○ In my previous project, I implemented the Page Object Model by creating a separate class for each page of the application, encapsulating the UI elements and the actions that can be performed on them.
○ This separation helps in maintaining the code more efficiently and makes it easier to reuse and update when the UI changes. POM also reduces code duplication and enhances test readability.
4. How do you handle exceptions in your Java-based test automation scripts, and how would you ensure that a failed test provides useful debugging information?
○ I use try-catch blocks to handle exceptions, ensuring that any errors are caught and logged properly.
○ I also use custom exception handling to throw meaningful error messages when a test fails.
○ For example, if a web element is not found, I log the exact element and the action that failed. Additionally, I ensure that screenshots are captured on test failure for visual debugging.
5. How do you approach debugging a flaky test that sometimes passes and sometimes fails?
○ First, I analyze the test to see if there are any timing issues, such as waiting for elements to load.
○ I would add explicit waits or increase the timeout settings. I also check for any dependencies between tests.
6. Can you explain the Page Object Model and how you've used it in your projects?
○ The Page Object Model (POM) is a design pattern that helps in organizing code for automated tests. It involves creating separate classes for each page of the application, encapsulating the page's elements and actions. This abstraction allows for:
○ Code Reusability: Reusing page methods across multiple tests.
○ Maintainability: Making it easier to update tests when the UI changes by modifying only the page class.
○ In my projects, I have implemented POM by creating a base page class with common methods and extending it for specific pages. This approach improves the organization and scalability of the test suite.
7. Describe how you handle dynamic elements in your automation scripts.
○ To handle dynamic elements in my automation scripts, I use several strategies:
○ Dynamic Locators: I use XPath or CSS selectors that can accommodate changes in element attributes, such as dynamic IDs or class names.
○ Wait Strategies: I implement explicit waits to handle elements that take time to load or become interactable. For instance, I use WebDriverWait to wait for specific conditions.
○ Retry Mechanism: In cases where elements might load intermittently, I use retry logic to attempt actions multiple times before failing the test.
8. What is the Page Object Model?
○ The Page Object Model (POM) is a design pattern in test automation that creates an object repository for web UI elements. Each web page in the application is represented by a class in the code, and the class contains the methods that interact with that page.
9. How does POM help in maintaining and scaling test scripts?
○ POM helps in maintaining and scaling test scripts by promoting code reusability and separation of concerns. It simplifies test maintenance since any changes in the UI can be updated in one place, the corresponding page object, without affecting the test logic.
10. Can you provide an example of how you have implemented POM in your projects?
○ A brief example of how POM is implemented in a login test case is shown in the sketch after this list.
11. Can you explain the differences between these frameworks and when to use each?
○ I'm familiar with Data-Driven, Keyword-Driven, and Behavior-Driven frameworks.
○ Data-Driven: Focuses on separating test data from test scripts, useful when you need to run the same test with multiple data sets.
○ Keyword-Driven: Uses keywords to represent user actions, making it easier for non-programmers to understand and maintain tests.
○ Behavior-Driven (BDD): Combines tests with business-readable specifications, ideal for ensuring collaboration between developers, testers, and business stakeholders.
12. How do you design test cases for automation?
○ I design test cases by first identifying the test objectives, breaking down the functionality into smaller components, and then creating steps that validate each component. I ensure the cases are clear, reusable, and maintainable, with a focus on key functionalities.
13. What factors do you consider when deciding which test cases to automate?
○ I consider factors like test case frequency, repetitiveness, complexity, and potential for human error. I prioritize automating tests that are time-consuming, critical to the application, or need to be executed across multiple environments.
14. How do you ensure that your test scripts are reusable and modular?
○ I ensure reusability and modularity by following best practices like creating separate functions for common actions, using the Page Object Model (POM), and organizing code into easily maintainable classes. This approach reduces duplication and simplifies updates.
15. Can you provide an example of a reusable method you have created?
○ For example, I created a reusable method for logging into an application (see the sketch after this list).
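A minimal sketch of the page object and reusable login method referenced in items 10 and 15, assuming Selenium WebDriver with a local ChromeDriver; locators, the URL, and class names are illustrative. The page object (one file):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for the login page: locators and actions live here, not in the tests.
public class LoginPage {

    private final WebDriver driver;

    // Illustrative locators; real ones depend on the application under test.
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("loginBtn");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Reusable login action shared by any test that needs an authenticated user.
    public void loginAs(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}

And a test that uses it (a second file), so the test expresses intent while element details stay inside the page object:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginTest {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new ChromeDriver();
        driver.get("https://example.com/login"); // illustrative URL
    }

    @Test
    public void verifySuccessfulLogin() {
        new LoginPage(driver).loginAs("validUser1", "validPass1");
        Assert.assertTrue(driver.getCurrentUrl().contains("dashboard"),
                "User should land on the dashboard after a valid login");
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}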
16. What is data-driven testing and how do you implement it in your framework?
○ Data-driven testing is a testing methodology where test cases are executed using various sets of input data. The main goal is to verify that the application behaves correctly with different data sets, enhancing test coverage and robustness. This approach separates test logic from test data, allowing for more comprehensive testing with minimal code changes.
○ Implementation Methods:
1. Using @Parameters in TestNG: Allows you to pass data from a configuration file or XML to your test methods.
2. Using .properties Files: Stores key-value pairs in a properties file, which can be loaded into your test scripts to provide dynamic test data.
17. How do you handle dynamic elements in your tests?
○ Dynamic elements are those whose properties (e.g., ID, class) change frequently. Handling them requires strategies to ensure your test scripts can interact with these elements reliably.
18. Strategies for Handling Dynamic Elements:
○ Use of Flexible Locators:
- XPath with Contains: Use partial matches or more flexible XPath expressions to locate elements. For example, //button[contains(text(),'Submit')].
- CSS Selectors: Use CSS attributes that are less likely to change. For example, button[id^='submit'] targets buttons with IDs starting with "submit".
- Regular Expressions: For elements with dynamic IDs or classes, use regular expressions in XPath or CSS selectors to match patterns.
○ Dynamic Waits: Employ waits to handle elements appearing or changing state dynamically. Ensure your script waits until the element is interactable.
19. What strategies do you use to handle synchronization issues in your tests?
○ Synchronization issues arise when tests interact with elements that may not yet be fully loaded or ready. Effective strategies include:
- Explicit Waits: Use WebDriverWait to wait for specific conditions (e.g., element visibility, clickability).
- Implicit Waits: Set a default wait time for finding elements, useful for general cases.
- Fluent Waits: A more flexible wait approach that allows custom polling intervals and conditions.
- Handling Stale Element References: Re-find elements if they become stale during interactions.
20. How do you manage waits (explicit, implicit) and avoid common pitfalls?
○ Explicit Waits: Preferred for waiting for specific conditions. Avoid hardcoded sleeps as they can lead to flaky tests. Use conditions such as visibility, clickability, or presence.
○ Implicit Waits: Set globally, but avoid over-reliance as they can increase test execution time. Use them in combination with explicit waits for better control.
○ Avoiding Common Pitfalls:
- Mixing Wait Types: Avoid using implicit waits with explicit waits, as it can lead to unpredictable results.
- Excessive Waiting: Use the minimum required wait time to reduce test execution time.
- Hardcoded Sleep: Avoid using Thread.sleep() for waits; it's less reliable and can cause test instability.
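A minimal sketch tying the flexible-locator and explicit-wait strategies above together, assuming Selenium 4 (the Duration-based WebDriverWait constructor); the locator and timeout are illustrative.

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Waits for a dynamically rendered button located by partial text,
// then clicks it once it is clickable, instead of using Thread.sleep().
public class DynamicElementHelper {

    public static void clickSubmitWhenReady(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        WebElement submit = wait.until(
                ExpectedConditions.elementToBeClickable(
                        By.xpath("//button[contains(text(),'Submit')]")));
        submit.click();
    }
}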
Questions for Manual
1. Will your test scenarios differ depending on who will use the system? If yes, how? If no, why?
○ Yes, test scenarios can differ based on user roles and their permissions within the system.
○ From my experience, some users have admin access, while others are standard users.
○ These roles may have different access levels and functionalities, so it's important to tailor test cases accordingly to ensure each role's specific needs are met.
2. Where do you spend more of your time when testing: positive cases or negative cases? Why?
○ I spend more time on negative cases, since testing positive cases mainly confirms that the system works as expected.
○ Focusing on negative cases helps identify potential failures and vulnerabilities. This approach ensures the system can handle unexpected situations and inputs effectively.
3. What is the most memorable bug/issue that you have filed? Why?
○ One of the most memorable bugs I encountered involved incorrect payment options in the system.
○ When users navigated from the 'Hatid' screen to the 'Eats' summary screen, 'Cash Payment' appeared as an option with an error message saying, "The selected payment option is invalid."
○ The valid options were supposed to be 'Wallet' and 'Cash Payment by Receiver.' This bug was significant because it affected payment accuracy and user trust. Through extensive testing and collaboration, we found a logic flaw in the payment sequence. Fixing it not only corrected the displayed options but also improved the system's reliability, highlighting the importance of thorough testing and effective teamwork.
○ Another memorable bug involved accepting non-existent locations in the address fields for Delivery Details and Summary Details. The system allowed users to enter and submit addresses that did not exist, which could lead to failed deliveries and user frustration. This bug was significant because it directly impacted the accuracy and reliability of delivery services. Fixing this issue was crucial for ensuring that only valid addresses were accepted, which improved the accuracy of deliveries and the overall user experience. This experience highlighted the importance of validating user input to maintain the integrity of the system.
4. How do you handle the pressure of meeting tight deadlines and dealing with potentially stressful testing situations?
○ I manage tight deadlines and stressful situations by staying organized and prioritizing tasks effectively.
○ I break down complex testing tasks into manageable components and use tools like JIRA to track progress and address issues promptly. Effective communication with team members and a proactive approach to problem-solving help me stay on top of deadlines and reduce stress.
5. Describe your approach to test case design and how you ensure comprehensive test coverage.
○ My approach to test case design involves understanding the requirements thoroughly and identifying key functionalities and edge cases.
○ I create detailed test cases that cover both positive and negative scenarios, ensuring comprehensive coverage.
6. How do you prioritize your tasks and manage your time effectively when juggling multiple testing projects?
○ I prioritize tasks by assessing their urgency and impact on the project.
○ I use project management tools like JIRA to track and organize tasks, set clear deadlines, and allocate time based on priority.
○ I also break down larger tasks into smaller, manageable steps and regularly review progress to adjust priorities as needed.
7. Can you describe your background and experience in software testing?
○ I have over 5 years of experience in software testing, having worked in roles such as QA Tester/Web Developer at BCD Pinpoint Direct Marketing Inc and Lead Automation Tester at Market Innovators Inc.
○ My experience spans manual and automated testing for web, iOS, and Android applications. I have expertise in functional testing, bug reporting, and test automation using tools like Selenium, Appium, and Detox. Additionally, I've managed deployments and performed manual API testing using Insomnia and Postman.
8. What motivated you to pursue a career in software testing?
○ I pursued a career in software testing because I enjoy the challenge of identifying issues and improving software functionality. The dynamic nature of testing, along with the opportunity to work closely with development teams, keeps me engaged and motivated.
9. What do you believe sets you apart from other candidates for this Test Analyst position?
○ My diverse experience in both manual and automated testing across different platforms, coupled with my expertise in tools like Selenium, Appium, and Postman, sets me apart. My strong problem-solving skills and commitment to continuous improvement make me a valuable asset.
10. You've been assigned to a project with constantly changing requirements. How would you adapt your testing strategy to accommodate these changes?
○ To adapt to a project with constantly changing requirements, I would implement an agile testing methodology. This allows test cases to be continuously developed and refined as the project evolves. Additionally, I would prioritize testing based on risk, focusing on areas of the application that are most affected by the changes or have the greatest potential for defects.
○ To further manage the challenges posed by shifting requirements, I would leverage test automation. Automation helps cover repetitive and stable aspects of the application efficiently. Automated tests can be quickly updated and re-executed, allowing manual testing to focus on new or altered functionalities. This combined approach ensures that the testing process remains aligned with evolving project needs and maintains high quality and effectiveness throughout the project lifecycle.
11. If you were asked to lead a team of junior Test Analysts, how would you mentor and guide them in their work?
○ I would mentor junior Test Analysts by providing clear guidance on testing methodologies, best practices, and tool usage. I'd encourage open communication, offer regular feedback, and create opportunities for hands-on learning through paired testing or shadowing.