Software Testing Overview

Questions and Answers

Explain the key difference between dynamic and static testing.

Dynamic testing involves executing the software to observe its behavior, while static testing analyzes the code and documentation without running the software.

What is the difference between quality control and quality assurance in software development?

Quality control focuses on detecting defects after development, while quality assurance aims to prevent defects throughout the entire development process.

What are the essential elements included in a test plan?

A test plan typically includes the test scope, testing environment, entry and exit criteria, and testing objectives.

Describe the difference between retesting and regression testing in software testing.

Retesting is focused on verifying a specific defect fix, while regression testing checks that recently fixed defects or newly added features haven't caused issues in other parts of the application.

What is the purpose of smoke testing in software development, and when is it usually performed?

Smoke testing checks the core functionalities of a new build to ensure it is stable enough to proceed with further testing. It is typically performed after a build is deployed.
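
As a concrete illustration, the sketch below shows what such a suite might look like; the ShopApp class and its checks are hypothetical stand-ins for the real build and its core paths.

# Smoke-test sketch (illustrative only): a few fast checks of core paths run
# right after a new build is deployed. "ShopApp" is a hypothetical stand-in
# for the real application; a real suite would hit the deployed build instead.

class ShopApp:
    """Stand-in for the application under test."""

    def start(self) -> bool:
        return True                              # the build boots

    def login(self, user: str, password: str) -> bool:
        return bool(user and password)           # core login path responds

    def search(self, term: str) -> list:
        return [f"result for {term}"]            # core search path responds


def run_smoke_tests() -> None:
    app = ShopApp()
    assert app.start(), "build failed to start"
    assert app.login("demo_user", "demo_pass"), "login path broken"
    assert app.search("laptop"), "search path broken"


if __name__ == "__main__":
    run_smoke_tests()
    print("smoke tests passed: build is stable enough for further testing")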

Explain the difference between severity and priority when it comes to software defects.

Severity refers to the impact a defect has on the application's functionality, while priority indicates how urgently the defect needs to be fixed from a business perspective.

Give an example of a defect with high priority and low severity, and explain why it has those classifications.

A spelling mistake in the company name on the website is a high-priority but low-severity defect: it doesn't affect the website's functionality, but it is important to fix for brand consistency and user perception.

Explain the difference between low priority and high severity defect in software testing and give an example.

A low-priority, high-severity defect has a significant impact on the software's functionality but isn't urgent to fix immediately. For example, a crash that occurs when a user tries to order an extremely large quantity of a product would be low priority (because it is unlikely to happen often) but high severity (because it causes a crash).

Who is responsible for determining the severity of a bug?

The tester is responsible for determining the severity of a bug.

What criteria should be reviewed if the team disagrees that a defect is valid?

The requirements should be reviewed to confirm whether the issue aligns with the specified requirements.

What are some scenarios when automation of tests is preferred?

Automation is preferred for tests that take a long time to execute, repetitive tests, and complex scenarios that are hard to test manually.

What essential elements should be included in a defect report?

A defect report should include a descriptive title, steps to reproduce, environment details, severity, priority, and attachments or screenshots.

What is the difference between positive and negative testing?

Positive testing verifies what the system should do, while negative testing determines what the system should not do.

Define a test suite.

A test suite is a collection of related test cases.

What does a test scenario represent?

A test scenario represents what is to be tested, such as the registration process.

What constitutes a test environment?

A test environment is a combination of hardware and software, including the OS, web browser, and application under test.

What is the purpose of the Release Notes in a sprint?

Release Notes summarize what occurred during the sprint, including testing details, the build number, issue status, and next steps.

What defines the exit criteria in a testing phase?

Exit criteria are conditions that must be verified to determine when testing can be stopped or deemed complete.

Explain the concept of Defect Age in software testing.

Defect Age refers to the time difference between when a defect is detected and when it is fixed.

What is the significance of Entry criteria in the testing process?

Entry criteria are prerequisites that must be met before starting testing, such as having the test data and environment prepared.

Differentiate between SDLC and STLC.

SDLC is the Software Development Life Cycle, which covers the end-to-end phases of building software, while STLC is the Software Testing Life Cycle, which covers the phases of the testing process within it.

What roles are commonly found in a Scrum team?

Common roles in a Scrum team include the Product Owner, Scrum Master, and Development Team members.

List the Agile meeting types and their purpose in the Sprint process.

Agile meeting types include Sprint Planning, Daily Standup, Sprint Review, and Sprint Retrospective, facilitating collaboration and progress tracking.

What does it mean when a bug is marked as 'deferred'?

A bug marked as 'deferred' indicates that it will be fixed, but not until a future sprint.

What is the goal of performance testing? Briefly describe its purpose and key aspects.

Performance testing aims to assess the responsiveness, stability, and overall performance of a system under varying workloads. It evaluates how the application behaves under pressure, ensuring it can handle expected user traffic and maintain desired response times.

Explain the difference between alpha testing and beta testing, including who typically conducts each type of testing.

Alpha testing is conducted internally by developers or testers within the development team, typically in a controlled environment. It focuses on identifying bugs and issues before releasing the product to external users. Beta testing, on the other hand, involves real users outside the development team who provide feedback on usability and functionality in a real-world setting.

Describe the core difference between verification and validation in software testing. Provide a brief example to illustrate each concept.

Verification checks whether the product is built according to specifications and requirements, ensuring it complies with design documents; for example, verifying that a login form correctly validates input fields and user data. Validation, on the other hand, ensures the finished product meets user needs and expectations; for example, validating that the login form design is user-friendly and intuitive for its intended users.

What distinguishes black box testing from white box testing? Provide an example scenario to illustrate how each approach would be applied to test a login feature.

Black box testing focuses on the functional behavior of a system without examining its internal code structure; for example, testing a login feature by providing valid and invalid usernames and passwords to verify successful logins and error messages. White box testing, however, analyzes the internal code structure and logic to ensure proper functionality; this could involve reviewing the code to ensure secure password encryption and proper user authentication implementation.

Define gray box testing and explain how it combines elements of both black box and white box approaches. Provide a hypothetical case study demonstrating its application.

Gray box testing combines aspects of both black box and white box testing by leveraging partial knowledge of the system's internal structure. Testers might use knowledge of the system's architecture or design to guide their testing, while still evaluating the system's behavior as a black box. For instance, a tester might know that a login feature uses a specific database to store user credentials; they could then test the login functionality with potential SQL injection vulnerabilities in mind, while also evaluating the interface's usability.

In your own words, describe the purpose of unit testing and its significance in the software development lifecycle. Identify the individuals typically responsible for conducting unit tests.

Unit testing is a crucial stage in software development that focuses on testing individual components or modules of the application in isolation. The primary goal is to ensure that each unit functions correctly and meets its intended purpose. This helps identify and rectify bugs early in the development process, leading to more robust and reliable software. Typically, developers are responsible for conducting unit tests.

Outline the main objectives of system integration testing (SIT). Explain why this type of testing is vital before deploying an application to production.

System integration testing (SIT) primarily aims to verify how different modules or components of a system interact when integrated together. Its objective is to ensure seamless communication and data flow between these modules, preventing integration-related issues. SIT is crucial before deploying an application to production because it helps identify and address interoperability problems early on, reducing the risk of system failures and ensuring a smooth transition to the live environment.

Describe the general characteristics of non-functional testing. List three common types of non-functional tests and briefly explain their individual focuses.

Non-functional testing evaluates aspects of the software system that are not directly tied to specific features or functions. It assesses attributes like performance, security, usability, and reliability rather than simply verifying that the software works as intended. Common types of non-functional tests include:
  • Performance testing: Evaluates the system's responsiveness, stability, and resource utilization under varying workloads.
  • Usability testing: Assesses how easy and intuitive the system is to use for intended users.
  • Security testing: Verifies the system's ability to protect sensitive data and prevent unauthorized access.

Flashcards

Unit Testing

Testing individual components by developers to ensure correctness.

Integration Testing

Testing the interaction between integrated units, typically done by testers.

System Integration Testing (SIT)

Testing the system as a whole to validate system-to-system interactions.

User Acceptance Testing (UAT)

Testing to validate if the product meets user requirements, often done by clients.

Static Testing

Reviewing code and requirements documentation without executing the code.

Dynamic Testing

Testing the application by executing code and verifying outputs.

Black Box Testing

Testing behavior of the application with no regard to internal code structure.

Verification vs Validation

Verification checks if the product is built right; Validation checks if it is the right product for the user.

Severity

Impact of a defect on the application assessed by the tester.

Bug Priority

Determination of defect's urgency, decided by the Product Owner.

Requirements Review

Process of checking if a defect aligns with initial requirements.

Automated Testing

Using scripts to run tests that are time-consuming or repetitive.

Defect Report Essentials

Key components include title, steps to reproduce, and severity.

Positive Testing

Verifying that the system behaves as expected based on requirements.

Negative Testing

Determining what the system should not do to uncover defects.

Test Case

A set of steps designed to test a specific scenario, both positive and negative.

Quality Control

Processes to ensure products meet quality standards.

Quality Assurance

Processes to improve and ensure quality in development.

Retesting

Verifying if a defect has been fixed in the software.

Regression Testing

Testing related functions after changes to ensure no new defects.

Smoke Testing

Initial testing to check core functionalities after a new build.

Severity vs Priority

Severity: impact of a defect; Priority: urgency to fix it.

Bug Life Cycle

Stages of a bug from creation to closure, including new, assigned, opened, closed, reopened, deferred, duplicated, or not a bug.

Entry Criteria

Prerequisites that must be met before starting testing, such as test data and environment readiness.

Exit Criteria

Conditions that must be met to conclude testing or declare a specific condition verified.

RTM (Requirements Traceability Matrix)

A document that links requirements to tests to verify fulfillment.

Release Notes

A document summarizing the sprint activities, including coverage, issues, and future actions.

Defect Age

The time difference between when a defect is detected and when it is fixed.

Testing Stop Criteria

Factors that determine when to stop testing, such as coverage, deadlines, and management decisions.

Agile Meeting Types

Different meetings in Agile, including Sprint Planning, Daily Standup, Sprint Review, Retrospective, and Backlog Refinement.

Performance Testing

Testing to assess system responsiveness and stability under load.

Functional Testing

Testing to verify that the software performs its functions as specified in requirements.

Non-Functional Testing

Testing aspects like performance, usability, and scalability, rather than specific functions.

Gray Box Testing

A mix of black box and white box testing, combining evaluation of external behavior with partial knowledge of the internal structure.

Verification

The process to ensure the product is built correctly and meets specified requirements.

Validation

The process to ensure the product meets user needs and requirements post-development.

Exploratory Testing

Testing based on the tester's experience and intuition when time or specifications are limited.

Study Notes

Manual Testing FAQ

  • Testing Levels
    • Unit testing: Performed by developers (see the sketch after this list)
    • Integration testing: Performed by testers
    • System Integration Testing (SIT): Performed by testers
    • User Acceptance Testing (UAT): Validation by the client or end users, typically carried out as alpha and beta testing
      • Alpha testing: Testing of the final product at the development site
      • Beta testing: Client testing of a beta version of the application in a real-world setting
  • Testing Types
    • Static testing
    • Dynamic testing
    • Functional testing
    • Non-functional testing: Includes performance, usability, and scalability
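
As an illustration of the unit level noted above, the sketch below tests one function in isolation; the apply_discount function and its rules are invented for the example.

# Unit-test sketch: a developer tests one function in isolation.
# "apply_discount" and its rules are hypothetical, for illustration only.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_typical_case():
    assert apply_discount(200.0, 25) == 150.0


def test_apply_discount_rejects_invalid_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass                       # expected: invalid input is rejected
    else:
        raise AssertionError("expected ValueError for percent > 100")


if __name__ == "__main__":
    test_apply_discount_typical_case()
    test_apply_discount_rejects_invalid_percent()
    print("unit tests passed")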

Performance Testing

  • Technique to determine system responsiveness and stability under various workloads (a minimal load-simulation sketch follows this list)
  • Load testing: Measures system behavior under a specific, expected load
    • Monitors transactions and load on the database and application server
  • Stress testing: Determines the upper-limit capacity of the system and how it handles load beyond normal levels
  • Endurance/Soak testing: Tests system performance over an extended period under a sustained load
  • Spike testing: Suddenly increases the number of users and measures how the system responds to and recovers from the spike
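
To make the idea concrete, here is a minimal, self-contained load-simulation sketch; call_system_under_test is a local stand-in for a real request, and a real load test would target the deployed application with a dedicated load-testing tool or client.

# Load-simulation sketch: fire many concurrent "requests" and report response
# times. "call_system_under_test" is a local stand-in for a real request to
# the application under test; the user and request counts are illustrative.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def call_system_under_test() -> float:
    """Pretend to call the application and return the response time in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))       # simulated processing delay
    return time.perf_counter() - start


def run_load(concurrent_users: int = 20, requests_per_user: int = 10) -> None:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(call_system_under_test)
                   for _ in range(concurrent_users * requests_per_user)]
        timings = sorted(f.result() for f in futures)

    print(f"requests : {len(timings)}")
    print(f"average  : {statistics.mean(timings) * 1000:.1f} ms")
    print(f"95th pct : {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")


if __name__ == "__main__":
    run_load()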

When is Performance Testing Needed?

  • Not mandatory for all applications.
  • Primarily needed for client-server-based applications.

Other Testing Techniques

  • Black box testing: Testing application behavior without knowing internal structure
  • White box testing: Testing application code (mainly done by developers using automation)
  • Grey box testing: Combination of black box and white box testing, using partial knowledge of the internal structure (often guided by tester experience)

Verification vs. Validation

  • Verification: Checking if the product is built correctly based on requirements (Quality Control)
  • Validation: Checking if the product meets client needs/expectations (Quality Assurance). A product can be verified but still not meet client needs

Static vs. Dynamic Testing

  • Static testing: Testing without code execution (examining documents & reviews)
  • Dynamic testing: Testing by executing the software (black box or white box)

Quality Control vs. Quality Assurance

  • Quality Assurance: Proactive process focusing on processes and preventing defects; ensures the system meets the specified requirement from the beginning of the project
  • Quality Control: Reactive process to ensure deliverables are defect free based on quality requirements; an element of the STLC

Test Plan Components

  • Test scope
  • Environment
  • Entry and exit criteria
  • Testing objectives

Retesting vs. Regression Testing

  • Retesting: Tests after a defect is fixed to verify the fix
  • Regression testing: Tests related features after a defect fix or new functionality to ensure there is no unintended impact

Severity vs. Priority

  • Severity: Impact of a defect on application functionality
  • Priority: Urgency of fixing the defect relative to business requirements (based on business value)

Bug Priority

  • Determined by the product owner

Post Conditions in Test Cases

  • Example (square root test): the precondition is that the number is > 0; the postcondition is that the square root is displayed (sketched below)
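
A minimal sketch of that example, with the precondition checked before the action and the postcondition asserted afterwards; sqrt_feature stands in for the feature under test.

# Pre/postcondition sketch for the square-root example.
# "sqrt_feature" stands in for the feature under test.
import math


def sqrt_feature(number: float) -> float:
    return math.sqrt(number)


def test_square_root_of_positive_number():
    number = 9.0
    assert number > 0                      # precondition: input is positive

    result = sqrt_feature(number)          # action under test

    assert result == 3.0                   # postcondition: square root produced
    assert result * result == number       # sanity check on the result


if __name__ == "__main__":
    test_square_root_of_positive_number()
    print("postcondition verified")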

When to Automate Tests

  • Time-consuming tests
  • Repetitive tests
  • Complex scenarios that are hard to test manually (see the parameterized sketch below)
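
For instance, a repetitive check can be automated by looping the same test over a table of inputs; the login function and the test data below are hypothetical.

# Automation sketch: run the same repetitive check over many inputs.
# "login" and the test data are hypothetical, for illustration only.

def login(username: str, password: str) -> bool:
    return username == "demo_user" and password == "S3cret!"


# (username, password, expected result): repetitive cases a human would
# otherwise have to click through for every build
CASES = [
    ("demo_user", "S3cret!", True),
    ("demo_user", "wrong",   False),
    ("",          "S3cret!", False),
    ("DEMO_USER", "S3cret!", False),
]


def test_login_cases():
    for username, password, expected in CASES:
        assert login(username, password) is expected, (username, password)


if __name__ == "__main__":
    test_login_cases()
    print(f"{len(CASES)} login cases passed")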

Defect Report Contents

  • Descriptive title
  • Steps to reproduce
  • Environment
  • Severity
  • Priority
  • Attachments & screenshots (an example report structure follows this list)
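
Those essentials can be captured in a structured record so reports stay consistent between testers; the field values below are illustrative only.

# Defect-report sketch: the essential fields captured as a structured record.
# Field values are illustrative only.
from dataclasses import dataclass, field


@dataclass
class DefectReport:
    title: str
    steps_to_reproduce: list
    environment: str
    severity: str          # impact on functionality (set by the tester)
    priority: str          # urgency to fix (set by the product owner)
    attachments: list = field(default_factory=list)


example = DefectReport(
    title="Checkout button unresponsive on payment page",
    steps_to_reproduce=[
        "Add any item to the cart",
        "Open the payment page",
        "Click 'Checkout'",
    ],
    environment="Chrome 126 / Windows 11 / build 1.4.2",
    severity="High",
    priority="High",
    attachments=["screenshot_checkout.png"],
)

if __name__ == "__main__":
    print(example.title, "-", example.severity, "/", example.priority)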

Best Practices for Test Cases

  • End-user perspective
  • Simple and clear
  • Defined priority and test data

Positive vs. Negative Testing

  • Positive testing: Verifies system functionality according to specifications (what the system should do)
  • Negative testing: Checks the system's reaction to invalid or unexpected inputs (what the system shouldn't do); both approaches are sketched below
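
Both approaches are illustrated below against a hypothetical age-validation function: the positive case feeds valid input and expects acceptance, while the negative cases feed invalid input and expect rejection.

# Positive vs. negative testing sketch against a hypothetical validator.

def validate_age(value: str) -> int:
    """Accept an age between 1 and 120 given as digits; reject anything else."""
    if not value.isdigit():
        raise ValueError("age must be a whole number")
    age = int(value)
    if not 1 <= age <= 120:
        raise ValueError("age out of range")
    return age


def test_positive_valid_age_is_accepted():
    assert validate_age("35") == 35        # what the system should do


def test_negative_invalid_age_is_rejected():
    for bad_input in ["-5", "abc", "0", "500"]:
        try:
            validate_age(bad_input)
        except ValueError:
            continue                        # what the system should not do
        raise AssertionError(f"{bad_input!r} was wrongly accepted")


if __name__ == "__main__":
    test_positive_valid_age_is_accepted()
    test_negative_invalid_age_is_rejected()
    print("positive and negative cases passed")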

Test Suites & Scenarios

  • Test suite: Collection of related test cases
  • Test scenario: Describes the test's focus (positive and/or negative)

Test Case

  • Detailed steps of a test scenario (negative and positive)

Test Environment

  • Combination of hardware and software for testing

Bug Life Cycle

  • New, Assigned, Opened, Closed, Reopened, Deferred, Duplicated, Not a bug (an illustrative transition map follows)
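
The states can be pictured as a transition map; the allowed transitions below are illustrative assumptions, since real workflows vary by team and defect tracker.

# Bug life-cycle sketch: states and illustrative allowed transitions.
# Real workflows vary by team/tracker; this map is an assumption for the example.
ALLOWED_TRANSITIONS = {
    "New":        {"Assigned", "Duplicated", "Not a bug", "Deferred"},
    "Assigned":   {"Opened", "Deferred"},
    "Opened":     {"Closed", "Deferred", "Not a bug"},
    "Closed":     {"Reopened"},
    "Reopened":   {"Assigned"},
    "Deferred":   {"Assigned"},
    "Duplicated": set(),
    "Not a bug":  set(),
}


def can_move(current: str, target: str) -> bool:
    return target in ALLOWED_TRANSITIONS.get(current, set())


if __name__ == "__main__":
    assert can_move("New", "Assigned")
    assert can_move("Closed", "Reopened")
    assert not can_move("Closed", "New")
    print("transition checks passed")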

Black Box Testing Types

  • Equivalence Partitioning
  • Boundary Value Analysis (sketched below)
  • Decision Table
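
As an example of Boundary Value Analysis, the sketch below targets a hypothetical field that accepts values from 1 to 100 and tests the values just below, on, and just above each boundary.

# Boundary Value Analysis sketch for a hypothetical field accepting 1..100.

def accepts(value: int) -> bool:
    """Stand-in for the field under test: valid range is 1 to 100 inclusive."""
    return 1 <= value <= 100


# Values just below, on, and just above each boundary, with expected results
BOUNDARY_CASES = [
    (0, False), (1, True), (2, True),       # lower boundary
    (99, True), (100, True), (101, False),  # upper boundary
]


def test_boundaries():
    for value, expected in BOUNDARY_CASES:
        assert accepts(value) is expected, value


if __name__ == "__main__":
    test_boundaries()
    print("boundary cases passed")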

Entry & Exit Criteria

  • Entry criteria: Prerequisites for testing (e.g., data, environment)
  • Exit criteria: Conditions to stop testing (e.g., specific criteria met)

RTM (Requirement Traceability Matrix)

  • Document linking requirements to test cases to verify that each requirement is covered (a small example follows)
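
A small sketch of the idea: map requirement IDs to the test cases that cover them and flag any requirement without coverage; the IDs below are hypothetical.

# RTM sketch: link requirements to test cases and flag uncovered requirements.
# Requirement and test-case IDs are hypothetical.
RTM = {
    "REQ-001 User can register":       ["TC-001", "TC-002"],
    "REQ-002 User can log in":         ["TC-003"],
    "REQ-003 User can reset password": [],        # not covered yet
}

if __name__ == "__main__":
    for requirement, test_cases in RTM.items():
        status = ", ".join(test_cases) if test_cases else "NOT COVERED"
        print(f"{requirement}: {status}")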

Release Notes

  • Document used to summarize sprint-related events

When to Stop Testing

  • Exit criteria met, coverage achieved, deadlines reached, or a management decision to stop

SDLC vs STLC

  • SDLC: Standard phases for software development (Requirements Gathering, Design, Build, Test, Deployment, Maintenance)
  • STLC: Testing phases within SDLC (Requirement Analysis, Test Planning, Test Development, Test Environment Setup, Test Execution & Closure)

SCRUM Roles

  • Scrum Master: Facilitator
  • Product Owner: Defines requirements
  • Agile Development Team: Develops product

Agile vs Scrum

  • Agile: Continuous iterations of development and testing
  • Scrum: Agile framework for delivering working software in sprints

What is Defect Age?

  • The time difference between when a defect was found and when it was fixed.

Mobile App Testing

  • Hardware compatibility
  • Source code evaluation
  • Connection interruptions
  • Usability and functionality


Related Documents

Manual Testing FAQ PDF

Description

This quiz covers key concepts in software testing, including unit testing, integration testing, and different types of performance testing. Understand various testing levels, techniques, and their applications in software development. Perfect for anyone looking to enhance their knowledge in software quality assurance.
