Software Testing: Chapter 8

Questions and Answers

How does software testing contribute to the broader verification and validation process?

Software testing is the dynamic component of the broader verification and validation process, which also includes static techniques such as software inspections.

Distinguish between validation testing and defect testing regarding the goals and test case design.

Validation testing aims to demonstrate that the software meets requirements, while defect testing aims to find situations where the software behaves incorrectly. Test cases for defect testing might be obscure and unlike normal usage.

How do system purpose, user expectations, and the marketing environment influence the required confidence level in verification and validation?

If the software is critical to an organization, a higher level of confidence is needed. If users have low expectations of certain kinds of software, then less confidence is needed. Getting a product to market early may also be more important than finding defects.

Explain how software inspections and software testing serve as complementary verification techniques.

Software inspections find problems in the static system representation, while software testing exercises and observes the product's behavior. Both are used to verify the software.

Contrast the focus and scope of unit testing, component testing, and system testing within the context of development testing.

Unit testing focuses on objects or methods, component testing focuses on component interfaces, and system testing focuses on component interactions.

In object-class testing, what does complete test coverage involve, and how does inheritance complicate the testing process?

Complete test coverage involves testing all operations, setting and interrogating all attributes, and exercising the object in all possible states. Inheritance complicates testing because the information to be tested is not localized.

How do the setup, call, and assertion parts contribute to the structure and execution of automated tests?

The setup part initializes the system with the test case inputs and expected outputs. The call part calls the object or method being tested, and the assertion part compares the actual result with the expected result.

Describe what partition testing is, and explain its purpose in unit testing.

Partition testing identifies classes of inputs that have common characteristics. The purpose is to choose tests from within each of these groups.

How can guidelines derived from previous experiences with programmer errors be used to improve testing effectiveness?

Guideline-based testing uses the guidelines to choose test cases. These guidelines reflect previous experience of the kinds of errors that programmers often make when developing components.

How does the concept of 'equivalence partition' aid in designing effective test cases?

Each input or output falls into a class whose members are related. Each class is an equivalence partition, where the program behaves equivalently for every member of the class, so a test chosen from a partition represents the whole partition.

Explain why conformance to specification does not necessarily guarantee that the software will meet the customer's real requirements.

A specification may not accurately capture the customer's real needs, so conformance with the specification (which is what inspections can check) does not guarantee conformance with the customer's real requirements.

How are the objectives of component testing related to the concept of a 'composite component interface'?

Testing composite components should focus on showing that the component interface behaves according to its specification. You can assume that unit tests on the individual objects have been completed.

Outline the main objectives of interface testing and list the four types of interface.

The objectives are to detect faults due to interface errors and invalid assumptions about interfaces. The four types are parameter interfaces, shared memory interfaces, procedural interfaces, and message passing interfaces.

How does focusing on component integration and data transfer across interfaces in system testing contribute to overall software quality?

Focusing on component integration and data transfer ensures that components are compatible, interact correctly, and transfer the right data at the right time across their interfaces. It also allows the emergent behavior of the system to be tested.

How do the roles and goals of system testing and release testing differ, particularly with respect to the teams involved and the type of defects targeted?

System testing is performed by the development team to discover bugs. Release testing is performed by a separate team and checks that the system meets its requirements and is good enough for external use.

What fundamental principle underlies requirements-based testing, and how is this principle applied in practice?

The principle is that each requirement is examined and a test developed for it. Requirements-based testing involves creating tests directly from the documented requirements.

How do test scenarios based on usage scenarios contribute to the validation of system features and user interactions?

Scenario testing involves inventing a typical usage scenario and using it to derive test cases. It ensures the system behaves well for the expected workflows of a typical user.

How do testing policies that define required system test coverage help address the impossibility of exhaustive system testing?

Because exhaustive system testing is impossible, testing policies define the required coverage; for example, all system functions accessed through menus should be tested, and combinations of functions accessed through the same menu must be tested.

What is a key benefit of 'Test Driven Development' that might not be achieved by writing the tests after the code?

Writing tests before code drives the design, guarantees that all code has at least one associated test, and simplifies debugging because the point of failure is immediately apparent.

Explain what regression testing is and why it is regarded as straightforward when automated testing is employed.

Regression testing is testing the system to check that recent changes have not 'broken' previously working code. It is straightforward with automated tests because all tests are rerun every time a change is made to the program.

Describe the primary goal of release testing and explain why it typically employs black-box testing techniques.

The primary goal is to convince the supplier of the system that it is good enough for use. It uses black-box testing because tests are derived from the system specification.

In requirements-based testing, what is the benefit of setting up a patient record that includes allergies to two or more drugs?

Setting up a patient record in which allergies to two or more drugs are recorded helps to confirm that the correct warning for each drug is issued.

Give two examples of features being tested by a usage scenario.

Authentication by logging onto the system is tested, and downloading and uploading of records are also tested.

How does user testing capture reliability, performance, usability and robustness issues that may be missed by system and release testing?

User testing captures influences from the user's working environment, which cannot be replicated in a testing environment.

Explain the primary difference between alpha testing and beta testing in terms of testing environment and user involvement.

Alpha testing involves users working with the development team at the developer's site. Beta testing makes a release of the software available to users so they can experiment with it and raise issues with the developers.

What unique challenge is introduced when acceptance testing is performed within agile methods, and why does this challenge arise?

The main problem is whether the embedded user is 'typical' and can represent all of the system stakeholders. This arises because there is no separate acceptance testing process.

Explain what the term 'interface misuse' means in interface testing, and give an example.

'Interface misuse' occurs when a calling component calls another component and makes an error in its use of the interface, for example passing parameters in the wrong order.

What is the significance of stress testing? How may it be applied?

Stress testing deliberately overloads the system to find its breaking point in performance. It may reveal performance failures or unexpected behaviour that only appears under heavy load.

Describe the relationship between component testing and interface testing of a system.

Component testing of composite components focuses on showing that the component interface behaves according to its specification; interface testing supports this by targeting faults caused by interface errors or invalid assumptions about the interfaces.

What is the role of 'instrumentation' in the context of software testing, and provide examples of common instrumentation practices.

The information provided does not refer to the concept of 'instrumentation'.

When is it appropriate to apply formal methods such as model checking or theorem proving in the software testing process, and what advantages do they offer?

The information provided does not refer to the concept of formal methods.

Can Software inspections find both non-conformance to the customer's specification as well as non-conformance to customer's real requirements? Explain.

Software inspections can only find non-conformance with the customer's specification; they cannot find non-conformance with the customer's real requirements, because the specification itself may not reflect those requirements.

How does acceptance testing in custom software differ from that in commercial off-the-shelf (COTS) software?

Acceptance testing of custom software checks whether the delivered system can be deployed in the customer's environment and is ready to be accepted. Acceptance testing of COTS software evaluates whether the product is fit for purpose and can be integrated with the customer's environment.

Why is it particularly important to perform stress testing on systems that are designed for high availability or real-time processing?

Stress testing deliberately overloads the system to test its failure behaviour. This is particularly important for high-availability and real-time systems, which must degrade gracefully rather than fail catastrophically when demand exceeds their design limits.

What are the key differences between verification and validation in the context of software testing, and why are both important?

Verification asks, "Are we building the product right?" Validation asks, "Are we building the right product?" Both are important to ensure the product is what the customer wants as well as what they asked for.

How might automated testing and continuous integration practices specifically support or enhance regression testing efforts in software development?

With automated testing, all tests can easily be rerun every time a change is made to the program. Tests must run successfully before the change is committed.

How do concerns for external security threats and data privacy, which are particularly relevant in the context of modern software, impact the scope and design of software testing efforts?

The information provided does not refer to external security threats and data privacy.

When might exploratory testing be preferred over scripted testing approaches?

The information provided does not refer to exploratory testing.

What concern about the embedded user must be addressed when Agile methods are used for user acceptance?

The problem is whether the embedded user is 'typical' and can represent the interests of all system stakeholders.

Flashcards

Program Testing

The process intended to show a program does what it should and to discover defects.

Verification and Validation (V&V)

The process of checking that software meets its specification and is fit for its intended purpose.

Software Inspections

Analyzing static system representations to find problems.

Software Testing

Executing the software with test data and observing its behavior.

Unit Testing

Testing individual program units or object classes.

Component Testing

Testing composite components by focusing on testing component interfaces.

System Testing

Testing the system as a whole, focusing on component interactions.

Equivalence Partitioning

Dividing inputs and outputs into classes whose members are all treated equivalently by the program, so one test per class is representative.

Testing Guidelines for Sequences

Guidelines such as testing with single-value sequences, zero-length sequences, and sequences of different sizes.

Interface Misuse

An error in which a calling component uses another component's interface incorrectly, for example passing parameters in the wrong order.

Test-Driven Development (TDD)

Testing approach where tests are written before code.

Regression Testing

Ensuring changes haven't broken previously working code.

Release Testing

Testing a system ready for use outside the development team.

Requirements-Based Testing

Testing a system based on its documented requirements.

Performance Testing

Testing emergent system properties such as performance and reliability.

Stress Testing

Testing to determine system limits by overloading.

User Testing

Testing in which users or customers provide input and advice on system testing.

Acceptance Testing

Customer tests a system to decide whether it is ready to be deployed in their environment.

Static Verification

Static analysis of code, designs, and requirements to identify defects.

Dynamic Verification

Exercising the software with test data and observing its behavior to find defects.

Study Notes

  • Chapter 8 focuses on Software Testing
  • Summary of topics covered includes development, test-driven, release and user testing

Program Testing

  • Software testing is performed to reveal program errors before deployment.
  • Software testing involves executing a program with artificial data.
  • During software testing, the results of the test are assessed for errors, anomalies, and non-functional attributes.
  • Software testing reveals the presence, not the absence, of errors.
  • Testing is part of the broader verification and validation process, which also includes static validation techniques.

Program Testing Goals

  • Software testing goals include demonstrating that the software meets its requirements to the developer and the customer
  • For custom software, this means one test for every requirement in the requirements document
  • For generic software, this includes tests for all features and combinations to be incorporated in the product release
  • Software testing goals include discovering situations in which the software's behavior is incorrect, undesirable, or non-conforming
  • This is accomplished by defect testing, which is concerned with finding undesirable system behavior, such as crashes and incorrect computations.

Validation vs Defect testing

  • Validation testing uses test cases that reflect the system's expected use, and the system is expected to perform them correctly.
  • Defect testing exposes defects through test cases that can be deliberately obscure.

Testing Process Goals

  • Validation testing demonstrates to the developer and customer that the software meets its requirements.
  • Defect testing discovers faults/defects in the software where its behavior is incorrect or not in conformance with its specification.
  • A successful defect test is one that causes the system to behave incorrectly and so exposes a defect.

Verification vs Validation

  • Verification asks, "Are we building the product right?".
  • Validation asks, "Are we building the right product?".
  • Verification ensures the software conforms to its specifications.
  • Validation ensures the software meets a user's requirements.

V & V Confidence

  • The goal of Verification and Validation (V & V) is to establish confidence that the system is 'fit for purpose'
  • V & V depends on system's purpose, user expectations and marketing environment
  • Software purpose impacts the levels of confidence depending on how critical the software is to the organization
  • User expectations; users may have low expectations of certain kinds of software
  • Marketing Environment; getting a product to market early may be more important than defect discovery

Inspections vs Testing

  • Software inspections focus on analyzing the static system to find problems
  • Software testing exercises the code to observe product behavior
  • Software inspections may be supplemented by tool-based document and code analysis and are discussed further in Chapter 15
  • In software testing, the system is executed with test data and its operational behavior is observed

Software Inspections

  • Software inspections examine the source representation to discover anomalies and defects
  • Unlike testing, software inspections do not require the execution of the system
  • Software inspections can be applied to any representation of the system, such as requirements, design, and configuration data.
  • Software inspections find program errors effectively.

Advantages of Inspections

  • Inspections are a static process, so there is no need to be concerned with interactions between errors (one error masking another during execution).
  • Inspections can be performed on incomplete versions of a system without additional costs.
  • Inspections consider broader quality attributes, such as compliance, portability, and maintainability

Inspections and Testing Qualities

  • Inspections and testing are complementary verification techniques
  • Both should be used during the V & V process
  • Inspections check conformance to specification
  • Inspections cannot check non-functional characteristics

Stages of Testing

  • Development Testing: The system is tested during development to detect bugs and defects.
  • Release Testing: A separate team tests a complete version of the system before it is released.
  • User Testing: Users or potential users test the system in their environment.

Development Testing

  • Development testing includes all testing activities done by the team developing the system
  • Unit Testing: Testing individual program units or object classes, focusing on the functionality of objects or methods.
  • Component Testing: Testing integrated individual units to create composite components, focusing on component interfaces.
  • System Testing: Some or all system components are integrated and tested as a whole, focusing on component interactions.

Unit Testing

  • Unit testing tests individual components in isolation and is part of defect testing
  • Units tested:
  • Individual functions or methods within an object
  • Object classes with several attributes and methods
  • Composite components with defined interfaces used to access their functionality

Object Class Testing

  • Full coverage of a class involves testing all operations associated with an object, setting and interrogating all object attributes, and exercising the object in all possible states
  • Inheritance makes it more difficult to design object class tests because the information to be tested is not localized

Weather Station Testing

  • Weather testing requires defined test cases for reportWeather, calibrate, test, startup, and shutdown
  • Using a state model, sequences of state transitions to be tested are identified, along with the event sequences that cause these transitions
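
  • As an illustration of state-based object-class testing, the sketch below exercises one such transition sequence (Shutdown → Running → Shutdown). The WeatherStation class shown here is a hypothetical stand-in with only the operations named above; it is not the actual case-study implementation.

```python
import unittest

class WeatherStation:
    """A hypothetical stand-in for the weather station object class."""
    def __init__(self):
        self.state = "Shutdown"
    def startup(self):
        self.state = "Running"
    def shutdown(self):
        self.state = "Shutdown"
    def reportWeather(self):
        assert self.state == "Running"
        return "weather report"

class StateSequenceTest(unittest.TestCase):
    def test_shutdown_running_shutdown(self):
        # Exercise one identified state sequence: Shutdown -> Running -> Shutdown
        ws = WeatherStation()
        ws.startup()
        self.assertEqual(ws.state, "Running")
        self.assertEqual(ws.reportWeather(), "weather report")
        ws.shutdown()
        self.assertEqual(ws.state, "Shutdown")

if __name__ == "__main__":
    unittest.main()
```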

Automated Testing

  • Unit testing should be automatically run and checked without manual intervention wherever possible
  • Automated unit testing uses a test automation framework to write and run program tests
  • Unit testing frameworks provide generic test classes that are extended to create specific test cases
  • All implemented tests are run automatically and the framework reports their success or failure

Automated Test Components

  • Setup Part: System initialized with the test case that has inputs and expected outputs
  • Call Part: The object/method to be tested is called
  • Assertion Part: The result of the call is compared with the expected result; if the assertion evaluates to true the test has succeeded, otherwise it has failed
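
  • The following is a minimal sketch of this three-part structure using Python's unittest framework; the discount function and its values are invented purely for illustration.

```python
import unittest

def discount(price, percent):
    """Apply a percentage discount to a price (illustrative function)."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Setup part: define the test case inputs and the expected output
        price, percent = 200.0, 10
        expected = 180.0
        # Call part: call the method being tested
        actual = discount(price, percent)
        # Assertion part: compare the result with the expected result
        self.assertEqual(expected, actual)

if __name__ == "__main__":
    unittest.main()
```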

Unit Test Effectiveness

  • Test cases should verify the component works correctly as expected
  • Test cases should reveal component defects
  • The first type of test case should reflect normal program operation and show that the component works as expected
  • The second type should be based on experience of common problems and use abnormal inputs to check that these are properly processed without crashing the component

Testing Strategies

  • Partition testing identifies groups of inputs with common characteristics
  • Tests should be chosen from within each of these groups
  • Guideline-based testing uses testing guidelines to choose test cases
  • Guidelines often reflect experience of common programmer errors when developing components

Partition Testing

  • Input data and output results should fall into different classes where all class members are related
  • Each of these classes is an equivalence partition (or domain) where the program behaves equivalently for each class member
  • Test cases should be chosen from each partition
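
  • A small sketch of partition testing using Python's unittest is shown below. The valid_username rule (4 to 10 characters) is an assumed example; the point is that one test is drawn from each partition, plus the boundaries of the valid partition.

```python
import unittest

def valid_username(name):
    """Accept usernames of 4 to 10 characters (illustrative rule)."""
    return 4 <= len(name) <= 10

class UsernamePartitionTest(unittest.TestCase):
    def test_partition_too_short(self):
        # Partition 1: fewer than 4 characters -> invalid
        self.assertFalse(valid_username("abc"))

    def test_partition_valid_length(self):
        # Partition 2: 4..10 characters -> valid (mid-partition value)
        self.assertTrue(valid_username("abcdef"))

    def test_partition_too_long(self):
        # Partition 3: more than 10 characters -> invalid
        self.assertFalse(valid_username("abcdefghijk"))

    def test_partition_boundaries(self):
        # Values at the edges of the valid partition often expose defects
        self.assertTrue(valid_username("abcd"))        # length 4
        self.assertTrue(valid_username("abcdefghij"))  # length 10

if __name__ == "__main__":
    unittest.main()
```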

Testing Guidelines (Sequences)

  • Test software with sequences which have only a single value.
  • Use sequences of different sizes in different tests.
  • Derive tests so that the first, middle and last elements of the sequence are accessed.
  • Test with sequences of zero length.
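
  • Applied to Python's built-in max() function, these guidelines might yield tests like the following sketch (the choice of max() is only illustrative):

```python
import unittest

class SequenceGuidelineTest(unittest.TestCase):
    """Tests for the built-in max(), chosen according to the sequence guidelines."""

    def test_single_value_sequence(self):
        self.assertEqual(max([7]), 7)

    def test_different_sizes(self):
        self.assertEqual(max([3, 9]), 9)
        self.assertEqual(max([3, 9, 1, 4, 8, 2]), 9)

    def test_first_middle_and_last_elements(self):
        self.assertEqual(max([9, 1, 2]), 9)   # largest element first
        self.assertEqual(max([1, 9, 2]), 9)   # largest element in the middle
        self.assertEqual(max([1, 2, 9]), 9)   # largest element last

    def test_zero_length_sequence(self):
        with self.assertRaises(ValueError):
            max([])

if __name__ == "__main__":
    unittest.main()
```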

General Testing Guidelines

  • Choose inputs that generate all error messages
  • Design inputs that cause input buffers to overflow
  • Repeat the same input or series of inputs multiple times
  • Force invalid outputs to be generated
  • Force computation results to be too large/small

Key Points on Testing

  • Testing can only show the presence of errors, not their absence
  • Development testing is the software development team's responsibility, while a separate team is responsible for pre-release testing
  • Development testing includes:
  • Unit testing, testing individual objects and methods
  • Component testing, test related groups of objects
  • System testing, test partial or complete systems

Component Testing

  • Software components are often composite and made up of interacting objects
  • Functionality is accessed through the defined component interface
  • Testing composite components shows the component interface behaves according to its specification, where unit tests are assumed to be complete

Interface Testing

  • Interface testing objectives include detecting faults due to interface errors or invalid assumptions about interfaces
  • Types include;
  • Parameter interfaces: data passed from one method/procedure to another
  • Shared memory interfaces: memory block shared between procedures/functions
  • Procedural interfaces: subsystem that encapsulates a set of procedures called by other subsystems
  • Message Passing interfaces: subsystem requests services from other subsystems

Interface Errors

  • Interface misuse: A calling component improperly uses another component's interface, such as parameters in the wrong order
  • Interface misunderstanding: A calling component embeds incorrect assumptions about the behavior of the called component
  • Timing errors: The called and calling components operate at different speeds, so out-of-date information is accessed
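
  • A minimal sketch of interface misuse, using an invented transfer function: because the call is syntactically legal, swapping positional parameters produces no immediate error, only wrong data crossing the interface, which is exactly the kind of fault interface testing aims to detect.

```python
def transfer(amount, account_id):
    """Record a payment of `amount` into the account `account_id` (illustrative)."""
    return {"account": account_id, "amount": amount}

# Correct use of the interface: amount first, account identifier second.
ok = transfer(250.0, "AC-1042")

# Interface misuse: the caller swaps the parameters. Python accepts the call,
# so the defect only shows up later as wrong data crossing the interface.
bad = transfer("AC-1042", 250.0)

assert ok["amount"] == 250.0
assert bad["amount"] == "AC-1042"   # silently wrong - the fault interface tests target
```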

Interface Testing Guidelines

  • Design tests so that parameters to a called procedure are at the extreme ends of their ranges
  • Always test pointer parameters with null pointers
  • Design tests to cause the component to fail
  • Use stress testing in message passing systems
  • Vary component activation order in shared memory systems

System Testing

  • System testing involves integrating components to create a version of the system, then testing the integrated system
  • The focus of system testing is testing the interactions between components
  • System testing checks that components are compatible, interact correctly, and transfer data properly
  • System testing tests the emergent behavior of a system

System and Component Testing

  • During system testing, reusable components that have been developed separately and off-the-shelf systems may be integrated with newly developed components, and the complete system is then tested
  • Components developed by different team members or sub-teams may be integrated at this stage; system testing is a collective rather than an individual process
  • Some companies have a separate testing team that isn't involved with design/programming

Use-Case Testing

  • Use-cases for system interactions serve as a basis for system testing
  • Each use case normally involves several system components, so use-case testing forces these interactions to occur
  • Sequence diagrams document the components and interactions that are being tested

Testing Policies

  • Exhaustive system testing is impossible; instead, testing policies that define the required system test coverage may be developed
  • Example testing Policies:
  • All system functions accessed through menus should be tested
  • Combinations of functions (for example, text formatting) accessed through the same menu must be tested
  • Where user input is provided, all functions must be tested with both correct and incorrect input

Test-Driven Development

  • Test-Driven Development (TDD) is an approach to program development in which testing is interleaved with code development
  • Tests are written before code, and 'passing' the tests drives development
  • Code and tests are developed incrementally; the next increment does not begin until the code for the previous one passes its tests
  • TDD was introduced as part of agile methods, most notably Extreme Programming, but it can also be used in plan-driven development processes

TDD Process Activities

  • Begin by identifying the increment of functionality that is required; it should be small and implementable in a few lines of code
  • Write an automated test for this new functionality
  • Run the test along with all other implemented tests; the new test will fail at first because the functionality has not yet been implemented
  • Implement the function and re-run the test
  • Once every test runs successfully, more functionality can be implemented
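
  • The sketch below walks through these activities for one small increment, a hypothetical is_leap_year function, using Python's unittest: the tests are written first, initially fail, and pass once the function is implemented.

```python
import unittest

# Step 2 of the process: write an automated test for the new functionality first.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_years_leap(self):
        self.assertTrue(is_leap_year(2000))

# Step 3: running the tests now would fail, because is_leap_year does not exist yet.
# Step 4: implement just enough functionality to make the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()   # Step 5: all tests pass, so the next increment can begin
```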

Benefits of Test-Driven Development

  • Code coverage: because tests are written before the code, every code segment has at least one associated test
  • Regression testing: a regression test suite is developed incrementally as the program is developed
  • When debugging, the point of failure of a test is obvious, and the modified code can be checked
  • The tests demonstrate what the code should be doing, which is a form of system documentation

Regression Testing

  • Regression testing tests the system to check that changes have not 'broken' previously working code.
  • Automated testing makes regression testing simple
  • All tests must run successfully before the change is committed
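
  • A minimal sketch of automated regression testing follows: rediscover and rerun the whole test suite after every change, and only allow the change to be committed if the suite passes. The tests directory name and the idea of wiring this into a pre-commit hook or CI job are assumptions made for illustration.

```python
import sys
import unittest

def run_regression_suite(start_dir="tests"):
    """Rediscover and rerun every automated test after a change (a minimal sketch).

    The `tests/` directory name is an assumption; point it at wherever the
    project keeps its automated unit, component, and system tests.
    """
    suite = unittest.TestLoader().discover(start_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # A change should only be committed when the full suite runs successfully,
    # e.g. by calling this script from a pre-commit hook or a CI job.
    sys.exit(0 if run_regression_suite() else 1)
```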

Release Testing

  • Release testing tests a particular release of a system that is intended for use outside of the development team
  • The primary goal is to convince the supplier of the system that it is good enough for use
  • Release testing must show that the system delivers its specified functionality, performance, and dependability, and that it does not fail during normal use
  • Release testing is usually a black-box process in which tests are derived from the system specification

Release Testing vs System Testing

  • Release testing is a form of system testing
  • Separate teams should be responsible for release testing, rather than those involved with system development
  • The goal of system testing is to discover defects, whereas release testing checks that the system meets its requirements and is good enough for external use

Requirements Based Testing

  • Requirements-based testing examines each requirement and develops one or more tests for it

Requirements Tests

  • Example: set up patient records in which allergies to two or more drugs are recorded, then prescribe medication to check that the correct warning is issued for each drug

Features Tested by Scenario

  • Testing authentication by logging onto the system, home visit scheduling, encryption/decryption, record retrieval/modification, links to drugs database, and call-prompting capabilities.

Performance Testing

  • Part of release testing and tests performance and reliability of a system
  • Tests should reflect the system's use profile
  • A series of tests is planned in which the load is steadily increased until the system performance becomes unacceptable
  • Stress testing is used to overload the system to test failure behavior
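
  • A minimal sketch of this stepwise-load idea is shown below; the handle_requests stand-in and the 0.5-second threshold are assumptions, and a real stress test would drive the actual system with requests that reflect its operational profile.

```python
import time

def handle_requests(n):
    """Stand-in for the operation under test (an assumption for illustration)."""
    return sum(i * i for i in range(n))

def stress_test(max_response_seconds=0.5):
    """Increase the load step by step until performance becomes unacceptable."""
    load = 10_000
    while True:
        start = time.perf_counter()
        handle_requests(load)
        elapsed = time.perf_counter() - start
        print(f"load={load:>10} elapsed={elapsed:.3f}s")
        if elapsed > max_response_seconds:
            # Breaking point reached: examine how the system failed, not just that it failed
            return load
        load *= 2

if __name__ == "__main__":
    print("breaking point at load", stress_test())
```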

User Testing

  • User testing is a stage in which users or customers provide input and advice on system testing
  • User testing is essential even after comprehensive system and release testing, because influences from the user's working environment affect reliability, performance, usability, and robustness and cannot be replicated in a testing environment

Types of User Testing

  • Alpha testing: Users work with the development team to test the software at the developer's site.
  • Beta testing: A release of the software is made available to users to experiment and report problems to the developers.
  • Acceptance testing: Customers test the system to decide whether or not it is ready to be accepted and deployed; used primarily for custom systems.

Acceptance Testing Process Stages

  • The acceptance testing process proceeds as follows: define acceptance criteria, plan acceptance testing, derive acceptance tests, run the acceptance tests, negotiate the test results, and accept or reject the system.

Agile Methods and Acceptance Testing

  • In agile methods, the user/customer is part of the development team, so they are responsible for making decisions about the system's acceptability.
  • The user/customer defines tests that integrate with other tests to ensure they are automatically run when changes are made
  • The main concern is whether the embedded user is 'typical' and can represent the interests of all system stakeholders

Key Points

  • When testing software, look to 'break' it by leveraging experience and guidelines to choose types of tests that have been effective at discovering defects
  • Automate tests whenever possible, and embed tests into a program that is automatically run every time a system change is made
  • Test-first development is an approach where software tests are written before code
  • In scenario testing, a typical usage scenario is invented and used to derive test cases
  • Acceptance testing is used to decide if the software is good and stable enough to be deployed/used
