Questions and Answers
How does software testing contribute to the broader verification and validation process?
Software testing is one component of the broader verification and validation (V&V) process, which also includes static verification techniques such as inspections.
Distinguish between validation testing and defect testing regarding the goals and test case design.
Validation testing aims to demonstrate that the software meets requirements, while defect testing aims to find situations where the software behaves incorrectly. Test cases for defect testing might be obscure and unlike normal usage.
How do system purpose, user expectations, and the marketing environment influence the required confidence level in verification and validation?
If the software is critical to an organization, a higher level of confidence is needed. If users have low expectations of certain kinds of software, less confidence is needed. Getting a product to market early may also matter more than finding every defect.
Explain how software inspections and software testing serve as complementary verification techniques.
Contrast the focus and scope of unit testing, component testing, and system testing within the context of development testing.
In object-class testing, what does complete test coverage involve, and how does inheritance complicate the testing process?
How do the setup, call, and assertion parts contribute to the structure and execution of automated tests?
Describe what partition testing is, and explain its purpose in unit testing.
How can guidelines derived from previous experiences with programmer errors be used to improve testing effectiveness?
How does the concept of 'equivalence partition' aid in designing effective test cases?
Explain why conformance to specification does not necessarily guarantee that the software will meet the customer's real requirements.
How are the objectives of component testing related to the concept of a 'composite component interface'?
Outline the main objectives of interface testing and list the four types of interface.
How does focusing on component integration and data transfer across interfaces in system testing contribute to overall software quality?
How do the roles and goals of system testing and release testing differ, particularly with respect to the teams involved and the type of defects targeted?
What fundamental principle underlies requirements-based testing, and how is this principle applied in practice?
How do test scenarios based on usage scenarios contribute to the validation of system features and user interactions?
How do testing policies that define required system test coverage help address the impossibility of exhaustive system testing?
What is a key benefit of 'Test Driven Development' that might not be achieved by writing the tests after the code?
Explain what regression testing is and why it is regarded as straightforward when automated testing is employed.
Describe the primary goal of release testing and explain why it typically employs black-box testing techniques.
In requirements tests, what is the benefit of setting up patient allergy tests with one or more allergy medications?
Give two examples of features being tested by a usage scenario.
How does user testing capture reliability, performance, usability, and robustness issues that may be missed by system and release testing?
Explain the primary difference between alpha testing and beta testing in terms of testing environment and user involvement.
What unique challenge is introduced when acceptance testing is performed within agile methods, and why does this challenge arise?
Explain what the term 'interface misuse' means in interface testing, and give an example.
What is the significance of stress testing? How may it be applied?
Describe the relationship between component testing and interface testing of a system.
What is the role of 'instrumentation' in the context of software testing? Provide examples of common instrumentation practices.
When is it appropriate to apply formal methods such as model checking or theorem proving in the software testing process, and what advantages do they offer?
Can software inspections find both non-conformance to the customer's specification and non-conformance to the customer's real requirements? Explain.
How does acceptance testing in custom software differ from that in commercial off-the-shelf (COTS) software?
Why is it particularly important to perform stress testing on systems that are designed for high availability or real-time processing?
What are the key differences between verification and validation in the context of software testing, and why are both important?
How might automated testing and continuous integration practices specifically support or enhance regression testing efforts in software development?
How do concerns for external security threats and data privacy, which are particularly relevant in the context of modern software, impact the scope and design of software testing efforts?
When might exploratory testing be preferred over scripted testing approaches?
What measures should be taken to address the problem that the embedded user may not be typical of other users and stakeholders when using agile methods for user acceptance testing?
Flashcards
Program Testing
The process intended to show a program does what it should and to discover defects.
Verification and Validation (V&V)
Checking processes which establish that software conforms to its specification and meets the needs of the customer.
Software Inspections
Analyzing static system representations to find problems.
Software Testing
Executing the system with test data and observing its operational behavior to find defects.
Unit Testing
Testing individual program units or object classes in isolation, focusing on the functionality of objects or methods.
Component Testing
Testing integrated units as composite components, focusing on testing component interfaces.
System Testing
Integrating some or all of the components in a system and testing the system as a whole, focusing on component interactions.
Equivalence Partitioning
Dividing input and output data into classes where the program behaves equivalently for every member of a class, then choosing test cases from each partition.
Testing Guidelines for Sequences
Test with single-value sequences, use sequences of different sizes in different tests, access the first, middle, and last elements, and test with zero-length sequences.
Interface Misuse
A calling component using another component's interface incorrectly, for example by passing parameters in the wrong order.
Test-Driven Development (TDD)
An approach in which testing is interleaved with development: tests are written before the code, and 'passing' the tests drives development.
Regression Testing
Re-running existing tests after a change to check that the change has not 'broken' previously working code.
Release Testing
Testing a particular release of a system that is intended for use outside the development team; usually a black-box process based on the system specification.
Requirements-Based Testing
Examining each requirement and developing one or more tests for it.
Performance Testing
Testing the performance and reliability of a system under loads that reflect its expected profile of use.
Stress Testing
Deliberately overloading a system beyond its design limits to test its failure behavior.
User Testing
Testing in which users or potential users exercise the system in their own environment.
Acceptance Testing
Testing by customers to decide whether a system is ready to be accepted and deployed; used primarily for custom systems.
Static Verification
Analyzing static system representations, such as requirements, design, or source code, to find problems without executing the system.
Dynamic Verification
Verification by exercising the executing system with test data and observing its operational behavior; in other words, software testing.
Study Notes
- Chapter 8 focuses on Software Testing
- Topics covered include development testing, test-driven development, release testing, and user testing
Program Testing
- Software testing is performed to reveal program errors before deployment.
- Software testing involves executing a program with artificial data.
- During software testing, the results of the test are assessed for errors, anomalies, and non-functional attributes.
- Software testing reveals the presence, not the absence, of errors.
- Testing is part of the broader verification and validation (V&V) process, which also includes static verification techniques.
Program Testing Goals
- Software testing goals include demonstrating that the software meets its requirements to the developer and the customer
- For custom software, this means at least one test for every requirement in the requirements document
- For generic software, this includes tests for all features and combinations to be incorporated in the product release
- Software testing goals include discovering situations in which the software's behavior is incorrect, undesirable, or non-conforming
- This is accomplished by defect testing, which is concerned with finding undesirable system behavior, such as crashes and incorrect computations.
Validation vs Defect testing
- In validation testing, the system is expected to perform correctly on test cases that reflect its expected use.
- Defect testing exposes defects through test cases that can be deliberately obscure.
Testing Process Goals
- Validation testing demonstrates to the developer and customer that the software meets its requirements.
- Defect testing discovers faults/defects in the software where its behavior is incorrect or not in conformance with its specification.
- A successful defect test is one that causes the system to behave incorrectly and so exposes a defect.
Verification vs Validation
- Verification asks, "Are we building the product right?".
- Validation asks, "Are we building the right product?".
- Verification ensures the software conforms to its specifications.
- Validation ensures the software meets a user's requirements.
V & V Confidence
- The goal of Verification and Validation (V & V) is to establish confidence that the system is 'fit for purpose'
- The required level of confidence depends on the system's purpose, the expectations of its users, and the marketing environment
- Software purpose: the more critical the software is to an organization, the higher the level of confidence needed
- User expectations: users may have low expectations of certain kinds of software
- Marketing environment: getting a product to market early may be more important than finding all defects
Inspections vs Testing
- Software inspections focus on analyzing the static system to find problems
- Software testing exercises the code to observe product behavior
- Software inspections may be supplemented by tool-based document and code analysis and are discussed further in Chapter 15
- In software testing, the system is executed with test data and its operational behavior is observed
Software Inspections
- Software inspections examine the source representation to discover anomalies and defects
- Unlike testing, software inspections do not require the execution of the system
- Software inspections can be applied to any representation of the system, such as requirements, design, and configuration data.
- Software inspections find program errors effectively.
Advantages of Inspections
- Because inspection is a static process, there is no need to be concerned with interactions between errors.
- Inspections can be performed on incomplete versions of a system without additional costs.
- Inspections consider broader quality attributes, such as compliance, portability, and maintainability
Inspections and Testing Qualities
- Inspections and testing are complementary verification techniques
- Both should be used during the V & V process
- Inspections check conformance to specification
- Inspections cannot check non-functional characteristics
Stages of Testing
- Development Testing: The system is tested during development to detect bugs and defects.
- Release Testing: A separate team tests a complete version of the system before it is released.
- User Testing: Users or potential users test the system in their environment.
Development Testing
- Development testing includes all testing activities done by the team developing the system
- Unit Testing: Testing individual program units or object classes, focusing on the functionality of objects or methods.
- Component Testing: Testing integrated individual units to create composite components, focusing on component interfaces.
- System Testing: Some or all system components are integrated and tested as a whole, focusing on component interactions.
Unit Testing
- Unit testing tests individual components in isolation and is part of defect testing
- Units tested:
- Individual functions or methods within an object
- Object classes with several attributes and methods
- Composite components with defined interfaces used to access their functionality
Object Class Testing
- Complete coverage of a class involves testing all operations associated with an object, setting and interrogating all object attributes, and exercising the object in all possible states
- Inheritance makes it harder to design object class tests because the information to be tested is not localized
Weather Station Testing
- Weather station testing requires defined test cases for reportWeather, calibrate, test, startup, and shutdown
- Using a state model, the sequences of state transitions to be tested are identified, along with the event sequences that cause these transitions
Automated Testing
- Unit testing should be automatically run and checked without manual intervention wherever possible
- Automated unit testing uses a test automation framework to write and run program tests
- Unit testing frameworks provide generic test classes extended to create specific test cases
- All implemented tests are run automatically, and a report indicates which have succeeded or failed
Automated Test Components
- Setup Part: The system is initialized with the test case, namely its inputs and expected outputs
- Call Part: The object/method to be tested is called
- Assertion Part: The result of the call is compared with the expected result; "true" indicates success, "false" indicates failure.
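The three-part structure can be sketched as follows. This is a minimal illustration in Python with plain assertions; the `WeatherStation` class and its methods are hypothetical examples, not code from the chapter.

```python
class WeatherStation:
    """Hypothetical component under test."""

    def __init__(self):
        self.temperature = None

    def record(self, temperature):
        self.temperature = temperature

    def report_weather(self):
        return f"Temperature: {self.temperature}"


def test_report_weather():
    # Setup part: initialize the system with the test case inputs
    station = WeatherStation()
    station.record(21)
    # Call part: invoke the object/method being tested
    report = station.report_weather()
    # Assertion part: compare the actual result with the expected result
    assert report == "Temperature: 21"


test_report_weather()  # raises AssertionError on failure, silent on success
```

In practice, a unit testing framework would collect such test functions, run them all automatically, and report successes and failures.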
Unit Test Effectiveness
- Test cases should show that, when used as expected, the component works correctly
- Test cases should reveal defects in the component
- The first type of test case should reflect normal program operation and demonstrate that the component works as expected
- The second type should be based on experience of common problems and should check that abnormal inputs are processed correctly
Testing Strategies
- Partition testing identifies groups of inputs with common characteristics
- Tests should be chosen from within each of these groups
- Guideline-based testing uses testing guidelines to choose test cases
- Guidelines often reflect experience of common programmer errors when developing components
Partition Testing
- Input data and output results often fall into different classes where all members of a class are related
- Each of these classes is an equivalence partition (or domain) where the program behaves equivalently for every class member
- Test cases should be chosen from each partition
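As a sketch of equivalence partitioning, consider a hypothetical `validate_month` function (an assumption for illustration) that accepts integers 1..12. Its input space splits into three partitions: below the range, within it, and above it. One representative is chosen from each partition, plus boundary values, where defects often cluster.

```python
def validate_month(month):
    """Return True if month is a valid month number (1-12)."""
    return isinstance(month, int) and 1 <= month <= 12


# One test value from inside each partition, plus the boundaries.
assert validate_month(6) is True     # middle of the valid partition
assert validate_month(1) is True     # lower boundary of the valid partition
assert validate_month(12) is True    # upper boundary of the valid partition
assert validate_month(0) is False    # invalid partition: below the range
assert validate_month(13) is False   # invalid partition: above the range
```

Any other member of a partition should behave the same way, so a handful of representatives stands in for the whole class.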
Testing Guidelines (Sequences)
- Test software with sequences which have only a single value.
- Use sequences of different sizes in different tests.
- Derive tests so that the first, middle and last elements of the sequence are accessed.
- Test with sequences of zero length.
General Testing Guidelines
- Choose inputs that generate all error messages
- Design inputs that cause input buffers to overflow
- Repeat the same input or series of inputs multiple times
- Force invalid outputs to be generated
- Force computation results to be too large/small
Key Points on Testing
- Testing can only show errors, not verify their absence
- Development testing is the software development team's responsibility, while a separate team is responsible for pre-release testing
- Development testing includes:
- Unit testing, testing individual objects and methods
- Component testing, test related groups of objects
- System testing, test partial or complete systems
Component Testing
- Software components are often composite and made up of interacting objects
- Functionality is accessed through the defined component interface
- Testing composite components should show that the component interface behaves according to its specification, assuming that tests on the individual units within the component are complete
Interface Testing
- Interface testing objectives include detecting faults due to interface errors or invalid assumptions about interfaces
- Types include:
- Parameter interfaces: data passed from one method/procedure to another
- Shared memory interfaces: memory block shared between procedures/functions
- Procedural interfaces: subsystem that encapsulates a set of procedures called by other subsystems
- Message Passing interfaces: subsystem requests services from other subsystems
Interface Errors
- Interface misuse: A calling component improperly uses another component's interface, such as parameters in the wrong order
- Interface misunderstanding: A calling component makes incorrect assumptions about the behavior of the called component
- Timing errors: The called and calling components operate at different speeds, and out-of-date information is accessed
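Interface misuse can be made concrete with a small sketch. The `transfer` function below is a hypothetical procedural interface (an assumption for illustration): it expects the amount first and the account identifier second, and a careless caller swaps them. The call still succeeds, which is why an interface test that checks the result is needed to catch the fault.

```python
def transfer(amount, account_id):
    """Hypothetical interface: amount first, account identifier second."""
    return {"account": account_id, "amount": amount}


# Correct use of the interface:
ok = transfer(100, "ACC-42")
assert ok == {"account": "ACC-42", "amount": 100}

# Interface misuse: the caller passes the parameters in the wrong order.
# No error is raised, so only checking the result reveals the defect.
bad = transfer("ACC-42", 100)
assert bad != {"account": "ACC-42", "amount": 100}
```

In dynamically typed or weakly typed settings such misuse is especially easy to miss, since the compiler or runtime does not reject the swapped call.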
Interface Testing Guidelines
- Design tests so that parameters to a called procedure are at the extreme ends of their ranges
- Always test pointer parameters with null pointers
- Design tests to cause the component to fail
- Use stress testing in message passing systems
- Vary component activation order in shared memory systems
System Testing
- System testing involves integrating components to create a version of the system, then testing the integrated system
- The focus of system testing is testing the interactions between components
- System testing checks that components are compatible, interact correctly, and transfer data properly
- System testing tests the emergent behavior of a system
System and Component Testing
- During system testing, reusable components developed separately and off-the-shelf systems may be integrated with newly developed components, and the complete system is then tested
- System testing is a collective rather than an individual process; components developed by different team members or sub-teams are integrated at this stage
- Some companies have a separate testing team that is not involved in design or programming
Use-Case Testing
- Use-cases for system interactions serve as a basis for system testing
- Each use case normally involves several system components, so testing the use case forces these interactions to occur
- Sequence diagrams document the components and interactions that are being tested
Testing Policies
- Exhaustive system testing is impossible, so testing policies that define the required system test coverage may be developed instead
- Example testing policies:
- All system functions accessed through menus should be tested
- Combinations of functions (for example, text formatting) accessed through the same menu must be tested
- Where user input is provided, all functions must be tested with both correct and incorrect input
Test-Driven Development
- Test-Driven Development (TDD) is an approach to program development in which testing is interleaved with code development
- Tests are written before code, and 'passing' the tests drives development
- Code and tests are developed incrementally; the next increment is not started until the code for the previous one has passed its tests
- TDD was introduced as part of agile methods, most notably Extreme Programming, but can also be used in plan-driven development processes
TDD Process Activities
- Begin by identifying the increment of functionality required; it should be small and implementable in a few lines of code
- Write an automated test for this new functionality
- Run the test along with all the other implemented tests; initially, the new test will fail
- Implement the functionality and re-run the test
- Once all tests run successfully, move on to implementing the next increment
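One TDD increment can be sketched as follows. The `word_count` function and its test are hypothetical examples: the test is written first (and would initially fail, since the function does not yet exist), then just enough code is written to make it pass.

```python
# Step 1-2: write the automated test for the new increment of
# functionality first. Run it before implementing: it fails.
def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("red green blue") == 3


# Step 3-4: implement just enough functionality to make the test pass.
def word_count(text):
    return len(text.split())


# Step 5: re-run the test; once it passes, start the next increment.
test_word_count()
```

Because every increment leaves behind an automated test, the growing test suite doubles as a regression suite and as executable documentation of what the code should do.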
Benefits of Test-Driven Development
- Code coverage: every code segment that is written has at least one associated test
- Regression testing: a regression test suite is built up incrementally as the program is developed
- When debugging, the point of failure of a test is obvious, and the modified code can be checked
- The tests demonstrate what the code should be doing, which is a form of system documentation
Regression Testing
- Regression testing tests the system to check that changes have not 'broken' previously working code.
- Automated testing makes regression testing simple
- All tests must run successfully before the change is committed
Release Testing
- Release testing is the process of testing a particular release of a system that is intended for use outside the development team
- Its goal is to convince the supplier that the system is good enough for use
- Release testing must show that the system delivers its specified functionality, performance, and dependability, and does not fail during normal use
- Release testing is usually a black-box process in which tests are derived from the system specification
Release Testing vs System Testing
- Release testing is a form of system testing
- Separate teams should be responsible for release testing, rather than those involved with system development
- The primary goal of system testing is to discover defects, whereas release testing checks that the system meets its requirements and is good enough for external use
Requirements Based Testing
- Requirements-based testing examines each requirement in the software and develops one or more tests for it
Requirements Tests
- Example tests involve setting up patient records with one or more known allergies and checking that prescribing a medication the patient is allergic to produces a warning
Features Tested by Scenario
- Testing authentication by logging onto the system, home visit scheduling, encryption/decryption, record retrieval/modification, links to drugs database, and call-prompting capabilities.
Performance Testing
- Performance testing is part of release testing and tests the performance and reliability of a system
- Tests should reflect the profile of use of the system
- A series of tests is planned in which the load is steadily increased until system performance becomes unacceptable
- Stress testing deliberately overloads the system to test its failure behavior
User Testing
- In user testing, users or customers provide input and advice on system testing
- User testing is essential even after comprehensive system and release testing, because influences from the user's working environment affect the reliability, performance, usability, and robustness of a system and cannot be fully replicated in a testing environment
Types of User Testing
- Alpha testing: Users work with the development team to test the software at the developer's site.
- Beta testing: A release of the software is made available to users to experiment and report problems to the developers.
- Acceptance testing: Customers test the system to decide whether or not it is ready, and applied primarily for custom systems.
Acceptance Testing Process Stages
- The acceptance testing process proceeds as follows: define acceptance criteria, plan acceptance testing, derive acceptance tests, run the acceptance tests, negotiate the test results, and accept or reject the system.
Agile Methods and Acceptance Testing
- In agile methods, the user/customer is part of the development team, so they are responsible for making decisions about the system's acceptability.
- The user/customer defines tests that integrate with other tests to ensure they are automatically run when changes are made
- The main concern is whether the embedded user is 'typical' and can represent the interests of all system stakeholders
Key Points
- When testing software, look to 'break' it by leveraging experience and guidelines to choose types of tests that have been effective at discovering defects
- Automate tests whenever possible, and embed tests into a program that is automatically run every time a system change is made
- Test-first development is an approach where software tests are written before code
- In scenario testing, typical usage scenarios are identified and used to derive test cases
- Acceptance testing is used to decide if the software is good and stable enough to be deployed/used