Questions and Answers
What primary aspect does software testability address?
- The cost of executing test cases.
- The speed at which tests can be written.
- The ease of revealing faults through testing. (correct)
- The complexity of the test harness setup.
Why is calculating the probability of a software failure during testing not straightforward?
- Because the number of possible test executions is infinite.
- Because test case design is inherently subjective.
- Because response measures for testability introduce complexity. (correct)
- Because accurately simulating real-world conditions in a test environment is challenging.
In the context of software testing, what is the role of an oracle?
- To simulate user interactions with the software.
- To manage and organize test cases for efficient testing.
- To determine if the output is correct by comparing it against the program's specification. (correct)
- To execute the software and record the output.
What is the main purpose of a test harness?
What is a key consideration when using specialized testing interfaces and methods?
What does the record/playback tactic primarily aim to achieve in software testing?
Why can high coupling among classes negatively impact testability?
What does the 'Sandbox' testing tactic involve?
What is the primary benefit of using executable assertions in code?
What does limiting nondeterminism aim to achieve in the context of testability?
What is a significant benefit of designing software to be easily modifiable?
In the context of user interface design, what is meant by 'user initiative'?
What is the primary reason behind carefully allocating system responsibilities to achieve high cohesion and low coupling?
What aspect of coordination among system elements most directly impacts usability?
What is the role of the 'Simian Army' at Netflix?
What does the term 'testability' refer to in software engineering?
Which of the following is a key element of the general scenario for testability?
What does the testability tactic 'Abstract data sources' enable?
What does the tactic 'Localize state storage' primarily facilitate?
What is a typical goal of tactics for testability?
What is the intention when more emphasis is placed on some faults than other faults?
To what does software testability refer?
What does the success of the Netflix Streaming service depend on?
According to Robert Binder, what is important to making testing tractable?
What is the definition of the response of class C?
What is the main focus of testability?
What can the source of stimulus be in conducting the testing?
Which of the following falls into a way that the system can respond?
What is meant by response measure?
Regarding having specialized interfaces and methods, how should they be handled?
What is the purpose of tests?
Which quality attributes benefit from high cohesion, loose coupling, and separation of concerns?
What does the architecture need to do to make it amenable to testing?
What are the recommended ways of generating your test data?
What are the advantages of creating your own test data?
Within the layers, which layers should be tested first?
When the system needs to take the initiative, what must it rely on?
When supporting user interface, what is helpful to giving the user feedback?
Regarding the cancel command, what must the system be ready to do?
To support the ability to undo, what must the system maintain?
How does the use of specialized testing interfaces and methods impact performance-critical and safety-critical systems?
What is the primary goal of employing tactics that 'Limit Structural Complexity' in software design?
How does incorporating variability into software architecture impact the software product line?
What architectural approach effectively supports portability?
How can incorporating an 'eventual consistency' model affect system testability?
When a user issues a 'cancel' command, what responsibilities should the system have?
How do architectural improvements contribute to green computing?
In the context of software safety, what is the focus?
What is a key function of 'actuators' in the context of software-related failures in physical systems?
How can employing 'Iowability' as an architectural goal benefit an organization?
Flashcards
Software Testability
The ease with which software can be made to demonstrate its faults through testing.
Oracle (in testing)
An agent (human or mechanical) that decides whether the output of a test is correct by comparing it to the program's specification.
Test Harness
Specialized software or hardware designed to exercise the software under test, providing control and observation capabilities.
Simian Army
Latency Monkey
Conformity Monkey
Doctor Monkey
Janitor Monkey
Security Monkey
10-18 Monkey
Source of Stimulus (Testability)
Stimulus (Testability)
Artifact (Testability)
Environment (Testability)
Response (Testability)
Response Measure (Testability)
Specialized interfaces
Record/Playback
Localize State Storage
Abstract Data Sources
Sandbox
Executable Assertions
Component Replacement
Limit Structural Complexity
Limit Nondeterminism
Usability
Using a system efficiently
Minimizing the impact of errors
Adapting the system to user needs
Increasing confidence and satisfaction
Support User Initiative
Support System Initiative
Portability
Development Distributability
Scalability
Elasticity
Deployability
Monitorability
Software Safety
Conceptual Integrity
Effectiveness
Efficiency
Freedom From Risk
Study Notes
Testability
- Industry estimates state that 30-50% or more of system development costs go to testing
- Software testability relates to how easily software demonstrates its faults through testing via execution
- Testability is the likelihood that software fails on its next execution, assuming it has at least one fault
- If a fault is present, the goal is for it to fail during testing as quickly as possible
- Calculating this probability directly is not straightforward, so other response measures are used in practice
Testing Model
- Testing involves a program, input, and output
- An oracle, whether human or mechanical, ascertains if the output aligns with the program's specification
- Output includes the value produced along with derived measures of quality attributes, such as how long it took to produce
- The program's internal state can be revealed to the oracle
- The oracle can determine if the program has entered an erroneous state
System Testability
- A system must control each component's inputs and observe their outputs
- This control and observation is done through a test harness: specialized software (or hardware) designed to exercise the software under test
- Test harnesses can record and play back data sent across interfaces and simulate the external environment or production conditions
- Test harnesses help execute test procedures and record output; a harness can itself be a substantial piece of software, with its own architecture and stakeholders
- Testing is done by various developers, users, or QA personnel
- Portions of a system or the entire system may undergo testing
- The effectiveness of tests in discovering faults, and the rate at which tests achieve a desired coverage level, are the response measures for testability
- Test cases stem from developers plus testing groups or customers, and may drive development via Agile
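The testing model above (program, input, output, oracle, harness) can be sketched minimally. This is an illustrative sketch, not from the notes: the component `adder`, the comparison `oracle`, and the `harness` driver are all invented names.

```python
# Minimal test-harness sketch: drive a component with controlled inputs,
# observe its outputs, and let an oracle judge each result against the spec.

def harness(component, cases, oracle):
    """Execute each (input, expected) case and record pass/fail."""
    results = []
    for args, expected in cases:
        output = component(*args)          # controlled input, observed output
        results.append((args, output, oracle(output, expected)))
    return results

def oracle(output, expected):
    """A trivial mechanical oracle: compare output to the specified value."""
    return output == expected

def adder(a, b):                           # the "software under test"
    return a + b

report = harness(adder, [((1, 2), 3), ((2, 2), 5)], oracle)
passed = sum(1 for _, _, ok in report if ok)
print(f"{passed}/{len(report)} cases passed")   # → 1/2 cases passed
```

The second case deliberately encodes a wrong expectation, showing how the oracle, not the harness, is what decides correctness.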
Netflix's Simian Army
- Netflix streams video and maintains a Simian Army made of tools for testing in its Amazon EC2 cloud service
- The Simian Army began with Chaos Monkey
- Chaos Monkey randomly terminates processes in the running system to test for failures and serious degradation
- The Latency Monkey induces artificial delays in client-server communication to simulate service degradation and measures upstream services response
- The Conformity Monkey detects and shuts down instances not adhering to best practices, or those instances not belonging to an autoscaling group
- The Doctor Monkey identifies unhealthy instances using external health checks (e.g., CPU load)
- The Janitor Monkey eliminates clutter and waste from Netflix cloud resources
- The Security Monkey finds and terminates security violations and ensures DRM and SSL certificates are valid
- The 10-18 Monkey finds configuration and runtime problems for customers in multiple regions, using different languages and character sets
- Netflix determines what faults to examine based on their impact and severity
- The Simian Army exemplifies logging operational data so that faults can be reproduced and so that behavior in complex systems can be discovered and recorded
Testability Scenario
- Code testing serves as validation to ensure an engineered artifact meets the stakeholder's needs and intended use
- Testing in this sense applies only to running systems and source code; architectural design reviews are another way of validating an architecture
Testability Scenario Portions
- The source of stimulus can be unit testers, integration testers, developers, or even end users on the customer side; testers may be human or automated
- Stimulus refers to tests executed upon completing a unit of coding or delivery to the customer
- Artifact refers to a unit of code, a subsystem, or the whole system being tested
- Environment includes the time of testing (development time, compile time, deployment time, or runtime), possibly inside a test harness
- Response: the system can be controlled to perform the desired tests, and the results can be observed
Response Measures
- Response measures show how easily the system under test "gives up" its faults
- These include the effort needed to find a fault, or faults of a particular class
- The effort required to achieve a given percentage of statement coverage
- The length of the longest dependency chain in a test, which signals test difficulty
- Others measure the effort to find the next fault, the probability that more faults remain, the time to run the tests, and the time needed to prepare the test environment
- Another is the effort required to bring the system into its intended state
- Estimating risk, measuring risk reduction, and rating the severity of the faults found can also serve as measures
Tactics for Testability
- The goal of testability tactics is to make testing easier when an increment of software development is complete
Categories of Tactics
- Add controllability and observability to the system
- Limit complexity in the system's design
Control and Observe System State
- Control and observation are so central that some treatments define testability in these terms
- The simplest form of control and observation is to provide a software component with a set of inputs and then observe its outputs
Tactics Specifics
- The software maintains state information
- Testers can assign values to that state
- The state is made accessible to testers on demand
- State information includes the running state, variable values, performance load, and intermediate process steps needed to reproduce component behavior
Specialized Interfaces
- Interfaces that allow control or capture of a component's variable values, done either through a test harness or regular operation
Specialized Test Routines Examples
- A set/get method for important attributes
- A report method that returns the full state of the object
- A reset method that sets the internal state to a known value
- A method for verbose output, and methods to turn on event logging
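The specialized test routines above can be sketched on a small class. This is an illustrative example (the `Account` class and the `_test_*` method names are invented here); in a fielded build these methods would be separated out or removed.

```python
# Sketch of a specialized testing interface on a component:
# set/get an important attribute, report full state, reset to a known state.

class Account:
    def __init__(self):
        self._balance = 0
        self._log = []

    def deposit(self, amount):              # normal (required) functionality
        self._balance += amount
        self._log.append(("deposit", amount))

    # --- specialized testing interface, kept clearly separate ---
    def _test_set_balance(self, value):     # set an important attribute
        self._balance = value

    def _test_report_state(self):           # report the full internal state
        return {"balance": self._balance, "log": list(self._log)}

    def _test_reset(self):                  # reset internal state to a known value
        self._balance = 0
        self._log.clear()

acct = Account()
acct._test_set_balance(100)                 # put the object in an arbitrary state
acct.deposit(50)
state = acct._test_report_state()
print(state["balance"])                     # → 150
acct._test_reset()                          # back to a known baseline
```

Setting the state directly lets a tester start a scenario mid-way instead of replaying every operation that would normally lead there.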
Test Interfaces/Methods
- Test interfaces and methods should be kept clearly separate from required functionality so they can be removed when necessary
- Performance-critical and safety-critical systems may not be able to field test code at all
- If test code is removed, the fielded code must behave the same way as the code that was tested
Other Tactics
- Record/playback: capture information crossing an interface so that a fault can be re-created when the state that caused it is difficult to reproduce
- Localize state storage: keeping state in a single place makes it convenient to start a system, subsystem, or module in an arbitrary state for a test
- Abstract data sources: abstracting the interfaces to input data makes it easier to substitute test data
- Sandbox: isolate an instance of the system from the real world so it can be experimented with unconstrained; testing is helped when operations have no permanent impact or can be rolled back
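The abstract-data-sources tactic can be sketched as follows; the `OrderSource` interface and `total_revenue` function are invented for illustration. The code under test depends only on the abstract interface, so a test can substitute canned data for a real source such as a database.

```python
# Abstract data sources: code under test reads through an abstract interface,
# so tests can swap in controlled data without touching the real source.

from abc import ABC, abstractmethod

class OrderSource(ABC):
    @abstractmethod
    def read_orders(self):
        """Return a sequence of (order_id, amount) pairs."""

class InMemoryOrderSource(OrderSource):     # test double with canned data
    def __init__(self, orders):
        self._orders = orders
    def read_orders(self):
        return list(self._orders)

def total_revenue(source: OrderSource):
    """Code under test: sees only the abstract interface."""
    return sum(amount for _, amount in source.read_orders())

test_source = InMemoryOrderSource([("o1", 10.0), ("o2", 32.5)])
print(total_revenue(test_source))           # → 42.5
```

A production implementation of `OrderSource` could query a database, while tests stay fast and deterministic.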
More Tactics
- Sandboxing lets an experiment run without consequences; it is also used for scenario analysis, training, and simulation
- Virtualize resources: build a controlled version of a resource; for example, a virtualized system clock lets components be tested at critical time boundaries (such as a shift change) without waiting for the boundary to occur
- Executable assertions: place hand-coded assertions at desired locations to check that values satisfy their constraints; a failing assertion indicates a potentially faulty state
- Assertions increase observability by flagging exactly where a fault manifests
- Types can be annotated with checking code that the assertions exercise
- When assertions are written correctly, they effectively embed the test oracle in the code
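Executable assertions look like this in practice; `apply_discount` and its constraints are invented for illustration. Pre- and postconditions turn constraint violations into immediate, localized failures instead of silent bad state.

```python
# Executable assertions: check at chosen points that values satisfy their
# constraints; a failing assertion pinpoints the faulty state.

def apply_discount(price, percent):
    # preconditions on the inputs
    assert price >= 0, "price must be non-negative"
    assert 0 <= percent <= 100, "percent must be within [0, 100]"
    result = price * (1 - percent / 100)
    # postcondition: discounting must never increase the price
    assert 0 <= result <= price, "faulty state: result out of range"
    return result

print(apply_discount(200.0, 25))            # → 150.0
try:
    apply_discount(200.0, 150)              # violates the precondition
except AssertionError as e:
    print("caught:", e)
```

Note that Python strips `assert` statements when run with `-O`, which mirrors the point above about removing test code from fielded systems without changing tested behavior.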
Replacement
- Replace a barebones component with a more elaborate version that supports testing
- Component replacement swaps in an implementation with richer testing facilities
- Preprocessor macros can activate state reporting, turn on probe statements, or return control to a testing console
- Aspects (in aspect-oriented programs) can handle the crosscutting concern of reporting state
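Component replacement and resource virtualization can be combined in one small sketch. The `Clock`/`TestClock` classes and `session_expired` function are invented here: a test-oriented clock replaces the production clock behind the same interface, adding controllability (`advance`) and observability (`reads`).

```python
import time

class Clock:                                # production component
    def now(self):
        return time.time()

class TestClock(Clock):                     # replacement with test support
    def __init__(self, start=0.0):
        self._t = start
        self.reads = []                     # extra observability for tests
    def now(self):
        self.reads.append(self._t)
        return self._t
    def advance(self, seconds):             # extra controllability for tests
        self._t += seconds

def session_expired(clock, started_at, ttl=60.0):
    """Code under test: uses whichever clock it is given."""
    return clock.now() - started_at > ttl

clk = TestClock(start=100.0)
start = clk.now()
clk.advance(61.0)                           # jump past the ttl instantly
print(session_expired(clk, start))          # → True
```

The time boundary is tested immediately rather than by waiting for real time to pass, which is the point of virtualizing the system clock.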
Limit Complexity
- Complex software is difficult to test because its operating state space is very large
- The concern is repeatable behavior so the exact fault can be located, not merely the ability to make the software fail
- The limit-complexity category includes limiting structural complexity and limiting nondeterminism
Limit Structural Complexity
- Avoid or resolve cyclic dependencies between components
- Isolate and encapsulate dependencies on the external environment
- Reduce dependencies between components
- Limit the depth of inheritance hierarchies
- Limit polymorphism and dynamic method calls
- One structural metric is the response of a class C: the number of methods of C plus the number of other classes' methods invoked by C's methods; keeping it low improves testability
- Increasing cohesion and reducing coupling also improve testability
- Controllability is critical to making testing tractable
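The response-of-a-class metric above can be computed from a simple call map. This is a sketch with a hand-written, hypothetical call map (the method names are invented), not a real static-analysis tool.

```python
# Response of a class C (RFC), per the definition above:
# number of methods of C + number of distinct methods of other
# classes directly invoked by C's methods.

calls = {
    # method of C   -> methods of other classes it invokes
    "C.render":     ["Layout.measure", "Canvas.draw"],
    "C.update":     ["Canvas.draw", "Store.save"],
}

def response_of_class(call_map):
    own_methods = set(call_map)              # methods defined on C
    invoked = set()
    for callees in call_map.values():
        invoked.update(callees)              # distinct external methods only
    return len(own_methods) + len(invoked)

print(response_of_class(calls))              # → 5  (2 own + 3 distinct external)
```

A higher response means more behavior must be exercised to test the class, which is why keeping it low helps testability.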
Less Complex
- Systems that require full data consistency at all times are more complex
- Building the system under an "eventual consistency" model leads to a simpler design
- Architectural styles can also lend themselves to testability; in a layered style, the lower layers can be tested first, then the higher layers
- Limit nondeterminism (the counterpart to limiting structural complexity): a nondeterministic system is harder to test
- The tactic involves finding all sources of nondeterminism, such as unconstrained parallelism, and weeding out those that can be removed
Testability Summary
- A system that is easy to test helps produce a reliable system
- A test harness is often used to execute tests
- Controlling and observing system state makes testing tractable
- The ability to inject faults into the system aids testing
- Various tactics support controlling, observing, and replacing components
- Systems are difficult to test when they have too many interconnections and complex computations
Checklist
- Responsibilities must be allocated for executing tests, capturing and logging data, and controlling the system's state
- The system must support the injection of faults according to chosen fault models
- Tests of the system's main abstractions should verify that system operations perform correctly
- Ensure that test data can be injected and test setup performed so that results can be validated
- Testing should be exercised against each candidate structure of the system
Additional Checklist
- Ensure that sufficient resources are available to run test suites and capture their results, that access can be limited where appropriate, that data on failures is captured, and that faults can be injected and resources virtualized so tests run correctly
Testable Code
- Late-bound code must also be tested, with its bindings resolvable and fully supported under test
- Coverage goals should be achievable for the code being tested
- Technology choices should support all of the testing goals
Test Automation
- Create useful test data set
- Capture a sample of production data and remove any sensitive information
- Automation is used in more and more situations because it leaves less room for human error