

Transcript


Introduction
○ What is software?
  - A series of instructions for a computer to execute that consume input, perform computation, and emit output.
○ What is software engineering?
  - The establishment and application of skills (scientific, economic, social, and practical knowledge) in order to specify, invent, design, build, validate, deploy, maintain, research, and improve software (the tasks performed by software engineers) that is correct, reliable, and works efficiently on real machines (the goals of the process).
○ Difference between programming and software engineering
  - Programming only captures the 'build' task, which is a small piece of the spectrum of software engineering tasks.

Software Testing
Testing approaches are about finding bugs at development time in a reliable and repeatable way. Defects arise for various reasons, which vary by product, team, and domain:
○ Faulty requirements (largest category) ⇒ "garbage in, garbage out"
  - If we specify the software incorrectly in the first place, we are going to build the wrong software.
  - If we specify the software incompletely (haven't thought about edge cases sure to happen at runtime), we are not going to build the right software.
○ Ignored/modified requirements
  - Decisions made at development time to help the development team (usually under time pressure) might not be the right thing from the customer's perspective.
  - While changing a requirement can be good for developers, from the customer's point of view the software doesn't behave the way they actually expect.
  - Ignoring requirements can be convenient for the development team, but is often problematic for the users of the software system.
○ Design errors
  - Occur when translating high-level specifications into low-level designs.
○ Implementation errors
  - Arise when converting low-level designs into source code, e.g. missing a guard or making a small coding mistake.
○ Untested implementations
  - May happen when skipping tests due to time constraints, on the assumption that no mistakes were made.
○ Procedural (user) errors
  - Occur when the user doesn't follow the specification.
  - While software systems should guard against this to some degree, we can't always anticipate all the strange things users will do with our software.

How automated testing helps reduce errors
○ Automated testing: involves executing the system with a variety of inputs/outputs to validate that the system behaves as expected.
○ Effect on specifications: creating automated tests requires closely examining the specification to capture both expected and unexpected behaviours, and forces us to validate that the system aligns with the initial requirements.
○ Shortcomings: can only show the presence of defects, not their absence; can't account for all user data, environment constraints, or expectations upfront.

Approach to testing
○ Mindset shift: focus on balancing the risk of defects against the cost of finding them. The goal is not to find all bugs, but to find the most impactful bugs within the available time.

When coming up with a testing process
○ Choose what to test: software systems are large; testing everything can be expensive and impractical, so prioritize and reason about the most important parts of the system.
○ Create test cases: define individual test cases and assertions to cover expected behaviours and potential edge cases (a minimal example follows this list).
○ Execute test cases: decide how and when tests will be run, e.g. by developers locally before committing code, and on a continuous integration (CI) server after each change.
○ Examine test results: continuously monitor test results, especially from CI servers, to ensure issues are caught and resolved promptly.
○ Evaluate the testing process: regularly assess whether the testing process gives enough confidence to ship the product, and ensure the tests align with the needs of the system being built.
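As a concrete illustration of the "create test cases" step above, here is a minimal sketch of an automated test with assertions, assuming Mocha and Chai (the tools referenced later in these notes); the add function is a hypothetical CUT used only for illustration.

    // Minimal automated test sketch, assuming Mocha/Chai.
    import { expect } from "chai";

    // Hypothetical code under test (CUT).
    function add(a: number, b: number): number {
        return a + b;
    }

    describe("add", () => {
        it("should sum two positive numbers", () => {
            const expected = 5;        // value derived from the spec
            const actual = add(2, 3);  // execute the CUT
            expect(actual).to.equal(expected);
        });

        it("should handle a negative operand (edge case)", () => {
            expect(add(-2, 3)).to.equal(1);
        });
    });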
Red-Green-Refactor
1. Write a test and make sure it fails → if your test passes without writing an implementation, it has no value at all.
2. Write an implementation to make that test pass.
3. Refactor and extend your tests and the passing code → make things better.
4. Repeat!

Modern test cycle
1. Develop new code
2. Deploy to testing environment
3. Run your tests
4. Test fails
5. Try to fix the failure through more development
6. Deploy to testing environment
7. Run the tests again
8. Test passes
9. Commit/push change
10. Pull new changes
○ Steps 2/6 (deploying to a testing environment) may not happen on small teams, or when testing happens solely on a single development machine.
○ The code developed in steps 1/5 could involve writing either tests or product code.
○ In test-driven development (TDD), the first develop step is to write tests for features that haven't been written yet, to guide future development.

Terminology
○ Test case: evaluates a single explicit behaviour within your program.
○ Test suite: a set of related test cases.
○ SUT/CUT (System/Code Under Test): the code each individual test case is actually exercising, i.e. the thing you are trying to validate.
○ White (glass) box testing
  - Look at the CUT and how it is built to determine what test cases to create.
  - Examine the program source code to identify potentially problematic sets of inputs or control flow paths.
○ Black box testing
  - Look at the specification for the CUT and design tests from that.
  - Validates the program without any knowledge of how the system is implemented.
  - Relies on predicting problematic inputs by examining public API signatures and any available documentation for the CUT.
○ When designing white/black box test cases, we want those test cases to be effective, i.e. sensitive to failures in the CUT, because a test case that can never fail or never detect a failure in the CUT has no value.
○ Effectiveness: the probability the test will find a fault per unit of effort (developer creation/maintenance time, or number of test executions).
○ Testability: some systems are significantly easier to test than others due to the way they are constructed. A highly testable system enables more effective tests for the same cost than a system whose tests are largely ineffective.
○ Repeatability: the likelihood that running the same test twice under the same conditions will yield the same result.

Properties of tests we want to have
○ FAST → slow tests impede velocity and make developers not want to run their test suites, because they don't want to wait for the results.
○ RELIABLE → when a test case fails, the failure corresponds to an actual defect in the CUT. If test cases fail randomly, it erodes developer confidence that the test suite is actually giving them value and teaching them about the behaviour of the system.
○ ISOLATES failures → a single test case failure should help us quickly pinpoint where in the CUT the problem arises. If a test case failure points at 100,000 lines that could be the source of the problem, it doesn't provide a lot of value.
○ SIMULATES users → validate the high-level behaviour of the system that users can see. If we can't trigger the kinds of faults that users experience, the test cases won't accurately reflect the overall quality of our system.
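To illustrate the RELIABLE and ISOLATES-failures properties above, a small sketch assuming Mocha/Chai; isWeekend is a hypothetical CUT, and the second test is a deliberate anti-example.

    import { expect } from "chai";

    function isWeekend(date: Date): boolean {
        const day = date.getDay(); // 0 = Sunday, 6 = Saturday
        return day === 0 || day === 6;
    }

    describe("isWeekend", () => {
        // RELIABLE + ISOLATED: the input is fixed, so the result never changes,
        // and a failure points directly at isWeekend.
        it("should report Saturdays as weekend days", () => {
            expect(isWeekend(new Date("2024-01-06T12:00:00"))).to.equal(true);
        });

        // UNRELIABLE (anti-example): depends on when the suite runs, so it can
        // fail on some days without any defect in the CUT.
        it("UNRELIABLE EXAMPLE: assumes today is a weekday", () => {
            expect(isWeekend(new Date())).to.equal(false);
        });
    });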
Kinds of tests
○ Unit: exercise individual components, usually methods or functions, in isolation.
  - Usually quick to write, and the tests incur low maintenance effort since they touch small parts of the system.
  - Typically ensure that the unit fulfils its contract, making test failures easier to understand.
  - Relatively fast to execute, because they check simple properties.
  - Good at defect isolation: tell you whether the individual parts work on their own, but not whether the system works together.
○ End-to-end (acceptance): validate an entire customer flow.
  - Useful for validating quality attributes (e.g. performance and usability) that cannot be captured in isolation.
  - Great for ensuring the system behaves correctly, but provide little guidance for understanding why a failure arose or how it could be fixed.
  - If you are a user getting a new software system, you will perform acceptance tests: they aren't generally heavily automated, and they use your own production data on your own servers → slower because the process is more manual.
  - Don't focus on small parts of the system at a time → not great for defect isolation.
  - Slow, and not performed frequently throughout the development process (they come together more at the end); performed by a customer in their own environment.
  - Make sure that what the developers believe is correct matches what the customers believe is correct.
○ Integration: exercise groups of components to ensure that their contained units interact correctly together.
  - Touch larger pieces of the system and are more prone to spurious failure; identifying the root cause can be difficult because the tests validate many different units.
  - Look at subsets of your system → tell us whether the units work well together.
  - Less isolated than unit tests, because they exercise multiple parts at once; slower than unit tests, because they execute more of the system at once; faster and more isolated than acceptance tests.
○ A|B: a special subset of testing that is typically employed in production environments.
  - Two different versions are compared at runtime to validate which one performs best.
  - Often used to validate business decisions rather than to evaluate functional correctness.
○ Smoke/canary: a subset of the integration tests → a sanity check that your system is still working.
  - Executes quickly, highly reliable, high effectiveness.
  - Tries to expose a fault as quickly as possible, making it possible to defer running large swaths of unnecessary tests for a system that is known to be broken.
  - E.g. if you have an error in your data model that touches your whole system, there is no need to run slow integration and end-to-end tests.
○ System test: executes broad parts of your system at once to make sure all the different parts fit together.
  - Brings together many of the units and pieces you have integration tested to make sure they all work together effectively.
  - Performed by the development team, run on a continuous basis, and used with synthetic data.
  - Tells us whether the whole system is able to accomplish its task in an effective way.
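A sketch contrasting a unit test and an integration test of the same hypothetical code (applyDiscount and Cart are illustrative names; assumes Mocha/Chai).

    import { expect } from "chai";

    function applyDiscount(price: number, percent: number): number {
        return price - (price * percent) / 100;
    }

    class Cart {
        private prices: number[] = [];
        add(price: number): void {
            this.prices.push(price);
        }
        totalWithDiscount(percent: number): number {
            return this.prices.reduce((sum, p) => sum + applyDiscount(p, percent), 0);
        }
    }

    // Unit test: exercises one function in isolation; fast and easy to diagnose.
    describe("applyDiscount (unit)", () => {
        it("should reduce the price by the given percentage", () => {
            expect(applyDiscount(100, 10)).to.equal(90);
        });
    });

    // Integration test: exercises Cart and applyDiscount together; a failure
    // here tells us the pieces don't cooperate, but isolates the fault less.
    describe("Cart + applyDiscount (integration)", () => {
        it("should total discounted prices across items", () => {
            const cart = new Cart();
            cart.add(100);
            cart.add(50);
            expect(cart.totalWithDiscount(10)).to.equal(135);
        });
    });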
Difference between smoke testing and integration testing
○ Smoke test
  - Purpose: to verify that the most critical functionalities of a software build are working correctly; a preliminary check to ensure that the build is stable enough for further testing.
  - Scope: focuses on basic functionality, often referred to as "sanity checks", for example checking that the application launches, that key features are accessible, and that there are no major errors.
  - Execution timing: conducted early in the testing cycle, typically after a new build or release.
  - Frequency: performed frequently, often after every build or significant code change.
  - Outcome: determines whether the software is ready for more detailed testing; if smoke tests fail, the build is rejected for further testing.
○ Integration test
  - Purpose: to test the interactions between different modules or components of the software to ensure they work together as expected.
  - Scope: focuses on how various parts of the application interact; checks data flow, communication, and behaviour when modules are combined.
  - Execution timing: conducted after individual components have been tested (unit testing) and before system testing.
  - Frequency: performed less frequently than smoke tests, typically after significant changes to the codebase or when new components are integrated.
  - Outcome: identifies issues related to module interactions, ensuring that the combined functionality meets the requirements.
○ Summary
  - Smoke testing: quick, high-level checks of basic functionality to confirm build stability.
  - Integration testing: in-depth testing of interactions between modules to verify that they work together correctly.

Testing unit and system properties

Property           | Unit | System
Fast               | ✅   | ❌
Reliable           | ✅   | -
Isolates failures  | ✅   | ❌
Simulates users    | ❌   | ✅

Unit
○ Fast → executes very small parts of the system in a quick manner.
○ Reliable → very good; executing one method at a time without any external dependencies gives us a lot of control to make the tests deterministic and reliable.
○ Isolates failures → good; we can look at the failing test and see what failed.
○ Simulates users → doesn't give us any insight, because a user doesn't use a single internal method when interacting with your system; users exercise broad pieces of features at a time.

System
○ Not fast → the goal is to run broad parts of the system; the more code you run, the slower it is going to be.
○ Reliability → possible, but having all of the external dependencies (web services, databases) ends up being problematic.
○ Isolates failures → no! It could fail in any of thousands of lines.
○ Simulates users → good at capturing the way users are going to be using the system.

It looks like unit testing is what we want to be running, because it has the most positive properties. But without capturing what the users are actually seeing, unit tests can give us a false sense of security: maybe all the little parts of your system work in isolation, but the last bit where they come together is where the problem arises. System testing may seem like it has a lot of negatives, but it gives you a lot of assurance that your system is correct in a way your users care about. These two kinds of tests work together to help you understand that your system is well tested and works correctly.

Why not test?
○ Common objections: bad design, slow, boring, doesn't catch bugs, it's QA's job.
○ Testing has cost → tests are programs too; they take time to write, debug, and execute, and must be evolved along with the system.
○ Paradox of automated test suites
  - Value in failing tests → a test is most valuable when it fails, because it catches a bug that might otherwise have gone unnoticed. If a test suite always passes, it may seem like running it is just wasting computational resources, since it is not uncovering any new bugs.
  - Value in passing tests → they reassure us that the system works as expected and hasn't regressed (e.g. no previously fixed bugs have reappeared). They provide confidence, but only if we believe the test suite is comprehensive enough to detect issues.
  - The paradox: if we only focus on failing tests, we might think that running tests which always pass is pointless, since no new bugs are being caught. But passing tests are valuable because they tell us no unexpected issues have been introduced, especially when changes are made to the codebase.
Common testing assumptions
○ Cost of fixing faults
  - Traditionally, the cost of fixing a fault rises exponentially the later it is detected in the development process.
  - Discovering faults earlier (during the requirements or design phases) prevents the cascading effects that make fixing them more expensive in later stages.
○ Modern debate on cost relevance
  - Newer methods like test-driven development (TDD) and continuous testing aim to catch faults earlier and reduce the exponential cost curve.
○ Costs of automated tests
  - Writing tests takes time and effort.
  - Running tests consumes computational resources.
  - Diagnosing failing tests is time consuming, especially if the tests are incorrectly written or outdated.
  - Fixing tests requires maintenance, because tests are programs themselves and have bugs.
○ Benefits of automated testing
  - Catching real faults saves significant time and money by preventing bugs from reaching production.
  - Provides confidence in the stability of the codebase → allows increased development velocity and shipping changes more confidently.
○ Balancing costs with benefits
  - Although it is easier to account for the costs than the benefits, most larger teams are heavily invested in automated test suites, and are also exploring methods like test prioritization and test minimization to reduce delays and optimize the testing process.

Assertions
While a compiler can validate the syntax of a program, a test suite is required to ensure its runtime behaviour matches its spec. Assertions are the primary piece of machinery used to evaluate correctness within automated unit test suites → we reason about whether the execution was correct by asserting on the behaviour of the execution.
○ When we design our assertions, we look at the spec for the system and determine whether or not our execution is correct.

Models for structuring tests
○ Four-phase tests: supported by default in JUnit and its variants.
  1) Set up the testing environment (Before or BeforeEach methods)
  2) Execute the code under test
  3) Evaluate the output of the CUT
  4) Tear down the testing environment (After or AfterEach)
○ Given-When-Then: developed by the behaviour-driven development (BDD) community, which strives to create 'executable specifications' through unit tests that use descriptive strategies for nesting and naming.
  - Strong emphasis on test readability → we want tests to form part of the executable spec for the system.
  - A useful technique for helping tests form this kind of executable specification is to capture the expected and actual values that are being evaluated within the test:
      const expected = ...                 // write down what is expected
      let actual = ...                     // compute the actual value by executing the CUT
      expect(actual).to.equal(expected)    // write the assertion
  - Critique: if the actual or expected values are complicated and can't be evaluated with a simple assertion, writing a test this way can be awkward.
  - Tests are structured from some given state, where the system's configuration is understood.
  - The key action of the test is the when → often described in the test name using plain language, e.g. it('should be able to parse a document that has UTF-16 characters').
  - The then step involves observing the output to ensure its correctness.
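A sketch of a test structured in the Given-When-Then style described above, assuming Mocha/Chai; parseDocument is a hypothetical CUT used only for illustration.

    import { expect } from "chai";

    function parseDocument(text: string): { wordCount: number } {
        return { wordCount: text.split(/\s+/).filter((w) => w.length > 0).length };
    }

    describe("parseDocument", () => {
        it("should count the words in a simple document", () => {
            // Given: a known document (the understood starting state)
            const document = "hello testing world";

            // When: the key action described in the test name
            const actual = parseDocument(document);

            // Then: observe the output and assert on its correctness
            const expected = 3;
            expect(actual.wordCount).to.equal(expected);
        });
    });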
Expect/should-style assertions (like those used in the Chai library) are often used by the BDD community → they enable descriptive assertion statements.

Chai's expect/should assertions
○ expect(obj).to.deep.equal({ key: 'value' }); // check large object
  - Compares all properties and nested objects to ensure they are identical.
  - Use this when checking large objects for structural equality.
○ expect(42).to.be.a('number'); // check type
  - Use this to ensure that a variable holds the expected data type.
○ expect([ 42, 97, 102 ]).to.have.length.above(1); // check array properties
  - Checks that the array has more than one element.
  - Use this to check array lengths or other array properties.
○ expect([1, 2, 3]).not.to.include.members([1, 4]); // check set membership
  - In the example above, this checks that the array [1, 2, 3] does not include both 1 and 4 as members.
  - The array [1, 2, 3] does include 1 but doesn't include 4; since the array does not contain both members (1 and 4), the assertion succeeds. Both would need to be present.
  - Useful for testing whether a subset of elements exists or doesn't exist in a larger set or array.
○ The benefit of using the specific assertion forms above is that they are easy to read and understand, and assertion failures are easier to understand when this form is used.

Each behaviour should be tested as independently as possible → this eases debugging, because a change that breaks a behaviour will be easy to isolate since it only broke one test. It also eases program evolution, because changing a behaviour won't require modifications to lots of different tests.
○ Normal conditions: represent the way we expect the code to be used.
○ Unexpected conditions: arise when the code is used in unexpected ways.
○ Boundary conditions: all programs consume input; by testing boundary values we can ensure our program will successfully operate with the breadth of inputs it might encounter.

Testability
Testability is a quality attribute that does not affect how the system performs its functional requirements, but instead influences how amenable the system itself is to being tested. A test needs to be able to execute the code you wish to test, in a way that can trigger a defect that will propagate an incorrect result to a program point where it can be checked against the expected behaviour.

Four high-level properties
○ Controllability
  - How effectively we can control the inputs, state, and behaviour of the CUT.
  - Purpose: manipulating inputs and state for testing → enables effective automated testing.
  - To write meaningful tests, we need to invoke the CUT with the right state and parameters; if we can't control these aspects, we may not be able to write automated tests effectively.
  - Refactoring code to make a test case easier to write also tends to make the system easier for the developer to understand.
  - If the CUT cannot be programmatically controlled, an automated test cannot be written for it. There is a controllability tradeoff between what code it is possible to write a test for and what code it is efficient to write a test for.
  - Improve controllability → add additional parameters to methods or constructors; enable dependencies to be varied and allow more expressive/complete inputs to be sent to a method. When code is more controllable, we can set up the state and objects to exactly the configuration we care about for our test case.
  - Red flag for controllability → a constructor that contains new statements; when a constructor creates all of its own objects, you don't have the opportunity to change those objects (a sketch illustrating this appears after the testability summary below).
○ Observability
  - The ability to observe and understand the internal state of a system based on its outputs, behaviours, and interactions → determine whether the CUT behaves as expected.
  - Purpose: understanding the outputs and behaviour of the system → validate correctness and diagnose issues in the CUT.
  - Challenges of observability: determining which values are correct and which are erroneous. Sometimes the CUT can be invoked by a test but its outcome cannot be observed, e.g. if a defective method mutates some external, inaccessible object but does not return a value.
  - This can be resolved by returning data to the caller that might otherwise have been passed further down a call chain. When outcomes are not directly observable, modify the CUT to provide useful feedback.
○ Isolateability
  - Being able to isolate a fault within the code under test is crucial for quickly determining what caused a failure so it can be resolved.
  - Purpose: testing code independently from other components → ensure accurate testing without external interference.
  - Challenging in large modern systems due to the number of dependencies software systems have.
  - Isolateability can be increased by decomposing larger functions into smaller, more self-contained functions that can be tested independently.
  - In simulation-based approaches, code dependencies are mocked or stubbed: they are replaced with developer-created fake components that take known inputs and return known values (see the sketch after the summary below).
  - Mocking increases performance and makes components less prone to non-determinism, as the result being returned is usually fixed and not dependent on some external, complex computation.
  - Your test then only checks the behaviour of your function, without depending on anything else. Using mocks allows you to test your code in a controlled way: you ensure your code handles different scenarios correctly without worrying about other parts of the system that may cause issues.
○ Automatability
  - Purpose: ease of setting up tests to run automatically → enables rapid feedback and integration in development.
  - Ease of execution → automated test suites can run without human intervention; tests that require manual execution are less likely to be performed regularly, while automation ensures tests can be run frequently.
  - Automated tests are more likely to be run after a system has been released → regression testing. This is important for validating that the system still works as intended, especially as external factors (e.g. libraries) change; it enables continuous validation.
  - Economic benefits: time investment vs long-term gain; setting up automated test cases takes a lot of effort, but once established they can save time in the long run.
  - Global visibility: shows the health of the system → if the test suite consistently passes, teams feel more confident about deploying new changes.
○ It is often necessary to restructure software that has not been written in a testable way to make it possible to validate the system effectively → this happens when units within the code take on more than one responsibility, or when features become scattered across a codebase.
○ Test-driven development PRO → by writing your tests first, you can ensure your system is structured in a testable way.

Summary
○ All four properties of testability are interconnected.
  - Observability and isolateability are foundational (e.g. can I detect that a fault exists, and if so, where exactly in my code is it).
  - If you have an isolated unit but you can't control it, being isolated does not matter, because you cannot trigger the behaviour you want to observe. E.g. if a method doesn't accept parameters that could simulate different scenarios, you can't test its response effectively even if it is isolated.
  - Vice versa → if you can control inputs and dependencies but the unit is not isolateable, you might struggle to determine where an issue arises.
  - Automatability is the least common property, because it could be considered shorthand for code that is programmatically controllable.
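Tying together controllability and isolateability, a minimal sketch assuming Mocha/Chai; OrderService and PaymentGateway are hypothetical names used only for illustration. The injected gateway avoids the constructor-new red flag noted above, and the stub keeps the test isolated and deterministic.

    import { expect } from "chai";

    interface PaymentGateway {
        charge(amount: number): boolean; // a real gateway would call an external service
    }

    // Controllable: the dependency is injected instead of being created with
    // `new` inside the constructor, so a test can substitute its own gateway.
    class OrderService {
        constructor(private gateway: PaymentGateway) {}
        checkout(amount: number): string {
            return this.gateway.charge(amount) ? "confirmed" : "declined";
        }
    }

    describe("OrderService", () => {
        it("should decline the order when the charge fails", () => {
            // Isolated: a stub gateway returns a known, fixed value, so the test
            // is deterministic and exercises only OrderService's own logic.
            const stubGateway: PaymentGateway = { charge: () => false };
            const service = new OrderService(stubGateway);
            expect(service.checkout(25)).to.equal("declined");
        });
    });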
White box testing
○ Leverages coverage to help engineers understand which portions of their systems have and have not been tested.
○ Suffers from confirmation bias → by looking at an implementation, you are more likely to make the same assumptions as the author of the code being tested and miss important test cases.
○ Advantageous for achieving high code coverage → useful for regression testing.
  - Checks whether new changes break existing functionality.
  - Testers can be more confident that their tests cover all behaviours of the system.

Coverage
○ Goals: ensure our code implements its spec; try to find defects in our implementation; understand where we can expect to have found defects.
○ Non-goal: ensuring the system is bug free.
○ Constraint: time. Approach: coverage-guided white box testing.
○ Coverage measures the proportion of the system being tested:
  Coverage % = covered / (covered + uncovered)

Two high-level categories
○ Flow-independent coverage (most common) → cheap and easy to reason about; makes sure each piece of the system is executed and is capable of executing successfully.
  - Block: measures the proportion of blocks in the system that are executed by the test suite.
  - Line: measures the proportion of lines in the system that are executed by the test suite. When calculating line coverage, the denominator reflects the executable lines → function definitions (for parameter assignment) and implicit returns, excluding whitespace, comments, and lines containing only characters that are not actually executed.
  - Statement: many program lines (compound ifs or looping conditions) contain multiple statements on the same line. We want to make sure that all statements on a line are covered, rather than just hitting the line itself; this is slightly stronger than line coverage, because it requires all statements on a line to be executed to satisfy the coverage condition.
○ Flow-dependent coverage → ensures pieces of code can execute together; more expensive and harder to reason about (less commonly used), but gives insight into the stringency of execution.
  - Branch: ensures that both halves of every branching statement (the true and false branches of an if statement) are executed by the test suite (see the line-vs-branch sketch at the end of this coverage section).
  - Path: ensures we have executed all paths in the system. Almost always requires more test cases than branch coverage → we need to evaluate not only the true/false branch of every conditional, but also every combination of paths between all conditional outcomes.
  - MCC: ensures that each condition is independently evaluated with respect to the outcome of the function. Mandated in safety-critical systems, but hard to use in practice.

Coverage-guided testing is an iterative process:
1. Identify some uncovered code.
2. Design a test case that executes that code.
3. Write the test and execute it to make sure it passes.
4. Repeat until the required coverage goal is met.

Coverage tools measure the proportion of the system that is executed by the test suite.
○ Relatively cheap to compute → only needs to track which parts of the program have executed while the test suite is running.
○ Actionable → developers can look at the output of a coverage report, find the parts that are not currently covered by the test suite, and decide how to act upon this information.
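A sketch of why branch coverage is stricter than line coverage, assuming Mocha/Chai; classify is a hypothetical CUT.

    import { expect } from "chai";

    function classify(n: number): string {
        let label = "small";
        if (n > 100) label = "large"; // a single line with two branches
        return label;
    }

    describe("classify", () => {
        // This one test executes every line of classify (100% line coverage),
        // but only the true branch of the if statement.
        it("should label big numbers as large", () => {
            expect(classify(500)).to.equal("large");
        });

        // Adding this test covers the false branch too, reaching branch coverage.
        it("should label small numbers as small", () => {
            expect(classify(3)).to.equal("small");
        });
    });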
Why are there stronger coverage criteria? The stronger the coverage criteria, the more stringently we can say the code was covered by the test suite.
○ Shortcoming → coverage doesn't show that the CUT is correct.
  - This is the main reason defects are found even in code that is covered by the test suite: specifying and checking assertions of correctness on code is harder than deriving the inputs required to cover it in the first place.
  - Inadequate test cases: tests may not cover all possible scenarios, especially edge cases or unusual inputs; if tests only check the "happy path", defects may be overlooked.
  - Incorrect assertions: even if the code is executed during tests, if the assertions (expected outcomes) are not correctly defined, the tests may pass while the code still has bugs.
○ Developers tend to start with simpler forms of coverage (e.g. line coverage) and move to more stringent forms once they have reached high coverage levels → once you have 100% line coverage, improve coverage using branch or path coverage.
○ There is no right coverage threshold or criteria → writing more high-quality tests is better than artificially improving coverage.
○ Coverage is not about testing correctness: it is about executing the system; checking behaviours is the job of assertions.

Black box testing
○ Used when the tester does not have any knowledge about the implementation of the CUT.
○ Uses the spec for the CUT and validates that the output is correct (according to the spec) for a given set of inputs.
○ Easier for stakeholders who aren't the original developer (e.g. QA or a tester) → you can read the spec and come up with a reasonable set of tests without being overwhelmed by all of the detail of how that code was implemented.

Approaches

Boundary value analysis (BVA)
○ Focuses on testing the boundaries of an input domain rather than testing arbitrary values → the idea is that boundary inputs are more likely to cause problems than inputs within the normal range.
○ Seeks to identify which inputs are most valuable to test.
○ Boundary conditions → inputs at or near the edges of the input range are tested; these values are more prone to defects, because developers may overlook edge cases.
○ Example: booleans → check true, false, defined, null.
○ Example: numeric input range. For a function that accepts a number in the range 1-10, boundary values include values just outside the range (0 and 11) and values at the boundary (1 and 10). These values are likely to reveal bugs related to off-by-one errors or improper range checks.
○ Example: string inputs. For a form where a user enters a username and password, both fields may have specific restrictions (length, allowed characters). For a minimum allowed length, if the username requires at least 3 characters, test with 2 (invalid) and 3 (valid).
○ Can also be applied to more complex types (e.g. dates) and objects.
○ Important → targets edge cases that are more likely to cause failures, and reduces the number of test cases needed by focusing on inputs that are more prone to issues; makes sure our code is robust to unexpected inputs.
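A sketch of boundary value analysis applied to a hypothetical isInRange function over [1, 10], assuming Mocha/Chai.

    import { expect } from "chai";

    function isInRange(n: number): boolean {
        return n >= 1 && n <= 10;
    }

    describe("isInRange (boundary value analysis)", () => {
        it("should reject values just below the range", () => {
            expect(isInRange(0)).to.equal(false);  // just outside (low)
        });
        it("should accept the lower boundary", () => {
            expect(isInRange(1)).to.equal(true);   // at the boundary
        });
        it("should accept the upper boundary", () => {
            expect(isInRange(10)).to.equal(true);  // at the boundary
        });
        it("should reject values just above the range", () => {
            expect(isInRange(11)).to.equal(false); // just outside (high)
        });
    });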
Equivalence class partitioning (ECP)
○ Useful for testing more commonly expected inputs to ensure that the CUT is behaving as expected.
○ Seeks to decrease the input space from all possible inputs to a reasonable subset of inputs.
○ Inputs are decomposed into classes, and testers ensure that at least one input from each class is validated.
○ Usually, testers consider values using both ECP and BVA simultaneously to ensure broad coverage of the input space, without making a strong distinction between the two.
○ Example: numeric input range [1, 10]. The classes for this input are [n < 1, 1 ≤ n ≤ 10, n > 10]; the focus of ECP is to choose a reasonable value from each partition, e.g. [-5, 5, 15].

State transition testing
○ Requires the tester to identify the states, or modes, the program will assume at runtime, determine how the program moves between these states, and then test that the program behaves as expected during these transitions.
○ The tester focuses on how the system moves between different states and whether it behaves as expected during these transitions.
○ Especially useful when the system or application can exist in multiple states.
○ Example: a system for an electronic lock.
  - Two natural states: open and closed.
  - Identify transitions: closed to open → triggered by providing a valid credential; open to closed → triggered by either closing the door or a timeout period.
  - Test transitions ⇒ create test cases to validate the transitions and behaviours, testing both valid and invalid inputs.

User acceptance testing
○ Use case testing / user story testing tries to simulate user interaction with the system.
○ Use case testing validates the happy path, where the most common and expected inputs are validated to ensure the system behaves as expected.
○ Simulate untrained and adversarial users ⇒ handle unusual or incorrect inputs from a user's perspective.
○ Similar to a customer validating a user story by evaluating the definition of done for a story in a sprint review → use case testing and user story testing form a kind of user acceptance testing.

Input partitioning
○ A black box testing technique that strives to find value partitions for a function based on the inputs to the function.
○ Group the input space of a function into equivalence classes → distinct sets of inputs derived from the spec of the function, chosen based on the function's input parameters. We then ensure each equivalence class is validated at least once.
○ Example: getPerfectSqrRoot → returns the square root if the input number is a perfect square; throws a NotPerfectSquare error if the input number is negative or not a perfect square.
  - Partition the input space into 3 equivalence classes: a positive number that is a perfect square, a positive number that is not a perfect square, and a negative number.
  - Example test suite → 16, 10, -5 (a sketch follows below).
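A sketch of input-partitioning tests for the getPerfectSqrRoot example above; the implementation and error class shown here are assumed purely for illustration (assumes Mocha/Chai).

    import { expect } from "chai";

    class NotPerfectSquare extends Error {}

    // Hypothetical implementation matching the spec sketched in the notes.
    function getPerfectSqrRoot(n: number): number {
        const root = Math.sqrt(n);
        if (n < 0 || !Number.isInteger(root)) {
            throw new NotPerfectSquare(`${n} is not a perfect square`);
        }
        return root;
    }

    describe("getPerfectSqrRoot (input partitioning)", () => {
        it("should return the root for a positive perfect square", () => {
            expect(getPerfectSqrRoot(16)).to.equal(4);  // class: perfect square
        });
        it("should throw for a positive non-square", () => {
            expect(() => getPerfectSqrRoot(10)).to.throw(NotPerfectSquare);
        });
        it("should throw for a negative number", () => {
            expect(() => getPerfectSqrRoot(-5)).to.throw(NotPerfectSquare);
        });
    });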
Strengths
○ Black box technique → the implementation of the function is not required to validate its behaviour; more flexible, and avoids the confirmation bias that can occur when constructing test cases for code you authored or viewed.
○ Provides a systematic means to derive inputs that are likely to correspond to different code blocks within an implementation → various parts of the code are exercised and validated.
○ For many functions the input space can be large → input partitioning simplifies these large domains into a more approachable set of inputs.

Cons
○ Dependent on the spec: if the spec is incomplete, or the implementation deviates from the spec, the inputs chosen by input partitioning may miss important test cases.
○ For some functions the inputs are less interesting than the outputs → output partitioning is a better choice in those cases.
○ Defects often arise at the boundaries of partitions → input partitioning is often combined with boundary value analysis to cover these cases more comprehensively.

Output partitioning
○ Ensures the various parts of a system's output are correctly validated.
○ Allows the tester to analyze the function from a different perspective, which is useful when the output of a function is more interesting than its input (a sketch follows below).
○ Example:
  /**
   * Turns a number of seconds into a 24 hour time format.
   *
   * REQUIRES: 0
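Since the example spec above is truncated, the following output-partitioning sketch assumes the REQUIRES clause bounds seconds to one day (0 ≤ seconds < 86400) and that the output format is 'HH:MM:SS'; secondsToTime is a hypothetical implementation (assumes Mocha/Chai).

    import { expect } from "chai";

    // Assumed implementation of the truncated spec above.
    function secondsToTime(seconds: number): string {
        const h = Math.floor(seconds / 3600);
        const m = Math.floor((seconds % 3600) / 60);
        const s = seconds % 60;
        const pad = (n: number) => n.toString().padStart(2, "0");
        return `${pad(h)}:${pad(m)}:${pad(s)}`;
    }

    // Partition on the *output*: times where the fields need zero padding
    // versus times where they do not.
    describe("secondsToTime (output partitioning)", () => {
        it("should produce a fully padded output", () => {
            expect(secondsToTime(0)).to.equal("00:00:00");     // all fields padded
        });
        it("should produce an unpadded output", () => {
            expect(secondsToTime(45296)).to.equal("12:34:56"); // no field padded
        });
    });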
