
LECTURE 02 - Principles of Testing.pdf


Full Transcript


Principles of Testing - Foundation Course

Error - Fault - Failure
A person makes an error... that creates a fault in the software... that can cause a failure in operation.

Why do faults occur in software?
Software is written by human beings, who know something but not everything, who have skills but aren't perfect, and who do make mistakes (errors). Under increasing pressure to deliver to strict deadlines there is no time to check, assumptions may be wrong, and systems may be incomplete. If you have ever written software...

What do software faults cost?
Huge sums:
- Ariane 5 exploded about 40 seconds after lift-off ($7 billion loss). https://www-users.cse.umn.edu/~arnold/disasters/ariane.html
- Mariner 1, America's first attempt to explore Venus: the out-of-control spacecraft and booster were destroyed for safety ($250m). https://solarsystem.nasa.gov/missions/mariner-01/in-depth/
- An American Airlines reservation-system bug cost $50m. https://www.nytimes.com/1988/09/12/business/software-bug-cost-millions-at-airline.html
Faults also cost customer satisfaction. In safety-critical systems, software faults can cause death or injury:
- radiation treatment kills patients (Therac-25)
- train driver killed
- aircraft crashes (Airbus and Korean Airlines)
- bank system overdraft letters cause suicide

So why is testing necessary?
1. because software is likely to have faults
2. to learn about the reliability of the software
3. to fill the time between delivery of the software and the release date
4. to prove that the software has no faults
5. because testing is included in the project plan
6. because failures can be very expensive
7. to avoid being sued by customers
8. to stay in business
(This is a discussion list: not all of these are good reasons. In particular, testing can show the presence of faults but can never prove their absence, so 3 and 4 are not valid reasons.)

Why not just "test everything"?
Consider a system with, on average: 20 screens, 4 menus, 3 options per menu, 10 fields per screen, 2 input formats per field (a date as "Jan 3" or "3/1"; a number as integer or decimal), and around 100 possible values per field.
Total for 'exhaustive' testing: 20 x 4 x 3 x 10 x 2 x 100 = 480,000 tests.
At 1 second per test that is 8,000 minutes, or 133 hours, or about 17.7 working days (not counting finger trouble, faults or retest). At 10 seconds per test it is 34 weeks; at 1 minute, 4 years; at 10 minutes, 40 years.

How much testing is enough?
1. it's never enough
2. when you have done what you planned
3. when your customer/user is happy
4. when you have proved that the system works correctly
5. when you are confident that the system works correctly
6. it depends on the risks for your system

How much testing? It depends on RISK:
- risk of missing important faults
- risk of incurring failure costs
- risk of releasing untested or under-tested software
- risk of losing credibility and market share
- risk of missing a market window
- risk of over-testing, or ineffective testing

So little time, so much to test...
Test time will always be limited. Use RISK to determine what to test first, what to test most, how thoroughly to test each item, and what not to test (this time). Use RISK to allocate the time available for testing by prioritising testing.

Most important principle: prioritise tests so that, whenever you stop testing, you have done the best testing in the time available.

Testing Techniques

What is a testing technique?
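The arithmetic behind the "test everything" slide can be checked in a few lines. A quick sketch (the counts are the slide's illustrative averages; the 7.5-hour working day is my assumption, chosen because it reproduces the slide's 17.7-day figure):

```python
# Illustrative averages from the slide: 20 screens, 4 menus,
# 3 options per menu, 10 fields per screen, 2 input formats
# per field, ~100 possible values per field.
total_tests = 20 * 4 * 3 * 10 * 2 * 100
print(total_tests)  # 480000 test combinations

# Elapsed time at different per-test costs.
for seconds_per_test in (1, 10, 60, 600):
    total_s = total_tests * seconds_per_test
    hours = total_s / 3600
    print(f"{seconds_per_test:>4} s/test -> {hours:,.0f} hours "
          f"(~{hours / 7.5:,.0f} working days)")
```

At 1 second per test this gives roughly 133 hours, matching the slide; multiplying the per-test cost scales the total linearly up to decades.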
A testing technique is a procedure for selecting, designing or conducting tests, based on a structural or functional model of the software. It is successful at finding faults, a 'best' practice, a way of deriving good test cases, and a way of objectively measuring a test effort. Testing should be rigorous, thorough and systematic.

Advantages of techniques
- Different people: similar probability of finding faults; some independence of thought.
- Effective testing: find more faults; focus attention on specific types of fault; know you're testing the right thing.
- Efficient testing: find faults with less effort; avoid duplication; systematic techniques are measurable.
Using techniques makes testing much more effective.

Three types of systematic technique
- Static (non-execution): examination of documentation, source code listings, etc.
- Functional (Black Box): based on the behaviour / functionality of the software.
- Structural (White Box): based on the structure of the software.

Some test techniques (reconstructed from the slide's diagram):
- Static: reviews, inspections, walkthroughs, desk-checking, static analysis, symbolic execution, etc.
- Dynamic, behavioural - functional (black box): equivalence partitioning, boundary value analysis, cause-effect graphing, state transition, random, etc.
- Dynamic, behavioural - non-functional: usability, performance, etc.
- Dynamic, structural (white box): control flow (statement, branch/decision, branch condition, branch condition combination, LCSAJ, arcs), data flow (definition-use), etc.

Black Box test design and measurement techniques
Techniques defined in BS 7925-2 (the original slide also marks, for each, whether it is a measurement technique):
- Equivalence partitioning
- Boundary value analysis
- State transition testing
- Cause-effect graphing
- Syntax testing
- Random testing
The standard also defines how to specify other techniques.

Equivalence partitioning (EP)
- Divide (partition) the inputs, outputs, etc. into areas which are the same (equivalent).
- Assumption: if one value works, all will work.
- One value from each partition is better than all from one.
- Example for a valid range 1-100: the partitions are invalid (0 and below), valid (1-100), invalid (101 and above).

Boundary value analysis (BVA)
- Faults tend to lurk near boundaries, so boundaries are a good place to look for faults.
- Test values on both sides of each boundary: for a valid range 1-100, test 0, 1, 100 and 101.

E.g. Loan Application, with fields such as Customer Name, Account Number and Loan Amount. What are the test objectives for each field?

State Transition Testing (analysis)
A state transition model identifies:
- states the software may occupy
- transitions between the states
- events which cause the transitions
- actions that result from the transitions
Example (ATM): card inserted -> beep, ask for PIN; wait for PIN; invalid PIN -> ask again; valid PIN -> ask amount; cancel -> return card.

State Transition Testing (design)
Test cases are designed to achieve required coverage:
- state transitions (0-switch)
- transition pairs (1-switch)
- transition triples (2-switch), etc.
A more "complete" test set will also test for possible invalid transitions; use a state table to identify them.

State machine for "display_changes"
States: Display Time (S1), Display Date (S2), Change Time (S3), Change Date (S4).
Events: reset (R), set (S), change mode (CM). Actions: alter time (AT), alter date (AD), display time (T), display date (D).

Possible transitions:

Trans | Start state | Event/Action | End state
1     | S1          | CM / D       | S2
2     | S2          | CM / T       | S1
3     | S1          | R / AT       | S3
4     | S3          | S / T        | S1
5     | S2          | R / AD       | S4
6     | S4          | S / D        | S2

Test case for transition coverage (visits all six transitions):

Step | State | Event       | Action
1    | S1    | Reset       | Alter time (-> S3)
2    | S3    | Set         | Display time (-> S1)
3    | S1    | Change mode | Display date (-> S2)
4    | S2    | Reset       | Alter date (-> S4)
5    | S4    | Set         | Display date (-> S2)
6    | S2    | Change mode | Display time (-> S1)

A state table for "display_changes" (states against events) identifies the invalid transitions.

Test Case Design Considerations
- Start with functionally sensible test cases covering the most likely
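The EP and BVA values for the 1-100 range above can be derived mechanically. A minimal sketch (the `loan_amount_valid` function and its range are illustrative, not from a real loan system):

```python
def loan_amount_valid(amount: int) -> bool:
    """Hypothetical validator: accepts amounts in the valid partition 1-100."""
    return 1 <= amount <= 100

# Equivalence partitioning: one representative value per partition.
ep_values = {"invalid_low": -5, "valid": 50, "invalid_high": 150}

# Boundary value analysis: values on both sides of each boundary.
bva_values = [0, 1, 100, 101]

for v in bva_values:
    print(v, loan_amount_valid(v))
# 0 -> False, 1 -> True, 100 -> True, 101 -> False
```

Three EP tests check one value per partition; the four BVA tests sit directly on either side of the two boundaries, where off-by-one faults (e.g. writing `< 100` instead of `<= 100`) would be caught.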
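The "display_changes" machine and its transition-coverage test can be sketched as a dictionary-driven state machine (a sketch; the state and event abbreviations follow the slide's table):

```python
# Transition table from the slide: (state, event) -> (action, next state)
TRANSITIONS = {
    ("S1", "CM"): ("D", "S2"),   # Display Time -> Display Date
    ("S2", "CM"): ("T", "S1"),   # Display Date -> Display Time
    ("S1", "R"):  ("AT", "S3"),  # Display Time -> Change Time
    ("S3", "S"):  ("T", "S1"),   # Change Time  -> Display Time
    ("S2", "R"):  ("AD", "S4"),  # Display Date -> Change Date
    ("S4", "S"):  ("D", "S2"),   # Change Date  -> Display Date
}

def run(state, events):
    """Apply a sequence of events; KeyError signals an invalid transition."""
    covered = []
    for event in events:
        action, next_state = TRANSITIONS[(state, event)]
        covered.append((state, event))
        state = next_state
    return state, covered

# The slide's six-step test case achieves 0-switch (all-transitions) coverage:
final, covered = run("S1", ["R", "S", "CM", "R", "S", "CM"])
print(final)               # S1 - back where we started
print(len(set(covered)))   # 6 - every transition in the table exercised
```

The same table also makes invalid-transition testing easy: any (state, event) pair missing from `TRANSITIONS` - e.g. pressing Set while in S1 - is an invalid transition the state table would flag.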
transitions; this provides a good regression test set.
- Add more complex test cases to cover exceptional conditions and invalid transitions (especially important for safety-critical systems).

White Box test design and measurement techniques
Techniques defined in BS 7925-2 (the original slide also marks, for each, whether it is a measurement technique):
- Statement testing
- Branch / Decision testing
- Data flow testing
- Branch condition testing
- Branch condition combination testing
- Modified condition decision testing
- LCSAJ testing
The standard also defines how to specify other techniques.

The slide shows a coverage feedback loop: run the tests against the software, check the results are OK, measure what source code is covered, and add more tests until coverage is OK. Stronger structural techniques cover different structural elements and give increasing coverage.

Statement Coverage
- Percentage of executable statements exercised by a test suite:
  statement coverage = (number of statements exercised) / (total number of statements)
- Example: a program has 100 statements; the tests exercise 87 statements; statement coverage = 87%.
- Typical ad hoc testing achieves 60-75%. Statement coverage is normally measured by a software tool.

Example of statement coverage (statement numbers on the left):

1  read(a)
2  IF a > 6 THEN
3    b = a
4  ENDIF
5  print b

Test case: input 7, expected output 7. As all 5 statements are covered by this test case, we have achieved 100% statement coverage.

Decision coverage (Branch coverage)
- Percentage of decision outcomes exercised by a test suite:
  decision coverage = (number of decision outcomes exercised) / (total number of decision outcomes)
- Example: a program has 120 decision outcomes; the tests exercise 60; decision coverage = 50%.
- Typical ad hoc testing achieves 40-60%. Decision coverage is normally measured by a software tool.

Paths through code
A flowchart fragment may have several distinct paths. With loops, the number of paths can be unlimited (one path for as many times as it is possible to go round the loop, which can be infinite).

Example 1 (ATM card handling):

Wait for card to be inserted
IF card is a valid card THEN
  display "Enter PIN number"
  IF PIN is valid THEN
    select transaction
  ELSE (otherwise)
    display "PIN invalid"
ELSE (otherwise)
  reject card
End

Example 2:

Read A
IF A > 0 THEN
  IF A = 21 THEN
    Print "Key"
  ENDIF
ENDIF
End

- Cyclomatic complexity: 3
- Minimum tests to achieve statement coverage: 1 (e.g. A = 21)
- Minimum tests to achieve branch coverage: 3

Example 3:

Read A
Read B
IF A > 0 THEN
  IF B = 0 THEN
    Print "No values"
  ELSE
    Print B
    IF A > 21 THEN
      Print A
    ENDIF
  ENDIF
ENDIF
End

- Cyclomatic complexity: 4
- Minimum tests to achieve statement coverage: 2
- Minimum tests to achieve branch coverage: 4

Testing Components
- Functional Testing: functional positive, functional alternative, functional negative.
- User Experience (UX) Testing: user interface (UI) testing, usability testing, single-user performance testing.
- Non-Functional Testing: load testing, security testing, reliability testing, interoperability testing, etc.

UX - Common UI Issues
Content structure; alignments; usage of images and videos; fonts and typefaces; spelling and grammar; navigation; resolutions; messages and notifications; page loading time; keyboard usage.

Test Case - Attributes
- Test case ID (unique)
- Test case description (clear)
- Preconditions and postconditions
- Steps
- Expected output
- Status
- Comments/feedback

Test Case: Azure DevOps
Test Case: Organization
Exercise 1: Login Features
Exercise 2: Online Payment Page
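Example 3's coverage answers can be checked by transcribing the pseudocode into Python and recording which decision outcomes each test exercises (a sketch; the outcome labels are my own):

```python
def example3(a: int, b: int, out: list) -> set:
    """Transcription of Example 3; returns the set of decision outcomes taken."""
    outcomes = set()
    if a > 0:
        outcomes.add("A>0:T")
        if b == 0:
            outcomes.add("B=0:T")
            out.append("No values")
        else:
            outcomes.add("B=0:F")
            out.append(b)
            if a > 21:
                outcomes.add("A>21:T")
                out.append(a)
            else:
                outcomes.add("A>21:F")
    else:
        outcomes.add("A>0:F")
    return outcomes

# Statement coverage needs only 2 tests: one through each print-bearing arm.
stmt_tests = [(1, 0), (22, 5)]
# Branch coverage needs 4 tests to hit all six decision outcomes.
branch_tests = [(1, 0), (22, 5), (1, 5), (-1, 0)]

covered = set()
for a, b in branch_tests:
    covered |= example3(a, b, [])
print(len(covered))  # 6 outcomes -> 100% decision coverage
```

The two statement-coverage tests execute every print statement between them, while branch coverage additionally forces the false outcome of `A > 0` and of `A > 21`, matching the slide's answers (2 and 4 tests).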
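The test case attribute list above maps naturally onto a record type. A minimal sketch (the field names are illustrative, not an Azure DevOps schema):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Attributes from the slide; names are illustrative.
    case_id: str                       # unique ID
    description: str                   # clear, one-line summary
    preconditions: list[str] = field(default_factory=list)
    postconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_output: str = ""
    status: str = "Not run"            # e.g. Not run / Passed / Failed
    comments: str = ""

tc = TestCase(
    case_id="TC-LOGIN-001",
    description="Valid user can log in",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_output="User lands on dashboard",
)
print(tc.status)  # Not run
```

Keeping every attribute explicit like this is what makes test cases reviewable and repeatable, which is the point of the attribute list.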
