Software Quality Assurance and Testing

Summary

This document provides an overview of software quality assurance and testing, including definitions, activities, advantages, disadvantages, and different review checklists. It's suitable for an undergraduate-level software engineering course.

Full Transcript

Software Quality Assurance and Testing
Instructor: Dr. Nguyễn Quang Vũ
[email protected] | Phone: 0901.982.982

Chapter 3. Software Quality Management

What is Software Quality?
Simplistically, quality is an attribute of software that implies the software meets its specification. This definition is too simple for ensuring quality in software systems:
- Software specifications are often incomplete or ambiguous.
- Some quality attributes are difficult to specify.
- Tension exists between some quality attributes, e.g. efficiency vs. reliability.

What is Software Quality?
Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
- Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
- Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not met, lack of quality will almost surely result.
- There is a set of implicit requirements that often goes unmentioned. If software conforms to its explicit requirements but fails to meet its implicit requirements, software quality is suspect.

Software Quality Assurance (SQA)
To ensure quality in a software product, an organization must take a three-pronged approach to quality management:
- Organization-wide policies, procedures, and standards must be established.
- Project-specific policies, procedures, and standards must be tailored from the organization-wide templates.
- Quality must be controlled; that is, the organization must ensure that the appropriate procedures are followed for each project.
Standards exist to help an organization draft an appropriate software quality assurance plan:
- ISO 9000-3
- ANSI/IEEE standards
External entities can be contracted to verify that an organization is standard-compliant.
SQA Activities
- Applying technical methods: to help the analyst achieve a high-quality specification and a high-quality design.
- Conducting formal technical reviews: a stylized meeting conducted by technical staff with the sole purpose of uncovering quality problems.
- Testing software: a series of test-case design methods that help ensure effective error detection.
- Enforcing standards.
- Controlling change: applied during software development and maintenance.
- Measurement: track software quality and assess the ability of methodological and procedural changes to improve software quality.
- Record keeping and reporting: provide procedures for the collection and dissemination of SQA information.

SQA Advantages
- Software will have fewer latent defects, resulting in reduced effort and time spent during testing and maintenance.
- Higher reliability will result in greater customer satisfaction.
- Maintenance costs can be reduced.
- The overall life-cycle cost of software is reduced.

SQA Disadvantages
- It is difficult to institute in small organizations, where the resources needed to perform the necessary activities are not available.
- It represents cultural change, and change is never easy.
- It requires the expenditure of dollars that would not otherwise be explicitly budgeted to software engineering or QA.

Quality Reviews
The fundamental method of validating the quality of a product or a process.
Applied during and/or at the end of each life-cycle phase, quality reviews:
- Point out needed improvements in the product of a single person or team.
- Confirm those parts of a product in which improvement is either not desired or not needed.
- Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, in order to make technical work more manageable.
Quality reviews can have different intents:
- review for defect removal
- review for progress assessment
- review for consistency and conformance

Cost Impact of Software Defects
The later a defect is found, the more it costs to repair, relative to one caught at the requirements stage:

Phase          Review/feedback mechanism        Relative repair cost
Requirements   analysis/specification review    1x
Design         design review                    3-6x
Code           code review                      10x
Test           testing review                   15-70x
Customer       maintenance feedback             40-1000x

Defect Amplification and Removal
Each development step receives errors from the previous step. Some pass through unchanged, some are amplified (1:x) during rework, and new errors are generated; the step's percent error-detection efficiency removes a fraction of the total before the rest pass to the next step. With no reviews during design (figures from the slide):
- Preliminary design: 0 errors in; 10 newly generated; 0% detection; 10 errors passed on.
- Detailed design: 10 errors in (6 passed through, 4 amplified x1.5); 25 newly generated; 0% detection; 37 errors passed on.
- Code/unit testing: 37 errors in (10 passed through, 27 amplified x3); 25 newly generated, 116 in total; 20% detection; 94 errors passed to integration testing.

Defect Amplification and Removal (cont'd)
- Integration testing: 94 errors in; 50% detection efficiency; 47 errors passed on.
- Validation testing: 47 errors in; 50% detection efficiency; 24 errors passed on.
- System testing: 24 errors in; 50% detection efficiency; 12 latent errors remain.

Review Checklist for System Engineering
1. Are major functions defined in a bounded and unambiguous fashion?
2. Are interfaces between system elements defined?
3. Are performance bounds established for the system as a whole and for each element?
4. Are design constraints established for each element?
5. Has the best alternative been selected?
6. Is the solution technologically feasible?
7. Has a mechanism for system validation and verification been established?
8. Is there consistency among all system elements?
[Adapted from Behforooz and Hudson]

Review Checklist for Software Project Planning
1. Is the software scope unambiguously defined and bounded?
2. Is terminology clear?
3. Are resources adequate for the scope?
4. Are resources readily available?
5. Are tasks properly defined and sequenced?
6. Is the basis for cost estimation reasonable? Has the estimate been developed using two different sources?
7. Have historical productivity and quality data been used?
8. Have differences in estimates been reconciled?
9. Are pre-established budgets and deadlines realistic?
10. Is the schedule consistent?

Review Checklist for Software Requirements Analysis
1. Is the information-domain analysis complete, consistent, and accurate?
2. Is problem partitioning complete?
3. Are external and internal interfaces properly defined?
4. Are all requirements traceable to the system level?
5. Is prototyping conducted for the customer?
6. Is performance achievable within the constraints imposed by other system elements?
7. Are requirements consistent with schedule, resources, and budget?
8. Are validation criteria complete?

Review Checklist for Software Design (Preliminary Design Review)
1. Are software requirements reflected in the software architecture?
2. Is effective modularity achieved? Are modules functionally independent?
3. Is the program architecture factored?
4. Are interfaces defined for modules and external system elements?
5. Is the data structure consistent with the software requirements?
6. Has maintainability been considered?

Review Checklist for Software Design (Design Walkthrough)
1. Does the algorithm accomplish the desired function?
2. Is the algorithm logically correct?
3. Is the interface consistent with the architectural design?
4. Is logical complexity reasonable?
5. Have error handling and "antibugging" been specified?
6. Are local data structures properly defined?
7. Are structured programming constructs used throughout?
8. Is the design detail amenable to the implementation language?
9. Are operating-system- or language-dependent features used?
10. Is compound or inverse logic used?
11. Has maintainability been considered?

Review Checklist for Coding
1. Is the design properly translated into code?
(The results of the procedural design should be available at this review.)
2. Are there misspellings or typos?
3. Has proper use of language conventions been made?
4. Is there compliance with coding standards for language style, comments, and module prologues?
5. Are any incorrect or ambiguous comments present?
6. Are typing and data declarations proper?
7. Are physical constants correct?
8. Have all items on the design walkthrough checklist been reapplied (as required)?

Review Checklist for Software Testing (Test Plan)
1. Have major test phases been properly identified and sequenced?
2. Has traceability to validation criteria/requirements been established as part of software requirements analysis?
3. Are major functions demonstrated early?
4. Is the test plan consistent with the overall project plan?
5. Has a test schedule been explicitly defined?
6. Are test resources and tools identified and available?
7. Has a test record-keeping mechanism been established?
8. Have test drivers and stubs been identified, and has the work to develop them been scheduled?
9. Has stress testing for the software been specified?

Review Checklist for Software Testing (Test Procedure)
1. Have both white-box and black-box tests been specified?
2. Have all independent logic paths been tested?
3. Have test cases been identified and listed with their expected results?
4. Is error handling to be tested?
5. Are boundary values to be tested?
6. Are timing and performance to be tested?
7. Has acceptable variation from expected results been specified?

Review Checklist for Maintenance
1. Have the side effects associated with the change been considered?
2. Has the request for change been documented, evaluated, and approved?
3. Has the change, once made, been documented and reported to interested parties?
4. Have appropriate FTRs been conducted?
5. Has a final acceptance review been conducted to ensure that all software has been properly updated, tested, and replaced?
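The defect amplification figures shown earlier can be reproduced with a short sketch. This is a minimal illustration under an assumed reading of the model (each step's output is the errors that slip through, plus amplified errors, plus newly generated errors, reduced by the step's detection efficiency); the function and variable names are invented for this example.

```python
def errors_passed_on(passed_through: float, amplified: float,
                     amp_factor: float, newly_generated: float,
                     detection_efficiency: float) -> float:
    """Errors leaving one development step: errors that slip through
    unchanged, errors amplified by rework, and newly generated errors,
    reduced by the fraction the step's reviews/tests detect."""
    total = passed_through + amplified * amp_factor + newly_generated
    return total * (1 - detection_efficiency)

# Figures from the slides (no reviews during the design steps):
prelim = errors_passed_on(0, 0, 1.0, 10, 0.0)    # 10 leave preliminary design
detail = errors_passed_on(6, 4, 1.5, 25, 0.0)    # 6 + 6 + 25 = 37 leave detailed design
code   = errors_passed_on(10, 27, 3.0, 25, 0.2)  # 116 before detection; ~93 pass on
                                                 # (the slide reports 94)

# Each testing step then removes 50% of the errors it receives:
integration = errors_passed_on(94, 0, 1.0, 0, 0.5)  # 47
validation  = errors_passed_on(47, 0, 1.0, 0, 0.5)  # ~24
system      = errors_passed_on(24, 0, 1.0, 0, 0.5)  # 12 latent errors
```

Running the model makes the slide's point concrete: with 0% detection during design, errors compound before the (more expensive) testing steps ever see them.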
Formal Technical Review (FTR)
A software quality assurance activity performed by software engineering practitioners in order to:
- Uncover errors in function, logic, or implementation for any representation of the software.
- Verify that the software under review meets its requirements.
- Ensure that the software has been represented according to predefined standards.
- Achieve software that is developed in a uniform manner.
- Make projects more manageable.
The FTR is actually a class of reviews:
- Walkthroughs
- Inspections
- Round-robin reviews
- Other small-group technical assessments of the software

The Review Meeting
Constraints:
- Between three and five people are typically involved.
- Advance preparation should occur, but should require no more than two hours of work from each person.
- The meeting should last less than two hours.
Components:
- Product: a component of software to be reviewed.
- Producer: the individual who developed the product.
- Review leader: appointed by the project leader; evaluates the product for readiness, generates copies of product materials, and distributes them to two or three reviewers.
- Reviewers: spend between one and two hours reviewing the product, making notes, and otherwise becoming familiar with the work.
- Recorder: the individual who records (in writing) all important issues raised during the review.

Review Reporting and Record Keeping
Review Summary Report
- What was reviewed?
- Who reviewed it?
- What were the findings and conclusions?
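The review summary report above answers three questions. As a sketch of how that record keeping might be structured (the slides prescribe no format; all field names here are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ReviewSummaryReport:
    """Captures the three questions a review summary report answers:
    what was reviewed, who reviewed it, and what the findings and
    conclusions were. Field names are illustrative, not a standard."""
    product: str                      # what was reviewed
    reviewers: list                   # who reviewed it
    findings: list = field(default_factory=list)
    conclusion: str = "accept"        # accept / accept provisionally / reject

report = ReviewSummaryReport(
    product="billing module, detailed design v1.2",
    reviewers=["review leader", "reviewer A", "reviewer B", "recorder"],
    findings=["interface to payment service left undefined"],
    conclusion="accept provisionally",
)
```

Keeping the report as structured data rather than free text makes the findings usable as the issues list described next, an action-item checklist the producer works through.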
Review Issues List
- Identifies the problem areas within the product.
- Serves as an action-item checklist that guides the producer as corrections are made.

Guidelines for FTR
- Review the product, not the producer.
- Set an agenda and maintain it.
- Limit debate and rebuttal.
- Enunciate problem areas, but don't attempt to solve every problem that is noted.
- Take written notes.
- Limit the number of participants and insist upon advance preparation.
- Develop a checklist for each product that is likely to be reviewed.
- Allocate resources and time schedules for FTRs.
- Conduct meaningful training for all reviewers.
- Review your earlier reviews (if any).

Reviewer's Preparation
- Be sure that you understand the context of the material.
- Skim all product material to understand the location and format of the information.
- Read the product material and annotate a hard copy.
- Pose your written comments as questions.
- Avoid issues of style.
- Inform the review leader if you cannot prepare.

Results of the Review Meeting
All attendees of the FTR must make a decision:
- Accept the product without further modification.
- Reject the product due to severe errors (and perform another review after corrections have been made).
- Accept the product provisionally (minor corrections are needed, but no further reviews are required).
A sign-off is completed, indicating participation in, and concurrence with, the review team's findings.

Software Reliability
The probability of failure-free operation for a specified time in a specified environment. This can mean very different things for different systems and different users. Informally, reliability is a measure of the users' perception of how well the software provides the services they need:
- Reliability is not an objective measure.
- It must be based on an operational profile.
- It must account for the widely varying consequences of different errors.

Software Reliability Improvements
Software reliability improves when the faults present in the most frequently used portions of the software are removed.
Removing X% of the faults does not necessarily yield an X% improvement in reliability: in a 1987 study by Mills et al., removing 60% of the faults produced only a 3% improvement in reliability. Removing the faults with the most serious consequences is therefore the primary objective.
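The slides define reliability as the probability of failure-free operation for a specified time. One common formalization, assumed here rather than taken from the slides, is the exponential model R(t) = exp(-lambda * t), where the failure rate lambda is estimated from the operational profile:

```python
import math

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """Probability of failure-free operation for `hours`, under the
    (assumed) exponential model with a constant failure rate estimated
    from the system's operational profile."""
    return math.exp(-failure_rate_per_hour * hours)

# E.g. one failure per 1,000 operating hours, over an 8-hour working day:
r = reliability(1 / 1000, 8)   # about 0.992
```

This also makes the Mills et al. result above less surprising: lambda reflects failures experienced in actual use, so removing faults in rarely executed code barely lowers the failure rate, and hence barely improves R(t).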
