Software Engineering Notes PDF
Document Details
New Horizon College, Marathahalli
G.GNANESWARI
Summary
These notes provide an introduction to software engineering for BCA III Year / V Semester students, covering the characteristics of software, software processes, SDLC models, and risk management. The notes were prepared by G.GNANESWARI, an Asst. Professor at NHC, Marathahalli.
Full Transcript
STUDY MATERIAL BCA III YEAR / V SEMESTER SOFTWARE ENGINEERING (BCA502T) Prepared by G.GNANESWARI, Asst. Professor, NHC, Marathahalli.

SOFTWARE ENGINEERING
"Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software, applying engineering techniques."

UNIT - I: Introduction to Software Engineering
Topics: introduction; characteristics of software; introduction to SE, components and goals; software process and process models; characteristics of a software process; software products and types; SDLC models; SE challenges; risk management; professional and ethical responsibility; process visibility.

Software Engineering: a branch of CS that creates practical, cost-effective solutions to computing problems by applying systematic, scientific knowledge. Software is a set of instructions; today it comprises source code, executables, design documents, operations and system manuals, and installation and implementation manuals.

Classification:
- System software: operates the hardware and provides a platform to run software, e.g. operating systems, assemblers, debuggers, compilers and utilities.
- Application software: performs a specific task, e.g. word processors, databases, games.

Essential attributes of good software:
- Maintainability: software should be written in such a way that it can evolve to meet the changing needs of customers. This is a critical attribute because software change is an inevitable consequence of a changing business environment.
- Dependability and security: software dependability includes a range of characteristics including reliability, security and safety. Dependable software should not cause physical or economic damage in the event of system failure, and malicious users should not be able to access or damage the system.
- Efficiency: software should not make wasteful use of system resources such as memory and processor cycles. Efficiency therefore includes responsiveness, processing time, memory utilisation, etc.
- Acceptability: software must be acceptable to the type of users for which it is designed. It must be understandable, usable and compatible with the other systems they use.

Components of SE:
- Software Development Life Cycle (SDLC): the various stages of development.
- Software Quality Assurance (SQA): customer/user satisfaction.
- Software Project Management (SPM): principles of project management.
- Software Management (SM): software maintenance.
- Computer Aided Software Engineering (CASE): automated tool support.

Types of software product:
- Generic: stand-alone systems, commercial off-the-shelf software; must maintain proper interfaces and be flexible. E.g. a word processor.
- Customized: built for a specific user group and controlled by the customer. E.g. air traffic control, a payroll management system.

Software process: process modelling is an aspect of business system modelling which focuses on the flows of information and control through a system. A software process model is an abstract representation of a process. The process models are the waterfall model, the evolutionary model and the spiral model.

Waterfall model: it resembles a cascade and is known as the classic life cycle model. The output of one phase flows as input to the next phase. Phases:
1. Requirement analysis: requirements are documented in the software requirement specification (SRS) document.
2. System and software design: includes the architectural design; abstractions and relationships are designed.
3. Implementation and unit testing: units are implemented and tested.
4. Integration and system testing: programs are integrated and tested as a whole.
5. Operation and maintenance: keep the software operational after delivery.

Advantages: simple and systematic; easy to maintain; provides clarity to software engineers.
Disadvantages: each phase must be frozen before the next begins; difficult to incorporate changes; the product is available only at the last stage.

Iterative waterfall model: introduces feedback paths to the previous phases.
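The defining property of the waterfall model, that each phase's deliverable becomes the next phase's input, can be sketched as a simple function pipeline. This is only an illustration; the deliverable contents are hypothetical placeholders, not real artifacts.

```python
# Sketch of the waterfall model: each phase consumes the previous phase's
# deliverable and produces its own. There is no path backwards, which is
# exactly why changes are hard to incorporate in this model.

def requirements(idea):
    return {"srs": f"SRS for {idea}"}

def design(srs_doc):
    return {"architecture": f"design derived from {srs_doc['srs']}"}

def implement(design_doc):
    return {"code": f"code implementing {design_doc['architecture']}"}

def verify(code_doc):
    return {"report": f"test report for {code_doc['code']}"}

def waterfall(idea):
    # Output of one phase flows as input to the next, phase by phase.
    return verify(implement(design(requirements(idea))))

print(waterfall("library system")["report"])
```

The iterative waterfall model would add feedback edges, i.e. a later phase could hand a corrected deliverable back to an earlier one.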
System Development Life Cycle (SDLC) phases:
- Feasibility: provide the project plan and budget.
- Requirement analysis: study the business needs.
- Design: the crucial stage where the software's overall structure is defined.
- Implementation: programming tools are used.
- Testing: detailed testing is performed, including the interfaces between modules.
- Maintenance: accommodate changes.

Advantages: a linear, systematic and sequential model; proper documentation is produced.
Disadvantages: difficult to define all the requirements at the start; not suitable for changes; inflexible partitioning.

Evolutionary development model: does not require a usable product at the end of each cycle. An initial implementation is developed, exposed to the user, and refined until it becomes an acceptable system.
Characteristics:
- Phases are interleaved.
- Feedback is used throughout the entire process.
- The software product goes through many versions.

Types of ED model:
1. Exploratory development: development is done in parts; new features are added to the product and the process continues until the product is acceptable.
2. Throw-away prototyping: development is done in parts; the prototype gets more refined, but it is then thrown away and actual system development starts from scratch.

Advantages: requirements are not frozen at the beginning; useful when it is impossible to express specifications at the start.
Disadvantages: a poorly structured system (changes are made until the last stage, so the structure may degrade); needs highly skilled software engineers; an invisible process (difficult to produce deliverables after each stage).

Boehm's spiral model: proposed by Boehm, it combines the iterative nature of prototyping with the systematic aspects of the waterfall model, and incorporates risk management. Each loop in the spiral represents a phase of the software process: the innermost loop represents feasibility, the next system requirements, the next design, and finally testing.
Thus it can be described as a risk-driven model: a cyclic approach to incrementally growing a system while decreasing its degree of risk. Each loop is further divided into four sectors:
1. Objective setting: objectives and constraints are identified, a detailed plan is made, and risks are identified.
2. Risk assessment and reduction: the risks are analysed and steps are taken to reduce them.
3. Development and validation: after evaluation, a development model is chosen, guided by the risk factors.
4. Planning: the results are reviewed and plans are made for the next loop.

Advantages: begins by elaborating the objectives; development begins only after the risks are evaluated; encompasses other process models; risks are explicitly assessed and resolved throughout the process.
Disadvantages: a complex model, difficult to follow; mainly applicable to large systems; risk assessment is costly; needs expertise in risk evaluation.

Risk management
RISK is the impact of an event with the potential to influence the achievement of an organization's objectives; in other words, 'potential danger, insecurity, threat or harm from a future event'. Effective risk management requires an informed understanding of the relevant risks, an assessment of their relative priority, and a rigorous approach to monitoring and controlling them. An organization may use risk assumption, risk avoidance, risk retention, risk transfer or any other strategy in the proper management of future events. The objective is to maximize the potential for success and minimize the probability of future losses.

Concept of risk management: risks can come from uncertainty in financial markets, project failures, legal liabilities, credit risk, accidents, natural causes and disasters. An unbiased study of the technical risk management measures adopted and followed will help the management. In insurance practice, insurers can evaluate the risk of insurance policies with much higher accuracy.
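The prioritisation step described above is often quantified as risk exposure = probability of occurrence × potential loss, and risks are then monitored in exposure order. A minimal sketch (the risk list and all the figures are invented for illustration):

```python
# Rank risks by exposure = probability x potential loss.
# The risks and numbers below are hypothetical examples only.
risks = [
    {"name": "key developer leaves",    "probability": 0.3, "loss": 50_000},
    {"name": "requirements change late", "probability": 0.6, "loss": 20_000},
    {"name": "server hardware failure",  "probability": 0.1, "loss": 80_000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["loss"]

# Highest exposure first: these get mitigation attention first.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: exposure {r["exposure"]:,.0f}')
```

Note that the most expensive possible event (the hardware failure) is not the top risk once its low probability is factored in; that is the point of ranking by exposure rather than by loss alone.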
Types of risks:
- Product risk: failure to satisfy the customer's expectations; unsatisfactory functionality of the software; software that is unreliable and fails frequently; failures resulting in major functional damage; low quality software; financial damage.
- Business risk: lower than expected profits, or a loss. It is influenced by sales volume, unit price, input costs, competition, the overall economy, government regulations, etc.
- Internal risk: risk due to events that take place within the organization, such as human factors (strikes, loss of talent), physical factors (fire, theft, damage) and operational factors (access to credit, cost cutting, advertising).
- External risk: risk due to the outside environment, such as economic factors (market risk, pricing), natural factors (floods, etc.) and political factors (government regulations, etc.).

Professional and ethical responsibilities: software engineers are ethically responsible. Issues include:
- Confidentiality.
- Competence: do not accept work outside your level of competence.
- Intellectual property rights.
- Computer misuse: do not use your skills to misuse someone else's system.

Process visibility:
- Waterfall model: good; each activity produces some deliverable.
- Evolutionary model: poor; uneconomic to produce documents during rapid iteration.
- Formal transformations: good; documents must be produced from each phase.
- Reuse-oriented development: moderate; not always possible to produce documents describing reuse.
- Spiral model: good; each segment and each ring of the spiral should produce some document.

Software Engineering
Topics: introduction; systems and their environment; system procurement; the system engineering process; system architecture modelling; human factors; system reliability engineering.

Software engineering is the activity of specifying, implementing, validating, installing and maintaining the system as a whole (interconnected components working together).
Emergent properties are properties of the system as a whole, e.g. the overall weight of the system, the reliability of the system, the usability of the system.
Environment: the environment affects a system's functioning and performance. A system inside another system is known as a subsystem; system hierarchies are levels of systems.

System procurement: acquiring a system for an organization. Deciding on the system specification and architectural design is essential, as is the choice between developing a system from scratch and buying a commercial off-the-shelf (COTS) system.

Model of system procurement:
- System requirement definition: consult with customers and end users; the basic function is described at an abstract level; non-functional system properties are specified, including what the system should not do.
- System design process: analyse and partition the requirements; identify sub-systems that can meet the requirements; assign the requirements to the identified sub-systems; specify the functions provided by each sub-system; define the interfaces provided and expected by each sub-system.
- Sub-system development: developing each sub-system. If a sub-system is a software system, this involves a software process of requirements, design, implementation and so on. COTS systems may not meet the requirements exactly, but they can be modified. Sub-systems are usually developed in parallel.
- System integration: putting the sub-systems together to make a complete system. The big bang method integrates all the sub-systems at the same time; incremental integration does it one sub-system at a time. Since scheduling all the sub-systems to be ready at the same time is usually impossible, incremental integration is preferred, and it reduces cost.
- System installation: installing the system in the environment in which it is intended to operate. Problems include incorrect environment assumptions, human resistance to new systems, the need to coexist with the existing system for some time, physical installation issues and operator training.

Human factors: the user interface is essential for effective system operation.
- Process changes: training is required for workers to cope with the new system, but organizations often face resistance from staff initially.
- Job changes: new and faster systems require workers to change the way they work.
- Organizational changes: changes in the political power structure.

System reliability engineering: the components of a system are interdependent, so a failure in one component can affect the operation of other components. Types of reliability:
- Hardware reliability: the probability of a hardware component failing.
- Software reliability: the probability of a software component producing incorrect output.
- Operator reliability: the probability of an error being made by the operator.

Topics: introduction; functional, non-functional and domain requirements; the Software Requirement Specification (SRS) document; the requirement engineering process; requirement management; requirement management planning; system models.

Software Requirement Analysis and Specification: a software requirement provides a blueprint for the development of a software product. The degree of understanding, accuracy and description provided by the SRS document is directly proportional to the quality of the derived product.

Classification of system requirements:
- Functional requirements describe system services or functions.
- Non-functional requirements are constraints on the system or on the development process.
- User requirements: statements in natural language plus diagrams of the services the system provides and its operational constraints; written for customers.
- System requirements: a structured document setting out detailed descriptions of the system services; written as a contract between client and contractor.

Functional requirements: the functional requirements for a system describe the functionalities or services that the system is expected to provide. They describe how the system should react to particular inputs and how the system should behave in particular situations.
Non-functional requirements: these are constraints on the services or functionalities offered by the system. They include timing constraints, constraints on the development process, standards, etc. They are not directly concerned with the specific functions delivered by the system; they may relate to system properties such as reliability, response time and storage, or define constraints such as the capabilities of I/O devices and the data representations used in system interfaces.

Non-functional requirements classification:
- Product requirements: requirements which specify that the delivered product must behave in a particular way, e.g. execution speed, reliability.
- Organisational requirements: requirements which are a consequence of organisational policies and procedures, e.g. process standards used, implementation requirements.
- External requirements: requirements which arise from factors external to the system and its development process, e.g. interoperability requirements, legislative requirements.

Requirements for the SRS document:
1. It should specify only external system behaviour.
2. It should specify constraints on the implementation.
3. It should be easy to change.
4. It should serve as a reference tool for system maintainers.
5. It should record forethought about the life cycle of the system.
6. It should characterize acceptable responses to undesired events.

Characteristics of an SRS:
- Correct: every requirement included in the SRS represents something required in the final system.
- Complete: everything the software is supposed to do, and the responses of the software to all classes of input data, are specified in the SRS.
- Unambiguous: every requirement stated has one and only one interpretation.
- Verifiable: every specified requirement is verifiable, i.e. there exists a procedure to check that the final software meets the requirement.
- Consistent: no requirement conflicts with another.
- Traceable: each requirement is uniquely identified to a source.
- Modifiable: its structure and style are such that any necessary change can be made easily while preserving completeness and consistency.
- Ranked: for each requirement, its importance and/or stability is indicated.

Components of an SRS:
- Functionality: what is the software supposed to do?
- External interfaces: how does the software interact with people, the system's hardware, other hardware and other software? What assumptions can be made about these external entities?
- Required performance: what is the speed, availability, response time, recovery time of the various software functions, and so on?
- Quality attributes: what are the portability, correctness, maintainability, security and other considerations?
- Design constraints imposed on an implementation: are there required standards, an implementation language, policies for database integrity, resource limits, operating environment(s) and so on?

What the SRS should not include:
- Project development plans (cost, staffing, schedules, methods, tools, etc.): the lifetime of the SRS is until the software is made obsolete, whereas the lifetime of development plans is much shorter.
- Product assurance plans (configuration management, verification & validation, test plans, quality assurance, etc.): these have different audiences and different lifetimes.
- Designs: requirements and designs have different audiences, and analysis and design are different areas of expertise, i.e. requirements analysts shouldn't do design, except where the application domain constrains the design, e.g. limited communication between different subsystems for security reasons.
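Two of the SRS characteristics above, traceable and ranked, become mechanically checkable once each requirement is recorded with a unique identifier, a priority and a source. A minimal sketch (the field names and the two example requirements are hypothetical, not from the notes):

```python
# Sketch: SRS entries as records, so traceability (unique IDs) and
# ranking (priorities) can be checked automatically. All names are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str    # unique identifier -> traceability
    kind: str      # "functional" or "non-functional"
    text: str
    priority: int  # 1 = essential, larger = less critical -> ranking
    source: str    # the stakeholder the requirement is traced to

srs = [
    Requirement("FR-01", "functional",
                "Dispense cash up to the daily limit", 1, "counter staff"),
    Requirement("NFR-01", "non-functional",
                "Respond to card insertion within 2 s", 2, "bank customers"),
]

# Traceability check: every identifier must be unique.
ids = [r.req_id for r in srs]
assert len(ids) == len(set(ids)), "duplicate requirement IDs break traceability"

# Ranking: list the requirements most important first.
for r in sorted(srs, key=lambda r: r.priority):
    print(r.req_id, "-", r.text)
```

Specialist requirements management tools do essentially this bookkeeping at scale, with change histories and links between dependent requirements added on top.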
Requirement Engineering Process: the process of understanding and defining what services are required from the system and identifying the constraints on the system's operation and development. It includes a feasibility study, requirements elicitation and analysis, requirements specification and requirements validation, with requirements management running alongside:
- Feasibility study: of current hardware and software technologies, cost effectiveness, etc.
- Requirement elicitation and analysis: involves the development of one or more system models and prototypes.
- Requirement specification: a detailed description in the form of a document.
- Requirement validation: checks the requirements for realism, consistency and completeness.

Feasibility study: a feasibility study decides whether or not the proposed system is worthwhile. The input is an outline description of the system, and the output recommends whether the requirement engineering process should be initiated. It is a cost-benefit analysis involving information collection and report writing; information is extracted from managers, software engineers, technical experts and end users. It may propose the scope, budget and schedule of the system. Questions for people in the organisation: Is the objective clear? Is the system acceptable? Affordable? Feasible? Significant? How is it better than the current system? How will the business be better?

Elicitation and analysis: sometimes called requirements discovery. It involves technical staff working with customers and end users (stakeholders) to find out about the application domain, and it is an iterative process with feedback. The activities involved are:
- Requirement discovery: interacting with stakeholders.
- Requirement classification and organization: using a model of the system architecture to organize subsystems and their relations.
- Requirements prioritization and negotiation: negotiating when there are conflicts among stakeholders.
- Requirement specification: documenting the requirements.
Techniques for requirements elicitation and analysis:
Viewpoint-oriented elicitation: the requirements sources above can be represented as system viewpoints.
- Viewpoints are a way of structuring the requirements to represent the perspectives of different stakeholders; stakeholders and other sources may be classified under different viewpoints.
- This multi-perspective analysis is important, as there is no single correct way to analyse system requirements.

ATM stakeholders (domain and interacting systems): bank customers, representatives of other banks, bank managers, counter staff, database administrators, security managers, the marketing department, hardware and software maintenance engineers, banking regulators. Moreover, requirements may also come from the application domain and from other systems that interact with the application being specified.

Types of viewpoint:
- Interactor viewpoints: people or other systems that interact directly with the system. In an ATM, the customers and the account database are interactor viewpoints.
- Indirect viewpoints: stakeholders who do not use the system themselves but who influence the requirements. In an ATM, management and security staff are indirect viewpoints.
- Domain viewpoints: domain characteristics and constraints that influence the requirements. In an ATM, an example would be standards for inter-bank communications.

Requirements discovery and scenarios: people usually find it easier to relate to real-life examples than to abstract descriptions. They can understand and critique a scenario of how they might interact with the system. Scenarios can be particularly useful for adding detail to an outline requirements description:
- they are descriptions of example interaction sessions;
- each scenario covers one or more possible interactions.
Several forms of scenario have been developed, each of which provides different types of information at different levels of detail about the system.
Scenarios are real-life examples of how a system can be used. They should include:
- a description of the starting situation;
- a description of the normal flow of events;
- a description of what can go wrong;
- information about other concurrent activities;
- a description of the state when the scenario finishes.

Social and organisational factors: software systems are used in a social and organisational context, which can influence or even dominate the system requirements. Social and organisational factors are not a single viewpoint but are influences on all viewpoints. A good analyst should immerse him- or herself in the working environment where the system will be used and must be sensitive to these factors; currently there is no systematic way to tackle their analysis.

Ethnography: an expert in a social science spends considerable time observing and analysing how people actually work. In this way it is possible to discover implicit system requirements. People do not have to explain or articulate details about their work; in fact, people often find it difficult to describe their work because it is second nature to them. An unbiased observer may be well placed to find out social and important organisational factors that are not obvious to the individuals involved. Ethnographic studies have shown that work is usually richer and more complex than suggested by simple system models.

Scope of ethnography: to summarize, ethnography is particularly effective at discovering two types of requirement:
- requirements that are derived from the way people actually work, rather than the way in which process definitions suggest they ought to work;
- requirements that are derived from cooperation and awareness of other people's activities.

Requirements validation: concerned with demonstrating that the requirements define the system that the customer really wants.
Requirements validation overlaps with analysis in that it is concerned with finding problems with the requirements. Requirements error costs are high, so validation is very important:
- fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error;
- a change to the requirements usually means that the system design and implementation must also be changed, and the testing performed again.

Checks required during the requirements validation process:
- Validity checks: does the system provide the functions which best support the customer's needs? (Other functions may be identified by further analysis.)
- Consistency checks: are there any requirements conflicts?
- Completeness checks: are all the requirements needed to define all the functions required by the customer sufficiently specified?
- Realism checks: can the requirements be implemented given the available budget, technology and schedule?
- Verifiability: can the requirements be checked?

Requirements validation techniques (these can be used individually or in conjunction):
- Requirements reviews: systematic manual analysis of the requirements performed by a team of reviewers.
- Prototyping: using an executable model of the system to check the requirements (covered in later chapters).
- Test-case generation: developing tests for requirements to check testability. If a test is difficult to design, usually the related requirements are difficult to implement.

Requirements reviews: a requirements review is a manual process that should involve both client and contractor staff in discussion. Regular reviews should be held while the requirements definition is being formulated. Reviews may be formal (with completed documents) or informal. Good communication between developers, customers and users can resolve problems at an early stage.
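The test-case generation technique mentioned above can be as simple as turning a requirement statement into an executable check. The requirement and the function below are hypothetical examples, not from the notes:

```python
# Sketch of test-case generation: the hypothetical requirement
# "a withdrawal must be refused when it exceeds the balance" is turned
# into directly executable test cases. If such a test were hard to
# write, the requirement itself would likely be hard to verify.

def withdraw(balance, amount):
    """Return the new balance, refusing overdrafts."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Test cases derived from the requirement:
assert withdraw(100, 40) == 60           # normal flow of events
try:
    withdraw(100, 150)                    # what can go wrong
    raise AssertionError("overdraft was not refused")
except ValueError:
    pass
print("requirement is testable")
```

Writing such checks while the SRS is still being drafted exposes ambiguity early, e.g. whether a withdrawal exactly equal to the balance should succeed.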
Formal and informal reviews:
- Informal reviews simply involve contractors discussing requirements with as many system stakeholders as possible.
- In formal reviews, the development team should "take" the client through the system requirements, explaining the implications of each requirement.

Checks that should be performed by reviews:
- Verifiability: is the requirement realistically testable?
- Comprehensibility: is the requirement properly understood?
- Traceability: is the origin of the requirement clearly stated? It might be necessary to go back to the source of a requirement to assess the impact of a change.
- Adaptability: can the requirement be changed without a large impact on other requirements?

Requirements management: the requirements for large systems are frequently changing; during the software process, the stakeholders' understanding of the problem is constantly changing. Requirements management is the process of managing changing requirements during the requirements engineering process and system development. Requirements are inevitably incomplete and inconsistent:
- new requirements emerge during the process as business needs change and a better understanding of the system is developed;
- different viewpoints have different requirements, and these are often contradictory.

It is hard for users and customers to anticipate what effects the new system will have on the organization. Often, new requirements emerge only once the system has been deployed: when end users have experience of the new system, they discover new needs and priorities.

Requirements change: the priority of requirements from different viewpoints changes during the development process, and conflicts inevitably have to converge in a compromise. System customers may specify requirements from a business perspective that conflict with end-user requirements.
The business and technical environment of the system also changes during its development: new hardware, new interfaces, new business priorities, new regulations, etc.

Requirement changes and requirements management: requirements management is the process of identifying, understanding and controlling changes to system requirements. It is useful to keep track of individual requirements and maintain links between dependent requirements so that you can assess the impact of requirements changes. The requirements management process should start as soon as a draft version of the requirements document is available.

Requirements evolution: enduring and volatile requirements. From an evolution perspective, requirements fall into two classes:
- Enduring requirements: stable requirements derived from the core activity of the customer organisation, relating directly to the domain of the system. E.g., in a hospital, requirements will always relate to doctors, nurses, etc. These requirements may be derived from domain conceptual models that show entities and the relations between them.
- Volatile requirements: requirements which change during development or when the system is in use. E.g., in a hospital, requirements derived from healthcare policy.

A possible classification of volatile requirements:
- Mutable requirements: requirements that change because of changes to the environment in which the organisation is operating. For example, in hospital systems, the funding of patient care may change and thus require different treatment information to be collected.
- Emergent requirements: requirements that emerge as the customer's understanding of the system develops during system development. The design process may reveal new emergent requirements.
- Consequential requirements: requirements that result from the introduction of the computer system. Introducing the computer system may change the organisation's processes and open up new ways of working which generate new system requirements.
- Compatibility requirements: requirements that depend on the particular systems or business processes within an organisation. As these change, the compatibility requirements on the commissioned or delivered system may also have to evolve.

Requirements management planning: since the RE process is very expensive, it is useful to establish a plan. During the requirements engineering process, you have to plan:
- Requirements identification: how requirements are individually identified; they should be uniquely identified in order to maintain traceability.
- A change management process: the process followed when requirements change; the set of activities that estimate the impact and cost of changes.
- Traceability policies: the policy for managing the amount of information about relationships between requirements, and between the system design and requirements, that should be maintained (e.g., in a database).
- CASE tool support: the tool support required to help manage requirements change; tools can range from specialist requirements management systems to simple database systems.

System modelling: system modelling helps the analyst to understand the functionality of the system, and models are used to communicate with customers. Different models present the system from different perspectives:
- an external perspective, showing the system's context or environment;
- a behavioural perspective, showing the behaviour of the system;
- a structural perspective, showing the system or data architecture.

Model types:
- Data processing models, showing how data is processed at different stages.
- Composition models, showing how entities are composed of other entities.
- Architectural models, showing the principal sub-systems.
- Classification models, showing how entities have common characteristics.
Stimulus/response model showing the system's reaction to events. Context models: Context models are used to illustrate the operational context of a system - they show what lies outside the system boundaries. Social and organisational concerns may affect the decision on where to position system boundaries. Architectural models show the system and its relationship with other systems. The context of an ATM system Behavioural models: Behavioural models are used to describe the overall behaviour of a system. Two types of behavioural model are: Data processing models that show how data is processed as it moves through the system; State machine models that show the system's response to events. These models show different perspectives, so both of them are required to describe the system's behaviour. Data-processing models: Data flow diagrams (DFDs) may be used to model the system's data processing. These show the processing steps as data flows through a system. DFDs are an intrinsic part of many analysis methods. They use a simple and intuitive notation that customers can understand, and they show the end-to-end processing of data. Data flow diagrams: DFDs model the system from a functional perspective. Tracking and documenting how the data associated with a process flows through the system helps to develop an overall understanding of the system. Data flow diagrams may also be used to show the data exchange between a system and other systems in its environment. State machine models: These model the behaviour of the system in response to external and internal events. They show the system's responses to stimuli, so they are often used for modelling real-time systems. State machine models show system states as nodes and events as arcs between these nodes. When an event occurs, the system moves from one state to another. Statecharts are an integral part of the UML and are used to represent state machine models. Microwave oven model Semantic data models: Used to describe the logical structure of data processed by the system. 
An entity-relation-attribute model sets out the entities in the system, the relationships between these entities and the entity attributes. Widely used in database design. Can readily be implemented using relational databases. No specific notation is provided in the UML, but objects and associations can be used. Object models Object models describe the system in terms of object classes and their associations. An object class is an abstraction over a set of objects with common attributes and the services (operations) provided by each object. Various object models may be produced: o Inheritance models; o Aggregation models; o Interaction models. Object models: Natural ways of reflecting the real-world entities manipulated by the system. More abstract entities are more difficult to model using this approach. Object class identification is recognised as a difficult process requiring a deep understanding of the application domain. Object classes reflecting domain entities are reusable across systems. Inheritance models Organise the domain object classes into a hierarchy. Classes at the top of the hierarchy reflect the common features of all classes. Object classes inherit their attributes and services from one or more super-classes. These may then be specialised as necessary. Class hierarchy design can be a difficult process if duplication in different branches is to be avoided. Library class hierarchy Object aggregation An aggregation model shows how classes that are collections are composed of other classes. Aggregation models are similar to the part-of relationship in semantic data models. Object behaviour modelling A behavioural model shows the interactions between objects to produce some particular system behaviour that is specified as a use-case. Sequence diagrams (or collaboration diagrams) in the UML are used to model interaction between objects. 
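The state machine models discussed above (the microwave oven model in particular) can be sketched directly in code as a transition table. This is a minimal illustrative sketch; the state and event names are simplified assumptions, not the textbook's actual model.

```python
# Minimal state machine sketch in the spirit of the microwave oven model.
# States are nodes; (state, event) pairs map to the next state (the arcs).
TRANSITIONS = {
    ("waiting", "door_open"): "door_open",
    ("door_open", "door_closed"): "waiting",
    ("waiting", "set_time"): "ready",
    ("ready", "start"): "cooking",
    ("cooking", "timer_done"): "waiting",
    ("cooking", "door_open"): "door_open",
}

def next_state(state, event):
    """Return the new state for an event; stay in place for invalid events."""
    return TRANSITIONS.get((state, event), state)

state = "waiting"
for event in ["set_time", "start", "timer_done"]:
    state = next_state(state, event)
print(state)  # cooking completes and the oven returns to "waiting"
```

A table like this makes the model easy to check against a statechart: every arc in the diagram should appear as exactly one entry.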
Unit - II Software Prototyping Software Design Introduction This technique is used to reduce cost and risk, because requirements engineering is error-prone: of all defects, about 56% originate in the requirements, and up to 83% are introduced during the requirements and design stages. Early user participation helps in shaping and evaluating system functionality. Feedback refines the emerging system, providing a working version that is ready for testing. "Prototyping is a technique for providing a reduced functionality or a limited performance version of a software system early in development" Need for prototyping in software development Prototyping is required when it is difficult to obtain exact requirements. The user keeps giving feedback, and once the user is satisfied a report is prepared. Once this process is over, the SRS is prepared. Now any model can be used for development. Prototyping will expose functional and behavioural aspects as well as implementation. Process of Prototyping It takes the software's functional specification as input, which is simulated, analyzed or directly executed. User evaluations can then be incorporated as feedback to refine the emerging specification and design. A continual refining of the input specification is done. The phases of prototype development are: Establishing prototyping objectives Defining prototype functionality Developing a prototype Evaluating the prototype Prototyping process: [Figure: establish prototype objectives → define prototype functionality → develop prototype → evaluate prototype, producing in turn a prototyping plan, an outline definition, an executable prototype and an evaluation report] Users point out defects and offer suggestions for improvement. This increases the flexibility of the development process. Prototyping Model The prototyping model is a system development method (SDM) in which a prototype is built, tested and then reworked as necessary until an acceptable prototype is finally achieved. It is an iterative, trial-and-error process that takes place between the developers and the users. 
Prototyping model: It is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. The goal is to provide a system with overall functionality. Types of prototyping approach There are two variants of prototyping: (i) throwaway prototyping and (ii) evolutionary prototyping. Throwaway prototyping is used with the objective that the prototype will be discarded after the requirements have been identified. In evolutionary prototyping, the idea is that the prototype will eventually be converted into the final system. o Gradually, increments are made to the prototype by taking into consideration the feedback of clients and users. Approaches to prototyping: [Figure: from outline requirements, evolutionary prototyping leads to a delivered system, while throw-away prototyping leads to an executable prototype plus a system specification] Evolutionary prototyping [Figure: develop abstract specification → build prototype system → use prototype system → if the system is adequate, deliver it; otherwise repeat] Evolutionary prototyping: It is the only way to develop a system where it is difficult to establish a detailed system specification. But this approach has the following limitations: (i) The prototype evolves so quickly that it is not cost effective to produce system documentation. (ii) Continual changes tend to corrupt the structure of the prototype system, so maintenance is likely to be difficult and costly. Throw-away prototyping: The principal function of the prototype is to clarify the requirements. After evaluation the prototype is thrown away, as shown in the figure. Customers and end users should resist the temptation to turn the throwaway prototype into a delivered system. 
[Figure: throw-away prototyping — outline requirements → develop prototype → evaluate prototype → specify system; reusable components then feed develop software → validate software → delivered software system] The reasons for this are (limitations): (i) Important system characteristics such as performance, security and reliability may have been ignored during prototype development so that a rapid implementation could be developed. It may be impossible to tune the prototype to meet these non-functional requirements. (ii) The changes made during prototype development will probably have degraded the system structure, so maintenance will be difficult and expensive. Prototyping techniques Various techniques may be used for rapid development: (i) Dynamic high-level language development (ii) Database programming (iii) Component and application assembly These are not exclusive techniques - they are often used together. Visual programming is an inherent part of most prototype development systems. Dynamic high-level languages Languages which include powerful data management facilities. They need a large run-time support system. Not normally used for large system development. Some languages offer excellent UI development facilities. Some languages have an integrated support environment whose facilities may be used in the prototype. Database programming languages Domain-specific languages for business systems, based around a database management system. Normally include a database query language, a screen generator, a report generator and a spreadsheet. 
May be integrated with a CASE toolset. The language + environment is sometimes known as a fourth-generation language (4GL). Cost-effective for small to medium sized business systems. Database programming: [Figure: a fourth-generation language combines an interface generator, a spreadsheet, a DB programming language and a report generator on top of a database management system] Component and application assembly: Prototypes can be created quickly from a set of reusable components plus some mechanism to 'glue' these components together. The composition mechanism must include control facilities and a mechanism for component communication. The system specification must take into account the availability and functionality of existing components. User interface prototyping It is impossible to pre-specify the look and feel of a user interface in an effective way, so prototyping is essential. UI development consumes an increasing part of overall system development costs. User interface generators may be used to 'draw' the interface and simulate its functionality with components associated with interface entities. Web interfaces may be prototyped using a web site editor. Software design Introduction: It is a process to transform user requirements into some suitable form which will help coding and implementation. It is the first step we take from problem to solution. Good design is the key to engineering. "Software design is the process of defining the architecture, components, interfaces and characteristics of a system and planning for a solution to the problem" Basic Design Process: The design process develops several models of the software system at different levels of abstraction. o The starting point is an informal "boxes and arrows" design o Add information to make it more consistent and complete o Provide feedback to earlier designs for improvement Design Phases: Architectural design: Identify sub-systems. Abstract specification: Specify sub-systems. Interface design: Describe sub-system interfaces. 
Component design: Decompose sub-systems into components. Data structure design: Design data structures to hold problem data. Algorithm design: Design algorithms for problem functions. Phases in the Design Process Design principles: The design process is a sequence of steps that enable the designer to describe all aspects of the software to be built. Software design follows a set of iterative steps. The principles of design are: Problem partitioning Abstraction Modularity Top-down or bottom-up design Problem partitioning A complex program is divided into sub-programs. E.g., 3 partitions: 1. Input 2. Data transformation 3. Output Advantages: Easier to test Easier to maintain Propagation of fewer side effects Easier to add new features Abstraction Abstraction is a method of describing a program function without exposing its details. Types: 1. Data abstraction: A named collection of data that describes a data object. A data abstraction for a door would be a set of attributes that describes the door (e.g. door type, swing direction, weight, dimensions). 2. Procedural abstraction: A named sequence of instructions that has a specific and limited function, e.g. the word OPEN for a door. 3. Control abstraction: It controls the program without specifying internal details, e.g. "the room is stuffy". Modularity: Modularity is a logical partitioning of the software design that allows complex software to be managed for purposes of implementation and maintenance. Modules can be compiled and stored separately in a library and can be included in the program whenever required. 
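The door example above combines data abstraction (a named set of attributes) with procedural abstraction (OPEN names an operation without exposing its steps). A minimal sketch, with illustrative attribute names assumed for this example:

```python
# Data abstraction: a Door is described only by a named set of attributes.
# Procedural abstraction: open() names an operation; callers never see its steps.
class Door:
    def __init__(self, door_type, swing_direction, weight_kg, dimensions):
        self.door_type = door_type            # e.g. "panel"
        self.swing_direction = swing_direction  # e.g. "inward"
        self.weight_kg = weight_kg
        self.dimensions = dimensions          # (height, width) in metres
        self.is_open = False

    def open(self):
        # The internal sequence of steps is hidden behind the name OPEN.
        self.is_open = True

    def close(self):
        self.is_open = False

d = Door("panel", "inward", 25, (2.0, 0.9))
d.open()
print(d.is_open)  # True
```

Clients depend only on the abstraction (the names), so the internal representation can change without rippling outward, which is exactly the maintainability argument made for modularity below.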
Modularity: Criteria to evaluate a design method with respect to its modularity: Modular decomposability The complexity of the overall problem can be reduced if the design method provides a systematic mechanism to decompose a problem into sub-problems. Modular understandability A module should be understandable as a standalone unit (no need to refer to other modules). Modularity: Modular continuity If small changes to the system requirements result in changes to individual modules, rather than system-wide changes, the impact of side effects will be minimized. Modular protection If an error occurs within a module then those errors are localized and not spread to other modules. Design strategies The most commonly used software design strategies are: Functional design Object-oriented design Functional design: The system is designed from a functional viewpoint. The functions are designed as actions such as scan, build, analyze, generate, etc. Object-oriented design: The system is viewed as a collection of objects. Objects are usually members of an object class whose definition defines the attributes and operations of class members. Design quality A good design leads to efficient code and adapts to changes - adding new functionality and modifying existing functionality. Design quality is based on these quality characteristics: Cohesion Coupling Understandability Adaptability Cohesion It is a measure of the closeness of the relationship between a unit's components. Closely related components are encapsulated into a single unit. The various levels of cohesion are: Coincidental cohesion: Components are not related, but are bundled together. Logical: Components that perform similar functions are put together. Temporal: Components whose functions are activated at a particular time are grouped together. Communicational: Components that operate on the same input data are grouped. Procedural: Components are grouped based on the procedure they belong to. Sequential: Components are grouped based on a sequence in which the output of one is the input of the next. 
Functional: Components that are necessary for a single function are grouped together. Coupling It measures the strength of the interconnections between components. The strength depends on their interdependence. Tight coupling: Components have very strong interconnections because they share variables and the program units are dependent on each other. Loose coupling: Components are independent, which in turn reduces the ripple effect (one change leading to another). [Figure: tight coupling — modules A, B, C and D all access a shared data area; loose coupling — modules communicate only through their interfaces] Domain-specific architectures Architectural models which are specific to some application domain. Two types of domain-specific model: Generic models, which are abstractions from a number of real systems and which encapsulate the principal characteristics of these systems. Reference models, which are more abstract, idealised models. They provide a means of conveying information about that class of system and of comparing different architectures. Generic models are usually bottom-up models; reference models are top-down models. Generic models The compiler model is a well-known example, although other models exist in more specialised application domains: a lexical analyser, syntax analyser, semantic analyser and code generator, sharing a symbol table and a syntax tree. The generic compiler model may be organised according to different architectural models. [Figure: compiler model — lexical analysis, syntactic analysis, semantic analysis and code generation, all using the symbol table] Language processing system Reference architectures Reference models are derived from a study of the application domain rather than from existing systems. May be used as a basis for system implementation or to compare different systems. 
It acts as a standard against which systems can be evaluated. The OSI model is a layered model for communication systems with seven layers - 7 Application, 6 Presentation, 5 Session, 4 Transport, 3 Network, 2 Data link, 1 Physical - running over the communications medium. OSI reference model OBJECT ORIENTED DESIGN FUNCTION ORIENTED DESIGN USER INTERFACE DESIGN Unit - III Object-oriented development: Object-oriented analysis, design and programming are related but distinct. OOA is concerned with developing an object model of the application domain. OOD is concerned with developing an object-oriented system model to implement requirements. OOP is concerned with realising or coding an OOD using an OO programming language such as Java or C++. Objects and object classes Objects are entities in a software system which represent instances of real-world and system entities. Object classes are templates for objects. They may be used to create objects. Object classes may inherit attributes and services from other object classes. Objects - Definition An object is an entity which has a state and a defined set of operations which operate on that state. The state is represented as a set of object attributes. The operations associated with the object provide services to other objects (clients), which request these services when some computation is required. Objects are created according to some object class definition. An object class definition serves as a template for objects. It includes declarations of all the attributes and services which should be associated with an object of that class. 
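The object and object-class definitions above can be illustrated with a minimal sketch. The Employee attributes and services here are assumptions chosen for illustration, not the textbook's exact class:

```python
# An object class as a template: attribute declarations (state) plus the
# services (operations) the object offers to client objects.
class Employee:
    def __init__(self, name, salary):
        self.name = name      # state
        self.salary = salary  # state

    # Services that operate on the object's state
    def raise_salary(self, percent):
        self.salary += self.salary * percent / 100

    def summary(self):
        return f"{self.name}: {self.salary:.2f}"

e = Employee("Alice", 30000)  # an object created from the class template
e.raise_salary(10)
print(e.summary())  # Alice: 33000.00
```

Note that clients call `raise_salary` rather than writing to `salary` directly, which is the request-a-service model of object communication described above.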
Employee object class Generalisation and inheritance Objects are members of classes which define attribute types and operations. Classes may be arranged in a class hierarchy where one class (a super-class) is a generalisation of one or more other classes (sub-classes). A sub-class inherits the attributes and operations from its super-class and may add new methods or attributes of its own. A generalisation hierarchy Advantages & Disadvantages of inheritance It is an abstraction mechanism which may be used to classify entities. It is a reuse mechanism at both the design and the programming level. The inheritance graph is a source of organisational knowledge about domains and systems. o Object classes are not self-contained: they cannot be understood without reference to their super-classes. o Designers have a tendency to reuse the inheritance graph created during analysis, which can lead to significant inefficiency. o The inheritance graphs of analysis, design and implementation have different functions and should be separately maintained. Inheritance and OOD There are differing views as to whether inheritance is fundamental to OOD. o View 1. Identifying the inheritance hierarchy or network is a fundamental part of object-oriented design. Obviously this can only be implemented using an OOPL. o View 2. Inheritance is a useful implementation concept which allows reuse of attribute and operation definitions. Identifying an inheritance hierarchy at the design stage places unnecessary restrictions on the implementation. Inheritance introduces complexity, and this is undesirable, especially in critical systems. An association model Relationships are denoted using a line that is optionally annotated with information about the association. Concurrent objects The nature of objects as self-contained entities makes them suitable for concurrent implementation, where execution takes place as a parallel process. 
The message-passing model of object communication can be implemented directly if objects are running on separate processors in a distributed system. Types: Servers - a server object suspends itself and waits for a request to serve. Active objects - an active object never suspends itself. An object-oriented design process Step 1: Analyze the project; define the context and modes of use of the system. Step 2: Design the system architecture. Step 3: Identify the principal system objects. Step 4: Generate or develop design models (known as refinement of the architecture). Step 5: Specify suitable object interfaces. Weather system description A weather data collection system is required to generate weather maps on a regular basis using data collected from remote, unattended weather stations and other data sources such as weather observers, balloons and satellites. Weather stations transmit their data to the area computer in response to a request from that machine. The area computer validates the collected data and integrates it with the data from different sources. The integrated data is archived and, using data from this archive and a digitised map database, a set of local weather maps is created. Maps may be printed for distribution on a special-purpose map printer or may be displayed in a number of different formats. Weather station description A weather station is a package of software-controlled instruments which collects data, performs some data processing and transmits this data for further processing. The instruments include air and ground thermometers, an anemometer, a wind vane, a barometer and a rain gauge. Data is collected every five minutes. When a command is issued to transmit the weather data, the weather station processes and summarises the collected data. The summarised data is transmitted to the mapping computer when a request is received. 
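The weather station description above can be sketched as an object that collects readings and summarises them on request. This is an illustrative sketch only: the class name, the single temperature instrument and the summary format are assumptions, not the book's actual design.

```python
# Sketch of the weather-station behaviour described above: collect readings,
# then summarise on request for transmission to the mapping computer.
class WeatherStation:
    def __init__(self, identifier):
        self.identifier = identifier
        self.temperatures = []  # one instrument kept, for brevity

    def collect(self, temperature):
        # In the real system this would run every five minutes, driven by
        # the instrument package.
        self.temperatures.append(temperature)

    def summarise(self):
        # Summarised data returned when a transmit request is received.
        t = self.temperatures
        return {"id": self.identifier, "min": min(t), "max": max(t),
                "mean": sum(t) / len(t)}

ws = WeatherStation("WS-21")
for reading in [12.0, 14.0, 10.0]:
    ws.collect(reading)
print(ws.summarise())  # {'id': 'WS-21', 'min': 10.0, 'max': 14.0, 'mean': 12.0}
```

Separating collection from summarisation mirrors the description: raw data stays inside the station, and only the processed summary crosses the interface.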
Layered architecture System context and models of use Develop an understanding of the relationships between the software being designed and its external environment. System context o A static model that describes the other systems in the environment. Use a subsystem model to show other systems. The following slide shows the systems around the weather station system. Model of system use o A dynamic model that describes how the system interacts with its environment. Use use-cases to show the interactions. Subsystems in the weather mapping system: Object identification Identifying objects (or object classes) is the most difficult part of object-oriented design. There is no 'magic formula' for object identification. It relies on the skill, experience and domain knowledge of system designers. Object identification is an iterative process. You are unlikely to get it right first time. Approaches to identification Use a grammatical approach based on a natural language description of the system (used in the HOOD method). Base the identification on tangible things in the application domain. Use a behavioural approach and identify objects based on what participates in what behaviour. Use a scenario-based analysis. The objects, attributes and methods in each scenario are identified. Weather station object classes Ground thermometer, Anemometer, Barometer o Application domain objects that are 'hardware' objects related to the instruments in the system. Weather station o The basic interface of the weather station to its environment. 
It therefore reflects the interactions identified in the use-case model Weather data o Encapsulates the summarised data from the instruments Weather station object classes A function-oriented view of design Functional design process Data-flow design o Model the data processing in the system using data-flow diagrams Structural decomposition o Model how functions are decomposed to sub-functions using graphical structure charts Detailed design o The entities in the design and their interfaces are described in detail. These may be recorded in a data dictionary and the design expressed using a PDL Explain in detail giving your project DFD as an example. Design principles User familiarity o The interface should be based on user-oriented terms and concepts rather than computer concepts. For example, an office system should use concepts such as letters, documents, folders etc. rather than directories, file identifiers, etc. Consistency o The system should display an appropriate level of consistency. Commands and menus should have the same format, command punctuation should be similar, etc. Minimal surprise o If a command operates in a known way, the user should be able to predict the operation of comparable commands Design principles Recoverability o The system should provide some resilience to user errors and allow the user to recover from errors. This might include an undo facility, confirmation of destructive actions, 'soft' deletes, etc. User guidance o Some user guidance such as help systems, on-line manuals, etc. should be supplied User diversity o Interaction facilities for different types of user should be supported. For example, some users have seeing difficulties and so larger text should be available User-system interaction Two problems must be addressed in interactive systems design o How should information from the user be provided to the computer system? o How should information from the computer system be presented to the user? 
User interaction and information presentation may be integrated through a coherent framework such as a user interface metaphor. Interaction styles Command language Form fill-in Natural language Menu selection Direct manipulation Command interfaces The user types commands to give instructions to the system, e.g. UNIX. May be implemented using cheap terminals. Easy to process using compiler techniques. Commands of arbitrary complexity can be created by command combination. Concise interfaces requiring minimal typing can be created. Problems with command interfaces Users have to learn and remember a command language. Command interfaces are therefore unsuitable for occasional users. Users make errors in commands, so an error detection and recovery system is required. System interaction is through a keyboard, so typing ability is required. Command languages Often preferred by experienced users because they allow for faster interaction with the system. Not suitable for casual or inexperienced users. May be provided as an alternative to menu commands (keyboard shortcuts). In some cases, a command language interface and a menu-based interface are supported at the same time. Form-based interface [Figure: a 'new book' data-entry form with fields for title, ISBN, author, price, publication date, publisher, edition, number of copies, classification, loan status, date of purchase and order status] Natural language interfaces The user types a command in a natural language. Generally, the vocabulary is limited and these systems are confined to specific application domains (e.g. 
timetable enquiries) NL processing technology is now good enough to make these interfaces effective for casual users but experienced users find that they require too much typing Control panel interface Menu systems Users make a selection from a list of possibilities presented to them by the system The selection may be made by pointing and clicking with a mouse, using cursor keys or by typing the name of the selection May make use of simple-to-use terminals such as touchscreens Advantages of menu systems Users need not remember command names as they are always presented with a list of valid commands Typing effort is minimal User errors are trapped by the interface Context-dependent help can be provided. The user’s context is indicated by the current menu selection Problems with menu systems Actions which involve logical conjunction (and) or disjunction (or) are awkward to represent Menu systems are best suited to presenting a small number of choices. If there are many choices, some menu structuring facility must be used Experienced users find menus slower than command language Information presentation Static information o Initialised at the beginning of a session. 
It does not change during the session. o May be either numeric or textual. Dynamic information o Changes during a session, and the changes must be communicated to the system user. o May be either numeric or textual. Alternative information presentations [Figure: the same data set - Jan 2842, Feb 2851, Mar 3164, April 2789, May 1273, June 2835 - shown both as a table of numbers and as a bar chart] Direct manipulation advantages Users feel in control of the computer and are less likely to be intimidated by it. User learning time is relatively short. Users get immediate feedback on their actions, so mistakes can be quickly detected and corrected. Direct manipulation problems The derivation of an appropriate information space model can be very difficult. Given that users have a large information space, what facilities for navigating around that space should be provided? Direct manipulation interfaces can be complex to program and make heavy demands on the computer system. USER GUIDANCE Refers to error messages, alarms, prompts, labels, etc. It covers system messages, documentation and online help. Provides faster task performance, fewer errors and greater user satisfaction. Guidance is both preventive and corrective, and may guide users directly or indirectly. Design for consistency and give immediate feedback to users. Interface evaluation Some evaluation of a user interface design should be carried out to assess its suitability. Full-scale evaluation is very expensive and impractical for most systems. Ideally, an interface should be evaluated against a usability specification. However, it is rare for such specifications to be produced. Usability attributes Attribute Description Learnability How long does it take a new user to become productive with the system? Speed of operation How well does the system response match the user's work practice? Robustness How tolerant is the system of user error? Recoverability How good is the system at recovering from user errors? Adaptability How closely is the system tied to a single model of work? 
Simple evaluation techniques Questionnaires for user feedback. Video recording of system use and subsequent tape evaluation. Instrumentation of code to collect information about facility use and user errors. The provision of a gripe button for on-line user feedback. RELIABILITY AND REUSABILITY Unit-IV Reliability metrics Reliability metrics are units of measurement of system reliability. System reliability is measured by counting the number of operational failures and, where appropriate, relating these to the demands made on the system and the time that the system has been operational. A long-term measurement programme is required to assess the reliability of critical systems. Reliability metrics Probability of failure on demand (POFOD) This is the probability that the system will fail when a service request is made. Useful when demands for service are intermittent and relatively infrequent. Appropriate for protection systems where services are demanded occasionally and where there are serious consequences if the service is not delivered. Relevant for many safety-critical systems with exception management components. o E.g., an emergency shutdown system in a chemical plant. Rate of occurrence of failures (ROCOF) Reflects the rate of occurrence of failure in the system. A ROCOF of 0.002 means 2 failures are likely in each 1000 operational time units, e.g. 2 failures per 1000 hours of operation. Relevant for operating systems and transaction processing systems where the system has to process a large number of similar requests that are relatively frequent. o E.g., a credit card processing system, an airline booking system. Mean time to failure (MTTF) A measure of the time between observed failures of the system. It is the reciprocal of ROCOF for stable systems. An MTTF of 500 means that the mean time between failures is 500 time units. Relevant for systems with long transactions, i.e. where system processing takes a long time. 
MTTF should be longer than the transaction length. o E.g., computer-aided design systems where a designer will work on a design for several hours, word processor systems. Steps to a reliability specification For each sub-system, analyse the consequences of possible system failures. From the system failure analysis, partition failures into appropriate classes. For each failure class identified, set out the reliability using an appropriate metric. Different metrics may be used for different reliability requirements. Identify functional reliability requirements to reduce the chances of critical failures. Bank auto-teller system Each machine in a network is used 300 times a day. The bank has 1000 machines. The lifetime of a software release is 2 years. Each machine handles about 200,000 transactions. There are about 300,000 database transactions in total per day. Examples of a reliability spec: Statistical testing Testing software for reliability rather than fault detection. Measuring the number of errors allows the reliability of the software to be predicted. 
- Note that, for statistical reasons, more errors than are allowed for in the reliability specification must be induced.
- An acceptable level of reliability should be specified, and the software tested and amended until that level of reliability is reached.

Reliability modelling
- A reliability growth model is a mathematical model of how the system reliability changes as it is tested and faults are removed.
- Used as a means of reliability prediction by extrapolating from current data.
- Simplifies test planning and customer negotiations.
- Depends on the use of statistical testing to measure the reliability of a system version.

Equal-step reliability growth
Observed reliability growth
- The simple equal-step model does not reflect reality.
- Reliability does not necessarily increase with change, as the change can introduce new faults.
- The rate of reliability growth tends to slow down with time as frequently occurring faults are discovered and removed from the software.
- A random-growth model may be more accurate.
Random-step reliability growth

Growth model selection
- Many different reliability growth models have been proposed.
- There is no universally applicable growth model.
- Reliability should be measured and the observed data fitted to several models.
- The best-fit model should be used for reliability prediction.
Reliability prediction

Programming for Reliability
Programming techniques for building reliable software systems.

Software reliability
- In general, software customers expect all software to be reliable.
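The contrast between the equal-step and random-step growth models described above can be sketched numerically. The step sizes and the random distribution below are invented for illustration; this is not a fitted model from any real measurement programme.

```python
import random

# Illustrative sketch of the two growth models; ROCOF values and step
# sizes are invented, not drawn from real failure data.

def equal_step_growth(initial_rocof, step, repairs):
    """Equal-step model: each fault repair lowers ROCOF by a fixed amount."""
    history = [initial_rocof]
    for _ in range(repairs):
        history.append(max(history[-1] - step, 0.0))
    return history

def random_step_growth(initial_rocof, repairs, seed=1):
    """Random-step model: each repair changes ROCOF by a random amount;
    a change can even make reliability worse by introducing new faults,
    which is why the improvement drawn below is occasionally negative."""
    rng = random.Random(seed)
    history = [initial_rocof]
    for _ in range(repairs):
        improvement = rng.uniform(-0.0005, 0.002)
        history.append(max(history[-1] - improvement, 0.0))
    return history

print(equal_step_growth(0.01, 0.002, 5))  # ROCOF falls in equal steps
print(random_step_growth(0.01, 5))        # irregular, sometimes worsening
```

Fitting observed failure data to several such models and extrapolating from the best fit is what the notes mean by using a growth model for reliability prediction.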
- However, for non-critical applications, they may be willing to accept some system failures.
- Some applications, however, have very high reliability requirements, and special programming techniques must be used to achieve this.

Three strategies
- Fault avoidance: the software is developed in such a way that it does not contain faults.
- Fault detection: the development process is organised so that faults in the software are detected and repaired before delivery to the customer.
- Fault tolerance: the software is designed so that faults in the delivered software do not result in complete system failure.

Fault avoidance
- Current methods of software engineering now allow for the production of fault-free software.
- Fault-free software means software which conforms to its specification. It does NOT mean software which will always perform correctly, as there may be specification errors.
- The cost of producing fault-free software is very high. It is only cost-effective in exceptional situations; it may be cheaper to accept software faults.

Fault-free software development
- Needs a precise (preferably formal) specification.
- Information hiding and encapsulation in software design is essential.
- A programming language with strict typing and run-time checking should be used.
- Extensive use of reviews at all process stages.
- Requires an organizational commitment to quality.
- Careful and extensive system testing is still necessary.

Structured programming
- Programming without gotos.
- While loops and if statements as the only control statements.
- Top-down design.
- Important because it promoted thought and discussion about programming.

Error-prone constructs
- Floating-point numbers: inherently imprecise. The imprecision may lead to invalid comparisons.
- Pointers: pointers referring to the wrong memory areas can corrupt data.
  o Aliasing can make programs difficult to understand and change.
- Dynamic memory allocation: run-time allocation can cause memory overflow.
- Parallelism: can result in subtle timing errors because of unforeseen interaction between parallel processes.
- Recursion: errors in recursion can cause memory overflow.
- Interrupts: interrupts can cause a critical operation to be terminated and make a program difficult to execute.

Information hiding
- Information should only be exposed to those parts of the program which need to access it. This involves the creation of objects or abstract data types which maintain state and operations on that state.

Data typing
- Each program component should only be allowed access to the data which is needed to implement its function.
- The representation of a data type should be concealed from users of that type.
- Ada, Modula-2 and C++ offer direct support for information hiding.

Generics
- Generics are a way of writing generalised, parameterised ADTs and objects which may be instantiated later with particular types.

Fault tolerance
- In critical situations, software systems must be fault tolerant.
- Fault tolerance means that the system can continue in operation in spite of software system failure.
- Even if the system has been demonstrated to be fault-free, it must also be fault tolerant, as there may be specification errors or the validation may be incorrect.

Fault tolerance actions
- Failure detection: the system must detect that a failure has occurred.
- Damage assessment: the parts of the system state affected by the failure must be detected.
- Fault recovery: the system must restore its state to a known safe state.
- Fault repair: the system may be modified to prevent recurrence of the fault. As many software faults are transitory, this is often unnecessary.

Software analogies
- N-version programming: the same specification is implemented in a number of different versions. All versions compute simultaneously and the majority output is selected.
  This is the most commonly used approach, e.g. in the Airbus 320. However, it does not provide fault tolerance if there are specification errors.
- Recovery blocks: versions are executed in sequence. The output which conforms to an acceptance test is selected. The weakness in this scheme is writing an appropriate acceptance test.

N-version programming
- The different system versions are designed and implemented by different teams. It is assumed that there is a low probability that they will make the same mistakes.
- However, there is some empirical evidence that teams commonly misinterpret specifications in the same way and use the same algorithms in their systems.

Recovery blocks
- Force a different algorithm to be used for each version, so they reduce the probability of common errors.
- However, the design of the acceptance test is difficult, as it must be independent of the computation used.
- Like N-version programming, they are susceptible to specification errors.

Exception handling
- A program exception is an error or some unexpected event such as a power failure.
- Exception handling constructs allow for such events to be handled without the need for continual status checking to detect exceptions.
- Using normal control constructs to detect exceptions in a sequence of nested procedure calls needs many additional statements to be added to the program and adds a significant timing overhead.

Defensive programming
- An approach to program development where it is assumed that undetected faults may exist in programs.
- The program contains code to detect and recover from such faults.
- It does NOT require a fault-tolerance controller, yet can provide a significant measure of fault tolerance.
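The two fault-tolerance schemes above, N-version voting and recovery blocks, can be sketched in a few lines. Everything here is an invented stand-in: `primary` is a deliberately faulty version, `alternate` a Newton-method square root, and the acceptance test is the simple check that the result squared is close to the input.

```python
# Hedged sketch of N-version programming and recovery blocks; all names
# below are illustrative, not production fault-tolerance code.

def n_version(x, versions):
    """N-version programming: run every version and select the majority output."""
    results = [v(x) for v in versions]
    return max(set(results), key=results.count)

def recovery_block(x, versions, acceptance_test):
    """Recovery block: execute versions in sequence and return the first
    output that passes the acceptance test."""
    for v in versions:
        result = v(x)
        if acceptance_test(x, result):
            return result
    raise RuntimeError("no version produced an acceptable result")

def primary(x):
    return 0.0          # a deliberately faulty implementation

def alternate(x):
    guess = x or 1.0    # Newton's method square root
    for _ in range(60):
        guess = (guess + x / guess) / 2
    return guess

# Acceptance test: the result squared must be close to the input. Note the
# weakness discussed above: it must be independent of the computation used.
acceptable = lambda x, r: abs(r * r - x) < 1e-6

print(recovery_block(2.0, [primary, alternate], acceptable))  # ~1.414
print(n_version(3, [lambda x: x * 2, lambda x: x * 2, lambda x: x + 1]))  # 6
```

The recovery block rejects the faulty `primary` output and falls back to `alternate`, while the voter simply picks whatever the majority of versions agree on; neither helps if all versions share the same specification error.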
Failure prevention
- Type systems allow many potentially corrupting failures to be detected at compile-time.
- Range checking and exceptions allow another significant group of failures to be detected at run-time.
- State assertions may be developed and included as checks in the program to catch a further class of system failures.

Damage assessment
- Analyse the system state to judge the extent of corruption caused by a system failure.
- Must assess what parts of the state space have been affected by the failure.
- Generally based on 'validity functions' which can be applied to the state elements to assess whether their values are within an allowed range.

Damage assessment techniques
- Checksums are used for damage assessment in data transmission.
- Redundant pointers can be used to check the integrity of data structures.
- Watchdog timers can check for non-terminating processes. If there is no response after a certain time, a problem is assumed.

Fault recovery
- Forward recovery: apply repairs to a corrupted system state.
- Backward recovery: restore the system state to a known safe state.
- Forward recovery is usually application specific; domain knowledge is required to compute possible state corrections.
- Backward error recovery is simpler: details of a safe state are maintained, and this replaces the corrupted system state.

Fault recovery techniques
- Corruption of data coding: error coding techniques which add redundancy to coded data can be used for repairing data corrupted during transmission.
- Redundant pointers: when redundant pointers are included in data structures (e.g. two-way lists), a corrupted list or filestore may be rebuilt if a sufficient number of pointers are uncorrupted. Often used for database and file system repair.

Software reusability
Software reuse
We need to reuse our software assets rather than redevelop the same software.
- Component reuse: not just reusing the code, but also reusing specifications and designs.
- Different levels of software reuse:
  o Application system reuse – an application system may be reused; portable across various platforms.
  o Sub-system reuse – sub-systems are reused.
  o Module or object reuse – a collection of functions.
  o Function reuse – single functions.

Reuse-based software engineering
- Application system reuse: the whole of an application system may be reused, either by incorporating it without change into other systems (COTS reuse) or by developing application families.
- Component reuse: components of an application, from sub-systems to single objects, may be reused.
- Function reuse: software components that implement a single well-defined function may be reused.

Four aspects of software reuse
- Software development with reuse: develop software with reusable components.
- Software development for reuse: components are generalized.
- Generator-based reuse: supported by application generators.
- Application system reuse: implementation strategies.

Software development with reuse
- Reduces the development cost.
- The steps are: design the system architecture, specify components, search for reusable components, incorporate discovered components.
- First search for reusable components and their designs, then reuse them.
- Conditions for reuse:
  o It must be possible to find appropriate components: catalogued and documented components keep the cost of finding them low.
  o Quality and reliability must be maintained.
  o The reuser must understand and adapt the components, and also be aware of the problems they may cause.

The reuse landscape
Reuse is possible at a range of levels, from simple functions to full applications:
  o Design patterns, component-based development, application frameworks, legacy system wrapping, service-oriented systems, product lines, COTS integration, configurable vertical applications, program libraries, program generators, etc.

Generator-based reuse
- Program generators involve the reuse of standard patterns and algorithms.
- These are embedded in the generator and parameterised by user commands.
- A program is then automatically generated.
- Generator-based reuse is possible when domain abstractions and their mapping to executable code can be identified.
- A domain-specific language is used to compose and control these abstractions.
- Types of program generators:
  o Parser generators for language processing
  o Code generators

Application system reuse
- Reusing an entire application by integrating two or more systems.
- COTS product reuse – e.g. Flipkart and firstcry, or a mail server.
- Software product line – the core system is adapted to suit the specific needs of different customers.

Software product lines
A product line is a set of applications with a common application-specific architecture.
- Platform specialization: versions of the application are developed for different platforms.
- Environment specialization: versions for particular operating systems or I/O devices.
- Functional specialization: different functional versions, e.g. of a library automation system.
- Process specialization: versions for different processes, such as centralized or distributed ordering.
- Deployment-time configuration: developed as a generic system, but configured with the customer's specifications.

Unit – V
SOFTWARE TESTING BASICS
Software testing is a process which is used to identify the correctness, completeness, and quality of software. Software testing is often used in association with the terms verification and validation. Verification refers to checking or testing of items, including software, for conformance and consistency with an associated specification. For verification, techniques like reviews, analysis, inspections, and walkthroughs are commonly used. Validation refers to the process of checking that the developed software conforms to the requirements specified by the user.

b) Testing in Software Development Life Cycle (SDLC):
Software testing comprises a set of activities which are planned before testing begins. These activities are carried out for detecting errors that occur during various phases of the SDLC.
The role of testing in the software development life cycle is listed in Table.

(c) Bugs, Errors, Faults and Failures:
The purpose of software testing is to find the bugs, errors, faults, and failures present in the software.
- A bug is defined as a logical mistake caused by a software developer while writing the software code.
- An error is defined as the difference between the output produced by the software and the output desired by the user (the expected output).
- A fault is defined as a condition that leads to malfunctioning of the software. Malfunctioning of software is caused by several reasons, such as a change in the design, architecture, or software code.
- A failure is defined as the state in which the software is unable to perform a function according to user requirements. A defect that causes an error in operation or has a negative impact is called a failure.
Bugs, errors, faults, and failures prevent software from performing efficiently and hence cause the software to produce unexpected outputs. Errors can be present in software for the reasons listed below:
- Programming errors: programmers can make mistakes while developing the source code.
- Unclear requirements: the user is not clear about the desired requirements, or the developers are unable to understand the user requirements in a clear and concise manner.
- Software complexity: the complexity of current software can be difficult to comprehend for someone who does not have prior experience in software development.
- Changing requirements: the user may not understand the effects of change. Whether there are minor or major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems. This can make it complex to keep track of changes and may ultimately result in errors.
- Time pressures: maintaining the schedule of software projects is difficult. When deadlines are not met, the attempt to speed up the work causes errors.
- Poorly documented code: it is difficult to maintain and modify code that is badly written or poorly documented. This causes errors to occur.

Principles of Software Testing
There are certain principles that are followed during software testing. These principles act as a standard for testing software and make testing more effective and efficient. The commonly used software testing principles are listed below:
- Define the expected output: when programs are executed during testing, they may or may not produce the expected outputs due to different types of errors present in the software. To avoid this, it is necessary to define the expected output before software testing begins. Without knowledge of the expected results, testers may fail to detect an erroneous output.
- Inspect the output of each test completely: software testing should be performed once the software is complete in order to check its performance and functionality. Also, testing should be performed to find the errors that occur in various phases of software development.
- Include test cases for invalid and unexpected conditions: generally, software produces correct outputs when it is tested using accurate inputs. However, if unexpected input is given to the software, it may produce erroneous outputs. Hence, test cases that detect errors even when unexpected and incorrect inputs are specified should be developed.
- Test the modified program to check its expected performance: sometimes, when certain modifications are made to software (like adding new functions), it is possible that the software produces unexpected outputs. Hence, software should be tested to verify that it performs in the expected manner even after modifications.

5.2.2 Testability
The ease with which a program can be tested is known as testability. Testability can be defined as the degree to which a program facilitates the establishment of test criteria and the execution of tests to determine whether the criteria have been met.
There are several characteristics of testability, which are listed below:
- Easy to operate: high-quality software can be tested in a better manner. This is because if software is designed and implemented with quality in mind, then comparatively fewer errors will be detected during the execution of tests.
- Observability: testers can easily identify whether the output generated for a certain input is accurate or not simply by observing it.
- Decomposability: by breaking software into independent modules, problems can be easily isolated and the modules can be easily tested.
- Stability: software becomes stable when changes made to the software are controlled and when the existing tests can still be performed.
- Easy to understand: software that is easy to understand can be tested in an efficient manner. Software can be properly understood by gathering maximum information about it. For example, to have proper knowledge of software, its documentation can be used, which provides complete information about the software code, thereby increasing its clarity and making testing easier. Note that documentation should be easily accessible, well organised, specific, and accurate.

TEST PLAN
A test plan describes how testing will be accomplished. A test plan is defined as a document that describes the objectives, scope, method, and purpose of software testing. This plan identifies the test items, the features to be tested, the testing tasks, and the persons involved in performing these tasks. It also identifies the test environment and the test design and measurement techniques that are to be used. Note that a properly defined test plan is an agreement between testers and users describing the role of testing in software. A complete test plan helps people outside the test group to understand the 'why' and 'how' of product validation, whereas an incomplete test plan can result in a failure to check how the software works on different hardware and operating systems or when the software is used with other software.
To avoid this problem, IEEE states some components that should be covered in a test plan. These components are listed in Table.

Steps in Development of a Test Plan:
A carefully developed test plan facilitates effective test execution, proper analysis of errors, and preparation of the error report. To develop a test plan, a number of steps are followed, which are listed below:
1. Set objectives of the test plan: before developing a test plan, it is necessary to understand its purpose. The objectives of a test plan depend on the objectives of the software. For example, if the objective of the software is to accomplish all user requirements, then a test plan is generated to meet this objective. Thus, it is necessary to determine the objective of the software before identifying the objective of the test plan.
2. Develop a test matrix: the test matrix indicates the components of the software that are to be tested. It also specifies the tests required to test these components. The test matrix is also used as test proof to show that a test exists for all components of the software that require testing. In addition, the test matrix is used to indicate the testing method used to test the entire software.
3. Develop the test administrative component: it is necessary to prepare a test plan within a fixed time so that software testing can begin as soon as possible. The test administrative component of the test plan specifies the time schedule and resources (the administrative people involved in developing the test plan) required to execute the test plan. However, if the implementation plan (a plan that describes how the processes in the software are carried out) changes, the test plan also changes, and the schedule to execute the test plan is affected as well.
4. Write the test plan: the components of the test plan, such as its objectives, test matrix, and administrative component, are documented. All these documents are then collected together to form a complete test plan.
These documents are organised in either an informal or a formal manner. In the informal manner, all the documents are collected and kept together, and the testers read all the documents to extract the information required for testing the software. In the formal manner, on the other hand, the important points are extracted from the documents and kept together. This makes it easy for testers to extract the important information they require during software testing.
- Overview: describes the objectives and functions of the software to be performed. It also describes the objectives of the test plan, such as defining responsibilities, identifying the test environment, and giving complete details of the sources from which the information is gathered to develop the test plan.
- Test scope: specifies the features and combinations of features which are to be tested. These features may include user manuals or system documents. It also specifies the features and their combinations that are not to be tested.
- Test methodologies: specifies the types of tests required for testing features and combinations of these features, such as regression tests and stress tests. It also provides a description of the sources of test data along with how the test data is useful in ensuring that testing is adequate, such as the selection of boundary or null values. In addition, it describes the procedure for identifying and recording test results.
- Test phases: identifies the various kinds of tests, such as unit testing and integration testing, and provides a brief description of the process used to perform these tests. Moreover, it identifies the testers that are responsible for performing testing and provides a detailed description of the source and type of data to be used. It also describes the procedure for evaluating test results and describes the work products which are initiated or completed in this phase.
- Test environment: identifies the hardware, software, automated testing tools, operating system, compilers, and sites required to perform testing.
  It also identifies the staffing and training needs.
- Schedule: provides a detailed schedule of testing activities and assigns responsibilities to the respective people. In addition, it indicates the dependencies of testing activities and the time frames for them.
- Approvals and distribution: identifies the individuals who approve a test plan and its results. It