CSC 424 Software Engineering, 2nd Ed.

C. K. Tedam University of Technology and Applied Sciences

Callistus Ireneous Nakpih, PhD

Summary

This document is a set of lecture notes for a Software Engineering course (CSC 424) at C. K. Tedam University of Technology and Applied Sciences. It covers introductory concepts, software characteristics, categories of software applications, and software reliability. It also details software life cycle models such as the waterfall, rapid prototyping, evolution-tree, RAD, and spiral models.

School of Computing and Information Sciences
Department of Computer Science
Software Engineering (CSC 424)
2023/2024 Academic Year, Fourth Year, Second Semester
Callistus Ireneous Nakpih, PhD

UNIT 1: Introductory Concepts of Software Engineering

Introduction

Computer software has become a driving force in different areas of life. It is the engine that drives business decision making. It serves as the basis for modern scientific investigation and engineering problem solving. It is a key factor that differentiates modern products and services. It is embedded in systems of all kinds: transportation, medical, telecommunications, military, industrial processes, entertainment, office products, etc. It has become the driver for new advances in everything from elementary education to genetic engineering.

Computer software is the product that software engineers design and build. It encompasses programs that execute within a computer of any size and architecture; documents in hard-copy and virtual forms; and data that combine numbers and text but also include representations of pictorial, video and audio information.

Evolving Role of Software

Software's impact on our society and culture continues to be profound. As its importance grows, the software community continually attempts to develop technologies that will make it easier, faster, and less expensive to build high-quality computer programs. Some of these technologies are targeted at a specific application domain, others focus on a technology domain, and still others are more broadly based and focus on operating systems.

The role of computer software has undergone significant change over a time span of little more than 50 years. Improvements in hardware performance, profound changes in computing architecture, vast increases in memory and storage capacity, and a wide variety of input and output options have all made it possible for software to make a significant contribution to our day-to-day lives.

Some common questions have been asked of programmers throughout the history of software development, and we continue to ask them now:
- Why does it take so long to get software developed?
- Why are development costs so high?
- Why can't we find all the errors before we give the software to the customer?
- Why do we continue to have difficulty in measuring progress as software is being developed?

These concerns, in fact, have led to the adoption of software engineering practices.

Software Characteristics

Software is a logical rather than a physical system. Therefore software has characteristics that are considerably different from those of hardware:
a) Software is developed or engineered; it is not manufactured in the classical sense.
b) Software does not "wear out".
c) Although the industry is moving toward component-based assembly, most software continues to be custom built.

Software Applications

Determinate applications: An engineering analysis program accepts data that have a predefined order, executes the analysis algorithm without interruption, and produces resultant data in report or graphical format. Such applications are determinate.

Indeterminate applications: Multi-user applications, on the other hand, accept dynamic inputs that have varied content and arbitrary timing, execute algorithms that can be interrupted by external conditions, and produce output that varies as a function of environment and time. Applications with these characteristics are indeterminate.
Information content and determinacy are important factors in determining the nature of a software application. Information determinacy refers to the predictability of the order and timing of information. Content refers to the meaning and form of incoming and outgoing information. Software that controls an automated machine accepts discrete data items with limited structure and produces individual machine commands in rapid succession.

Software applications can be divided into different categories:

System software: System software is a collection of programs written to service other programs. Some system software processes complex information structures; other system applications process largely indeterminate data. It is characterised by heavy interaction with hardware, heavy usage by multiple users, concurrent operation that requires scheduling, resource sharing and sophisticated process management, complex data structures, and multiple external interfaces.

Real-time software: Software that monitors/analyses/controls real-world events as they occur is called real-time. It usually has a scheduler and executes operations based on priorities set on the operations.

Business software: Business information processing is the largest single software application area. Management Information Systems (MIS) software accesses one or more large databases containing business information. Applications in this area restructure existing data in a way that facilitates business operations or management decision making.

Engineering and scientific software: Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.

Embedded software: Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial markets. Embedded software can provide very limited and hidden functions or provide significant function and control capability.

Personal computer software: Day-to-day useful applications like word processing, spreadsheets, multimedia, database management, and personal and business financial applications are common examples of personal computer software.

Web-based software: The web pages retrieved by a browser are software that incorporates executable instructions and data. In essence, the network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a modem.

Artificial Intelligence software: Artificial Intelligence software makes use of algorithms to solve complex computational problems that are beyond straightforward analysis. Expert systems (also called knowledge-based systems), pattern recognition, and game playing are representative examples of applications within this category.

Software Crisis

The set of problems encountered in the development of computer software is not limited to software that does not function properly; rather, the affliction encompasses problems associated with how we develop software, how we support a growing volume of existing software, and how we can expect to keep pace with a growing demand for more software.
What is Software Engineering?

Software Engineering is an engineering discipline whose focus is the cost-effective development of high-quality software systems. It is a sub-discipline of Computer Science that attempts to apply engineering principles to the creation, operation, modification and maintenance of the software components of various systems. It is concerned with all aspects of software production, and with the practicalities of developing and delivering useful software. The costs of software engineering are split roughly into 60% development costs and 40% testing costs.

Structured approaches to software development include system models, notations, rules, and design and process guidelines. Coping with increasing diversity, demands for reduced delivery times, and developing trustworthy software are the key challenges facing software engineering.

What is Engineering?

Engineering is the application of well-understood scientific methods to the construction, operation, modification and maintenance of useful devices and systems.

What is Software?

Software comprises the aspects of a system not reduced to tangible devices, e.g. computer programs and documentation. It is distinguished from hardware, which consists of tangible devices, and often exists as collections of states of hardware devices. The boundary between hardware and software can be blurry, as with firmware and microcode.

Systems

A system is an assemblage of components that interact in some manner among themselves and, possibly, with the world outside the system boundary. We understand systems by decomposing them into subsystems and system components. It is very difficult to separate the software components of a system from the other components of a system.

Engineering Approach to Software Engineering

An engineering approach to software engineering is characterized by a practical, orderly, and measured development of software. The principal aim of this approach is to produce satisfactory systems on time and within budget. The approach is practical because it is based on proven methods and practices in software development. It is orderly, and development can be mapped to fit customer requirements. It is measured: during each phase, software metrics are applied to products to gauge the quality, cost and reliability of what has been produced.

UNIT 2: Software Development Life Cycle Models

Software Development Life Cycle (SDLC)

The SDLC is a methodology, or set of standards, with clearly defined processes and steps for creating high-quality software. An SDLC methodology is generally composed of the following phases of software development:
- Requirements
- Analysis
- Design
- Implementation

Note that there are other phases, such as planning, building, testing, deployment and maintenance, that are sometimes included in the list of phases; where they are not explicitly listed, they are usually embedded in the main phases listed above.

The Requirement Phase

The Requirement phase is also considered the planning phase; this is the phase where software developers interact with the users or clients of the software to be developed, in order to obtain useful information for developing the software. Such information from the user is crucial for developers to be able to produce functional and useful software for the user.
The Analysis Phase

In this phase the developers conduct an analysis of the needs of the customer and what it takes to complete the whole project: the feasibility, scope, objectives, and other resources needed for the project. The outcome is a Software Requirement Specification (SRS) document, which details the software, hardware and network requirements of the whole project.

The Design Phase

This is the phase where the details of the software are outlined, based on user and system interfaces, network requirements, databases, etc. The software developers use the SRS document as the basis for selecting the best design for developing the software. They turn the SRS document into a more logical structure that can be easily implemented in a programming language. The new document is the Design Document Specification (DDS), which will be referenced throughout the lifecycle of the software.

The Implementation Phase

This is the phase where the software is actually developed in a particular programming language and tested for several issues. The implementation phase encompasses coding and testing the software. The coding of the software is done following the details in the DDS. Note that the type of programming language that can be used for the project will depend on the specification and requirements of the software. After the software is developed, it is tested to ensure that it is functioning correctly, that it is producing the desired output, and so on.

Iteration and Incrementation of Development Life Cycles

Iterative and incremental development is a software development process that combines iterative design with the incremental build model. It involves developing a system through repeated cycles (iterative) and in smaller portions at a time (incremental), allowing developers to take advantage of what was learned during development of earlier parts or versions of the system.

In practice, iteration and incrementation are used in conjunction with one another. Usually, we start by developing an initial version of the software with its core function or specification, and then later we iterate over the phases in the SDLC to develop a new version of the software with new features. The new version becomes an incremented version of the older one. In other words, iteration is going over the phases of the SDLC again to produce a new version of the software, while incrementation is the addition of new features to the software, giving rise to a new version.

That is, an artifact is constructed piece by piece (incrementation), and each incremented version goes through iteration. The basic idea is that the software should be developed in increments, each increment adding some functional capability to the system, until the full system is implemented. At each step, extensions and design modifications can be made. Another way of looking at iteration and incrementation is that incrementation adds functionality, whereas iteration improves the quality of an increment.

An advantage of this approach is that it can result in better testing, because testing each increment is likely to be easier than testing the entire system as in the waterfall model. One of the limitations of the waterfall model is that the requirements have to be completely specified before the rest of the development can proceed. The iterative model resolves this limitation.
The incremental models provide feedback to the client that is useful for determining the final requirements of the system. In the first step of this model, a simple initial implementation is done as a subset of the overall problem. This subset is one that contains some of the key aspects of the problem that are easy to understand and implement and which form a useful and usable system.

For example, word processing software developed using the incremental paradigm might deliver basic file management, editing, and document production functions in the first increment; more sophisticated editing and document production capabilities in the second increment; spelling and grammar checking in the third increment; and advanced page layout capability in the fourth increment.

Types of SDLC

There are several types of models for engineering software. They include:
- Waterfall Life Cycle Model
- Rapid Prototype Life Cycle Model
- Evolution-tree Life Cycle Model
- Rapid Application Development Life Cycle Model
- Spiral Life Cycle Model
- etc.

Waterfall Life Cycle Model

This model is based on a sequential/serial/linear design process. A phase in the waterfall model is usually completed before the next phase begins. Documentation for a phase must be completed, and the products of that phase must be approved by Software Quality Assurance (SQA), before that phase can be considered complete.

This model allows for feedback loops from later phases to earlier phases. If the products of an earlier phase have to be changed as a consequence of following a feedback loop, then after the modification of the earlier phase its documentation must be modified and approved by the SQA before that phase can be regarded as completed. Testing is done at every phase in the waterfall model. The waterfall model has many strengths, including the enforced disciplined approach, the stipulation that documentation be provided at each phase, and the requirement that all the products of each phase (including the documentation) be meticulously checked by SQA.

However, the fact that the waterfall model is documentation driven can also be a weakness. Specification documents are usually shared with clients to sign for the software project to commence, or for modifications to be made, even though such documents are technical and hardly understood by the client. This can result in developing software that is not desirable to the client. The rapid prototype model resolves this problem.

Rapid Prototype Life Cycle Model

Rapid prototyping is a life cycle model which is used in product development to develop software quickly. It involves creating multiple iterations of a prototype based on user feedback and analysis, allowing for rapid validation of design assumptions by the user and refinement of the product by the developer. Note that a prototype is not a fully functional product. The iterations of review and refinement of the prototype continue until the user is satisfied; then the actual development process of the software begins. The model is usually used when the user does not know the requirements for the system.

The rapid prototyping process typically involves three steps:
1. Prototyping: Creating an initial prototype, which can be low-fidelity or high-fidelity, and may be interactive or non-interactive.
2. Feedback: Sharing the prototype with stakeholders, end-users, and other team members to gather feedback on usability and design.
3. Improvement: Using the feedback to create a new iteration of the prototype, which continues until there are no more changes.

The software development process starts after a satisfactory prototype is reached, and then proceeds through the usual life-cycle phases: requirements, analysis, design, implementation, post-delivery maintenance, and retirement.

A major strength of the rapid-prototyping model is that the development of the product is essentially linear, proceeding from the rapid prototype to the delivered product; the feedback loops of the waterfall model are less likely to be needed in the rapid-prototyping model. This is because the working rapid prototype has been validated through interaction with the client, so it is reasonable to expect that the resulting specification document will be correct. Faults, and more insight, will also have been discovered in the prototype, unlike the waterfall model, where faults are discovered at implementation.

Rapid Application Development (RAD) Model

The RAD model is a high-speed adaptation of the linear sequential model in which rapid development is achieved by using component-based construction. It is an incremental software development process model that emphasizes an extremely short development cycle. If requirements are clear and well understood and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short period of time.

The RAD approach encompasses the following phases:

Business modelling: Here we try to find answers to questions like: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it?

Data modelling: Here the information flow defined as part of the business modelling phase is refined into a set of data objects that are needed to support the business.

Process modelling: The data objects defined in the data modelling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

Application generation: RAD assumes the use of fourth generation techniques. Rather than creating software using conventional third generation programming languages, the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software.

Testing and turnover: Since the RAD process emphasizes reuse, many of the program components have already been tested. This reduces overall testing time. However, new components must be tested and all interfaces must be fully exercised.

Difference between RAD and the Rapid Prototype Model

- In the prototyping model, the developed prototype is primarily used to gain insight into the solution, to choose between alternatives, and to elicit customer feedback; the prototype is usually thrown away.
- In the RAD model, the developed prototype evolves into deliverable software. RAD leads to faster development compared to traditional models; however, the quality and reliability would possibly be poorer.
Evolution-tree Life Cycle Model

This is a maintenance-oriented model, which describes software development as continuous evolution of a software product. That is, we view software development as a maintenance process based on a tree of engineering decisions made at various times. These decisions are made by software engineers in response to modifications in the requirements as they are issued. The evolution tree can be used to identify those pieces of the software that need to be modified when the requirements change. The model combines the iterative and incremental development processes for producing newer versions of software.

With the passage of time and the demands of the customers, necessary changes need to be made in the software from time to time, so every successive version of the software will be an enhanced version of the previous one. Software is developed incrementally, module after module, with a specific target to achieve in a particular module. A module is usually iterated over, incorporating the desired features for achieving new targets. Refer to the Winburg Mini Case Study for further details.

Spiral Life Cycle Model

The spiral model is iterative and incremental, combined with a linear/sequential development process. The core or distinctive feature of the spiral model is that it is risk oriented. The model has special phases that developers go through in an iterative way to develop software:
- Determine objectives: gathering the requirements.
- Identify and resolve risks: risk and solution identification (analysis).
- Development and test: design and implementation.
- Plan the next iteration: evaluation of the software output and planning for the next phase.

This model is suitable for large, complex and expensive software projects, such as: projects in which frequent releases are necessary; projects in which changes may be required at any time; long-term projects that are not feasible due to altered economic priorities; medium-to-high-risk projects; projects in which cost and risk analysis is important; projects that would benefit from the creation of a prototype; and projects with unclear or complex requirements.

The whole spiral model process begins with a design objective/goal and ends with client review. During early iterations, the incremental release might be a paper model or prototype. During later iterations, increasingly complete versions of the engineered system are produced.

Advantages of the model:
- Flexibility: Changes made to the requirements after development has started can be easily adopted and incorporated.
- Risk handling: The spiral model involves risk analysis and handling in every phase, improving security and the chances of avoiding attacks and breakages. The iterative development process also facilitates risk management.
- Customer satisfaction: The spiral model facilitates customer feedback. If the software is being designed for a customer, then the customer will be able to see and evaluate the product in every phase. This allows them to voice dissatisfaction or request changes before the product is fully built, saving the development team time and money.

Limitations of the spiral model:
- High cost: The spiral model is expensive and, therefore, is not suitable for small projects.
- Dependence on risk analysis: Since successful completion of the project depends on effective risk handling, it is necessary for the personnel involved to have expertise in risk assessment.
- Complexity: The spiral model is more complex than other SDLC options. For it to operate efficiently, protocols must be followed closely. Furthermore, there is increased documentation, since the model involves intermediate phases.
- Hard to manage time: Going into the project, the number of required phases is often unknown, making time management almost impossible. Therefore, there is always a risk of falling behind schedule or going over budget.

UNIT 3: Software Reliability

Software Reliability

Software reliability is a function of the number of failures experienced by a particular user of that software. A software failure occurs when the software is executing. It is a situation in which the software does not deliver the service expected by the user. Software failures are not the same as software faults, although these terms are often used interchangeably.

The reliability of a software system is a measure of how well users think it provides the services that they require. Reliability is the most important dynamic characteristic of almost all software systems. Unreliable software results in high costs for end-users. Developers of unreliable systems may acquire a bad reputation for quality and lose future business opportunities.

It is claimed that software installed on an aircraft will be 99.99% reliable during an average flight of five hours. This means that a software failure of some kind will probably occur in one flight out of 10,000 (the per-flight failure probability is 1 - 0.9999 = 0.0001, i.e. 1/10,000). A system might be thought of as unreliable if it ever failed to provide some critical service, for example if a system used to control braking on an aircraft failed to work under a single set of very rare conditions.

Formal specification and proof do not guarantee that the software will be reliable in practical use. The reasons for this are:
- The specification may not reflect the real requirements of system users.
- The proof may contain errors (a proof involves testing a certain assumption in order to obtain confirmation that the idea is feasible, viable and applicable in practice).
- The proof may assume a usage pattern which is incorrect.

Because of additional design, implementation and validation overheads, increasing reliability can exponentially increase development costs. There is often an efficiency penalty which must be paid for increasing reliability. Reliable software must include extra, often redundant, code to perform the necessary checking for exceptional conditions. This reduces program execution speed and increases the amount of storage required by the program.

Reliability should always take precedence over efficiency for the following reasons:
- Unreliable software is liable to be discarded by users.
- System failure costs may be enormous.
- Unreliable systems are difficult to improve.
- Unreliable systems may cause information loss.

Software Reliability Metrics

The choice of metric used for software reliability specification should depend on the type of system to which it applies and the requirements of the application domain. For some systems, it may be appropriate to use different reliability metrics for different sub-systems.

There are three kinds of measurement which can be made when assessing the reliability of a system:
1. The number of system failures given a number of system inputs. This is used to measure POFOD (probability of failure on demand).
2. The time (or number of transactions) between system failures. This is used to measure ROCOF (rate of occurrence of failures) and MTTF (mean time to failure).
3. The elapsed repair or restart time when a system failure occurs. Given that the system must be continuously available, this is used to measure AVAIL (availability).
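As a concrete illustration of the second and third measurements, the sketch below estimates ROCOF, MTTF and availability from a hypothetical failure log. The log format, the figures, and the observation period are assumptions made for this example; they are not from the notes.

```python
# Minimal sketch (illustrative, not from the notes): estimating reliability
# metrics from a hypothetical failure log. Each record holds the time a
# failure occurred and the repair/restart time it caused, in hours.

failures = [
    {"at": 120.0, "repair": 0.5},   # failure after 120 h of operation
    {"at": 310.0, "repair": 1.0},
    {"at": 470.0, "repair": 0.25},
]
total_period = 1000.0  # total observation period in hours (assumed)

n = len(failures)

# ROCOF: rate of occurrence of failures (failures per unit time).
rocof = n / total_period

# MTTF: mean operating time between successive failures.
uptimes = []
previous_end = 0.0
for f in failures:
    uptimes.append(f["at"] - previous_end)
    previous_end = f["at"] + f["repair"]
mttf = sum(uptimes) / n

# AVAIL: fraction of the observation period in which the system was usable.
downtime = sum(f["repair"] for f in failures)
avail = (total_period - downtime) / total_period

print(f"ROCOF = {rocof:.4f} failures/hour")
print(f"MTTF  = {mttf:.1f} hours")
print(f"AVAIL = {avail:.4%}")
```

Note how the choice of time unit (here, hours of operation) determines what the numbers mean, which is the point made in the next paragraph.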
Time is a factor in all of these reliability metrics. It is essential that appropriate time units be chosen if measurements are to be meaningful. Time units which may be used are calendar time, processor time, or some discrete unit such as the number of transactions.

Programming for Reliability

Improved programming techniques, better programming languages and better quality management have led to very significant improvements in reliability for most software. However, for some systems, such as those which control unattended machinery, these 'normal' techniques may not be enough to achieve the level of reliability required. In these cases, special programming techniques may be necessary to achieve the required reliability.

Reliability in a software system can be achieved using three strategies:
- Fault avoidance: This is the most important strategy, applicable to all types of system. The design and implementation process should be organized with the objective of producing fault-free systems.
- Fault tolerance: This strategy assumes that residual faults remain in the system. Facilities are provided in the software to allow operation to continue when these faults cause system failures.
- Fault detection: Faults are detected before the software is put into operation. The software validation process uses static and dynamic methods to discover any faults which remain in a system after implementation.

Faults are less likely to be introduced into programs (fault avoidance) if the use of certain constructs is minimized. These constructs include:

1. Floating-point numbers: Floating-point numbers are inherently imprecise. They present a particular problem when they are compared, because representation imprecision may lead to invalid comparisons (a short demonstration follows this list). Fixed-point numbers, where a number is represented to a given number of decimal places, are safer, as exact comparisons are possible.

2. Pointers: Pointers are low-level constructs which refer directly to areas of machine memory. They are dangerous because errors in their use can be devastating, and because they allow 'aliasing': the same entity may be referenced using different names. Aliasing makes programs harder to understand, so that errors are more difficult to find. However, efficiency requirements mean that it is often impractical to avoid the use of pointers.

3. Dynamic memory allocation: Program memory is allocated at run-time rather than compile-time. The danger with this is that memory may not be de-allocated, so that the system eventually runs out of available memory. This can be a very subtle type of error to detect, as the system may run successfully for a long time before the problem occurs.

4. Parallelism: Parallelism is dangerous because of the difficulty of predicting the subtle effects of timing interactions between parallel processes. Timing problems cannot usually be detected by program inspection, and the peculiar combination of circumstances which causes a timing problem may not arise during system testing. Parallelism may be unavoidable, but its use should be carefully controlled to minimize inter-process dependencies. Programming language facilities, such as Ada tasks, help avoid some of the problems of parallelism, as the compiler can detect some kinds of programming errors.

5. Recursion: Recursion is the situation in which a subroutine calls itself, or calls another subroutine which then calls the calling subroutine. Its use can result in very concise programs, but it can be difficult to follow the logic of recursive programs. Errors in recursion may result in the allocation of all the system's memory as temporary stack variables are created.

6. Interrupts: Interrupts are a means of forcing control to transfer to a section of code irrespective of the code currently executing. The dangers of this are obvious, as the interrupt may cause a critical operation to be terminated.
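The floating-point comparison hazard from item 1 above is easy to demonstrate. The sketch below (an illustration added here, not from the notes) shows a failed exact comparison and two safer alternatives: a tolerance-based comparison, and fixed-point decimal arithmetic, where exact comparison is valid.

```python
import math
from decimal import Decimal

# Exact comparison of floating-point values is unreliable: 0.1 has no
# exact binary representation, so accumulated error breaks equality.
total = 0.1 + 0.1 + 0.1
print(total == 0.3)                      # False
print(total)                             # 0.30000000000000004

# Safer alternative 1: compare within a tolerance.
print(math.isclose(total, 0.3))          # True

# Safer alternative 2: fixed-point arithmetic, as recommended above;
# Decimal represents 0.1 exactly, so exact comparison is valid.
total_fixed = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(total_fixed == Decimal("0.3"))     # True
```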
A fault-tolerant system can continue in operation after some system failures have occurred. Fault-tolerance facilities are required if the system is to keep operating when faults occur. There are four aspects to fault tolerance:

1. Failure detection: The system must detect that a particular state combination has resulted, or will result, in a system failure.
2. Damage assessment: The parts of the system state which have been affected by the failure must be detected.
3. Fault recovery: The system must restore its state to a known 'safe' state. This may be achieved by correcting the damaged state (forward error recovery) or by restoring the system to a previous known 'safe' state (backward error recovery). Forward error recovery is more complex, as it involves diagnosing system faults and knowing what the system state should have been had the faults not caused a system failure.
4. Fault repair: This involves modifying the system so that the fault does not recur. In many cases, software failures are transient and due to a peculiar combination of system inputs; no repair is necessary, as normal processing can resume immediately after fault recovery.

Exception Handling

When an error of some kind or an unexpected event occurs during the execution of a program, the program should be able to continue rather than being interrupted by the error; this is exception handling. Exceptions may be caused by hardware or software errors. When an exception has not been anticipated, control is transferred to the system's exception handling mechanism. If an exception has been anticipated, code must be included in the program to detect and handle that exception.
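As a small illustration of an anticipated exception (a sketch added here; the scenario and record format are assumed, not from the notes), the fragment below detects and handles a malformed input record and continues processing instead of terminating.

```python
# Minimal sketch of anticipated-exception handling (illustrative example).
# Each record should contain a numeric reading; malformed records are an
# anticipated exception, so handler code is included and processing continues.

records = ["12.5", "7.25", "oops", "3.0"]

total = 0.0
for record in records:
    try:
        total += float(record)          # may raise ValueError
    except ValueError:
        # Anticipated exception: report it and carry on rather than crash.
        print(f"skipping malformed record: {record!r}")

print(f"total of valid readings = {total}")
```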
Software Reuse | for Reliability

Software reuse is the development of software systems from existing software instead of developing from scratch. The design process in most engineering disciplines is based on component reuse. The reuse of software can be considered at a number of different levels:

1. Application system reuse: The whole of an application system may be reused. The key problem here is ensuring that the software is portable; it should execute on several different platforms.
2. Sub-system reuse: Major sub-systems of an application may be reused. For example, a pattern-matching system developed as part of a text processing system may be reused in a database management system.
3. Module or object reuse: Components of a system representing a collection of functions may be reused (e.g. Python libraries/modules).
4. Function reuse: Software components which implement a single function, such as a mathematical function, may be reused.

UNIT 4: Software Design: System Models

System Models | Introduction

System modeling is the process of developing abstract models of a system, with each model presenting a different view or perspective of that system. It is about representing a system using some kind of graphical notation. There are four widely used types of system models:
- Data-flow models
- Semantic data models
- Object models
- Data dictionaries

Data-flow Models

A data-flow model is a way of showing how data is processed by a system. At the analysis level, data-flow models should be used to model the way in which data is processed in the existing system. The notations used in these models represent functional processing, data stores and data movements between functions.

Data-flow models are used to show how data flows through a sequence of processing steps. The data is transformed at each step before moving on to the next stage. These processing steps, or transformations, are program functions when data-flow diagrams are used to document a software design. (Figures omitted: a data-flow diagram of the steps involved in processing an order for goods, such as computer equipment, in an organization; and a data-flow diagram of a design report generator.)

Diagram/graphical notations and their meaning:
- Rounded rectangles: processing steps; functions which transform inputs to outputs. The transformation name indicates its function.
- Rectangles: data stores.
- Circles: user interactions with the system which provide input or receive output.
- Arrows: the direction of data flow. Their name describes the data flowing along that path.
- The keywords 'and' and 'or' have their usual meanings, as in Boolean expressions. They are used to link data flows when more than one data flow may be input to or output from a transformation.

Semantic Data Models

Large software systems make use of a large database of information. In some cases, this database exists independently of the software system; in others, it is created for the system being developed. An important part of system modeling is to define the logical form of the data processed by the system. An approach to data modeling which includes information about the semantics of the data allows a better abstract model to be produced. Semantic data models always identify the entities in a database, their attributes, and the explicit relationships between them.

One approach to semantic data modeling is entity-relationship modeling. Semantic data models are described using graphical notations which are understandable by users, so that they can participate in data modeling. (Figure omitted: notation for semantic data models.)

Relations between entities may be 1:1, meaning one entity instance participates in a relation with one other entity instance; 1:M, meaning one entity instance participates in a relation with more than one other entity instance; or M:N, meaning several entity instances participate in a relation with several others. Entity-relationship models have been widely used in database design.
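As a hedged sketch of the 1:M case, the entities, attributes and relationship below (a lecturer supervising several students) are invented for illustration and are not taken from the notes.

```python
# Minimal sketch of a 1:M entity-relationship, expressed with dataclasses.
# The entities and their attributes are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Student:                 # entity with two attributes
    student_id: str
    name: str

@dataclass
class Lecturer:                # entity on the "1" side of the 1:M relation
    staff_id: str
    name: str
    supervises: list[Student] = field(default_factory=list)  # the "M" side

lecturer = Lecturer("ST001", "Dr. Mensah")
lecturer.supervises.append(Student("CS2024-01", "Ama"))
lecturer.supervises.append(Student("CS2024-02", "Kofi"))
print([s.name for s in lecturer.supervises])   # ['Ama', 'Kofi']
```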
Object Models

Object modeling uses object-oriented techniques. This means expressing the system requirements using an object model, taking an object-oriented approach, and developing the system in object-oriented programming languages such as C++. Object models developed during requirements analysis are used to represent both system data and its processing. They combine some of the uses of data-flow and semantic data models. They are useful for showing how entities in the system may be classified and composed of other entities.

An object class is an abstraction over a set of objects which identifies common attributes and the services, or operations, provided by each object. Various types of object models can be produced, showing how object classes are related to each other, how objects are aggregated to form other objects, how objects use the services provided by other objects, and so on.

(Figure omitted: the notation used to represent an object class.) The class name section lists the object class name; the attribute section lists the attributes of that object class; and the service section shows the operations associated with the object.

Inheritance Models

Object-oriented modeling involves identifying the classes of object which are important in the domain being studied. These are then organized into a taxonomy: a classification scheme which shows how an object class is related to other classes through common attributes and services. To display this taxonomy, we organize the classes into an inheritance, or class, hierarchy, where the most general object classes are presented at the top of the hierarchy and more specialized objects inherit their attributes and services.

(Figure omitted: part of a simplified class hierarchy that might be developed when modeling a library system.) This hierarchy gives information about the items held in the library. It is assumed that the library holds not only books but also other types of items, such as music, recordings of films, magazines, newspapers and so on.

Object Aggregation

As well as acquiring attributes and services through an inheritance relationship with other objects, some objects are aggregations of other objects. The classes representing these objects may be modeled using an aggregation model. (Figure omitted.) In the example, we model a potential library item, the materials for a particular class given in a university, as an aggregate object representing a course.
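A brief sketch of both ideas in code (illustrative; the class names and attributes are assumptions, not taken from the notes' figures): LibraryItem is the general class, Book a specialized class inheriting its attributes and services, and Course an aggregate composed of library items.

```python
# Minimal sketch of inheritance and aggregation for a library system.
# Class names and attributes are illustrative assumptions.

class LibraryItem:                      # most general class in the hierarchy
    def __init__(self, catalogue_number: str, title: str):
        self.catalogue_number = catalogue_number
        self.title = title

    def describe(self) -> str:          # a service provided by every item
        return f"{self.catalogue_number}: {self.title}"

class Book(LibraryItem):                # specialized class: inherits and extends
    def __init__(self, catalogue_number: str, title: str, author: str):
        super().__init__(catalogue_number, title)
        self.author = author

    def describe(self) -> str:
        return f"{super().describe()} by {self.author}"

class Course:                           # aggregation: composed of library items
    def __init__(self, code: str, items: list[LibraryItem]):
        self.code = code
        self.items = items              # the aggregate's component objects

course = Course("CSC424", [Book("B-101", "Software Engineering", "Sommerville")])
print(course.items[0].describe())       # B-101: Software Engineering by Sommerville
```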
Data Dictionaries

A data dictionary is a list of names used by the system, arranged alphabetically. As well as the name, the dictionary should include a description of the named entity and, if the name represents a composite object, there may be a description of the composition. Other information, such as the date of creation, the creator, and the representation of the entity, may also be included, depending on the type of model being developed. (Figure omitted: data dictionary entries for the design report generator.)

Software Design Process

The Design Process

The design process involves adding formality and detail as the design is developed, with constant backtracking to correct earlier, less formal, designs. The starting point is an informal design, which is refined by adding information to make it consistent and complete. (Figure omitted: the progression from an informal to a detailed design.)

A general model of the design process suggests that the stages of the design process are sequential. In practice, the activities shown are all part of the design process for large software systems.

Design Activities

The following activities are followed when designing (especially) large software. Note that any of the system models described above (data-flow, object, semantic, data dictionaries) can be used at each stage of the design activities; however, the appropriate model should be used throughout the design process.

1. Architectural design: the sub-systems making up the system and their relationships are identified and documented.
2. Abstract specification: for each sub-system, an abstract specification of the services it provides and the constraints under which it must operate is produced.
3. Interface design: for each sub-system, its interface with other sub-systems is designed and documented. This interface specification must be unambiguous, as it allows the sub-system to be used without knowledge of the sub-system's operation.
4. Component design: services are allocated to different components and the interfaces of these components are designed.
5. Data structure design: the data structures used in the system implementation are designed in detail and specified.
6. Algorithm design: the algorithms used to provide services are designed in detail and specified.

This process is repeated for each sub-system until the components identified can be mapped directly onto programming language components such as packages, procedures or functions.

UNIT 5: Configuration Management

Introduction

Configuration management is the process which controls the changes made to a system and manages the different versions of the evolving software product. Configuration management involves the development and application of procedures and standards for managing an evolving system product. Procedures should be developed for building systems and releasing them to customers. Standards should be developed for recording and processing proposed system changes and for identifying and storing different versions of the system.

Software Maintenance

The process of changing a system after it has been delivered and is in use is called software maintenance. The changes may involve simple changes to correct coding errors, more extensive changes to correct design errors, or significant enhancements to correct specification errors or accommodate new requirements. Maintenance means evolution: it is the process of changing a system to maintain its ability to survive.

There are three different types of software maintenance:

1. Corrective maintenance is concerned with fixing reported errors in the software. Coding errors are usually relatively cheap to correct. Design errors are more expensive, as they may involve the rewriting of several program components. Requirements errors are the most expensive to repair, because of the extensive system redesign which may be necessary.

2. Adaptive maintenance means changing the software for some new environment, such as a different hardware platform or a different operating system. The software functionality does not radically change.
3. Perfective maintenance involves implementing new functional or non-functional system requirements. These are generated by software customers as their organization or business changes.

The Maintenance Process

The maintenance process is triggered by a set of change requests from system users, management or customers. The cost and impact of these changes are assessed. If the proposed changes are accepted, a new release of the system is planned. This release will usually involve elements of adaptive, corrective and perfective maintenance. The changes are implemented and validated, and a new version of the system is released. The process then iterates with a new set of changes proposed for the new release. (Figure omitted: an overview of the maintenance process.)

System Documentation

The system documentation includes all of the documents describing the implementation of the system, from the requirements specification to the final acceptance test plan. Documents which may be produced to aid the maintenance process include:
- The requirements document and an associated rationale.
- A document describing the overall system architecture.
- For each program in the system, a description of the architecture of that program.
- For each component, a specification and design description.
- Program source code listings, which should be commented.
- Validation documents describing how each program is validated and how the validation information relates to the requirements.
- A system maintenance guide that describes known problems with the system and which parts of the system are hardware- and software-dependent.

Maintenance Costs

Maintenance costs are related to a number of product, process and organizational factors. The principal technical and non-technical factors which affect maintenance are:

- Module independence: It should be possible to modify one component of a system without affecting other system components.
- Programming language: Programs written in a high-level programming language are usually easier to understand (and hence maintain) than programs written in a low-level language.
- Programming style: The way in which a program is written contributes to its understandability, and hence the ease with which it can be modified.
- Program validation and testing: Generally, the more time and effort spent on design validation and program testing, the fewer the errors in the program. Consequently, corrective maintenance costs are minimized.
- The quality of program documentation: If a program is supported by clear, complete yet concise documentation, the task of understanding the program can be relatively straightforward. Program maintenance costs tend to be lower for well-documented systems than for systems supplied with poor or incomplete documentation.
- The configuration management techniques used: One of the most significant costs of maintenance is keeping track of all system documents and ensuring that these are kept consistent. Effective configuration management can help control this cost.
- Staff stability: Maintenance costs are reduced if system developers are responsible for maintaining their own programs; there is no need for other engineers to spend time understanding the system. In practice, however, it is very unusual for developers to maintain a program throughout its useful life.
- The age of the program: As a program is maintained, its structure degrades. The older the program, the more maintenance it receives, and the more expensive this maintenance becomes.
- Hardware stability: If a program is designed for a particular hardware configuration that does not change during the program's lifetime, no maintenance due to hardware changes will be required. However, this situation is rare; programs must often be modified to use new hardware which replaces obsolete equipment.

Version and Release Management

Version and release management are the processes of identifying and keeping track of different versions and releases of a system. Version managers must devise procedures to ensure that different versions of a system may be retrieved when required and are not accidentally changed. They may also work with customer liaison staff to plan when new releases of a system should be distributed.

A system version is an instance of a system that differs, in some way, from other instances. New versions of the system may have different functionality or performance, or may repair system faults. Some versions may be functionally equivalent but designed for different hardware or software configurations. A system release is a version that is distributed to customers. Each system release should either include new functionality or be intended for a different hardware platform.

A release is not just an executable program or set of programs. It usually includes:
1. Configuration files defining how the release should be configured for particular installations.
2. Data files which are needed for successful system operation.
3. An installation program which is used to help install the system on the target hardware.
4. Electronic and paper documentation describing the system.

Version Identification

Identifying versions of a system appears to be straightforward. The first version and release of a system is simply called 1.0; subsequent versions are 1.1, 1.2 and so on. At some stage, it is decided to create release 2.0, and the process starts again at versions 2.1, 2.2 and so on. System releases normally correspond to the base versions, that is, 1.0, 2.0, 3.0 and so on. This scheme is linear, based on the assumption that system versions are created in sequence.

In practice, the derivation structure need not be linear. (Figure omitted: version derivation structure.) In the figure, version 1.0 has spawned two versions, 1.1 and 1.1a; version 1.1 has also spawned two versions, namely 1.2 and 1.1b; version 2.0 is derived not from 1.2 but from 1.1a; and version 2.2 is not a direct descendant of version 2.0, as it is derived from version 1.2.

An alternative to a numeric naming structure is to use a symbolic naming scheme. For example, rather than referring to Version 1.1.2, a particular instance of a system might be referred to as V1/VMS/DB server. This implies that this is a version of a database server for a Digital computer running the VMS operating system (Virtual Memory System). This has some advantages over the linear scheme but, again, it does not truly represent the derivation structure.
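The derivation structure described above is a tree rather than a sequence, so a single parent link per version captures it. The sketch below (an added illustration, not from the notes) encodes the figure's derivation structure and reconstructs any version's ancestry.

```python
# Minimal sketch: the version derivation structure described above,
# encoded as child -> parent links (illustrative, not from the notes).
parent = {
    "1.1":  "1.0",
    "1.1a": "1.0",
    "1.2":  "1.1",
    "1.1b": "1.1",
    "2.0":  "1.1a",   # 2.0 is derived from 1.1a, not from 1.2
    "2.2":  "1.2",    # 2.2 is derived from 1.2, not from 2.0
}

def ancestry(version: str) -> list[str]:
    """Trace a version back to the root of the derivation tree."""
    chain = [version]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

print(ancestry("2.2"))   # ['2.2', '1.2', '1.1', '1.0']
```

A purely linear numbering scheme cannot record these parent links, which is why it misrepresents histories like this one.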
UNIT 6: Software Testing

Introduction

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation. The engineer creates a series of test cases that are intended to find faults/errors in the software that has been built. In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive. A good test case is one that has a high probability of finding an error that is yet to be found.

Testing Objectives

Our objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort. Testing cannot show the absence of errors and defects; it can only show that errors and defects are present in the software.

Testing Principles

Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing:

1. All tests should be traceable to customer requirements: As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.

2. Tests should be planned long before testing begins: Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code is generated.

3. The Pareto principle applies to software testing: The Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of program components. The problem is to isolate these suspect components and to test them thoroughly.

4. Testing should begin on small things and progress toward testing large things: The first tests planned and executed generally focus on individual components. Testing should then progress to finding errors in clusters of components, and ultimately in the entire system.

5. Exhaustive testing is not possible: It is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.

6. To be most effective, testing should be conducted by an independent third party: By "most effective", we mean testing that has the highest probability of finding errors (the primary objective of testing). The software engineer who created the system is not the best person to conduct all tests of the software.

Testability

Software testability is simply how easily [a computer program] can be tested. In ideal circumstances, a software engineer designs a computer program, a system, or a product with "testability" in mind. This enables the individuals charged with testing to design effective test cases more easily.

Test Strategies

Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. There are five different types of test strategies: top-down testing, bottom-up testing, thread testing, stress testing, and back-to-back testing. A software engineer must understand the basic principles that guide software testing.

Verification and Validation

Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
Test Strategies

Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. There are five common test strategies: top-down testing, bottom-up testing, thread testing, stress testing and back-to-back testing.

Test Strategies

Verification and Validation

Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. Boehm states this in another way:

Verification: "Are we building the product right?"
Validation: "Are we building the right product?"

Test Strategies

Verification and Validation

Testing does provide the last bastion from which quality can be assessed and, more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. Quality is incorporated into software throughout the process of software engineering. Proper application of methods and tools, effective formal technical reviews, and solid management and measurement all lead to quality that is confirmed during testing.

Test Strategies

1. Top-Down Testing

Top-down testing tests the high levels of a system before testing its detailed components. The program is represented as a single abstract component with subcomponents. After the top-level component has been tested, its subcomponents are implemented and tested in the same way. This process continues recursively until the bottom-level components are implemented. The whole system may then be completely tested afterwards.

Test Strategies

Top-down testing should be used with top-down program development so that a system component is tested as soon as it is coded. Coding and testing are a single activity with no separate component- or module-testing phase. The main disadvantage of top-down testing is that test output may be difficult to observe. In many systems, the higher levels of the system do not generate output but, to test these levels, they must be forced to do so. The tester must create an artificial environment to generate the test results.

Test Strategies

2. Bottom-Up Testing

Bottom-up testing is the converse of top-down testing. It involves testing the modules at the lower levels in the hierarchy, and then working up the hierarchy of modules until the final module is tested. If top-down development is combined with bottom-up testing, all parts of the system must be implemented before testing can begin. Architectural faults are unlikely to be discovered until much of the system has been tested, and correction of these faults might involve the rewriting and consequent re-testing of low-level modules in the system.

Test Strategies

Bottom-up testing is appropriate for object-oriented systems in that individual objects may be tested using their own test drivers; they are then integrated and the object collection is tested.

Test Strategies

3. Thread Testing

Thread testing is a testing strategy which was devised for testing real-time systems. It is an event-based approach where tests are based on the events which trigger system actions. Thread testing may be used after processes or objects have been individually tested and integrated into sub-systems.

Test Strategies

The processing of each possible external event threads its way through the system processes or objects, with some processing carried out at each stage. Thread testing involves identifying and executing each possible processing thread. Complete thread testing may be impossible because of the number of possible input and output combinations; in such cases, the most commonly exercised threads should be identified and selected for testing.
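As an illustration of a processing thread, consider the hypothetical event-handling sketch below (all names are invented): an external "charge" event threads its way through validation, processing and notification steps, and the test follows the complete thread from the triggering event to the observable results.

# A hypothetical transaction system reduced to three processing steps.
def validate(event: dict) -> dict:
    if "account" not in event or "amount" not in event:
        raise ValueError("malformed event")
    return event

def apply_charge(event: dict, balances: dict) -> None:
    balances[event["account"]] = balances.get(event["account"], 0) - event["amount"]

def notify(event: dict, log: list) -> None:
    log.append(f"charged {event['amount']} to {event['account']}")

def handle_event(event: dict, balances: dict, log: list) -> None:
    # One complete processing thread: validate -> apply -> notify.
    apply_charge(validate(event), balances)
    notify(event, log)

def test_charge_thread() -> None:
    balances, log = {"acc-1": 100}, []
    handle_event({"account": "acc-1", "amount": 30}, balances, log)
    assert balances["acc-1"] == 70
    assert log == ["charged 30 to acc-1"]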
Test Strategies

4. Stress Testing

Some classes of system are designed to handle a specified load. For example, a transaction processing system may be designed to process up to 100 transactions per second; an operating system may be designed to handle up to 200 separate terminals. Tests have to be designed to ensure that the system can process its intended load. This usually involves planning a series of tests where the load is steadily increased.

Test Strategies

Stress testing continues these tests beyond the maximum design load of the system until the system fails. This type of testing has two functions:
- It tests the failure behaviour of the system.
- It stresses the system and may cause defects to come to light which would not normally manifest themselves.

Test Strategies

Stress testing is particularly relevant to distributed systems based on a network of processors. These systems often exhibit severe degradation when they are heavily loaded, as the network becomes swamped with data which the different processes must exchange.

Test Strategies

5. Back-to-Back Testing

Back-to-back testing may be used when more than one version of a system is available for testing. The same tests are presented to both versions of the system and the test results are compared; differences between these test results highlight potential system problems.

Test Strategies

Back-to-back testing is usually only possible in the following situations:
- When a system prototype is available.
- When different versions of a system have been developed for different types of computers.

Test Strategies

The steps involved in back-to-back testing are:

Step 1: Prepare a general-purpose set of test cases.
Step 2: Run one version of the program with these test cases and save the results in a file.
Step 3: Run another version of the program with the same test cases, saving the results to a different file.
Step 4: Automatically compare the files produced by the two program versions.

Test Strategies

If the programs behave in the same way, the file comparison should show the output files to be identical. Although this does not guarantee that they are valid (the implementers of both versions may have made the same mistake), it is probable that the programs are behaving correctly. Differences between the outputs suggest problems which should be investigated in more detail.

Testing Methods and Tools

1. Testing Through Reviews

Formal technical reviews can be as effective as testing in uncovering errors. For this reason, reviews can reduce the amount of testing effort required to produce high-quality software.

Testing Methods and Tools

Many different types of reviews can be conducted:
- An informal meeting around the coffee machine is a form of review, if technical problems are discussed.
- A formal presentation of a software design to an audience of customers, management and technical staff is also a form of review.
- A formal technical review, sometimes called a walkthrough or an inspection, is the most effective filter from a quality assurance standpoint.

Testing Methods and Tools

2. Black-Box Testing (Functional Testing)

Black-box tests are used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., a database) is maintained. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.

Testing Methods and Tools

3. White-Box Testing (Glass-Box Testing)

White-box testing of software is predicated on close examination of procedural detail: logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. The "status of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status.

Testing Methods and Tools

Using white-box methods, the software engineer can derive test cases that:
- Guarantee that all independent paths within a module have been exercised at least once,
- Exercise all logical decisions on their true and false sides,
- Execute all loops at their boundaries and within their operational bounds, and
- Exercise internal data structures to ensure their validity.

Testing Methods and Tools

Designing white-box test cases requires thorough knowledge of the internal structure of the software; white-box testing is therefore also called structural testing.
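A minimal sketch of white-box test design (the function is invented for illustration): the function below contains one decision and one loop, so a structural test suite exercises the decision on both its true and false sides and executes the loop zero, one and many times.

def total_positive(values: list) -> int:
    """Sum only the positive numbers in `values`."""
    total = 0
    for v in values:   # loop: exercised zero, one and many times below
        if v > 0:      # decision: exercised on both true and false sides
            total += v
    return total

def test_total_positive() -> None:
    assert total_positive([]) == 0          # loop body never executes
    assert total_positive([5]) == 5         # one iteration, decision true
    assert total_positive([-5]) == 0        # one iteration, decision false
    assert total_positive([3, -1, 4]) == 7  # many iterations, both sides

A black-box tester would choose inputs from the specification alone; the suite above is driven entirely by the code's internal structure.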
Test and Software Quality Assurance Plan

To ensure that the final product is of high quality, quality control activities must be performed throughout development. Correcting errors in the final stages can be very expensive, especially if they originated in the early phases. The purpose of the software quality assurance plan (SQAP) is to specify all the work products that need to be produced during the project, the activities that need to be performed for checking the quality of each work product, and the tools and methods that may be used for the SQA activities.

Test and Quality Assurance Plan

The SQAP takes a broad view of quality. It is interested in the quality of not only the final product but also the intermediate products, even though in a project we are ultimately interested in the quality of the delivered product. The SQAP specifies the tasks that need to be undertaken at different times in the life cycle to improve software quality, and how they are to be managed. These tasks will generally include reviews and audits. Each task should be defined with an entry and an exit criterion, that is, the criterion that should be satisfied to initiate the task and the criterion that should be satisfied to terminate it.

Test and Quality Assurance Plan

The documents that should be produced during software development to enhance software quality should also be specified by the SQAP. It should identify all documents that govern the development, verification, validation, use and maintenance of the software, and how these documents are to be checked for adequacy.

END OF LECTURE NOTES
