CS605 Software Engineering-II Lecture Notes PDF

Virtual University of Pakistan

Summary

These lecture notes cover Software Engineering-II. They introduce software engineering principles, discuss the characteristics and challenges of the discipline, and focus on definitions of and approaches to creating well-engineered software, examining the trade-offs involved.



Table of Contents

Lecture No. 1
Lecture No. 2
Lecture No. 3
Lecture No. 4
Lecture No. 5
Lecture No. 6
Lecture No. 7
Lecture No. 8
Lecture No. 9
Lecture No. 10
Lecture No. 11
Lecture No. 12
  Measures, Metrics and Indicators
  Metrics for software quality
Lecture No. 13
Lecture No. 14
  Baseline
  Metrics for small organizations
Lecture No. 15
Lecture No. 16
Lecture No. 17
Lecture No. 18
Lecture No. 19
Lecture No. 20
Lecture No. 21
Lecture No. 22
Lecture No. 23
Lecture No. 24
Lecture No. 25
Lecture No. 26
Lecture No. 27
Lecture No. 28
Lecture No. 29
Lecture No. 30
Lecture No. 31
Lecture No. 32
Lecture No. 33
Lecture No. 34
  Release Numbering
  Internal Release Numbering
Lecture No. 35
Lecture No. 36
Lecture No. 37
Lecture No. 38
Lecture No. 39
Lecture No. 40
Lecture No. 41
Lecture No. 42
Lecture No. 43
Lecture No. 44
Lecture No. 45

Lecture No. 1
Introduction to Software Engineering

This course is a continuation of the first course on Software Engineering. In order to set the context of our discussion, let us first look at some definitions of software engineering.

Software Engineering is the set of processes and tools used to develop software.

Software Engineering is the combination of all the tools, techniques, and processes used in software production.

Software Engineering therefore encompasses everything that is used in software production, such as:

Programming Languages
Programming Language Design
Software Design Techniques
Tools
Testing
Maintenance
Development etc.

So everything that is related to software is also related to software engineering. Some of you may wonder how programming language design could be related to software engineering. If you look more closely at the definitions given above, you will see that software engineering covers everything that is helpful in software development, and that is also the case with programming language design. Programming language design has been one of the major successes of the last fifty years; the design of the Ada language, for example, was regarded as a considerable software engineering effort. These days object-oriented programming is widely used, and if programming languages did not support object-orientation it would be very difficult to implement object-oriented designs using object-oriented principles. All of these efforts form the basis of software engineering.

Well-Engineered Software

Let us now talk about what well-engineered software is. Well-engineered software has the following characteristics:

It is reliable
It has a good user-interface
It has acceptable performance
It is of good quality
It is cost-effective

Almost any company can build software given unlimited resources, but well-engineered software is software that conforms to all of the characteristics listed above. Software also has a very close relationship with economics: whenever we talk about engineering a system, we first analyze whether it is economically feasible or not. You therefore have to engineer all the activities of software development while keeping economic feasibility intact. The major challenge for a software engineer is to build software within limited time and budget, in a cost-effective way, and with good quality.

Therefore well-engineered software has the following characteristics:

Provides the required functionality
Maintainable
Reliable
Efficient
User-friendly
Cost-effective

Most of the time, however, software engineers end up with conflicts among these goals, and it is also a big challenge for a software engineer to resolve those conflicts.

The Balancing Act!

Software engineering is really a balancing act. You have to balance many things, such as cost, user-friendliness, efficiency, and reliability. You have to analyze which feature is the most important for your software: is it reliability, efficiency, user-friendliness, or something else? There is always a trade-off among these requirements. It may be the case that if you try to make the software more user-friendly, then efficiency may suffer.
And if you try to make it more cost-effective, then reliability may suffer. There is therefore always a trade-off between these characteristics of software; the requirements may be conflicting. For example, there may be tension among the following:

Cost vs. Efficiency
Cost vs. Reliability
Efficiency vs. User-interface

A software engineer is required to analyze these conflicting factors and try to strike a balance; the challenge is to balance these requirements. Software engineers always confront the challenge of striking a good balance among all of these things, depending on the requirements of the particular software system at hand. They should analyze how much weight each factor should get so that the system will have acceptable quality, acceptable performance, and an acceptable user-interface.

In some software, efficiency is more important and desirable. For example, in a cruise missile or a nuclear reactor controller driven by a software system, performance and reliability are far more important than cost-effectiveness and user-friendliness. In these cases, if the software does not react within a certain amount of time, the result may be a disaster like the Chernobyl accident. Software development is therefore a process of balancing the different characteristics of software described in the previous section, and coming up with such a good balance is an art that can be learned from experience.

Law of diminishing returns

In order to understand this concept, let us take a look at an example. Most of you will have noticed that if you dissolve sugar in a glass of water, the sweetness of the water increases gradually, but at a certain level of saturation no more sugar will dissolve in the water. Beyond that point of saturation the sweetness of the water does not increase, even if you add more sugar. The law of diminishing returns describes the same phenomenon. The situation is similar in software engineering: whenever you perform a task such as improving the efficiency, quality, or user-friendliness of the system, an element of cost is involved. If the quality of your system is not acceptable, then with the investment of a little money it can be improved to a great degree. But after a certain level of quality has been reached, the return on investment in the system's quality is reduced, meaning that the return on investment in the quality of the software becomes less than the effort or money we invest. Therefore, in most cases, after reaching a reasonable level of quality we do not try to improve the quality of the software any further. This phenomenon is shown in the figure below.

[Figure: cost versus benefit curve illustrating diminishing returns]

Software Background

Capers Jones, a renowned practitioner and researcher in the field of Software Engineering, has done extensive research on software team productivity, software quality, software cost factors, and other fields related to software engineering. He founded a company named Software Productivity Research, in which many projects were analyzed and the results published in the form of books. Let us look at a summary of these results. He divided software-related activities into about twenty-five different categories, based on the analysis of around 10,000 software projects.
But here to cut down the discussion we will only describe nine of them that are listed below. Project Management Requirement Engineering Design Coding Testing Software Quality Assurance Software Configuration Management Software Integration and Rest of the activities One thing to note here is that you cannot say that anyone of these activities is dominant among others in terms of effort putted into it. Here the point that we want to emphasize is that, though coding is very important but it is not more than 13-14% of the whole effort of software development. Fred Brook is a renowned software engineer; he wrote a great book related to software engineering named “A Mythical Man Month”. He combined all his articles in this book. Here we will discuss one of his articles named “No Silver Bullet” which he included in the book. An excerpt from “No Silver Bullet” – Fred Brooks Of all the monsters that fill the nightmares of our folklore, none terrify more than werewolves, because they transform unexpectedly from the familiar into horrors. For these we seek bullets of silver that can magically lay them to rest. The familiar software project has something of this character (at least as seen by the non-technical manager), usually innocent and straight forward, but capable of becoming a monster of missed schedules, blown budgets, and flawed projects. So we hear desperate cries for a silver bullet, something to make software costs drop as rapidly as computer hardware costs do. Scepticism is not pessimism, however. Although we see no startling breakthroughs, and indeed, such to be inconsistent with the nature of the software, many encouraging innovations are under way. A disciplined, consistent effort to develop, propagate and exploit them should indeed yield an order of magnitude improvement. There is no royal road, but there is a road. The first step towards the management of disease was replacement of demon theories and humours theories by the germ theory. The very first step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, © Copy Right Virtual University of Pakistan 7 CS605 Software Engineering-II VU and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today. So, according to Fred Brook, in the eye of an unsophisticated manager software is like a giant. Sometimes it reveals as an unscheduled delay and sometimes it shows up in the form of cost overrun. To kill this giant the managers look for magical solutions. But unfortunately magic is not a reality. We do not have any magic to defeat this giant. There is only one solution and that is to follow a disciplined approach to build software. We can defeat the giant named software by using disciplined and engineered approach towards software development. Therefore, Software Engineering is nothing but a disciplined and systematic approach to software development. Now we will look at some of the activities involved in the course of software development. The activities involved in software development can broadly be divided into two major categories first is construction and second is management. Software Development The construction activities are those that are directly related to the construction or development of the software. While the management activities are those that complement the process of construction in order to perform construction activities smoothly and effectively. 
A greater detail of the activities involved in the construction and management categories is presented below. Construction The construction activities are those that directly related to the development of software, e.g. gathering the requirements of the software, develop design, implement and test the software etc. Some of the major construction activities are listed below. Requirement Gathering Design Development Coding Testing Management Management activities are kind of umbrella activities that are used to smoothly and successfully perform the construction activities e.g. project planning, software quality assurance etc. Some of the major management activities are listed below. Project Planning and Management Configuration Management Software Quality Assurance Installation and Training © Copy Right Virtual University of Pakistan 8 CS605 Software Engineering-II VU Project Planning and Management Configuration Management Quality Assurance Management Installation and Training Construction Requirements Design Coding Testing Maintenance Figure1 Development Activities As we have said earlier that management activities are kind of umbrella activities that surround the construction activities so that the construction process may proceed smoothly. This fact is empathized in the Figure1. The figure shows that construction is surrounded by management activities. That is, certain processes and rules govern all construction activities. These processes and rules are related to the management of the construction activities and not the construction itself. A Software Engineering Framework The software development organization must have special focus on quality while performing the software engineering activities. Based on this commitment to quality by the organization, a software engineering framework is proposed that is shown in Figure 2. The major components of this framework are described below. As we have said earlier, the given framework is based on the organizational Quality Focus: commitment to quality. The quality focus demands that processes be defined for rational and timely development of software. And quality should be emphasized while executing these processes. The processes are set of key process areas (KPAs) for effectively manage and Processes: deliver quality software in a cost effective manner. The processes define the tasks to be performed and the order in which they are to be performed. Every task has some deliverables and every deliverable should be delivered at a particular milestone. Methods:Methods provide the technical “how-to’s” to carryout these tasks. There could be more than one technique to perform a task and different techniques could be used in different situations. Tools provide automated or semi-automated support for software processes, Tools: methods, and quality control. © Copy Right Virtual University of Pakistan 9 CS605 Software Engineering-II VU Method T O O Task Set Process L S Quality Focus Figure 2 Software Engineering Framework Software Development Loop Let’s now look at software engineering activities from a different perspective. Software development activities could be performed in a cyclic and that cycle is called software development loop which is shown in Figure3. The major stages of software development loop are described below. In this stage we determine what is the problem against which we are Problem Definition: going to develop software. Here we try to completely comprehend the issues and requirements of the software system to build. 
In this stage we try to find the solution of the problem on technical Technical Development: grounds and base our actual implementation on it. This is the stage where a new system is actually developed that solves the problem defined in the first stage. If there are already developed system(s) available with which our new Solution Integration: system has to interact then those systems should also be the part of our new system. All those existing system(s) integrate with our new system at this stage. After going through the previous three stages successfully, when we actually Status Quo: deployed the new system at the user site then that situation is called status quo. But once we get new requirements then we need to change the status quo. After getting new requirements we perform all the steps in the software development loop again. The software developed through this process has the property that this could be evolved and integrated easily with the existing systems. © Copy Right Virtual University of Pakistan 10 CS605 Software Engineering-II VU Problem Definition Technical Status Quo Development Solution Integration Figure3 Software Development Loop Overview of the course contents In the first course we studied the technical processes of software development to build industrial strength software. That includes requirement gathering and analysis, software design, coding, testing, and debugging. In this course our focus will be on the second part of Software Engineering, that is, the activities related to managing the technical development. This course will therefore include the following topics: 1. Software development process 2. Software process models 3. Project Management Concepts 4. Software Project Planning 5. Risk Analysis and Management 6. Project Schedules and Tracking 7. Software Quality Assurance 8. Software Configuration Management 9. Software Process and Project Metrics 10. Requirement Engineering Processes 11. Verification and Validation 12. Process Improvement 13. Legacy Systems 14. Software Change 15. Software Re-engineering © Copy Right Virtual University of Pakistan 11 CS605 Software Engineering-II VU Lecture No. 2 Software Process A software process is a road map that helps you create a timely, high quality result. It is the way we produce software and it provides stability and control. Each process defines certain deliverables known as the work products. These include programs, documents, and data produced as a consequence of the software engineering activities. Process Maturity and CMM The Software Engineering Institute (SEI) has developed a framework to judge the process maturity level of an organization. This framework is known as the Capability Maturity Model (CMM). This framework has 5 different levels and an organization is placed into one of these 5 levels. The following figure shows the CMM framework. These levels are briefly described as follows” 1. Level 1 – Initial: The software process is characterized as ad hoc and occasionally even chaotic. Few processes are defined, and success depends upon individual effort. By default every organization would be at level 1. 2. Level 2 – Repeatable: Basic project management processes are established to track cost, schedule, and functionality. The necessary project discipline is in place to repeat earlier successes on projects with similar applications. 3. 
Level 3 – Defined: The software process for both management and engineering activities is documented, standardized, and integrated into an organizational software process. All projects use a documented and approved version of the organization’s process for developing and supporting software. 4. Level 4 – Managed: Detailed measures for software process and product quality are controlled. Both the software process and products are quantitatively understood and controlled using detailed measures. 5. Level 5 – Optimizing: Continuous process improvement is enabled by qualitative feedback from the process and from testing innovative ideas and technologies. SEI has associated key process areas with each maturity level. The KPAs describe those software engineering functions that must be present to satisfy good practice at a particular level. Each KPA is described by identifying the following characteristics: © Copy Right Virtual University of Pakistan 12 CS605 Software Engineering-II VU 1. Goals: the overall objectives that the KPA must achieve. 2. Commitments: requirements imposed on the organization that must be met to achieve the goals or provide proof of intent to comply with the goals. 3. Abilities: those things that must be in place – organizationally and technically – to enable the organization to meet the commitments. 4. Activities: the specific tasks required to achieve the KPA function 5. Methods for monitoring implementation: the manner in which the activities are monitored as they are put into place. 6. Methods for verifying implementation: the manner in which proper practice for the KPA can be verified. Each of the KPA is defined by a set of practices that contribute to satisfying its goals. The key practices are policies, procedures, and activities that must occur before a key process area has been fully instituted. The following table summarizes the KPAs defined for each level. Level KPAs 1 No KPA is defined as organizations at this level follow ad-hoc processes 2 Software Configuration Management Software Quality Assurance Software subcontract Management Software project tracking and oversight Software project planning Requirement management 3 Peer reviews Inter-group coordination Software product Engineering Integrated software management Training program Organization process management Organization process focus 4 Software quality management Quantitative process management 5 Process change management Technology change management Defect prevention © Copy Right Virtual University of Pakistan 13 CS605 Software Engineering-II VU Lecture No. 3 Software Lifecycle Models Recalling from our first course, a software system passes through the following phases: 1. Vision – focus on why 2. Definition – focus on what 3. Development – focus on how 4. Maintenance – focus on change During these phases, a number of activities are performed. A lifecycle model is a series of steps through which the product progresses. These include requirements phase, specification phase, design phase, implementation phase, integration phase, maintenance phase, and retirement. Software Development Lifecycle Models depict the way you organize your activities. There are a number of Software Development Lifecycle Models, each having its strengths and weaknesses and suitable in different situations and project types. 
The list of models includes the following: Build-and-fix model Waterfall model Rapid prototyping model Incremental model Extreme programming Synchronize-and-stabilize model Spiral model Object-oriented life-cycle models In the following sections we shall study these models in detail and discuss their strengths and weaknesses. Build and Fix Model This model is depicted in the following diagram: © Copy Right Virtual University of Pakistan 14 CS605 Software Engineering-II VU It is unfortunate that many products are developed using what is known as the build-and- fix model. In this model the product is constructed without specification or any attempt at design. The developers simply build a product that is reworked as many times as necessary to satisfy the client. This model may work for small projects but is totally unsatisfactory for products of any reasonable size. The cost of build-and fix is actually far greater than the cost of properly specified and carefully designed product. Maintenance of the product can be extremely in the absence of any documentation. Waterfall Model The first published model of the software development process was derived from other engineering processes. Because of the cascade from one phase to another, this model is known as the waterfall model. This model is also known as linear sequential model. This model is depicted in the following diagram. The principal stages of the model map directly onto fundamental development activities. It suggests a systematic, sequential approach to software development that begins at the system level and progresses through the analysis, design, coding, testing, and maintenance. In the literature, people have identified from 5 to 8 stages of software development. The five stages above are as follows: 1. Requirement Analysis and Definition: What - The systems services, constraints and goals are established by consultation with system users. They are then defined in detail and serve as a system specification. 2. System and Software Design: How – The system design process partitions the requirements to either hardware of software systems. It establishes and overall system architecture. Software design involves fundamental system abstractions and their relationships. © Copy Right Virtual University of Pakistan 15 CS605 Software Engineering-II VU 3. Implementation and Unit Testing: - How – During this stage the software design is realized as a set of programs or program units. Unit testing involves verifying that each unit meets its specifications. 4. Integration and system testing: The individual program unit or programs are integrated and tested as a complete system to ensure that the software requirements have been met. After testing, the software system is delivered to the customer. 5. Operation and Maintenance: Normally this is the longest phase of the software life cycle. The system is installed and put into practical use. Maintenance involves correcting errors which were not discovered in earlier stages of the life-cycle, improving the implementation of system units and enhancing the system’s services as new requirements are discovered. In principle, the result of each phase is one or more documents which are approved. No phase is complete until the documentation for that phase has been completed and products of that phase have been approved. The following phase should not start until the previous phase has finished. Real projects rarely follow the sequential flow that the model proposes. 
In general these phases overlap and feed information to each other. Hence there should be an element of iteration and feedback. A mistake caught any stage should be referred back to the source and all the subsequent stages need to be revisited and corresponding documents should be updated accordingly. This feedback path is shown in the following diagram. Because of the costs of producing and approving documents, iterations are costly and require significant rework. The Waterfall Model is a documentation-driven model. It therefore generates complete and comprehensive documentation and hence makes the maintenance task much easier. It however suffers from the fact that the client feedback is received when the product is finally delivered and hence any errors in the requirement specification are not discovered until the product is sent to the client after completion. This therefore has major time and cost related consequences. © Copy Right Virtual University of Pakistan 16 CS605 Software Engineering-II VU Rapid Prototyping Model The Rapid Prototyping Model is used to overcome issues related to understanding and capturing of user requirements. In this model a mock-up application is created “rapidly” to solicit feedback from the user. Once the user requirements are captured in the prototype to the satisfaction of the user, a proper requirement specification document is developed and the product is developed from scratch. An essential aspect of rapid prototype is embedded in the word “rapid”. The developer should endeavour to construct the prototype as quickly as possible to speedup the software development process. It must always be kept in mind that the sole purpose of the rapid prototype is to capture the client’s needs; once this has been determined, the rapid prototype is effectively discarded. For this reason, the internal structure of the rapid prototype is not relevant. Integrating the Waterfall and Rapid Prototyping Models Despite the many successes of the waterfall model, it has a major drawback in that the delivered product may not fulfil the client’s needs. One solution to this is to combine rapid prototyping with the waterfall model. In this approach, rapid prototyping can be used as a requirement gathering technique which would then be followed by the activities performed in the waterfall model. © Copy Right Virtual University of Pakistan 17 CS605 Software Engineering-II VU Lecture No. 4 Incremental Models As discussed above, the major drawbacks of the waterfall model are due to the fact that the entire product is developed and delivered to the client in one package. This results in delayed feedback from the client. Because of the long elapsed time, a huge new investment of time and money may be required to fix any errors of omission or commission or to accommodate any new requirements cropping up during this period. This may render the product as unusable. Incremental model may be used to overcome these issues. In the incremental models, as opposed to the waterfall model, the product is partitioned into smaller pieces which are then built and delivered to the client in increments at regular intervals. Since each piece is much smaller than the whole, it can be built and sent to the client quickly. This results in quick feedback from the client and any requirement related errors or changes can be incorporated at a much lesser cost. It is therefore less traumatic as compared to the waterfall model. It also required smaller capital outlay and yield a rapid return on investment. 
However, this model needs and open architecture to allow integration of subsequent builds to yield the bigger product. A number of variations are used in object-oriented life cycle models. There are two fundamental approaches to the incremental development. In the first case, the requirements, specifications, and architectural design for the whole product are completed before implementation of the various builds commences. © Copy Right Virtual University of Pakistan 18 CS605 Software Engineering-II VU In a more risky version, once the user requirements have been elicited, the specifications of the first build are drawn up. When this has been completed, the specification team Build 1 Implementation, Specification Design Deliver to client integration Build 2 Implementation, Specification Design Deliver to client integration Build 3 Implementation, Specification Design Deliver to client integration Build n Implementation, Specification Design Deliver to client integration Specification team Implementation, Design team integration team turns to the specification of the second build while the design team designs the first build. Thus the various builds are constructed in parallel, with each team making use of the information gained in the all the previous builds. This approach incurs the risk that the resulting build will not fit together and hence requires careful monitoring. Rapid Application Development (RAD) Rapid application development is another form of incremental model. It is a high speed adaptation of the linear sequential model in which fully functional system in a very short time (2-3 months). This model is only applicable in the projects where requirements are well understood and project scope is constrained. Because of this reason it is used primarily for information systems. Synchronize and Stabilize Model This is yet another form of incremental model adopted by Microsoft. In this model, during the requirements analysis interviews of potential customers are conducted and requirements document is developed. Once these requirements have been captured, specifications are drawn up. The project is then divided into 3 or 4 builds. Each build is carried out by small teams working in parallel. At the end of each day the code is synchronized (test and debug) and at the end of the build it is stabilized by freezing the build and removing any remaining defects. Because of the synchronizations, components always work together. The presence of an executable provides early insights into operation of product. © Copy Right Virtual University of Pakistan 19 CS605 Software Engineering-II VU Spiral Model This model was developed by Barry Boehm. The main idea of this model is to avert risk as there is always an element of risk in development of software. For example, key personnel may resign at a critical juncture, the manufacturer of the software development may go bankrupt, etc. In its simplified form, the Spiral Model is Waterfall model plus risk analysis. In this case each stage is preceded by identification of alternatives and risk analysis and is then followed by evaluation and planning for the next phase. If risks cannot be resolved, project is immediately terminated. This is depicted in the following diagram. Risk Analysis Rapid Prototype Specification Design Verify Implementation As can be seen, a Spiral Model has two dimensions. Radial dimension represents the cumulative cost to date and the angular dimension represents the progress through the spiral. 
Each phase begins by determining objectives of that phase and at each phase a new process model may be followed. © Copy Right Virtual University of Pakistan 20 CS605 Software Engineering-II VU A full version of the Spiral Model is shown below: The main strength of the Spiral Model comes from the fact that it is very sensitive to the risk. Because of the spiral nature of development it is easy to judge how much to test and there is no distinction between development and maintenance. It however can only be used for large-scale software development and that too for internal (in-house) software only. © Copy Right Virtual University of Pakistan 21 CS605 Software Engineering-II VU Determine Identify and objectives, resolve risks alternatives, constraints Develop and verify Plan Next next-level Phase product © Copy Right Virtual University of Pakistan 22 CS605 Software Engineering-II VU Lecture No. 5 Object-Oriented Lifecycle Models Object-oriented lifecycle models appreciate the need for iteration within and between phases. There are a number of these models. All of these models incorporate some form of iteration, parallelism, and incremental development. eXtreme Programming It is a somewhat controversial new approach. In this approach user requirements are captured through stories which are the scenarios presenting the features needed by the client? Estimate for duration and cost of each story is then carried out. Stories for the next build are selected. Then each build is divided into tasks. Test cases for task are drawn up first before and development and continuous testing is performed throughout the development process. Architectural User stories spike Release Iteration Acceptance Planning test Spike Small release One very important feature of eXtreme programming is the concept of pair programming. In this, a team of two developers develop the software, working in team as a pair to the extent that they even share a single computer. In eXtereme Programming model, computers are put in center of large room lined with cubicles and client representative is always present. One very important restriction imposed in the model is that no team is allowed to work overtime for 2 successive weeks. XP has had some successes. It is good when requirements are vague or changing and the overall scope of the project is limited. It is however too soon to evaluate XP. Fountain Model Fountain model is another object-oriented lifecycle model. This is depicted in the following diagram. © Copy Right Virtual University of Pakistan 23 CS605 Software Engineering-II VU Maintenance Further development Operations Implementation and integration Implementation Object-oriented design Object-oriented analysis Requirement In this model the circles representing the various phases overlap, explicitly representing an overlap between activities. The arrows within a phase represent iteration within the phase. The maintenance cycle is smaller, to symbolize reduced maintenance effort when the object oriented paradigm is used. Rational Unified Process (RUP) Rational Unified Process is very closely associated with UML and Krutchen’s architectural model. In this model a software product is designed and built in a succession of incremental iterations. It incorporates early testing and validation of design ideas and early risk mitigation. The horizontal dimension represents the dynamic aspect of the process. This includes cycles, phases, iterations, and milestones. 
The vertical dimension represents the static aspect of the process described in terms of process components which include activities, disciplines, artifacts, and roles. The process emphasizes that during development, all activities are performed in parallel, however, and at a given time one activity may have more emphasis than the other. The following figure depicting RUP is taken from Krutchen’s paper. © Copy Right Virtual University of Pakistan 24 CS605 Software Engineering-II VU Comparison of Lifecycle Models As discussed above, each lifecycle model has some strengths and weaknesses. These are summarized in the following table: The criteria to be used for deciding on a model include the organization, its management, skills of the employees, and the nature of the product. No single model may fulfill the needs in a given situation. It may therefore be best to devise a lifecycle model tuned to your own needs by creating a “Mix-and-match” life-cycle model. Quality Assurance and Documentation It may be noted that there is no separate QA or documentation phase. QA is an activity performed throughout software production. It involves verification and validation. © Copy Right Virtual University of Pakistan 25 CS605 Software Engineering-II VU Verification is performed at the end of each phase whereas validation is performed before delivering the product to the client. Similarly, every phase must be fully documented before starting the next phase. It is important to note that postponed documentation may never be completed as the responsible individual may leave. Documentation is important as the product is constantly changing—we need the documentation to do this. The design (for example) will be modified during development, but the original designers may not be available to document it. The following table shows the QA and documentation activities associated with each stage. Phase Documents QA Requirement Rapid prototype, or Rapid prototype Definition Requirements document Reviews Functional Specification document (specifications) Traceability Specification Software Product Management Plan FS Review Check the SPMP Design Architectural Design Traceability Detailed Design Review Coding Source code Traceability Test cases Review Testing Integration Source code Integration testing Test cases Acceptance testing Maintenance Change record Regression testing Regression test cases © Copy Right Virtual University of Pakistan 26 CS605 Software Engineering-II VU Lecture No. 6 Software Project Management Concepts Software project management is a very important activity for successful projects. In fact, in an organization at CMM Level basic project management processes are established to track cost, schedule, and functionality. That is, it is characterized by basic project management practices. It also implies that without project management not much can be achieved. Capers Jones, in his book on Software Best Practices, notes that, for the projects they have analyzed, good project management was associated with 100% of the successful project and bad project management was associated with 100% of the unsuccessful projects. Therefore, understanding of good project management principles and practices is essential for all project managers and software engineers. Software project management involves that planning, organization, monitoring, and control of the people and the processes. 
Software Project Management: Factors that influence results The first step towards better project management is the comprehension of the factors that influence results of a project. Among these, the most important factors are: – Project size As the project size increases, the complexity of the problem also increases and therefore its management also becomes more difficult. – Delivery deadline Delivery deadline directly influences the resources and quality. With a realistic deadline, chances of delivering the product with high quality and reasonable resources increase tremendously as compared to an unrealistic deadline. So a project manager has to first determine a realistic and reasonable deadline and then monitor the project progress and ensure timely delivery. – Budgets and costs A project manager is responsible for ensuring delivery of the project within the allocated budget and schedule. A good estimate of budget, cost and schedule is essential for any successful project. It is therefore imperative that the project manager understand and learns the techniques and principle needed to develop these estimates. – Application domain Application domain also plays an important role in the success of a project. The chances of success of a project in a well-known application domain would be much better than of a project in a relatively unknown domain. The project manager thus needs to implement measures to handle unforeseen problems that may arise during the project lifecycle. – Technology to be implemented Technology also plays a very significant role in the success or failure of a project. One the one hand, a new “state-of-the-art” technology may increase the productivity of the team and quality of the product. On the other hand, it may prove to be unstable and hence © Copy Right Virtual University of Pakistan 27 CS605 Software Engineering-II VU prove to be difficult to handle. Resultantly, it may totally blow you off the track. So, the project manager should be careful in choosing the implementation technology and must take proper safeguard measures. – System constraints The non-functional requirement or system constraints specify the conditions and the restrictions imposed on the system. A system that fulfils all its functional requirements but does not satisfy the non-functional requirements would be rejected by the user. – User requirements A system has to satisfy its user requirements. Failing to do so would render this system unusable. – Available resources A project has to be developed using the available resources who know the domain as well as the technology. The project manager has to ensure that the required number of resources with appropriate skill-set is available to the project. Project Management Concerns In order to plan and run a project successfully, a project manager needs to worry about the following issues: 1. Product quality: what would be the acceptable quality level for this particular project and how could it be ensured? 2. Risk assessment: what would be the potential problems that could jeopardize the project and how could they be mitigated? 3. Measurement: how could the size, productivity, quality and other important factors be measured and benchmarked? 4. Cost estimation: how could cost of the project be estimated? 5. Project schedule: how could the schedule for the project be computed and estimated? 6. Customer communication: what kind of communication with the customer would be needed and how could it be established and maintained consistently? 7. 
Staffing: how many people with what kind of resources would be needed and how that requirement could be fulfilled? 8. Other resources: what other hardware and software resources would be needed for the project? 9. Project monitoring: how the progress of the project could be monitored? Thorough understanding and appreciation of these issues leads to the quest for finding satisfactory answers to these problems and improves the chances for success of a project. Why Projects Fail? A project manager is tasked to ensure the successful development of a product. Success cannot be attained without understanding the reasons for failure. The main reasons for the failure of software projects are: 1. changing customer requirements 2. ambiguous/incomplete requirements © Copy Right Virtual University of Pakistan 28 CS605 Software Engineering-II VU 3. unrealistic deadline 4. an honest underestimate of effort 5. predictable and/or unpredictable risks 6. technical difficulties 7. miscommunication among project staff 8. failure in project management The first two points relate to good requirement engineering practices. Unstable user requirements and continuous requirement creep has been identified as the top most reason for project failure. Ambiguous and incomplete requirements lead to undesirable product that is rejected by the user. As discussed earlier, delivery deadline directly influences the resources and quality. With a realistic deadline, chances of delivering the product with high quality and reasonable resources increase tremendously as compared to an unrealistic deadline. An unrealistic deadline could be enforced by the management or the client or it could be due to error in estimation. In both these cases it often results in disaster for the project. A project manager who is not prepared and without a contingency plan for all sorts of predictable and unpredictable risks would put the project in jeopardy if such a risk should happen. Risk assessment and anticipation of technical and other difficulties allows the project manager to cope with these situations. Miscommunication among the project staff is another very important reason for project failure. Lack of proper coordination and communication in a project results in wastage of resources and chaos. The Management Spectrum Effective project management focuses on four aspects of the project known as the 4 P’s. These are: people, product, process, and project. People Software development is a highly people intensive activity. In this business, the software factory comprises of the people working there. Hence taking care of the first P, that is people, should take the highest priority on a project manager’s agenda. Product The product is the outcome of the project. It includes all kinds of the software systems. No meaningful planning for a project can be carried-out until all the dimensions of the product including its functional as well as non-functional requirements are understood and all technical and management constraints are identified. Process Once the product objectives and scope have been determined, a proper software development process and lifecycle model must be chosen to identify the required work products and define the milestones in order to ensure streamlined development activities. It includes the set of all the framework activities and software engineering tasks to get the job done. 
© Copy Right Virtual University of Pakistan 29 CS605 Software Engineering-II VU Project A project comprises of all work the required to make the product a reality. In order to avoid failure, a project manager and software engineer is required to build the software product in a controlled and organized fashion and run it like other projects found in more concrete domains. We now discuss these 4 in more detail. People In a study published by IEEE, the project team was identified by the senior executives as the most important contributor to a successful software project. However, unfortunately, people are often taken for granted and do no get the attention and focus they deserve. There are a number of players that participate in software process and influence the outcome of the project. These include senior managers, project (technical) managers, practitioners, customers, and end-users. Senior managers define the business vision whereas the project managers plan, motivate, organize and control the practitioners who work to develop the software product. To be effective, the project team must be organized to use each individual to the best of his/her abilities. This job is carried out by the team leader. Team Leader Project management is a people intensive activity. It needs the right mix of people skills. Therefore, competent practitioners often make poor team leaders. Leaders should apply a problem solving management style. That is, a project manager should concentrate on understanding the problem to be solved, managing the flow of ideas, and at the same time, letting everyone on the team know that quality counts and that it will not be compromised. MOI model of leadership developed by Weinberg suggest that a leadership needs Motivation, Organization, and Innovation. Motivation is the ability to encourage technical people to produce to their best. Organization is the ability to mold the existing processes (or invent new ones) that will enable the initial concept to be translated into a final product, and Idea or Innovation is the ability to encourage people to create and feel creative. It is suggested that successful project managers apply a problem solving management style. This involves developing an understanding of the problem and motivating the team to generate ideas to solve the problem. Edgemon suggests that the following characteristics are needed to become an effective project manager: Problem Solving – Should be able to diagnose technical and organizational issues and be willing to change direction if needed. Managerial Identity – Must have the confidence to take control when necessary © Copy Right Virtual University of Pakistan 30 CS605 Software Engineering-II VU Achievement – Reward initiative (controlled risk taking) and accomplishment Influence and team building – Must remain under control in high stress conditions. Should be able to read signals and address peoples’ needs. DeMarco says that a good leader possesses the following four characteristics: – Heart: the leader should have a big heart. – Nose: the leader should have good nose to spot the trouble and bad smell in the project. – Gut: the leader should have the ability to make quick decisions on gut feeling. – Soul: the leader should be the soul of the team. If analyzed closely, all these researchers seem to say essentially the same thing and they actually complement each other’s point of view. © Copy Right Virtual University of Pakistan 31 CS605 Software Engineering-II VU Lecture No. 
7 The Software Team There are many possible organizational structures. In order to identify the most suitable structure, the following factors must be considered: the difficulty of the problem to be solved the size of the resultant program(s) in lines of code or function points the time that the team will stay together (team lifetime) the degree to which the problem can be modularized the required quality and reliability of the system to be built the rigidity of the delivery date the degree of sociability (communication) required for the project Constantine suggests that teams could be organized in the following generic structural paradigms: closed paradigm—structures a team along a traditional hierarchy of authority random paradigm—structures a team loosely and depends on individual initiative of the team members open paradigm—attempts to structure a team in a manner that achieves some of the controls associated with the closed paradigm but also much of the innovation that occurs when using the random paradigm synchronous paradigm—relies on the natural compartmentalization of a problem and organizes team members to work on pieces of the problem with little active communication among themselves Mantei suggests the following three generic team organizations: Democratic decentralized (DD) In this organization there is no permanent leader and task coordinators are appointed for short duration. Decisions on problems and approach are made by group consensus and communication among team is horizontal. Controlled decentralized (CD) In CD, there is a defined leader who coordinates specific tasks. However, problem solving remains a group activity and communication among subgroups and individuals is horizontal. Vertical communication along the control hierarchy also occurs. Controlled centralized (CC) In a Controlled Centralized structure, top level problem solving and internal team coordination are managed by the team leader and communication between the leader and team members is vertical. Centralized structures complete tasks faster and are most useful for handling simple problems. On the other hand, decentralized teams generate more and better solutions than individuals and are most useful for complex problems For the team morale point of view, DD is better. © Copy Right Virtual University of Pakistan 32 CS605 Software Engineering-II VU Coordination and Communication Issues Lack of coordination results in confusion and uncertainty. On the other hand, performance is inversely proportional to the amount of communication and hence too much communication and coordination is also not healthy for the project. Very large projects are best addressed with CC or CD when sub-grouping can be easily accommodated. Kraul and Steeter categorize the project coordination techniques as follows: Formal, impersonal approaches In these approaches, coordination is achieved through impersonal and formal mechanism such as SE documents, technical memos, schedules, error tracking reports. Formal, interpersonal procedures In this case, the approaches are interpersonal and formal. These include QA activities, design and code reviews, and status meetings. Informal, interpersonal procedures This approach employs informal interpersonal procedures and includes group meetings and collocating different groups together. Electronic communication includes emails and bulletin boards. 
Interpersonal networking
Interpersonal networking includes informal discussions with group members.

The effectiveness of these approaches was summarized by Kraut and Streeter in a diagram that plots the value of each technique against how widely it is used: techniques that fall above the regression line yield a higher value-to-use ratio than those below the line.

The Product: Defining the Problem
In order to develop an estimate and a plan for the project, the scope of the problem must be established. This includes the context, the information objectives, and the function and performance requirements. The estimate and plan are then developed by decomposing the problem and establishing a functional partitioning.

The Process
The next step is to decide which process model to pick. The project manager has to look at the characteristics of the product to be built and the project environment. For example, for a relatively small project that is similar to past efforts, the degree of uncertainty is minimized and hence the waterfall or linear sequential model could be used. For tight timelines, a heavily compartmentalized problem, and a known domain, the RAD model is more suitable. Projects with large functionality that must be delivered quickly are best developed incrementally, and for a project in which requirements are uncertain, the prototyping model is more suitable.

Lecture No. 8

The Project Management
As discussed earlier, a project manager must understand what can go wrong and how to do it right. Reel has defined a five-step process to improve the chances of success:
– Start on the right foot: this is accomplished by putting in the required effort to understand the problem, set realistic objectives, build the right team, and provide the needed infrastructure.
– Maintain momentum: many projects, after starting on the right foot, lose focus and momentum. The initial momentum must be maintained till the very end.
– Track progress: no plan is useful if progress against it is not tracked. Tracking makes it possible to detect slippage early and take remedial action in time.
– Make smart decisions.
– Conduct a postmortem analysis: in order to learn from mistakes and improve the process continuously, a project postmortem must be conducted.

W5HH Principle
Barry Boehm has suggested a systematic approach to project management known as the W5HH (WWWWWHH) principle. It comprises seven questions; finding the answers to these seven questions is essentially all a project manager has to do:

WHY is the system being developed?
WHAT will be done? By WHEN?
WHO is responsible for a function?
WHERE are they located organizationally?
HOW will the job be done technically and managerially?
HOW MUCH of each resource (e.g., people, software, tools, database) will be needed?

Boehm's W5HH principle is applicable regardless of the size and complexity of the project and provides an excellent planning outline.

Critical Practices
The Airlie Council has developed a list of critical success practices that must be present for successful project management. These are:

Formal risk analysis
Empirical cost and schedule estimation
Metrics-based project management
Earned value tracking
Defect tracking against quality targets
People-aware project management

Adopting these practices is the key to successful projects. We will therefore spend a considerable amount of time elaborating them.
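One of the practices listed above, earned value tracking, is quantitative enough to sketch in code. The index definitions below (SPI and CPI) are the standard earned value formulas rather than anything defined in this lecture, and the project figures are invented purely for illustration.

```python
# A minimal sketch of earned value tracking, one of the Airlie Council's
# critical practices listed above. SPI and CPI are the standard earned value
# indices; the numbers below are hypothetical.

def earned_value_indices(bcws, bcwp, acwp):
    """bcws: budgeted cost of work scheduled, bcwp: budgeted cost of work
    performed (earned value), acwp: actual cost of work performed."""
    spi = bcwp / bcws   # schedule performance index (< 1.0 means behind schedule)
    cpi = bcwp / acwp   # cost performance index (< 1.0 means over budget)
    return spi, cpi

if __name__ == "__main__":
    # Hypothetical project snapshot, in person-days of budgeted effort.
    spi, cpi = earned_value_indices(bcws=120.0, bcwp=100.0, acwp=130.0)
    print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")   # SPI = 0.83, CPI = 0.77
```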
Lecture No. 9

Software Size Estimation
The size of the software needs to be estimated in order to figure out the time needed, in terms of calendar time and man-months, as well as the number and type of resources required to carry out the job. The time and resource estimates eventually play a significant role in determining the cost of the project.

Most organizations use their previous experience to estimate the size, and hence the resource and time requirements, of a project. If not quantified, this estimate is subjective and is only as good as the person conducting the exercise, which at times makes it highly contentious. It is therefore imperative for an organization to adopt an estimation mechanism that is:
1. Objective in nature.
2. An accepted standard with a widespread level of use and acceptance.
3. Able to serve as a single yardstick to measure and make comparisons.
4. Based upon a deliverable that is meaningful to the intended audience.
5. Independent of the tool and technology used for developing the software.

A number of techniques and tools can be used in estimating the size of the software. These include:
1. Lines of code (LOC)
2. Number of objects
3. Number of GUIs
4. Number of document pages
5. Function points (FP)

Comparison of LOC and FPA
Of these, the two most widely used metrics for the measurement of software size are FP and LOC. The LOC metric suffers from the following shortcomings:
1. There are a number of questions regarding the definition of a line of code. These include:
a. Should physical lines or logical lines be counted?
b. What types of lines should be counted? For example, should comments, data definitions, and blank lines be counted or not?
2. LOC is heavily dependent upon the individual programming style.
3. It is dependent upon the technology, and hence it is difficult to compare applications developed in two different languages. This is true even for seemingly close languages such as C++ and Java.
4. If a mixture of languages and tools is used, then the comparison is even more difficult. For example, it is not possible to compare a project that delivers a 100,000-line mixture of Assembly, C++, SQL, and Visual Basic with one that delivers 100,000 lines of COBOL.
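A tiny, purely illustrative example of the first two shortcomings: both functions below implement exactly the same decision, yet a physical line count charges one line for the first and several for the second, so the reported "size" depends on the counting convention and the programmer's style rather than on the functionality delivered.

```python
# Both functions implement the same logic. Depending on whether physical or
# logical lines are counted, and on the programmer's style, the LOC figure
# for identical functionality varies several-fold.

def grade_terse(score): return "pass" if score >= 50 else "fail"


def grade_verbose(score):
    # Same decision, written in a more expansive style.
    if score >= 50:
        return "pass"
    else:
        return "fail"
```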
FP measures the size of the functionality provided by the software. The functionality is measured as a function of the data and the operations performed on that data. The measure is independent of the tool and technology used and hence provides a consistent basis for comparison between various organizations and projects.

The biggest advantage of FP over LOC is that LOC can be counted only AFTER the code has been developed, while FP can be counted even at the requirements phase and hence can be used for planning and estimation, which LOC cannot. Another major distinction between FP and LOC is that LOC measures the application from a developer's perspective, while FP is a measure of the size of the functionality from the user's perspective. The user's view, as defined by IFPUG, is as follows:

A user view is a description of the business functions and is approved by the user. It represents a formal description of the user's business needs in the user's language. It can vary in physical form (e.g., catalog of transactions, proposals, requirements document, external specifications, detailed specifications, user handbook). Developers translate the user information into information technology language in order to provide a solution.

Function point analysis counts the application size from the user's point of view. It is accomplished using information expressed in a language that is common to both the user(s) and the developers. Therefore, Function Point Analysis measures the size of the functionality delivered and used by the end user, as opposed to the volume of the artifacts and code.

The Paradox of Reversed Productivity for High-Level Languages
Consider the following example, in which it is assumed that the same functionality is implemented once in Assembler and once in Ada:

                               Assembler Version    Ada Version    Difference
Source Code Size                     100,000           25,000        -75,000

Activity – in person-months
Requirements                              10               10              0
Design                                    25               25              0
Coding                                   100               20            -80
Documentation                             15               15              0
Integration and Testing                   25               15            -10
Management                                25               15            -10
Total Effort                             200              100           -100

Total Cost                        $1,000,000         $500,000      -$500,000
Cost Per Line                            $10              $20            $10
Lines Per Person-Month                   500              250           -250

Coding in Assembler is far more difficult and verbose than coding in Ada, so it takes more effort and produces many more lines for the same functionality. Because the Assembler version is four times larger in lines of code, its cost per line works out to be much lower than Ada's, even though its total cost is double. Judged by cost per line (or lines per person-month), Assembler appears more cost-effective than Ada, while in reality it is not. This is a paradox!
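The arithmetic behind the paradox can be made concrete with a few lines of code. The figures are taken directly from the table above; the only assumption is the one stated in the example, namely that both versions deliver identical functionality.

```python
# The Assembler/Ada numbers from the table above. Both versions deliver the
# same functionality, so the only fair productivity comparisons are those
# made per unit of functionality (or total cost), not per line of code.

versions = {
    "Assembler": {"loc": 100_000, "effort_pm": 200, "cost": 1_000_000},
    "Ada":       {"loc":  25_000, "effort_pm": 100, "cost":   500_000},
}

for name, v in versions.items():
    cost_per_line = v["cost"] / v["loc"]
    lines_per_pm = v["loc"] / v["effort_pm"]
    print(f"{name:9s}  cost/line = ${cost_per_line:,.0f}  "
          f"lines/person-month = {lines_per_pm:,.0f}  total cost = ${v['cost']:,}")

# Output:
#   Assembler  cost/line = $10  lines/person-month = 500  total cost = $1,000,000
#   Ada        cost/line = $20  lines/person-month = 250  total cost = $500,000
# Judged on cost per line or lines per person-month, Assembler "wins",
# even though the Ada version costs half as much overall.
```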
Function Point Analysis – A Brief History and Usage
In the mid-1970s, IBM felt the need to establish a more effective measure of system size in order to predict the delivery of software, and it commissioned Allan Albrecht to lead this effort. As a result, he developed the approach that is today known as Function Point Analysis. After several years of internal use, Albrecht introduced the methodology at a joint SHARE/GUIDE conference. From 1979 to 1984, continued statistical analysis was performed on the method and refinements were made. At that point a non-profit organization, the International Function Point Users Group (IFPUG), was formed, which formally took upon itself the role of refining and defining the counting rules. The result is the function point methodology that we use today.

Since 1979, when Albrecht published his first paper on FP, its popularity and use have been increasing consistently, and today it is used as a de facto standard for software measurement. Following is a short list of organizations using FP for estimation:
1. IEEE recommends it for use in productivity measurement and reporting.
2. Several governments, including those of the UK, Canada, and Hong Kong, have been using it, and it has been recommended to these governments that all public-sector projects use FP as the standard for the measurement of software size.
3. The government of the Australian state of Victoria has been using FP since 1997 for managing and outsourcing projects to the tune of US$ 50 million every year.
4. In the US, several large government departments, including the IRS, have adopted FP analysis as a standard for outsourcing, measurement, and control of software projects.
5. A number of big organizations, including Digital Corporation and IBM, have been using FP internally for many years.

Usage of FP includes:
– Effort and scope estimation
– Project planning
– Determining the impact of additional or changed requirements
– Resource planning/allocation
– Benchmarking and target setting
– Contract negotiations

Following is a list of some of the FP-based metrics used for these purposes:
Size – function points
Defects – per function point
Effort – staff-months
Productivity – function points per staff-month
Duration – schedule (calendar) months
Time efficiency – function points per month
Cost – per function point

Lecture No. 10

Function Point Counting Process
The function point counting process consists of the following steps:
1. Determine the type of count (development, enhancement, or application count).
2. Define the application boundary.
3. Count the data functions (internal logical files, ILF, and external interface files, EIF) and the transactional functions (external inputs, EI, external outputs, EO, and external inquiries, EQ). Together these give the unadjusted function point count (UFP).
4. Calculate the value adjustment factor (VAF) from the contribution of the 14 general system characteristics.
5. Calculate the adjusted function point count as UFP * VAF.

These steps are elaborated in the following subsections. The terms and definitions are the ones used by IFPUG and have been taken directly from the IFPUG Function Point Counting Practices Manual (CPM) Release 4.1. The following can therefore be treated as an abridged version of the IFPUG CPM Release 4.1.

Determining the type of count
A function point count may be divided into the following types:
1. Development count: a development function point count includes all functions impacted (built or customized) by the project activities.
2. Enhancement count: an enhancement function point count includes all the functions being added, changed, and deleted. The boundary of the application(s) impacted remains the same. The functionality of the application(s) reflects the impact of the functions being added, changed, or deleted.
3. Application count: an application function point count may include, depending on the purpose (e.g., providing a package as the software solution), either (a) only the functions being used by the user, or (b) all the functions delivered. The application boundary of the two counts is the same and is independent of the scope.

Defining the Application Boundary
The application boundary is basically the boundary shown in a system context diagram and determines the scope of the count. It indicates the border between the software being measured and the user. It is the conceptual interface between the 'internal' application and the 'external' user world. It depends upon the user's external view of the system and is independent of the tool and technology used to accomplish the task. The position of the application boundary is important because it impacts the result of the function point count. The application boundary assists in identifying the data entering the application that will be included in the scope of the count.
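Before turning to the individual counts, the sketch below shows how the steps above combine numerically. The complexity weights and the VAF formula are the standard IFPUG values rather than anything defined up to this point in the notes, and the counts in the example are invented for illustration.

```python
# A minimal sketch of how the counting steps above combine numerically.
# The complexity weights and the VAF formula are the standard IFPUG values;
# they are quoted here for illustration and are not derived in this lecture.

WEIGHTS = {              # (low, average, high) unadjusted FP per function
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}
COMPLEXITY = {"low": 0, "average": 1, "high": 2}

def unadjusted_fp(functions):
    """functions: list of (type, complexity) tuples, e.g. ("ILF", "average")."""
    return sum(WEIGHTS[ftype][COMPLEXITY[cx]] for ftype, cx in functions)

def adjusted_fp(ufp, gsc_ratings):
    """gsc_ratings: the 14 general system characteristics, each rated 0..5."""
    vaf = 0.65 + 0.01 * sum(gsc_ratings)      # value adjustment factor
    return ufp * vaf

if __name__ == "__main__":
    # Hypothetical count: 2 ILFs, 1 EIF, 3 EIs, 2 EOs, 1 EQ, all of average complexity.
    funcs = [("ILF", "average")] * 2 + [("EIF", "average")] + \
            [("EI", "average")] * 3 + [("EO", "average")] * 2 + [("EQ", "average")]
    ufp = unadjusted_fp(funcs)                # 2*10 + 7 + 3*4 + 2*5 + 4 = 53
    print(adjusted_fp(ufp, [3] * 14))         # VAF = 0.65 + 0.42 = 1.07 -> 56.71
```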
Count Data Functions
The data function count is the contribution of the data maintained and used by the application to the final function point count. The data is divided into two categories: internal logical files (ILF) and external interface files (EIF). These and the related concepts are defined and explained as follows.

Internal Logical Files (ILF)
An internal logical file (ILF) is a user identifiable group of logically related data or control information maintained within the boundary of the application. The primary intent of an ILF is to hold data maintained through one or more elementary processes of the application being counted.

External Interface Files (EIF)
An external interface file (EIF) is a user identifiable group of logically related data or control information referenced by the application but maintained within the boundary of another application. The primary intent of an EIF is to hold data referenced through one or more elementary processes within the boundary of the application counted. This means an EIF counted for an application must be in an ILF in another application.

Difference between ILFs and EIFs
The primary difference between an internal logical file and an external interface file is that an EIF is not maintained by the application being counted, while an ILF is.

Definitions for Embedded Terms
The following paragraphs further define ILFs and EIFs by defining the embedded terms within their definitions.

Control Information
Control information is data that influences an elementary process of the application being counted. It specifies what, when, or how data is to be processed. For example, someone in the payroll department establishes payment cycles to schedule when the employees for each location are to be paid. The payment cycle, or schedule, contains timing information that affects when the elementary process of paying employees occurs.

User Identifiable
The term user identifiable refers to defined requirements for processes and/or groups of data that are agreed upon, and understood by, both the user(s) and the software developer(s). For example, users and software developers agree that a Human Resources application will maintain and store employee information in the application.

Maintained
The term maintained refers to the ability to modify data through an elementary process. Examples include, but are not limited to: add, change, delete, populate, revise, update, assign, and create.

Elementary Process
An elementary process is the smallest unit of activity that is meaningful to the user(s). For example, a user requires the ability to add a new employee to the application. The user definition of employee includes salary and dependent information. From the user perspective, the smallest unit of activity is to add a new employee. Adding one of the pieces of information, such as salary or dependent, is not an activity that would qualify as an elementary process.

The elementary process must be self-contained and leave the business of the application being counted in a consistent state. For example, the user requirements for adding an employee include setting up salary and dependent information. If all the employee information is not added, an employee has not yet been created; adding some of the information alone leaves the business of adding an employee in an inconsistent state. If both the employee salary and dependent information are added, this unit of activity is completed and the business is left in a consistent state.
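The "add employee" example lends itself to a small sketch. The class and field names below are hypothetical; the point being illustrated is only that the elementary process stores the complete employee record, together with salary and dependent information, or nothing at all, so the business stays in a consistent state.

```python
# A sketch of the "add employee" elementary process from the example above.
# The class and field names are hypothetical; the point is that the smallest
# user-meaningful unit of activity adds the employee together with the salary
# and dependent information, or not at all.

from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    salary: float
    dependents: list = field(default_factory=list)

class HumanResourcesApp:
    def __init__(self):
        self.employees = []   # the Employee ILF, maintained by this application

    def add_employee(self, name, salary, dependents):
        """One elementary process: either the complete employee record is
        stored, or nothing is, so the business stays in a consistent state."""
        if salary is None or dependents is None:
            raise ValueError("incomplete employee data - elementary process aborted")
        self.employees.append(Employee(name, salary, list(dependents)))

app = HumanResourcesApp()
app.add_employee("A. Khan", 50_000.0, dependents=["spouse"])
```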
ILF/EIF Counting Rules
This section defines the rules that apply when counting internal logical files and external interface files.

Summary of Counting Procedures
The ILF and EIF counting procedures include the following two activities:
1) Identify the ILFs and EIFs.
2) Determine the ILF or EIF complexity and their contribution to the unadjusted function point count.
ILF and EIF counting rules are used for each activity. There are two types of rules:
– Identification rules
– Complexity and contribution rules

The following list outlines how the rules are presented:
– ILF identification rules
– EIF identification rules
– Complexity and contribution rules, which cover data element types (DETs) and record element types (RETs)

ILF Identification Rules
To identify ILFs, look for groups of data or control information that satisfy the definition of an ILF. All of the following counting rules must apply for the information to be counted as an ILF:
– The group of data or control information is logical and user identifiable.
– The group of data is maintained through an elementary process within the application boundary being counted.

EIF Identification Rules
To identify EIFs, look for groups of data or control information that satisfy the definition of an EIF. All of the following counting rules must apply for the information to be counted as an EIF:
– The group of data or control information is logical and user identifiable.
– The group of data is referenced by, and external to, the application being counted.
– The group of data is not maintained by the application being counted.
– The group of data is maintained in an ILF of another application.

Complexity and Contribution Definitions and Rules
The number of ILFs and EIFs, together with their relative functional complexity, determines the contribution of the data functions to the unadjusted function point count. Assign each identified ILF and EIF a functional complexity based on the number of data element types (DETs) and record element types (RETs) associated with it. This section defines DETs and RETs and includes the counting rules for each.

DET Definition
A data element type (DET) is a unique, user recognizable, non-repeated field.

DET Rules
The following rules apply when counting DETs:

1. Count a DET for each unique, user recognizable, non-repeated field maintained in or retrieved from the ILF or EIF through the execution of an elementary process. For example:
– An account number that is stored in multiple fields is counted as one DET.
– A before or after image for a group of 10 fields maintained for audit purposes counts as one DET for the before image (all 10 fields) and one DET for the after image (all 10 fields), for a total of 2 DETs.
– The result of a calculation from an elementary process, such as a calculated sales tax value for a customer order maintained on an ILF, is counted as one DET on the customer order ILF.
– The price of an item which is accessed and saved to a billing file, or fields such as a time stamp if required by the user(s), are counted as DETs.
– If an employee number appears twice in an ILF or EIF, as (1) the key of the employee record and (2) a foreign key in the dependent record, count the DET only once.
– Within an ILF or EIF, count one DET for the 12 Monthly Budget Amount fields, and count one additional field to identify the applicable month.

2. When two applications maintain and/or reference the same ILF/EIF, but each maintains/references separate DETs, count only the DETs being used by each application to size the ILF/EIF. For example:
– Application A may specifically identify and use an address as street address, city, state, and zip code. Application B may see the address as one block of data without regard to individual components. Application A would count four DETs; Application B would count one DET.
– Application X maintains and/or references an ILF that contains SSN, Name, Street Name, Mail Stop, City, State, and Zip. Application Z maintains and/or references Name, City, and State. Application X would count seven DETs; Application Z would count three DETs.

3. Count a DET for each piece of data required by the user to establish a relationship with another ILF or EIF. For example:
– In an HR application, an employee's information is maintained on an ILF. The employee's job name is included as part of the employee's information. This DET is counted because it is required to relate an employee to a job that exists in the organization. This type of data element is referred to as a foreign key.
– In an object-oriented (OO) application, the user requires an association between object classes, which have been identified as separate ILFs. Location name is a DET in the Location EIF. The location name is required when processing employee information; consequently, it is also counted as a DET within the Employee ILF.
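As a rough illustration of rule 1, the snippet below tallies DETs for a hypothetical Department Budget ILF. The field list is invented; each counting decision follows one of the sub-rules above.

```python
# A rough sketch of DET counting rule 1, applied to a hypothetical
# "Department Budget" ILF. The field list is invented for illustration;
# the counting decisions follow the sub-rules listed above.

budget_ilf_fields = {
    # user-recognizable field          DETs
    "department id":                   1,   # key, counted once even if it also appears as a foreign key
    "12 monthly budget amounts":       1,   # repeating group of like fields counts as one DET ...
    "applicable month":                1,   # ... plus one DET to identify the month
    "account number (3 subfields)":    1,   # one logical field stored across several physical fields
    "calculated year-to-date total":   1,   # result of a calculation maintained on the ILF
}

print("DETs for the Department Budget ILF:", sum(budget_ilf_fields.values()))  # 5
```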
Lecture No. 11

Function Point Counting Process (cont.)

RET Definition
A record element type (RET) is a user recognizable subgroup of data elements within an ILF or EIF. There are two types of subgroups:
– Optional
– Mandatory
Optional subgroups are those of which the user may use one or none during an elementary process that adds or creates an instance of the data. Mandatory subgroups are those of which the user must use at least one.

For example, in a Human Resources application, information for an employee is added by entering some general information. In addition to the general information, the employee is either a salaried or an hourly employee; the user has determined that an employee must be one of the two. Either type can have information about dependents. For this example, there are three subgroups, or RETs, as shown below:
– Salaried employee (mandatory); includes general information
– Hourly employee (mandatory); includes general information
– Dependent (optional)

RET Rules
One of the following rules applies when counting RETs:
– Count a RET for each optional or mandatory subgroup of the ILF or EIF, or
– If there are no subgroups, count the ILF or EIF as one RET.
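With both DETs and RETs defined, the complexity assignment mentioned under "Complexity and Contribution Definitions and Rules" can be sketched. The thresholds below are the standard IFPUG complexity matrix for ILFs and EIFs; they are quoted for illustration and are not tabulated in this part of the notes.

```python
# A sketch of how DET and RET counts translate into an ILF/EIF complexity
# rating. The thresholds below are the standard IFPUG matrix for ILFs and
# EIFs; they are quoted for illustration and are not defined in this lecture.

def ilf_eif_complexity(rets, dets):
    if dets <= 19:
        col = 0
    elif dets <= 50:
        col = 1
    else:
        col = 2
    if rets == 1:
        row = 0
    elif rets <= 5:
        row = 1
    else:
        row = 2
    matrix = [
        ["low",     "low",     "average"],   # 1 RET
        ["low",     "average", "high"],      # 2-5 RETs
        ["average", "high",    "high"],      # 6 or more RETs
    ]
    return matrix[row][col]

# The Human Resources example above has 3 RETs (salaried, hourly, dependent);
# assume, say, 25 DETs across the subgroups.
print(ilf_eif_complexity(rets=3, dets=25))   # -> "average"
```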
Hints to Help with Counting
The following hints may help you apply the ILF and EIF counting rules. Caution: these hints are not rules and should not be used as rules.

1. Is the data a logical group that supports specific user requirements?
a) An application can use an ILF or EIF in multiple processes, but the ILF or EIF is counted only once.
b) A logical file cannot be counted as both an ILF and an EIF for the same application. If the data group satisfies both rules, count it as an ILF.
c) If a group of data was not counted as an ILF or EIF itself, count its data elements as DETs for the ILF or EIF which includes that group of data.
d) Do not assume that one physical file, table, or object class equals one logical file when viewing data logically from the user perspective.
e) Although some storage technologies, such as tables in a relational DBMS, sequential flat files, or object classes, relate closely to ILFs or EIFs, do not assume that this always equals a one-to-one physical-logical relationship.
f) Do not assume that all physical files must be counted or included as part of an ILF or EIF.

2. Where is the data maintained? Inside or outside the application boundary?
a) Look at the workflow.
b) In the process functional decomposition, identify where interfaces occur with the user and other applications.
c) Work through the process diagram to get hints.
d) Credit ILFs maintained by more than one application to each application at the time the application is counted. Only the DETs being used by each application being counted should be used to size the ILF/EIF.

3. Is the data in an ILF maintained through an elementary process of the application?
a) An application can use an ILF or EIF multiple times, but you count the ILF or EIF only once.
b) An elementary process can maintain more than one ILF.
c) Work through the process diagram to get hints.
d) Credit ILFs maintained by more than one application to each application at the time the application is counted.

Hints to Help with Identifying ILFs, EIFs, and RETs
Differentiating RETs from ILFs and EIFs is one of the most difficult activities in FP analysis. Different concepts regarding entities play a pivotal role in this regard. Let us therefore understand what an entity is and what the different types of entities are.

Entity
An entity has been defined by different people as follows:
– A thing that can be distinctly identified. (Chen)
– Any distinguishable object that is to be represented in the database. (Date)
– Any distinguishable person, place, thing, event, or concept about which information is kept. (Bruce)
– A data entity represents some "thing" that is to be stored for later reference. The term entity refers to the logical representation of data. (Finkelstein)
– An entity may also represent the relationship between two or more entities, called an associative entity. (Reingruber)
– An entity may represent a subset of information relevant to an instance of an entity, called a subtype entity. (Reingruber)

That is, an entity is a principal data object about which information is collected: a fundamental thing of relevance to the user, about which a collection of facts is kept.

An entity can be a weak entity or a strong entity. A weak entity is one which does not have any role in the problem domain without some other entity. Weak entities are RETs, and strong entities are ILFs and EIFs. Identification of weak entities is therefore important for distinguishing between RETs and logical files.

Weak Entities
There are three types of weak entities: associative entity types, attributive entity types, and entity subtypes. These are elaborated as follows:

Associative Entity Type
– An entity which defines a many-to-many relationship between two or more entities.
– Examples: Student – Course; Part – Dealer.

Attributive Entity Type
– An entity type which further describes one or more characteristics of another entity.
– Examples: Product – Part; Product – Product Price Information.

Entity Subtype
– A subdivision of an entity. A subtype inherits all the attributes of its parent entity type, and may have additional, unique attributes.
– Examples: Employee – Permanent Employee, Contract Employee; Employee – Married Employee.
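A small, purely illustrative data model shows how the three kinds of weak entity above might appear alongside the strong entities they depend on; all class and field names here are hypothetical.

```python
# An illustrative model of the three kinds of weak entity described above.
# The classes and fields are hypothetical; the point is only the structure.

from dataclasses import dataclass

@dataclass
class Student:            # strong entity - candidate ILF/EIF
    student_id: str
    name: str

@dataclass
class Course:             # strong entity - candidate ILF/EIF
    course_code: str
    title: str

@dataclass
class Enrollment:         # associative entity: many-to-many Student-Course link
    student_id: str
    course_code: str
    semester: str

@dataclass
class ProductPrice:       # attributive entity: describes a characteristic of Product
    product_id: str
    effective_date: str
    price: float

@dataclass
class Employee:           # parent entity type
    employee_id: str
    name: str

@dataclass
class ContractEmployee(Employee):   # entity subtype: inherits the parent's attributes
    contract_end_date: str
```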
