


SE: Project Management, by Dr. Jayanta Mondal 1 Syllabus
Software Process: Software product, software crisis, handling complexity through abstraction and decomposition, models, overview of software development activities. Process models: classical waterfall model, iterative waterfall model, prototyping model, evolutionary model, spiral model, RAD model. Agile models: Extreme Programming and Scrum.
Software Requirement: Requirement gathering and analysis, functional and non-functional requirements, Software Requirement Specification (SRS), IEEE 830 guidelines, decision tables and trees.
Software Project Management: Responsibilities of a software project manager, project planning, metrics for project size estimation, project estimation techniques, empirical estimation techniques, COCOMO models, scheduling, organization & team structure, staffing.
Structural Analysis & Design: Overview of the design process: high-level and detailed design, cohesion & coupling, modularity and design layering. Function-oriented software design: structural analysis, structured design (DFD and structured chart), object-oriented analysis & design, command language, menu and iconic interfaces.
Testing Strategies: Coding, code review, documentation, testing: unit testing, black-box testing, white-box testing, cyclomatic complexity measure, coverage analysis, debugging, integration testing, system testing, regression testing.
Software Reliability & Quality: Software reliability, reliability measures, reliability growth modelling, SEI CMM.
Software Maintenance: Characteristics of software maintenance, software reverse engineering, software re-engineering, software reuse.
Emerging Topics: Client-server software engineering, Service-Oriented Architecture (SOA), Software as a Service (SaaS). 2 3.
Software Project Management (5 hrs) 1st: Responsibilities of a software project manager, project planning, metrics for project size estimation. 2nd: Project cost estimation techniques, empirical estimation techniques, COCOMO models. 3rd: Scheduling, risk management. 4th: Organization & team structure. 5th: Unit 3 activities. 3 Introduction The main goal of software project management is to enable a group of developers to work effectively towards the successful completion of a project. Who manages a project? Technical leadership and their responsibilities. Bad software project management is responsible for almost 50% of failed software projects. 4 SOFTWARE PROJECT MANAGEMENT COMPLEXITIES Invisibility: Software remains invisible until its development is complete and it is operational. The invisibility of software makes it difficult to assess the progress of a project and is a major cause of the complexity of managing a software project. Changeability: Because the software part of any system is easier to change than the hardware part, the software part is the one that gets changed most frequently. These changes usually arise from changes to business practices, changes to the hardware or underlying software (e.g. operating system, other applications), or just because the client changes his mind. 5 SOFTWARE PROJECT MANAGEMENT COMPLEXITIES Complexity: Even a moderate-sized software product has millions of parts (functions) that interact with each other in many ways: data coupling, serial and concurrent runs, state transitions, control dependency, file sharing, etc. This inherent complexity of the functioning of a software product makes managing these projects much more difficult than many other kinds of projects. Uniqueness: Every software project is usually associated with many unique features or situations. As a result, a software project manager has to confront many unanticipated issues in almost every project that he manages.
6 SOFTWARE PROJECT MANAGEMENT COMPLEXITIES Exactness of the solution: The parameters of a function call in a program are required to be in complete conformity with the function definition. This requirement not only makes it difficult to get a software product up and working, but also makes reusing parts of one software product in another difficult. Team-oriented and intellect-intensive work: Software development projects are akin to research projects in the sense that they both involve team-oriented, intellect-intensive work. In a software development project, each member typically has to interact, review, and interface with several other members, constituting another dimension of the complexity of software projects. 7 RESPONSIBILITIES OF A SOFTWARE PROJECT MANAGER Job Responsibilities for Managing Software Projects We can broadly classify a project manager's varied responsibilities into the following two major categories: Project planning: Involves estimating several characteristics of a project and then planning the project activities based on these estimates. Project monitoring and control: The focus of project monitoring and control activities is to ensure that the software development proceeds as per plan. 8 Skills Necessary for Managing Software Projects Good qualitative judgment, decision-taking capabilities, cost estimation, risk management, configuration management, good communication skills and the ability to get work done, tracking and controlling the progress of the project, customer interaction, managerial presentations, team building, etc. The three skills that are most critical to successful project management are: knowledge of project management techniques, decision-taking capabilities, and previous experience in managing similar projects. 9 PROJECT PLANNING Project planning is undertaken and completed before any development activity starts.
Project planning requires utmost care and attention, since commitment to unrealistic time and resource estimates results in schedule slippage. Schedule delays can cause customer dissatisfaction, adversely affect team morale, and even cause project failure. 10 Project Planning Activities Size estimation: Size is the most fundamental parameter, based on which all other estimations and project plans are made. The following project attributes are estimated. Cost: How much is it going to cost to develop the software product? Duration: How long is it going to take to develop the product? Effort: How much effort would be necessary to develop the product? Scheduling: After all the necessary project parameters have been estimated, the schedules for manpower and other resources are developed. Staffing: Staff organization and staffing plans are made. Risk management: This includes risk identification, analysis, and abatement planning. Miscellaneous plans: This includes making several other plans, such as the quality assurance plan and the configuration management plan. 11 Precedence ordering among planning activities. 12 Sliding Window Planning It is usually very difficult to make accurate plans for large projects at project initiation. During the span of the project, the project parameters, scope of the project, project staff, etc., often change drastically, resulting in the initial plans going haywire. To overcome this problem, project managers sometimes undertake project planning over several stages. Planning a project over a number of stages protects managers from making big commitments at the start of the project. This technique of staggered planning is known as sliding window planning. 13 The SPMP Document of Project Planning
1. Introduction: (a) Objectives (b) Major Functions (c) Performance Issues (d) Management and Technical Constraints
2. Project estimates: (a) Historical Data Used (b) Estimation Techniques Used (c) Effort, Resource, Cost, and Project Duration Estimates
3. Schedule: (a) Work Breakdown Structure (b) Task Network Representation (c) Gantt Chart Representation (d) PERT Chart Representation
4. Project resources: (a) People (b) Hardware and Software (c) Special Resources
14 The SPMP Document of Project Planning (contd.)
5. Staff organisation: (a) Team Structure (b) Management Reporting
6. Risk management plan: (a) Risk Analysis (b) Risk Identification (c) Risk Estimation (d) Risk Abatement Procedures
7. Project tracking and control plan: (a) Metrics to be Tracked (b) Tracking Plan (c) Control Plan
8. Miscellaneous plans: (a) Process Tailoring (b) Quality Assurance Plan (c) Configuration Management Plan (d) Validation and Verification (e) System Testing Plan (f) Delivery, Installation, and Maintenance Plan
15 METRICS FOR PROJECT SIZE ESTIMATION The project size is a measure of the problem complexity in terms of the effort and time required to develop the product. Currently, two metrics are popularly used to measure size: lines of code (LOC) and function point (FP). 16 Lines of Code (LOC) LOC measures the size of a project by counting the number of source instructions in the developed program. While counting the number of source instructions, comment lines and header lines are ignored. The project manager divides the problem into modules, and each module into sub-modules, and so on, until the LOC of the leaf-level modules is small enough to be predicted. 17 Shortcomings of LOC LOC is a measure of coding activity alone: The implicit assumption made by the LOC metric, that the overall product development effort is determined solely by the coding effort, is flawed. Coding is only a small part of the overall software development effort. Even when the design or testing issues are very complex, the code size might be small, and vice versa.
Thus, the design and testing efforts can be grossly disproportional to the coding effort. 18 Shortcomings of LOC LOC count depends on the choice of specific instructions: LOC gives a numerical value of problem size that can vary widely with the coding styles of individual programmers. One programmer might write several source instructions on a single line, whereas another might split a single instruction across several lines. One programmer may use a switch statement in writing a C program, and another may use a sequence of if...then...else... statements. 19 Shortcomings of LOC LOC measure correlates poorly with the quality and efficiency of the code: Calculating productivity as LOC generated per man-month may encourage programmers to write lots of poor-quality code rather than fewer lines of high-quality code that achieve the same functionality. LOC metric penalizes use of higher-level programming languages and code reuse: A paradox is that if a programmer consciously uses several library routines, then the LOC count will be lower. This would show up as a smaller program size and, in turn, would indicate lower effort! Thus, if managers use the LOC count to measure the effort put in by different developers (that is, their productivity), they would be discouraging code reuse by developers. Modern programming methods such as object-oriented programming and reuse of components make the relationship between LOC and other project attributes even less precise. 20 Shortcomings of LOC LOC metric measures the lexical complexity of a program and does not address the more important issues of logical and structural complexity: Between two programs with equal LOC counts, a program incorporating complex logic would require much more effort to develop than a program with very simple logic.
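As an illustration of how a LOC count ignores comment and blank lines, here is a minimal sketch (it handles Python-style line comments only; a real counter would also deal with block comments and string literals, and the function name is illustrative):

```python
# Minimal LOC counter: counts source instructions while skipping blank
# lines and full-line comments, as described above.
def count_loc(source: str) -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        # keep only non-blank, non-comment lines
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

program = """\
# compute factorial (header comment, ignored)
def fact(n):
    # base case (comment, ignored)
    if n == 0:
        return 1
    return n * fact(n - 1)
"""
print(count_loc(program))  # → 4 source instructions
```

Note how two programmers formatting the same logic differently (one statement per line vs. several per line) would already get different counts from such a tool, which is exactly the shortcoming described above.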
It is very difficult to accurately estimate the LOC of the final program from the problem specification: From the project manager's perspective, the biggest shortcoming of the LOC metric is that the LOC count is very difficult to estimate during the project planning stage, and can only be accurately computed after the software development is complete. 21 Function Point (FP) Metric The function point metric was proposed by Albrecht in 1983. One of the important advantages of the function point metric over the LOC metric is that it can easily be computed from the problem specification itself. Conceptually, the function point metric is based on the idea that a software product supporting many features would certainly be of larger size than a product with fewer features. 22 Function Point (FP) Metric Different features may take very different amounts of effort to develop. The function point metric accounts for this by counting the number of input and output data items and the number of files accessed by the function. The implicit assumption is that the more data items a function reads from the user and outputs, and the more files it accesses, the higher the complexity of the function. In addition to the number of basic functions that a software performs, size also depends on the number of files and the number of interfaces associated with the software. 23 Function point (FP) metric computation It is computed using the following three steps: Step 1: Compute the unadjusted function point (UFP) using a heuristic expression. Step 2: Refine UFP to reflect the actual complexities of the different parameters used in UFP computation. Step 3: Compute FP by further refining UFP to account for the specific characteristics of the project that can influence the entire development effort.
24 Step 1: UFP computation The unadjusted function point (UFP) count is computed as the weighted sum of five characteristics of a product, as shown in the following expression. The weights associated with the five characteristics were determined empirically by Albrecht through data gathered from many projects.
UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of inquiries)*4 + (Number of files)*10 + (Number of interfaces)*10
25 The meanings of the different parameters Number of inputs: Each data item input by the user is counted. Individual data items input by the user are not simply added up to compute the number of inputs; related inputs are grouped and considered as a single input. Number of outputs: The outputs considered include reports printed, screen outputs, error messages produced, etc. Number of inquiries: An inquiry is a user command (without any data input) that only requires some action to be performed by the system. Number of files: The files referred to here are logical files. Logical files include data structures as well as physical files. Number of interfaces: Here the interfaces denote the different mechanisms used to exchange information with other systems. 26 Step 2: Refine parameters UFP is refined by taking into account the complexities of the parameters of the UFP computation. The complexity of each parameter is graded into three broad categories: simple, average, or complex. The weights for the different parameters are determined based on the numerical values shown in the table. 27 Step 3: Refine UFP based on complexity of the overall project Albrecht identified 14 parameters that can influence the development effort. Each of these 14 parameters is assigned a value from 0 (not present or no influence) to 6 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI). A technical complexity factor (TCF) for the project is computed, and the TCF is multiplied with UFP to yield FP.
The TCF expresses the overall impact of the corresponding project parameters on the development effort. TCF is computed as (0.65 + 0.01*DI). As DI can vary from 0 to 84, TCF can vary from 0.65 to 1.49. Finally, FP is given as the product of UFP and TCF: FP = UFP * TCF. 28 Function Point Relative Complexity Adjustment Factors 29 Function point metric shortcomings A major shortcoming of the function point measure is that it does not take into account the algorithmic complexity of a function. FP only considers the number of functions that the system supports, without distinguishing the difficulty levels of developing the various functionalities. To overcome this problem, an extension of the function point metric called the feature point metric has been proposed. The feature point metric incorporates algorithmic complexity as an extra parameter. This parameter ensures that the size computed using the feature point metric reflects the fact that the higher the complexity of a function, the greater the effort required to develop it; therefore, it should have a larger size than a simpler function. 30 Example Compute the function point value for a software project with the following details: user inputs = 12, number of files = 6, user outputs = 25, external interfaces = 4, inquiries = 10, and number of algorithms = 8. Assume the multipliers at average and all the complexity adjustment factors at their moderate-to-average values. Multiplier = 2.5. 31

Measure                              | Count | Multiplier (average) | Count x Multiplier
External Inputs (EI)                 | 12    | 4                    | 48
External Outputs (EO)                | 25    | 5                    | 125
External Inquiries (EQ)              | 10    | 4                    | 40
Internal Logical Files (ILF)         | 6     | 10                   | 60
External Interface Files (EIF)       | 4     | 7                    | 28
Count-total                          |       |                      | 301

Function Point (FP) = UFP * TCF = Count-total * [0.65 + 0.01 * Σ(Fj)], j = 1 to 14, where each Fj assumes a value of 2.5 when the influence is moderate to average.
Hence, Σ(Fj) = 14 * 2.5 = 35. Thus, FP = 301 * (0.65 + (0.01 * 35)) = 301 * 1.00 = 301. NB: The number of algorithms is not relevant here! 32 Function point (FP) metric computation: An Example Determine the function point measure of the size of the following supermarket software. A supermarket needs to develop the following software to encourage regular customers. For this, the customer needs to supply his/her residence address, telephone number, and driving license number. Each customer who registers for this scheme is assigned a unique customer number (CN) by the computer. Based on the generated CN, a clerk manually prepares a customer identity card after getting the market manager's signature on it. A customer can present his customer identity card to the checkout staff whenever he makes a purchase. In this case, the value of his purchase is credited against his CN. At the end of each year, the supermarket intends to award surprise gifts to the 10 customers who make the highest total purchases over the year. Also, it intends to award a 22-carat gold coin to every customer whose purchases exceeded Rs. 10,000. The entries against the CN are reset on the last day of every year after the prize winners' lists are generated. Assume the various project characteristics determining the complexity of software development to be average. 33 Software Cost Estimation Three main approaches to estimation: Empirical, Heuristic, Analytical. 34 Software Cost Estimation Techniques Empirical techniques: an educated guess based on past experience. Make an educated guess on project parameters, based on common sense and prior experience. Heuristic techniques: assume that the characteristics to be estimated can be expressed in terms of some mathematical expression. Analytical techniques: derive the required results starting from certain simple assumptions. 35 Empirical Cost Estimation Techniques Expert judgement: A euphemism for a guess made by an expert. Suffers from individual bias.
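The three-step FP computation in the worked example above can be sketched as follows (a minimal illustration: the average-complexity weights are taken from the example table, and the function names are my own, not standard API):

```python
# Step 1: unadjusted function points as a weighted sum of the five
# characteristics. Average-complexity weights from the worked example:
# EI=4, EO=5, EQ=4, ILF=10, EIF=7.
def unadjusted_fp(inputs, outputs, inquiries, files, interfaces,
                  weights=(4, 5, 4, 10, 7)):
    counts = (inputs, outputs, inquiries, files, interfaces)
    return sum(c * w for c, w in zip(counts, weights))

# Step 3: refine UFP by the technical complexity factor.
def function_points(ufp, influence_ratings):
    di = sum(influence_ratings)     # total degree of influence (14 values, 0..6)
    tcf = 0.65 + 0.01 * di          # technical complexity factor
    return ufp * tcf

# Reproduce the worked example: all 14 factors moderate-to-average (2.5)
ufp = unadjusted_fp(inputs=12, outputs=25, inquiries=10, files=6, interfaces=4)
fp = function_points(ufp, [2.5] * 14)
print(ufp, fp)  # → 301 301.0
```

As in the example, the number of algorithms plays no role: FP counts functionality from inputs, outputs, inquiries, files, and interfaces only.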
Delphi estimation: overcomes some of the problems of expert judgement. 36 Expert judgement Experts divide a software product into component units: e.g. GUI, database module, data communication module, billing module, etc. The guesses for each of the components are added up. 37 The expert judgment technique suffers from several shortcomings: The outcome of the expert judgment technique is subject to human error and individual bias. An expert may overlook some factors. An expert making an estimate may not have relevant experience and knowledge of all aspects of a project. A more refined form of expert judgment is the estimation made by a group of experts. However, the estimate made by a group of experts may still exhibit bias, and the decision made by a group may be dominated by overly assertive members. 38 Delphi Estimation: A team of experts and a coordinator. The experts carry out estimation independently and mention the rationale behind their estimates. The coordinator notes down any extraordinary rationale and circulates it among the experts. 39 Delphi Estimation: The experts re-estimate. The experts never meet each other to discuss their viewpoints. 40 Delphi Cost Estimation ▪ A team: a group of experts and a coordinator ▪ The coordinator provides an SRS document to each expert and a form for recording his cost estimate ▪ The estimators submit their estimates to the coordinator ▪ Any unusual characteristics are mentioned ▪ A summary is prepared and distributed ▪ Based on the summary, the estimators re-estimate ▪ There is no discussion in the entire process, and hence no influence ▪ After several iterations, the coordinator compiles the results and makes the final estimate. 41 Heuristic Estimation Techniques Single variable model: Parameter to be estimated = C1 * e1^d1, where e1 is the estimated characteristic (e.g., size) and C1 and d1 are constants. Multivariable model: Assumes that the parameter to be estimated depends on more than one characteristic.
Parameter to be estimated = C1 * e1^d1 + C2 * e2^d2 + ..., where e1, e2, ... are the estimated characteristics and C1, C2, d1, d2, ... are constants. Multivariable models are usually more accurate than single variable models. 42 COCOMO Model COCOMO (COnstructive COst estimation MOdel) was proposed by Boehm in 1981. COCOMO uses both single and multivariable estimation models at different stages of estimation. The three stages of the COCOMO estimation technique are: basic COCOMO, intermediate COCOMO, and complete COCOMO. 43 COCOMO Model COCOMO divides software product developments into 3 categories: Organic, Semidetached, and Embedded. Organic corresponds to application programs, semidetached to utility programs, and embedded to system programs. Application programs: data processing programs. Utility programs: compilers, linkers. System programs: operating systems, real-time systems. 44 Factors for the Classification The characteristics of the product, the characteristics of the development team, and the characteristics of the development environment. 45 Elaboration of Product classes Organic: Relatively small groups working to develop well-understood applications. Semidetached: The project team consists of a mixture of experienced and inexperienced staff; members may be unfamiliar with some aspects of the system being developed. Embedded: The software is strongly coupled to complex hardware, or real-time systems. 46 COCOMO Product classes These roughly correspond to application, utility, and system programs respectively. Data processing and scientific programs are considered application programs. Compilers, linkers, editors, etc., are utility programs. Operating systems, real-time system programs, etc., are system programs. The relative levels of product development complexity for the three categories (application, utility, and system programs) of products are 1:3:9. 47 COCOMO Model (CONT.)
Boehm provides different sets of expressions to predict the effort (in units of person-months) and development time from the size estimation given in kilo lines of source code (KLOC). One person-month is the effort an individual can typically put in a month. Person-month (PM) is considered an appropriate unit for measuring effort, because developers are typically assigned to a project for a certain number of months. 48 Basic COCOMO Model The basic COCOMO model is a single variable heuristic model that gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by expressions of the following forms:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
where KLOC is the estimated size of the software product expressed in Kilo Lines Of Code; a1, a2, b1, b2 are constants for each category of software product; Tdev is the estimated time to develop the software, expressed in months; and Effort is the total effort required to develop the software product, expressed in person-months (PMs). 49 Development Effort Estimation Organic: Effort = 2.4 * (KLOC)^1.05 PM. Semi-detached: Effort = 3.0 * (KLOC)^1.12 PM. Embedded: Effort = 3.6 * (KLOC)^1.20 PM. 50 Development Time Estimation Organic: Tdev = 2.5 * (Effort)^0.38 months. Semi-detached: Tdev = 2.5 * (Effort)^0.35 months. Embedded: Tdev = 2.5 * (Effort)^0.32 months. 51 Basic COCOMO Model Effort = E = a1 * (KLOC)^a2 and Time = T = b1 * (E)^b2, with the constants:

Software Project | a1   | a2   | b1   | b2
Organic          | 2.40 | 1.05 | 2.50 | 0.38
Semi-detached    | 3.00 | 1.12 | 2.50 | 0.35
Embedded         | 3.60 | 1.20 | 2.50 | 0.32

52 Ex- Assume that the size of an organic type software product has been estimated to be 32,000 lines of source code.
Assume that the average salary of a software engineer is Rs. 15,000 per month. Determine the effort required to develop the software product and the development time. From the basic COCOMO estimation formulas for organic software: Effort = 2.4 * (32)^1.05 = 91 PM. Nominal development time = 2.5 * (91)^0.38 = 14 months. Cost required to develop the product = 14 * 15,000 = Rs. 210,000. 53 Ex- Two software managers separately estimated a given product to be of 10,000 and 15,000 lines of code respectively. Bring out the effort and schedule time implications of their estimations using COCOMO. For the effort estimation, use a coefficient value of 3.2 and an exponent value of 1.05. For the schedule time estimation, the similar values are 2.5 and 0.38 respectively. Assume all adjustment multipliers to be equal to unity. For 10,000 LOC: Effort = 3.2 * (10)^1.05 = 35.90 PM; Schedule time = Tdev = 2.5 * (35.90)^0.38 = 9.75 months. For 15,000 LOC: Effort = 3.2 * (15)^1.05 = 54.96 PM; Schedule time = Tdev = 2.5 * (54.96)^0.38 = 11.46 months. NB: An increase in size causes a drastic increase in effort but only a moderate change in development time. 54 Ex- A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project. Sol: The semi-detached mode is the most appropriate, keeping in view the size, schedule, and experience of the development team. Hence: Effort = E = 3.0 * (200)^1.12 = 1133.12 PM; Tdev = 2.5 * (1133.12)^0.35 = 29.3 months. Average staff size (SS) = E / Tdev = 1133.12 / 29.3 = 38.67 persons. Productivity = KLOC / E = 200 / 1133.12 = 0.1765 KLOC/PM = 176.5 LOC/PM. 55 Students to answer Suppose you are the manager of a software project. Explain why it would not be proper to calculate the number of developers required for the project as a simple division of the effort estimate (in person-months) by the nominal duration estimate (in months).
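The basic COCOMO arithmetic in the worked examples above can be checked with a short script (a sketch only: the constants table is taken from the slides, and the function name is illustrative):

```python
# Basic COCOMO sketch: effort (person-months) and development time
# (months) from size in KLOC, using the (a1, a2, b1, b2) constants
# given in the slides for the three product categories.
COCOMO_CONSTANTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * kloc ** a2      # person-months
    tdev = b1 * effort ** b2      # months
    return effort, tdev

# Worked example from the slides: a 32 KLOC organic product
effort, tdev = basic_cocomo(32, "organic")
print(round(effort), round(tdev))  # → 91 14
```

Note that effort / tdev here gives the average staff size, not the number of developers to hire at the start, which is one way into the "Students to answer" question above.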
56 Basic COCOMO Model (CONT.) [Figure: Effort vs. size for the organic, semi-detached, and embedded categories; effort is somewhat super-linear in problem size.] 57 Basic COCOMO Model (CONT.) [Figure: Development time vs. size; development time is a sub-linear function of product size. When product size doubles, development time does not double (e.g., roughly 14 months at 30K vs. 18 months at 60K), and the time taken is almost the same for all three product categories.] 58 Effort vs. product size (effort is super-linear in the size of the software): the effort required to develop a product increases very rapidly with project size. Development time vs. product size (development time is a sub-linear function of the size of the product): when the size of the software increases by two times, the development time increases only moderately. 59 Basic COCOMO Model (CONT.) Development time does not increase linearly with product size: for larger products, more parallel activities can be identified, which can be carried out simultaneously by a number of engineers. 60 Basic COCOMO Model (CONT.) Development time is roughly the same for all three categories of products: for example, a 60 KLOC program can be developed in approximately 18 months regardless of whether it is of organic, semi-detached, or embedded type. There is more scope for parallel activities for system and application programs than for utility programs. 61 Intermediate COCOMO The basic COCOMO model assumes that effort and development time depend on product size alone.
However, several parameters affect effort and development time: reliability requirements, availability of CASE tools and modern facilities to the developers, size of the data to be handled, etc. 62 Intermediate COCOMO For accurate estimation, the effect of all relevant parameters must be considered. The intermediate COCOMO model recognizes this fact and refines the initial estimate obtained by the basic COCOMO by using a set of 15 cost drivers (multipliers). 63 Intermediate COCOMO (CONT.) If modern programming practices are used, initial estimates are scaled downwards. If there are stringent reliability requirements on the product, the initial estimate is scaled upwards. 64 Intermediate COCOMO (CONT.) Rate the different parameters on a scale of one to three. Depending on these ratings, multiply the cost driver values with the estimate obtained using the basic COCOMO. 65 Intermediate COCOMO (CONT.) Cost driver classes: Product: inherent complexity of the product, reliability requirements of the product, etc. Computer: execution time, storage requirements, etc. Personnel: experience of personnel, etc. Development environment: sophistication of the tools used for software development. 66 Intermediate COCOMO Model In order to obtain an accurate estimation of the effort and time, 15 cost drivers (multipliers), based on various attributes of software development, are taken. All the attributes are grouped into 4 major categories: Product, Hardware, Personnel, and Project. The attributes are rated on a six-point scale ranging from very low to extra high. Based on the rating, an effort multiplier is determined. The product of all the effort multipliers gives the Effort Adjustment Factor (EAF). The value of EAF typically ranges from 0.9 to 1.4.
67 Effort adjustment factors

  Code   Description                    Very low  Low   Nominal  High  Very high  Extra high
Product
  RELY   Required software reliability  0.75      0.88  1.00     1.15  1.40       -
  DATA   Size of application database   -         0.94  1.00     1.08  1.16       -
  CPLX   Complexity of product          0.70      0.85  1.00     1.15  1.30       1.65
Hardware
  TIME   Execution time constraint      -         -     1.00     1.11  1.30       1.66
  STOR   Main storage constraint        -         -     1.00     1.06  1.21       1.56
  VIRT   Virtual machine volatility     -         0.87  1.00     1.15  1.30       -
  TURN   Computer turn-around time      -         0.87  1.00     1.07  1.15       -
Personnel
  ACAP   Analyst capability             1.46      1.19  1.00     0.86  0.71       -
  AEXP   Applications experience        1.29      1.13  1.00     0.91  0.82       -
  PCAP   Programmer capability          1.42      1.17  1.00     0.86  0.70       -
  VEXP   Virtual machine experience     1.21      1.10  1.00     0.90  -          -
  LEXP   Language experience            1.14      1.07  1.00     0.95  -          -
Project
  MODP   Modern programming practices   1.24      1.10  1.00     0.91  0.82       -
  TOOL   Software tools                 1.24      1.10  1.00     0.91  0.83       -
  SCED   Development schedule           1.23      1.08  1.00     1.04  1.10       -

68 EAF = RELY × DATA × CPLX × TIME × STOR × VIRT × TURN × ACAP × AEXP × PCAP × VEXP × LEXP × MODP × TOOL × SCED
Effort: E = a1 × (KLOC)^a2 × EAF
Time: T = b1 × (E)^b2
The values of b1 and b2 do not change from the basic model.

  Software Project   a1    a2
  Organic            3.2   1.05
  Semi-detached      3.0   1.12
  Embedded           2.8   1.20

69 Ex- A new project, an embedded system with an estimated 400 KLOC, has to be developed. The project manager has a choice of hiring from two pools of developers: very highly capable developers with very little experience in the programming language being used, or developers of low quality but with a lot of experience in the programming language. What is the impact of hiring all developers from one of the pools?
Sol: This is a case of embedded mode and the model is intermediate COCOMO. Hence E = 2.8 × (400)^1.20 = 3712 PM.
CASE-I: Developers are very highly capable with very little experience in the programming language being used.
PCAP = very high (0.70), LEXP = very low (1.14)
EAF = 0.70 × 1.14 = 0.798
E = 3712 × 0.798 = 2962 PM
Tdev = 2.5 × (2962)^0.32 ≈ 32.3 M
70 CASE-II: Developers are of low quality, i.e. low capability, but with a lot of experience in the programming language being used.
PCAP = low (1.17), LEXP = high (0.95)
EAF = 1.17 × 0.95 = 1.11
E = 3712 × 1.11 = 4120 PM
Tdev = 2.5 × (4120)^0.32 ≈ 35.9 M
CASE-II requires more effort and time. Hence, low-quality developers with a lot of programming language experience cannot match highly capable developers with very little experience in the language.
71 Shortcomings of basic and intermediate COCOMO models
Both models consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller sub-systems: some sub-systems may be considered organic type, some may be considered embedded, for some the reliability requirements may be high, and so on.
72 Complete COCOMO
The cost of each sub-system is estimated separately, and the costs of the sub-systems are added to obtain the total cost. This reduces the margin of error in the final estimate.
73 Complete COCOMO Example
A Management Information System (MIS) for an organization having offices at several places across the country:
Database part (semi-detached)
Graphical User Interface (GUI) part (organic)
Communication part (embedded)
The costs of the components are estimated separately and summed up to give the overall cost of the system.
74 Complete COCOMO Model
In the previous models, the software product is considered as a single homogeneous entity. Most large systems are made up of many sub-systems having widely different characteristics. Some may be treated as organic, semi-detached, or embedded.
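The intermediate COCOMO arithmetic of the worked example above (slides 69-70) can be sketched in Python. The coefficient and multiplier values are the ones from the slides; the function name and structure are illustrative, not a standard API:

```python
import math

# Basic COCOMO coefficients from the slides; the intermediate model reuses
# a1, a2 and keeps b1, b2 at their basic-COCOMO values.
COEFF = {
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc, mode, multipliers):
    """Effort (person-months) and development time (months) after applying EAF."""
    a1, a2, b1, b2 = COEFF[mode]
    eaf = math.prod(multipliers)      # EAF = product of selected cost drivers
    effort = a1 * kloc ** a2 * eaf    # E = a1 * KLOC^a2 * EAF
    tdev = b1 * effort ** b2          # T = b1 * E^b2
    return effort, tdev

# Worked example: 400 KLOC embedded product.
# Case I: PCAP very high (0.70), LEXP very low (1.14).
e1, t1 = intermediate_cocomo(400, "embedded", [0.70, 1.14])
# Case II: PCAP low (1.17), LEXP high (0.95).
e2, t2 = intermediate_cocomo(400, "embedded", [1.17, 0.95])
print(f"Case I : {e1:.0f} PM, {t1:.1f} months")
print(f"Case II: {e2:.0f} PM, {t2:.1f} months")
```

Running this reproduces the slide figures: Case I needs roughly 2962 PM over about 32 months, Case II roughly 4120 PM over about 36 months, confirming that Case II costs more in both effort and time.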
Ex- A distributed Management Information System:

  Sub-system                     Type           Cost
  Database part                  Semi-detached  C1
  Graphical User Interface part  Organic        C2
  Communication part             Embedded       C3

Cost of the project = ∑(Ci)
75 Halstead's Software Science (Analytical Approach)
An analytical technique to estimate size, development effort, and development cost.
76 Halstead's Software Science
Halstead used a few primitive program parameters, the numbers of operators and operands, and derived expressions for overall program length, potential minimum volume, actual volume, language level, effort, and development time.
77 Halstead's Software Science
⚫ M.H. Halstead proposed this metric
⚪ Using primitive measures
⚪ Derived once the design phase is complete and code is generated.
⚫ The measures are:
⚪ n1 = number of distinct operators in a program
⚪ n2 = number of distinct operands in a program
⚪ N1 = total number of operators
⚪ N2 = total number of operands
⚫ Using these measures, Halstead developed expressions for overall program length, program volume, program difficulty, and development effort. 78
79 General counting rules for a C program
⚫ Comments are not considered.
⚫ The identifiers and function declarations are not considered.
⚫ All hash directives are ignored.
⚫ All variables and constants are considered as operands.
⚫ Global variables used in different modules are considered as multiple occurrences of the same variable.
⚫ Local variables in different functions are considered as unique operands.
80 CONT…
⚫ Function calls are considered as operators.
⚫ All looping statements and all control statements are considered as operators.
⚫ In the switch control construct, both switch and case are considered as operators.
⚫ Reserved words like return, default, etc. are considered as operators.
⚫ All braces, commas, and terminators are operators.
⚫ GOTO is counted as an operator and its label is treated as an operand.
⚫ In array variables such as array-name[index], array-name and index are treated as operands and [ ] is treated as an operator, and so on.
81 CONT..
⚫ Program length (N) can be calculated using the equation:
⚪ N = n1 log2(n1) + n2 log2(n2)
⚫ Program volume (V) can be calculated using the equation:
⚪ V = N log2(n1 + n2)
⚫ The volume ratio (L) can be calculated using the following equations:
⚪ L = (volume of most compact program) / (volume of actual program)
⚪ L = (2/n1) × (n2/N2)
⚫ Program difficulty level (D) and effort (E) can be calculated by the equations:
⚪ D = (n1/2) × (N2/n2)
⚪ E = D × V
82 Example 83 84
Staffing Level Estimation
Once the effort required to develop a software product has been determined, it is necessary to determine the staffing requirement for the project. Putnam first studied the problem of the proper staffing pattern for software projects. He extended the work of Norden, who had earlier investigated the staffing pattern of R&D-type projects. To appreciate the staffing pattern of software projects, Norden's and Putnam's results must be understood.
85 Staffing Level Estimation
The number of personnel required during any development project is not constant. Norden in 1958 analyzed many R&D projects and observed that the Rayleigh curve represents the number of full-time personnel required at any time.
86 Norden's Work
Norden studied the staffing patterns of several R&D projects. He found that the staffing pattern can be approximated by the Rayleigh distribution curve. Norden represented the Rayleigh curve by the following equation:
E = (K/td^2) × t × e^(−t^2 / (2 td^2))
where E is the effort required at time t. E is an indication of the number of engineers (or the staffing level) at any particular time during the project.
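Returning to the Halstead measures defined above, the length, volume, level, difficulty, and effort equations follow mechanically from the four counts n1, n2, N1, N2. A minimal sketch, with counts supplied by hand rather than parsed from real source code (the counts and function name are illustrative):

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead metrics from distinct/total operator and operand counts."""
    vocabulary = n1 + n2
    length_actual = N1 + N2                               # observed program length
    length_est = n1 * math.log2(n1) + n2 * math.log2(n2)  # N = n1 log2 n1 + n2 log2 n2
    volume = length_actual * math.log2(vocabulary)        # V = N log2(n1 + n2)
    level = (2 / n1) * (n2 / N2)                          # L, the volume ratio
    difficulty = (n1 / 2) * (N2 / n2)                     # D (note D = 1/L)
    effort = difficulty * volume                          # E = D * V
    return {"N_est": length_est, "V": volume, "L": level,
            "D": difficulty, "E": effort}

# Illustrative counts (not from the slides): 10 distinct operators used 25
# times in total, 7 distinct operands used 20 times in total.
m = halstead(n1=10, n2=7, N1=25, N2=20)
for name, value in m.items():
    print(f"{name} = {value:.2f}")
```

Note that V is computed from the observed length N1 + N2, while N_est is Halstead's estimated length; the slides use both forms.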
87 Rayleigh Curve
The Rayleigh curve (effort plotted against time) is specified by two parameters: td, the time at which the curve reaches its maximum, and K, the total area under the curve. L = f(K, td).
88 Rayleigh Curve
A very small number of engineers are needed at the beginning of a project to carry out planning and specification. As the project progresses and more detailed work is required, the number of engineers slowly increases and reaches a peak.
89 Putnam's Work:
In 1976, Putnam studied the problem of staffing of software projects. He observed that software development has characteristics very similar to the other R&D projects studied by Norden, and found that the Rayleigh-Norden curve relates the number of delivered lines of code to effort and development time.
90 Putnam's Work (CONT.):
Putnam analysed a large number of army projects and derived the expression:
L = Ck × K^(1/3) × td^(4/3)
where K is the effort expended, L is the size in KLOC, td is the time to develop the software, and Ck is the state-of-technology constant, which reflects factors that affect programmer productivity.
91 Putnam's Work (CONT.):
Ck = 2 for a poor development environment (no methodology, poor documentation and review, etc.).
Ck = 8 for a good software development environment (software engineering principles used).
Ck = 11 for an excellent environment.
92 Putnam's Work (CONT.):
Putnam observed that the time at which the Rayleigh curve reaches its maximum value corresponds to system testing and product release. After system testing, the number of project staff falls till product installation and delivery.
93 Putnam's Work (CONT.):
From the Rayleigh curve, observe that approximately 40% of the area under the Rayleigh curve is to the left of td and 60% is to the right.
94 Effect of Schedule Change on Cost
Using Putnam's expression for L: K = L^3/(Ck^3 × td^4) = C1/td^4, where for the same product size C1 = L^3/Ck^3 is a constant. Project development effort is directly proportional to project development cost.
95 Effect of Schedule Change on Cost (CONT.)
Observe that a relatively small compression in the delivery schedule can result in a substantial penalty in human effort. Also observe that benefits can be gained by using fewer people over a somewhat longer time span.
96 Example
If the estimated development time is 1 year, then in order to develop the product in 6 months the total effort, and hence the cost, increases 16 times (since K is proportional to 1/td^4, halving td multiplies the effort by 2^4 = 16). In other words, the relationship between effort and chronological delivery time is highly nonlinear.
97 Effect of Schedule Change on Cost (CONT.)
The Putnam model indicates an extreme penalty for schedule compression and an extreme reward for expanding the schedule. The Putnam estimation model works reasonably well for very large systems, but seriously overestimates the effort for medium and small systems.
98 Effect of Schedule Change on Cost (CONT.)
Boehm observed: "There is a limit beyond which the schedule of a software project cannot be reduced by buying any more personnel or equipment." This limit occurs roughly at 75% of the nominal time estimate.
99 Effect of Schedule Change on Cost (CONT.)
If a project manager accepts a customer demand to compress the development time by more than 25%, the project is very unlikely to succeed: every project has only a limited number of parallel activities, sequential activities cannot be speeded up by hiring any number of additional engineers, and many engineers would have to sit idle.
100 Jensen Model
The Jensen model is very similar to the Putnam model. It attempts to soften the effect of schedule compression on effort, which makes it applicable to smaller and medium-sized projects.
101 Jensen Model
Jensen proposed the equation:
L = Cte × td × K^(1/2)
where Cte is the effective technology constant, td is the time to develop the software, and K is the effort needed to develop the software.
102 SCHEDULING
The scheduling problem, in essence, consists of deciding which tasks would be taken up when and by whom. In order to schedule the project activities, a software project manager needs to do the following: 1.
Identify all the major activities that need to be carried out to complete the project.
2. Break down each activity into tasks.
3. Determine the dependencies among different tasks.
4. Establish the estimates for the time durations necessary to complete the tasks.
5. Represent the information in the form of an activity network.
6. Determine task starting and ending dates from the information represented in the activity network.
7. Determine the critical path. A critical path is a chain of tasks that determines the duration of the project.
8. Allocate resources to tasks.
103 Work Breakdown Structure
A work breakdown structure (WBS) is used to recursively decompose a given set of activities into smaller activities. Tasks are the lowest-level work activities in a WBS hierarchy; they also form the basic units of work that are allocated to developers and scheduled. Once project activities have been decomposed into a set of tasks using the WBS, the time frame in which each activity is to be performed is determined. The end of each important activity is called a milestone.
104 Work breakdown structure of an MIS problem
105 How long to decompose?
The decomposition of the activities is carried out until any of the following is satisfied:
A leaf-level sub-activity (a task) requires approximately two weeks to develop.
Hidden complexities are exposed, so that the job to be done is understood and can be assigned as a unit of work to one of the developers.
Opportunities for reuse of existing software components are identified.
106 Activity Networks
An activity network shows the different activities making up a project, their estimated durations, and their interdependencies. Two equivalent representations for activity networks are possible and in use:
Activity on Node (AoN): In this representation, each activity is represented by a rectangular (some use circular) node and the duration of the activity is shown alongside each task in the node.
The inter-task dependencies are shown using directional edges.
Activity on Edge (AoE): In this representation, tasks are associated with the edges. The edges are also annotated with the task durations. The nodes in the graph represent project milestones.
107 Activity network representation of the MIS problem.
108 Project Parameters Computed from Activity Network
109 Critical Path Method (CPM)
CPM is an operations research technique that was developed in the late 1950s. Since then, it has remained extremely popular among project managers. Of late, the CPM and PERT techniques have merged, and many project management tools support them as CPM/PERT. A path in the activity network graph is any set of consecutive nodes and edges from the starting node to the last node. A critical path consists of a set of dependent tasks that need to be performed in sequence and which together take the longest time to complete. A critical task is one with zero slack time. A path from the start node to the finish node containing only critical tasks is called a critical path.
110 CPM involves calculating the following quantities:
Minimum time (MT): The minimum time required to complete the project. It is computed as the maximum of the durations of all paths from start to finish.
Earliest start (ES): The ES of a task is the maximum of the durations of all paths from the start to this task. The ES of a task can be computed as the ES of the preceding task plus the duration of the preceding task.
Latest start time (LST): The difference between MT and the maximum of the durations of all paths from this task to the finish. The LST of a task can be computed by subtracting the duration of the task from the LST of the subsequent task.
Earliest finish time (EF): The EF of a task is the sum of its earliest start time and its duration.
Latest finish (LF): The LF indicates the latest time by which a task can finish without affecting the final completion time of the project.
A task completing beyond its LF would cause project delay. The LF of a task can be obtained by subtracting the maximum of the durations of all paths from this task to the finish from MT.
Slack time (ST): The slack time (or float time) is the total time that a task may be delayed before it affects the end time of the project. The slack time indicates the "flexibility" in the starting and completion of tasks. The ST of a task is LST − ES, and can equivalently be written as LF − EF.
111 ES and EF for every task for the MIS problem of Example
112 LS and LF for every task for the MIS problem
113 Project Parameters Computed From Activity Network
The critical paths are all the paths whose duration equals MT. The critical path in the previous figure is shown using thick arrows.
114 PERT Charts
Why PERT charts? The activity durations computed using an activity network are only estimated durations, and it is not possible to express worst-case (pessimistic) and best-case (optimistic) estimates using an activity diagram. Since the actual durations might vary from the estimated durations, the utility of activity network diagrams is limited. CPM can be used to determine the duration of a project, but it does not provide any indication of the probability of meeting that schedule.
115 Project evaluation and review technique
Project evaluation and review technique (PERT) charts are a more sophisticated form of activity chart. Project managers know that there is considerable uncertainty about how much time a task will take to complete; the durations assigned to tasks by the project manager are, after all, only estimates. Therefore, in reality the duration of an activity is a random variable with some probability distribution. In this context, PERT charts can be used to determine the probabilistic times for reaching various project milestones, including the final milestone. PERT charts, like activity networks, consist of a network of boxes and arrows.
The boxes represent activities and the arrows represent task dependencies. A PERT chart represents the statistical variations in the project estimates, assuming them to be normally distributed.
116 PERT allows for some randomness in task completion times, and therefore provides the capability to determine the probability of achieving project milestones based on the probability of completing each task along the path to that milestone. Each task is annotated with three estimates:
Optimistic (O): The best possible task completion time.
Most likely estimate (M): The most likely task completion time.
Worst case (W): The worst possible task completion time.
The optimistic (O) and worst case (W) estimates represent the extremities of all possible scenarios of task completion. The most likely estimate (M) is the completion time that has the highest probability.
117 PERT Chart
The standard deviation for a task is ST = (W − O)/6. The mean estimated time is calculated as ET = (O + 4M + W)/6. Since all possible completion times between the minimum and maximum duration for every task have to be considered, there can be many critical paths, depending on the various permutations of the estimates for each task. This makes critical path analysis in PERT charts very complex.
118 PERT chart representation of the MIS problem.
119 Gantt Charts
The Gantt chart is named after its developer, Henry Gantt. A Gantt chart is a form of bar chart. The vertical axis lists all the tasks to be performed, and one bar is drawn for each task. In the Gantt charts used for software project management, each bar consists of an unshaded part and a shaded part. The shaded part of the bar shows the length of time each task is estimated to take. The unshaded part shows the slack time, the leeway or flexibility available in meeting the latest time by which a task must be finished.
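The CPM quantities (ES, EF, LS, LF, slack) and the PERT three-point estimate described above can both be computed mechanically. A sketch over a small made-up activity network; the task names, durations, and helper functions are illustrative, not the MIS example from the slides:

```python
def pert_estimate(o, m, w):
    """PERT mean and standard deviation from optimistic/most-likely/worst times."""
    return (o + 4 * m + w) / 6, (w - o) / 6

def cpm(durations, preds):
    """Forward/backward pass over an activity-on-node network.

    `durations` maps task -> duration; `preds` maps task -> list of
    predecessors. Tasks are assumed to be listed in topological order.
    """
    order = list(durations)
    es, ef = {}, {}
    for t in order:                                # forward pass: ES, EF
        es[t] = max((ef[p] for p in preds[t]), default=0)
        ef[t] = es[t] + durations[t]
    mt = max(ef.values())                          # minimum project time (MT)
    succs = {t: [s for s in order if t in preds[s]] for t in order}
    lf, ls = {}, {}
    for t in reversed(order):                      # backward pass: LF, LS
        lf[t] = min((ls[s] for s in succs[t]), default=mt)
        ls[t] = lf[t] - durations[t]
    slack = {t: ls[t] - es[t] for t in order}      # ST = LS - ES = LF - EF
    return es, ef, ls, lf, slack, mt

# Illustrative network: A -> B, A -> C, {B, C} -> D.
durations = {"A": 3, "B": 4, "C": 2, "D": 5}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
es, ef, ls, lf, slack, mt = cpm(durations, preds)
print("MT =", mt)                                  # longest path A-B-D: 3+4+5 = 12
print("critical tasks:", [t for t in durations if slack[t] == 0])
```

Here the critical path is A-B-D (zero slack), while C has a slack of 2; `pert_estimate(2, 4, 12)` would give a mean of 5.0 with standard deviation 10/6.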
120 A Gantt chart is a special type of bar chart where each bar represents an activity. The bars are drawn along a time line, and the length of each bar is proportional to the duration of time planned for the corresponding activity. The Gantt chart representation of a project schedule is helpful in planning the utilization of resources, while the PERT chart is useful for monitoring the timely progress of activities.
121 Gantt chart representation of the MIS problem
122 Organization Structure
Functional organization, project organization, matrix organization.
123 Functional Organization
Functional organization: Engineers are organized into functional groups, e.g. specification, design, coding, testing, maintenance, etc. Engineers from the functional groups get assigned to different projects.
124 Organisation Structures: Functional Format
Projects borrow developers from various functional groups. Different teams of programmers from the functional groups perform different phases of a project, and the partially completed product passes from one team to another as the product evolves. The functional format therefore requires more communication among the different teams, but it leads to the development of good-quality documentation, since the work of one team must be clearly understood by the subsequent teams working on the same project. It mandates good-quality documentation to be produced after every activity.
125 Advantages of Functional Organization
Specialization; ease of staffing; good documentation is produced, since different phases are carried out by different teams of engineers; helps identify errors earlier.
126 Project Organization
Engineers are assigned to a project for the entire duration of the project, and the same set of engineers carry out all the phases. Advantages: engineers save time on learning the details of every project; it leads to job rotation.
127 (Figure: project organisation — top management over project teams #1 to #n; functional organisation — top management over requirements, design, coding, testing, project management, and maintenance teams.)
128 Functional vs Project
Even though greater communication among the team members may appear as an avoidable overhead, the functional format has many advantages. The main advantages of a functional organisation are:
Ease of staffing
Production of good quality documents
Job specialisation
Efficient handling of the problems associated with manpower turnover
129 Functional vs Project
A project organisation structure forces the manager to take on an almost constant number of developers for the entire duration of the project. This results in developers idling in the initial phases of software development and being under tremendous pressure in the later phases. In spite of several important advantages of the functional organisation, it is not very popular in the software industry. Why?
130 Functional vs Project
The project format provides job rotation to the team members: each team member takes on the role of designer, coder, tester, etc. during the course of the project. On the other hand, considering the present skill shortage, it would be very difficult for functional organisations to fill slots for some roles such as the maintenance, testing, and coding groups. Another problem with the functional organisation is that if an organisation handles projects requiring knowledge of specialized domain areas, these domain experts cannot be brought in and out of the project for the different phases, unless the company handles a large number of such projects. For obvious reasons, the functional format is not suitable for small organisations handling just one or two projects.
131 MATRIX ORGANISATION
(Figure: workers from functional groups assigned to projects, with a functional manager for each group and a project manager for each project.)

                       Project #1  Project #2  Project #3
  Functional group #1      2           0           3
  Functional group #2      0           5           3
  Functional group #3      0           4           2
  Functional group #4      1           4           0
  Functional group #5      0           4           6

132 Strong or Weak Matrix Organization
Matrix organisations can be characterised as weak or strong, depending upon the relative authority of the functional managers and the project managers. In a strong functional matrix, the functional managers have the authority to assign workers to projects, and the project managers have to accept the assigned personnel. In a weak functional matrix, the project manager controls the project budget, can reject workers from functional groups, or can even decide to hire outside workers.
133 Disadvantages of Matrix Organization
Two important problems that a matrix organisation often suffers from are:
Conflict between functional managers and project managers over the allocation of workers.
Frequent shifting of workers in a firefighting mode as crises occur in different projects.
134 Team Structure
Problems of different complexities and sizes require different team structures: democratic team, chief-programmer team, mixed organization.
135 Team Organization: democratic team, chief programmer team
136 Mixed team organization
137 Democratic Teams
Suitable for small projects requiring less than five or six engineers, and for research-oriented projects. A manager provides administrative leadership; at different times, different members of the group provide technical leadership.
138 Democratic Teams
A democratic organization provides higher morale and job satisfaction to the engineers, and therefore leads to less employee turnover. It is suitable for less understood problems, where a group of engineers can invent better solutions than a single individual.
139 Democratic Teams
Disadvantage: team members may waste a lot of time arguing about trivial points, due to the absence of any authority in the team.
140 Democratic team (figure: software engineers connected by communication paths, with no enforced hierarchy)
The democratic team does not enforce a hierarchy: one member is the manager and administrative leader, while the others are leaders in their own fields. It gives higher morale and job satisfaction, and is appropriate for less understood problems, since the group can invent better solutions than an individual. It is suitable for a team of 5-6 engineers and for research-type projects; for larger projects, a democratic team becomes chaotic. The democratic setup encourages egoless programming: code walk-throughs let someone other than the author review the work and locate bugs.
141 Chief Programmer Team
A senior engineer provides technical leadership: he partitions the task among the team members, and verifies and integrates the products developed by the members.
142 Chief Programmer Team
Works well when the task is well understood and within the intellectual grasp of a single individual, and when the importance of early completion outweighs other factors such as team morale, personal development, etc.
143 Chief Programmer Team
The chief programmer team is subject to single-point failure: too much responsibility and authority is assigned to the chief programmer.
144 Chief programmer structure (figure: project manager, with software engineers reporting to the chief programmer)
The chief programmer leads the team. He is a senior engineer (manager) who verifies and integrates the products. His constant supervision can lead to lower team morale. The team is the most efficient at completing simple and small projects where the task is within the intellectual grasp of a single individual. It is good for MIS-type projects and is used when the project needs early completion.
145 Mixed Control Team Organization
Draws upon ideas from both the democratic organization and the chief-programmer team organization. Communication is limited to a small group that is most likely to benefit from it. Suitable for large organizations.
146 Mixed control team (figure: project manager, senior engineers, and software engineers; democratic connections shown as dashed lines, reporting as solid arrows)
The mixed control team is a combination of both the democratic and the chief programmer organizations: it uses both hierarchical reporting and a democratic setup. It is suitable for large-team-size projects. A democratic arrangement at the senior engineer level is used to decompose the problem into small parts, and each democratic setup at the programmer level attempts to find the solution to a single part. This team structure is very popular and is used by many software companies.
147 STAFFING
Software project managers need to identify good software developers for the success of a project. The assumption that one software engineer is as productive as another is wrong: there exists a large variability in productivity between the worst and the best software developers, on a scale of 1 to 30. The worst developers may sometimes even reduce the overall productivity of the team. Choosing good software developers is therefore crucial to the success of a project.
148 Who is a good software engineer?
Domain knowledge; good programming abilities; good communication skills (oral, written, and interpersonal); high motivation; sound knowledge of the fundamentals of computer science; intelligence; ability to work in a team; discipline; etc.
149 Communication skill is essential
Since software development is a group activity, it is vital for a software developer to possess three main kinds of communication skills: oral, written, and interpersonal. A software developer not only needs to communicate effectively with his teammates (e.g. in reviews, walk-throughs, and other team communications) but may also have to communicate with the customer to gather product requirements. Poor interpersonal skills hamper these vital activities and often show up as poor product quality and low productivity.
Software developers are also required at times to make presentations to managers and to customers, which calls for a different kind of communication skill (oral communication skill). A software developer is also expected to document his work (design, code, tests, etc.) as well as write the users' manual, training manual, installation manual, maintenance manual, etc., which requires good written communication skill.
150 RISK MANAGEMENT
Every project is susceptible to a large number of risks. Without effective management of the risks, even the most meticulously planned project may go haywire. A risk is any anticipated unfavorable event or circumstance that can occur while a project is underway. It is necessary for the project manager to anticipate and identify the different risks that a project is susceptible to, so that contingency plans can be prepared beforehand to contain each risk. Risk management aims at reducing the chances of a risk becoming real, as well as reducing the impact of risks that do become real. Risk management consists of three essential activities: risk identification, risk assessment, and risk mitigation.
151 Risk Identification
A project can be subject to a large variety of risks. To be able to systematically identify the important risks which might affect a project, it is necessary to categorize risks into different classes. The project manager can then examine which risks from each class are relevant to the project. There are three main categories of risks which can affect a software project: project risks, technical risks, and business risks.
152 Project risks
Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project.
The invisibility of the product being developed is an important reason why many software projects suffer from the risk of schedule slippage.
153 Technical risks
Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team's insufficient knowledge about the product.
154 Business risks
This type of risk includes the risk of building an excellent product that no one wants, losing budgetary commitments, a similar product being developed by some other company, etc.
155 Risk Assessment
The objective of risk assessment is to rank the risks in terms of their damage-causing potential. For risk assessment, each risk is first rated in two ways:
The likelihood of the risk becoming real (r).
The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as follows:
p = r * s
where p is the priority with which the risk must be handled, r is the probability of the risk becoming real, and s is the severity of the damage caused if the risk becomes real. If all identified risks are prioritized, then the most likely and most damaging risks can be handled first, and more comprehensive risk abatement procedures can be designed for those risks.
156 Risk Mitigation
1. Avoid the risk: Risks can be avoided in several ways. Risks often arise due to project constraints and can be avoided by suitably modifying the constraints. The different categories of constraints that usually give rise to risks are:
Process-related risks: These arise due to an aggressive work schedule, budget, and resource utilisation.
Product-related risks: These arise due to commitment to challenging product features (e.g. a response time of one second), quality, reliability, etc.
Technology-related risks: These arise due to commitment to the use of certain technology (e.g., satellite communication).
157 Risk Mitigation
2. Transfer the risk: This strategy involves getting the risky components developed by a third party, buying insurance cover, etc.
3. Risk reduction: This involves planning ways to contain the damage due to a risk. For example, if there is a risk that some key personnel might leave, new recruitment may be planned. The most important risk reduction technique for technical risks is to build a prototype that tries out the technology that you are planning to use.
158 Risk Leverage
There can be several strategies to cope with a risk. To choose the most appropriate strategy for handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction of risk. For this we may compute the risk leverage of the different risks. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. More formally:
risk leverage = (risk exposure before reduction − risk exposure after reduction) / cost of risk reduction
159 How to handle schedule slippage risk?
Increase the visibility of the software product. Visibility of a software product can be increased by producing relevant documents during the development process and getting these documents reviewed by an appropriate team. Milestones should be placed at regular intervals to give the manager a regular indication of progress. Every phase can be broken down into reasonable-sized tasks, and milestones can be associated with these tasks. A milestone is reached once the documentation produced as part of a software engineering task is successfully reviewed. Milestones need not be placed for every activity; an approximate rule of thumb is to set a milestone every 10 to 15 days.
160 Lesson Plan (Modules, Topics, Activities)
Module 1 topics: Software product, Software crisis, Handling complexity through Abstraction and Decomposition, software development activities. 1.
Software process models (10 hrs): Classical waterfall model, iterative waterfall model, prototyping model, evolutionary model, spiral model, RAD model. Agile models: Extreme programming and Scrum. Module 1 Activities.
2. Software Requirement (2 hrs): Requirements Analysis, Requirements Analysis principles, Software Engineering Requirement Specifications (SRS document), IEEE 830 guidelines.
3. Software Project Management (10 hrs): Responsibilities of a Software project manager, project planning, Metrics for project size estimation, Project cost estimation techniques, Empirical estimation techniques, COCOMO models, Scheduling, Risk Management, Organization & team structure. Module 2 and 3 Activities.
MIDSEM
