CPE Module I - Introduction To Computer System Performance Evaluation PDF

Document Details


Kwara State University

Tags

computer performance evaluation, computer performance, performance appraisal, computer science

Summary

This document provides an introduction to computer system performance evaluation. It discusses the concept of performance, performance appraisal processes, and the importance of performance evaluation in the design, procurement, and use of computer systems. The document also covers different techniques for evaluating computer performance and potential mistakes to avoid.

Full Transcript


**MODULE 1: INTRODUCTION TO COMPUTER SYSTEM PERFORMANCE EVALUATION**

The concept of performance is problematic and will remain so as long as the definition of performance varies with the domain and the interests of the users of the information. Performance is the act "of performing" or "of doing something successfully" using knowledge, as distinguished from merely possessing it. It can be described as using abilities, skills, and achievements in a specific role or task. Duncan defines performance as comprising productivity, efficiency, satisfaction, suppleness, development, and survival.

A performance appraisal, also referred to as a performance review, performance evaluation, (career) development discussion, or employee appraisal, sometimes shortened to "PA", is a periodic and systematic process whereby the job performance of an [employee](https://en.wikipedia.org/wiki/Employee) is documented and evaluated. This is done after employees are trained for their work and have settled into their jobs. Performance appraisals are a part of [career development](https://en.wikipedia.org/wiki/Career_development) and consist of regular reviews of employee performance within [organizations](https://en.wikipedia.org/wiki/Organization).

Performance appraisals are most often conducted by an employee's immediate [manager](https://en.wikipedia.org/wiki/Manager) or [line manager](https://en.wikipedia.org/wiki/Line_manager). While extensively practiced, annual performance reviews have been criticized for providing feedback too infrequently to be useful, and some critics argue that performance reviews in general do more harm than good. The appraisal is an element of the principal-agent framework, which describes the relationship of information between employer and employee and, in this case, the direct effect of and response to a performance review.

Appraisal is beneficial in that it helps managers make decisions about pay increases and promotions. Employees get to know how they are performing in their jobs, and managers come to understand the strengths and weaknesses of their direct reports and are able to make decisions about compensation packages.

It can, however, be disadvantageous. The appraisal process can be stressful for employees: giving and receiving feedback can be uncomfortable for both parties and, if not handled properly, can lead to bitterness over time. In addition, performance evaluations are often subjective and inaccurate because they are based on the manager's judgement rather than on objective measures. Performance evaluation software can help ensure a fair distribution of standards across the workforce in terms of KPIs, targets, grading systems, and more.

Computer performance is the amount of useful work accomplished by a [computer system](https://en.wikipedia.org/wiki/Computer_system), that is, the amount of work the system completes. Here, the word "performance" represents how well a computer can accomplish its assigned tasks. Computer performance refers to the efficiency and speed with which a computer or computing system executes tasks and processes. It measures how well a computer system operates under specific conditions and workloads, encompassing factors such as processing speed, scalability, and throughput.

*Performance evaluation* is a basic element of experimental computer science. It is used to compare design alternatives when building new systems, to tune parameter values of existing systems, and to assess capacity requirements when setting up systems for production use. Lack of adequate performance evaluation can lead to bad decisions, which mean either failing to accomplish mission objectives or using resources inefficiently. *Performance is a key criterion in the design, procurement, and use of computer systems.* As such, the goal of computer systems engineers, scientists, analysts, and users is to get the highest performance for a given cost. To achieve that goal, computer systems professionals need at least a basic knowledge of performance evaluation terminology and techniques. Anyone associated with computer systems should be able to state the performance requirements of their systems and to compare different alternatives to find the one that best meets those requirements.

Computer performance evaluation, as we call it today, probably began between 1961 and 1962, when IBM developed the first channel analyzer to measure the performance of the IBM 7080 computer. The real push towards performance monitoring came between 1967 and 1969. Earlier generations of computers worked on one job at a time, which made it relatively easy to tell how efficiently each component was being used. Later generations, however, used multiprogramming, and efficiency was no longer apparent because the computer could be working on several jobs at the same time, with each job competing for the same resources. This mode of operation could result in an idle central processing unit (CPU) or in serious imbalances in the loads imposed on peripherals. As a result, these later generations of computers fostered a need for computer performance evaluation to help balance workloads and improve operating efficiency.

A computer performance evaluation is defined as the process by which a computer system's resources and outputs are assessed to determine whether the system is performing at an optimal level. Computer performance evaluation (CPE) is a specialty of the computer profession and concerns itself with the efficient use of computer resources (e.g. the central processing unit, tape drives, disk drives, and memory). Using selected tools to take measurements over time, the CPE specialist can determine the computer resource usage. This measurement data provides the CPE specialist with information about component capacity, system hardware, software, operating-procedure inefficiencies, and workload. Thus, CPE provides the information to answer such questions as:

i. Can we decrease the workload or make the application programs more efficient?
ii. Do we need an additional or a more powerful computer?
iii. Do we need additional components (memory, tape, disk, etc.)?
iv. Can we eliminate certain components or replace them at a lower cost?
v. Can we improve certain aspects of computer service (response time, turnaround time)?

Computer performance evaluation is the process of assessing and measuring the effectiveness, efficiency, and capability of a computer system in executing tasks and handling workloads. It involves analyzing various metrics to determine how well the system performs under different conditions, with the aim of identifying strengths, weaknesses, and areas for improvement.

*Computer system performance* is characterized by the amount of useful work accomplished by a computer system compared to the time and resources used. The performance of any computer system can be evaluated in measurable, technical terms, using one or more computer performance metrics. While this definition reflects a scientific, technical approach, the following definition given by Arnold Allen is useful for a non-technical audience: "The word *performance* in computer performance means how well is the computer doing the work it is supposed to do."

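As a simple illustration of these ideas, the sketch below computes three common metrics (throughput, mean response time, and CPU utilization) from a small set of made-up measurements. The request log, the observation period, and the helper functions are hypothetical, introduced only to show how "useful work compared to time and resources" becomes a number; they are not part of the module text.

```python
# Minimal sketch (hypothetical data): turning raw measurements into
# common performance metrics: throughput, mean response time, utilization.

# Each tuple is (arrival_time_s, completion_time_s) for one transaction,
# as might be collected by a software monitor during a measurement interval.
requests = [(0.0, 0.4), (0.5, 1.1), (1.0, 1.2), (2.0, 2.9), (3.0, 3.3)]

observation_period_s = 4.0      # length of the measurement interval
busy_time_s = 2.4               # time the CPU was busy during the interval

def throughput(reqs, period_s):
    """Useful work per unit time: completed transactions per second."""
    return len(reqs) / period_s

def mean_response_time(reqs):
    """Average time from arrival to completion of a transaction."""
    return sum(done - arrived for arrived, done in reqs) / len(reqs)

def utilization(busy_s, period_s):
    """Fraction of the interval a resource (here, the CPU) was busy."""
    return busy_s / period_s

print(f"Throughput        : {throughput(requests, observation_period_s):.2f} transactions/s")
print(f"Mean response time: {mean_response_time(requests):.2f} s")
print(f"CPU utilization   : {utilization(busy_time_s, observation_period_s):.0%}")
```
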
**THE ART OF PERFORMANCE EVALUATION**

Contrary to common belief, performance evaluation is an art. Like a work of art, a successful evaluation cannot be produced mechanically. Every evaluation requires an intimate knowledge of the system being modeled and a careful selection of the methodology, workload, and tools. Like an artist, each analyst has a unique style. Given the same problem, two analysts may choose different performance metrics and evaluation methodologies. In fact, given the same data, two analysts may interpret them differently. The following example shows a typical case in which, given the same measurements, two system designers can each prove that his or her system is better than the other's.

**Example 1:** The throughputs of two systems A and B were measured in transactions per second. The results are shown in Table 1.1.

**Table 1.1: Throughput in Transactions per Second**

| System | Workload 1 | Workload 2 |
|--------|------------|------------|
| A      | 20         | 10         |
| B      | 10         | 20         |

There are three ways to compare the performance of the two systems. The first is to take the average of the performances on the two workloads, which leads to the analysis shown in Table 1.2; the conclusion in this case is that the two systems are equally good. The second is to consider the ratio of the performances with system B as the base, as shown in Table 1.3; the conclusion in this case is that system A is better than B. The third is to consider the performance ratio with system A as the base, as shown in Table 1.4; the conclusion in this case is that system B is better than A.

**Table 1.2: Comparing the Average Throughput**

| System | Workload 1 | Workload 2 | Average |
|--------|------------|------------|---------|
| A      | 20         | 10         | 15      |
| B      | 10         | 20         | 15      |

**Table 1.3: Throughput with Respect to System B**

| System | Workload 1 | Workload 2 | Average |
|--------|------------|------------|---------|
| A      | 2          | 0.5        | 1.25    |
| B      | 1          | 1          | 1       |

**Table 1.4: Throughput with Respect to System A**

| System | Workload 1 | Workload 2 | Average |
|--------|------------|------------|---------|
| A      | 1          | 1          | 1       |
| B      | 0.5        | 2          | 1.25    |

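The conflicting conclusions of Example 1 can be reproduced directly from the data in Table 1.1. The short sketch below (a minimal illustration; the variable names are not from the module) computes the three comparisons and shows that the apparent winner depends entirely on which system is chosen as the base for the ratios.

```python
# Reproducing the three comparisons of Example 1 from the Table 1.1 data.
# Throughput in transactions per second for two systems on two workloads.
throughput = {"A": [20, 10], "B": [10, 20]}

def average(values):
    return sum(values) / len(values)

# 1) Average of the raw throughputs (Table 1.2): A and B look equally good.
print({s: average(t) for s, t in throughput.items()})

# 2) Ratios with system B as the base (Table 1.3): A looks better.
ratios_to_b = {s: [x / b for x, b in zip(t, throughput["B"])] for s, t in throughput.items()}
print({s: average(r) for s, r in ratios_to_b.items()})

# 3) Ratios with system A as the base (Table 1.4): B looks better.
ratios_to_a = {s: [x / a for x, a in zip(t, throughput["A"])] for s, t in throughput.items()}
print({s: average(r) for s, r in ratios_to_a.items()})
```

Averaging ratios in this way is one of the erroneous-analysis pitfalls discussed later in this module: the same measurements can be made to favour either system.
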
**COMMON MISTAKES IN PERFORMANCE EVALUATION**

The mistakes listed here happen because of simple oversights, misconceptions, and lack of knowledge about performance evaluation techniques.

1. **No Goals:** Goals are an important part of all endeavors, and any endeavor without goals is bound to fail. Performance evaluation projects are no exception. The need for a goal may sound obvious, but many performance efforts are started without any clear goals. A performance analyst, for example, is routinely hired along with the design team. The analyst may then start modeling or simulating the design. When asked about the goals, the analyst's typical answer is that the model will help answer a design question that may arise. A common claim is that the model will be flexible enough to be easily modified to solve different problems. Experienced analysts know that there is no such thing as a general-purpose model. Each model must be developed with a particular goal in mind. The metrics, workloads, and methodology all depend upon the goal, and the part of the system design that needs to be studied in the model varies from problem to problem.

2. **Biased Goals:** Another common mistake is implicit or explicit bias in stating the goals. If, for example, the goal is "to show that OUR system is better than THEIRS," the problem becomes that of finding the metrics and workloads that make OUR system come out better, rather than that of finding the right metrics and workloads for comparing the two systems. One rule of professional etiquette for performance analysts is to be unbiased. *The performance analyst's role is like that of a jury:* do not have any preconceived biases, and base all conclusions on the results of the analysis rather than on pure beliefs.

3. **Unsystematic Approach:** Analysts often adopt an unsystematic approach whereby they select system parameters, factors, metrics, and workloads arbitrarily. This leads to inaccurate conclusions. The systematic approach to solving a performance problem is to identify a complete set of goals, system parameters, factors, metrics, and workloads.

4. **Analysis without Understanding the Problem:** Inexperienced analysts feel that nothing has really been achieved until a model has been constructed and some numerical results obtained. With experience, they learn that a large share of the analysis effort goes into defining the problem. This supports the old saying: *a problem well stated is half solved*.

5. **Incorrect Performance Metrics:** A metric is the criterion used to quantify the performance of the system. Examples of commonly used performance metrics are *throughput* and *response time*. The choice of correct performance metrics depends upon the services provided by the system or subsystem being modeled. For example, the performance of central processing units (CPUs) is compared on the basis of their throughput, often measured in millions of instructions per second (MIPS).

6. **Unrepresentative Workload:** The workload used to compare two systems should be representative of the actual usage of the systems in the field. For example, if the packets in a network are generally a mixture of two sizes, short and long, then the workload used to compare two networks should consist of short and long packet sizes. The choice of workload has a significant impact on the results of a performance study; the wrong workload will lead to inaccurate conclusions.

7. **Wrong Evaluation Technique:** There are three evaluation techniques: measurement, simulation, and analytical modeling. Analysts often have a preference for one technique and use it for every performance evaluation problem. For example, those proficient in queueing theory tend to turn every performance problem into a queueing problem, even when the system is too complex to model and is readily available for measurement, while those proficient in programming tend to solve every problem by simulation. This marriage to a single technique leads to a model that the analyst can best solve rather than to a model that can best solve the problem.

8. **Overlooking Important Parameters:** It is a good idea to make a complete list of system and workload characteristics that affect the performance of the system. These characteristics are called parameters. System parameters may include, for example, quantum size (for CPU allocation) or working set size (for memory allocation). Workload parameters may include the number of users, request arrival patterns, priority, and so on. The analyst can choose a set of values for each of these parameters, and the final outcome of the study depends heavily upon those choices. Overlooking one or more important parameters may render the results useless.

9. **Erroneous Analysis:** There are a number of mistakes analysts commonly make in measurement, simulation, and analytical modeling, for example, taking the average of ratios (as in Tables 1.3 and 1.4 above) and running simulations that are too short.

10. **No Analysis:** One of the common problems with measurement projects is that they are often run by performance analysts who are good at measurement techniques but lack data analysis expertise. They collect enormous amounts of data but do not know how to analyze or interpret it. The result is a set of magnetic tapes (or disks) full of data without any summary. At best, the analyst may produce a thick report full of raw data and graphs without any explanation of how one can use the results. It is therefore better to have a team of performance analysts with both measurement and analysis backgrounds.

11. **No Sensitivity Analysis:** Analysts often put too much emphasis on the results of their analysis, presenting them as fact rather than evidence. The fact that the results may be sensitive to the workload and system parameters is often overlooked. Without a sensitivity analysis, one cannot be sure whether the conclusions would change if the analysis were done in a slightly different setting. Also, without a sensitivity analysis, it is difficult to assess the relative importance of various parameters.

12. **Ignoring Errors in Input:** Often the parameters of interest cannot be measured. Instead, another variable that can be measured is used to estimate the parameter. For example, in one computer network device, the packets were stored in a linked list of buffers, each 512 octets long. Given the number of buffers required to store packets, it was impossible to accurately predict the number of packets or the number of octets in the packets. Such situations introduce additional uncertainties in the input data. The analyst needs to adjust the level of confidence in the model output obtained from such data. Also, it may not be worthwhile to model the packet sizes accurately when the input can be off by as much as 512 octets. Another point illustrated by this example is that input errors are not always equally distributed about the mean. In this case, the buffer space is always more than the actual number of octets transmitted on or received from the network; in other words, the input is biased.

13. **Improper Treatment of Outliers:** Values that are too high or too low compared to the majority of values in a set are called outliers. Outliers in the input or model output present a problem. If an outlier is not caused by a real system phenomenon, it should be ignored; including it would produce an invalid model. On the other hand, if the outlier is a possible occurrence in a real system, it should be appropriately included in the model; ignoring it would produce an invalid model. Deciding which outliers should be ignored and which should be included is part of the art of performance evaluation and requires a careful understanding of the system being modeled.

14. **Assuming No Change in the Future:** It is often assumed that the future will be the same as the past. A model based on the workload and performance observed in the past is used to predict performance in the future, with the future workload and system behavior assumed to be the same as those already measured. The analyst and the decision makers should discuss this assumption and limit how far into the future predictions are made.

15. **Ignoring Variability:** It is common to analyze only the mean performance, since determining variability is often difficult, if not impossible. If the variability is high, the mean alone may be misleading to the decision makers. For example, decisions based on the daily averages of computer demand may not be useful if the load has large hourly peaks, which adversely affect user performance (see the sketch that follows this list).

16. **Too Complex Analysis:** Given two analyses leading to the same conclusion, the one that is simpler and easier to explain is preferable. Performance analysts should convey final conclusions in as simple a manner as possible. Some analysts start with complex models that cannot be solved, or with measurement or simulation projects whose goals are so ambitious that they are never achieved. It is better to start with simple models or experiments, get some results or insights, and then introduce complications.

    There is a significant difference between the types of models published in the literature and those used in the real world. The models published in the literature, and therefore taught in schools, are generally too complex. This is because trivial models, even when very illuminating, are not generally accepted for publication. For some reason, the ability to develop and solve a complex model is valued more highly in academic circles than the ability to draw conclusions from a simple model. In the industrial world, however, decision makers are rarely interested in the modeling technique or its innovativeness. Their chief concern is the guidance that the model provides, along with the time and cost of developing it. Decision deadlines often lead to choosing simple models. Thus, a majority of day-to-day performance problems in the real world are solved by simple models; complex models are rarely, if ever, used. Even if the time required to develop the model were not restricted, complex models are not easily understood by the decision makers, and the model results may therefore be disbelieved. This causes frustration for new graduates who are very well trained in complex modeling techniques but find few opportunities to use them in the real world.

17. **Improper Presentation of Results:** The eventual aim of every performance study is to help in decision making. An analysis that does not produce any useful results is a failure, as is an analysis whose results cannot be understood by the decision makers. The decision makers could be the designers of a system, the purchasers of a system, or the sponsors of a project. Conveying (or selling) the results of the analysis to decision makers is the responsibility of the analyst. This requires the prudent use of words, pictures, and graphs to explain the results and the analysis. *The right metric for measuring the performance of an analyst is not the number of analyses performed but the number of analyses that helped the decision makers.*

18. **Ignoring Social Aspects:** Successful presentation of the analysis results requires two types of skills: social and substantive. Writing and speaking are social skills, while modeling and data analysis are substantive skills. Most analysts have good substantive skills, but only those who also have good social skills succeed in selling their results to the decision makers. Acceptance of the analysis results requires developing trust between the decision makers and the analyst, and presenting the results in a manner understandable to them. If decision makers do not believe or understand the analysis, the analyst fails to make an impact on the final decision. Social skills are particularly important when presenting results that run counter to the decision makers' beliefs and values or that require a substantial change in the design.

    Beginning analysts often fail to understand that social skills are often more important than substantive skills. High-quality analyses may be rejected simply because the analyst has not put enough effort and time into presenting the results. The decision makers are under time pressure and would like to get to the final results as soon as possible; they are generally not interested in the innovativeness of the approach, or in the approach itself. The analyst, on the other hand, having spent a considerable amount of time on the analysis, may be more interested in telling the decision makers about the innovativeness of the modeling approach than about the final results. This disparity in viewpoint may lead to a report that is too long and fails to make an impact. The problem is compounded by the fact that the analyst also has to present the results to peers who are analysts themselves and would like to know more about the approach than the final results. One solution, therefore, is to prepare two separate presentations (or reports) for the two audiences. The presentation to the decision makers should have minimal analysis jargon and emphasize the final results, while the presentation to other analysts should include all the details of the analysis techniques. Combining the two presentations into one could make it meaningless for both audiences.

    Inexperienced analysts assume that the decision makers are like themselves and share the same beliefs, values, language, and jargon. This is often not true. The decision makers may be good at evaluating the results of the analysis but may not have a good understanding of the analysis itself. In their positions as decision makers, they have to weigh several factors that the analyst may not consider important, such as the political impact of the decision, the delay in the project schedule, or the availability of personnel to implement a particular decision. The analyst who makes an effort to understand the decision makers' concerns and incorporates them as much as possible into the presentation will have a better chance of "selling" the analysis than one who sees things only from his or her own point of view.

19. **Omitting Assumptions and Limitations:** Assumptions and limitations of the analysis are often omitted from the final report. This may lead the user to apply the analysis to another context where the assumptions are not valid. Sometimes analysts list the assumptions at the beginning of the report but then forget the limitations at the end and draw conclusions about environments to which the analysis does not apply.

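To make the point about ignoring variability concrete, the sketch below contrasts a daily average with hourly peaks. The hourly demand figures and the assumed capacity are purely illustrative, not taken from the module.

```python
# Illustrative only: hypothetical hourly demand (jobs per hour) for one day.
# The daily mean looks comfortable, but the peak hours tell a different story.
hourly_demand = [12, 10, 8, 6, 5, 7, 15, 40, 85, 90, 88, 60,
                 55, 70, 92, 95, 80, 50, 30, 22, 18, 15, 14, 13]

capacity_per_hour = 75            # assumed capacity of the system

mean_demand = sum(hourly_demand) / len(hourly_demand)
peak_demand = max(hourly_demand)
overloaded_hours = sum(1 for d in hourly_demand if d > capacity_per_hour)

print(f"Daily mean demand  : {mean_demand:.1f} jobs/hour")   # well under capacity
print(f"Peak hourly demand : {peak_demand} jobs/hour")       # well over capacity
print(f"Hours over capacity: {overloaded_hours} of 24")

# A sizing decision based on the mean alone would ignore the hours in which
# users actually experience an overloaded system.
```

Reporting a peak or a high percentile alongside the mean is one simple way to keep large hourly peaks from being hidden by a daily average.
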
**CHECKLIST FOR AVOIDING COMMON MISTAKES IN PERFORMANCE EVALUATION**

1. Is the system correctly defined and are the goals clearly stated?
2. Are the goals stated in an unbiased manner?
3. Have all the steps of the analysis been followed systematically?
4. Is the problem clearly understood before analyzing it?
5. Are the performance metrics relevant for this problem?
6. Is the workload correct for this problem?
7. Is the evaluation technique appropriate?
8. Is the list of parameters that affect performance complete?
9. Have all parameters that affect performance been chosen as factors to be varied?
10. Is the experimental design efficient in terms of time and results?
11. Is the level of detail proper?
12. Is the measured data presented with analysis and interpretation?
13. Is the analysis statistically correct?
14. Has the sensitivity analysis been done?
15. Would errors in the input cause an insignificant change in the results?
16. Have the outliers in the input or output been treated properly?
17. Have the future changes in the system and workload been modeled?
18. Has the variance of the input been taken into account?
19. Has the variance of the results been analyzed?
20. Is the analysis easy to explain?
21. Is the presentation style suitable for its audience?
22. Have the results been presented graphically as much as possible?

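As a companion to checklist items 13, 18, and 19, the sketch below shows one common way to report the variability of results: summarize repeated measurement runs by their mean, standard deviation, and an approximate confidence interval rather than by a single number. The run data are hypothetical, and the use of a normal-approximation interval is an assumption made here for illustration only.

```python
# Illustrative sketch: reporting the variability of repeated measurement runs
# instead of a single number (checklist items 13, 18 and 19).
import statistics

# Hypothetical throughput (transactions/s) from ten independent measurement runs.
runs = [118.2, 121.5, 117.9, 125.3, 119.8, 122.1, 116.4, 124.0, 120.7, 118.9]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)                 # sample standard deviation
sem = stdev / (len(runs) ** 0.5)               # standard error of the mean

# Approximate 95% confidence interval using the normal distribution
# (a stricter treatment would use Student's t for a sample this small).
z = statistics.NormalDist().inv_cdf(0.975)
low, high = mean - z * sem, mean + z * sem

print(f"Mean throughput : {mean:.1f} transactions/s")
print(f"Std. deviation  : {stdev:.1f}")
print(f"~95% CI of mean : [{low:.1f}, {high:.1f}]")
```
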
