
28/8/07 14:01 Page 125

[Figure: the Service Lifecycle. The business/customers pass requirements to Service Strategy (strategies, policies, objectives from requirements, resource and constraints); Service Design produces solution designs, architectures, standards and SDPs; Service Transition delivers transition plans and tested solutions into the SKMS; Service Operation runs the operational services; Continual Service Improvement feeds back improvement actions and plans.]

Continual Service Improvement (CSI) provides practical guidance in evaluating and improving the quality of services, the overall maturity of the ITSM Service Lifecycle and its underlying processes, at three levels within the organization:

- The overall health of ITSM as a discipline
- The continual alignment of the portfolio of services with the current and future business needs
- The maturity of the enabling IT processes required to support business processes in a continual Service Lifecycle model.

8.1 PURPOSE OF CSI

The primary purpose of CSI is to continually align and re-align IT services to changing business needs by identifying and implementing improvements to the IT services that support business processes. These improvement activities support the lifecycle approach through Service Strategy, Service Design, Service Transition and Service Operation. In effect, CSI is about looking for ways to improve process effectiveness and efficiency as well as cost effectiveness.

8.2 CSI OBJECTIVES

- Review, analyse and make recommendations on improvement opportunities in each lifecycle phase: Service Strategy, Service Design, Service Transition and Service Operation
- Review and analyse Service Level achievement results
- Identify and implement individual activities to improve IT service quality and improve the efficiency and effectiveness of enabling ITSM processes
- Improve cost effectiveness of delivering IT services without sacrificing customer satisfaction
- Ensure applicable quality management methods are used to support continual improvement activities.

These activities do not happen automatically. They must be owned within the IT organization by someone capable of handling the responsibility and possessing the appropriate authority to make things happen. They must also be planned and scheduled on an ongoing basis. By default, 'improvement' becomes a process within IT with defined activities, inputs, outputs, roles and reporting.
CSI must ensure that ITSM processes are developed and deployed in support of an end-to-end service management approach to business customers. It is essential to develop an ongoing continual improvement strategy for each of the processes as well as the services. As Figure 8.1 shows, there are many opportunities for CSI. The figure also illustrates a constant cycle of improvement.

The following activities support a continual process improvement plan:

- Reviewing management information and trends to ensure that services are meeting agreed Service Levels
- Reviewing management information and trends to ensure that the output of the enabling ITSM processes is achieving the desired results
- Periodically conducting maturity assessments against the process activities and roles associated with the process activities, to demonstrate areas of improvement or, conversely, areas of concern
- Periodically conducting internal audits verifying employee and process compliance
- Reviewing existing deliverables for relevance
- Making ad hoc recommendations for approval.

The improvement process can be summarized in six steps:

- Embrace the vision by understanding the high-level business objectives. The vision should align the business and IT strategies
- Assess the current situation to obtain an accurate, unbiased snapshot of where the organization is right now. This baseline assessment is an analysis of the current position in terms of the business, organization, people, process and technology
- Understand and agree on the priorities for improvement based on a deeper development of the principles defined in the vision.
The full vision may be years away, but this step provides specific goals and a manageable timeframe
- Detail the CSI plan to achieve higher-quality service provision by implementing ITSM processes
- Verify that measurements and metrics are in place to ensure that milestones were achieved, process compliance is high, and business objectives and priorities were met by the level of service
- Finally, ensure that the momentum for quality improvement is maintained, so that the changes become embedded in the organization.

Figure 8.1 Continual Service Improvement model: What is the vision? (business vision, mission, goals and objectives) – Where are we now? (baseline assessments) – Where do we want to be? (measurable targets) – How do we get there? (service and process improvement) – Did we get there? (measurements and metrics) – How do we keep the momentum going?

Perspectives on benefits

There are four commonly used terms when discussing improvement outcomes:

- Improvements
- Benefits
- ROI (return on investment)
- VOI (value on investment).

Much of the angst and confusion surrounding IT process improvement initiatives can be traced to the misuse of these terms. Below is the proper use:

- Improvements – Outcomes that, when compared to the 'before' state, show a measurable increase in a desirable metric or decrease in an undesirable metric. Example: ABC Corp achieved a 15% reduction in failed changes through implementation of a formal Change Management process
- Benefits – The gains achieved through realization of improvements, usually but not always expressed in monetary terms
- ROI – The difference between the benefit (saving) achieved and the amount expended to achieve that benefit, expressed as a percentage. Logically, one would like to spend a little to save a lot. Example: ABC Corp spent £200,000 to establish the formal Change Management process that saved £395,000. The ROI at the end of the first year of operation was therefore £195,000 or 97.5%
- VOI – The extra value created by establishment of benefits that include non-monetary or long-term outcomes.
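The ROI figure quoted for ABC Corp is simple arithmetic; a minimal sketch, using the figures from the example above:

```python
# ROI = (benefit - cost) / cost, expressed as a percentage.
# Figures taken from the ABC Corp Change Management example above.
cost = 200_000      # spent establishing the Change Management process (GBP)
benefit = 395_000   # first-year saving attributed to the process (GBP)

net_gain = benefit - cost   # the saving over and above the spend
roi = net_gain / cost       # 0.975, i.e. 97.5%
print(f"Net gain: £{net_gain:,}; ROI: {roi:.1%}")  # Net gain: £195,000; ROI: 97.5%
```

Note that VOI, by contrast, deliberately includes non-monetary and long-term outcomes and so does not reduce to a single formula like this.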
ROI is a subcomponent of VOI. Example: ABC Corp's establishment of a formal Change Management process (which reduced the number of failed changes) improved the ability of ABC Corp to respond quickly to changing market conditions and unexpected opportunities, resulting in an enhanced market position. In addition, it promoted collaboration between business units and the IT organization, and freed up resources to work on other projects that otherwise might not have been completed.

8.3 BUSINESS DRIVERS

Businesses are becoming increasingly aware of the importance of IT as a service provider, not only to support but also to enable business operations. As a result, the business leaders of today ask much more pointed and direct questions regarding the quality of IT services and the competency and efficiency of their provider. This level of scrutiny buttresses the expanding need for an IT organization that does more than enable existing business operations: one that enables business change and is, therefore, an integral part of that change. This means that:

- There is additional focus on the quality of IT in terms of reliability, availability, stability, capacity, security and, especially, risk
- IT and IT governance is an integral component in corporate governance
- IT performance becomes even more visible – technical outages and customer dissatisfaction increasingly become boardroom issues
- IT organizations increasingly find themselves in a position where they have to not only realize but manage business-enabling technology and services that deliver the capability and quality demanded by the business
- IT must demonstrate value for money
- IT within e-business is not only supporting the primary business processes, but is the core of those processes.

8.4 TECHNOLOGY DRIVERS

Driven by the rapid pace of technology developments, IT, which provides the solutions, has become a core component of almost every area of business operations.
As a result, IT services must:

- Understand business operations and advise about the short- and long-term opportunities (and limitations) of IT
- Be designed for agility and nimbleness to allow for unpredictability in business needs
- Accommodate more technological change, with a reduced cycle time for realizing change, to match a reduced window in the business cycle
- Maintain or improve existing quality of services while adding or removing technology components
- Ensure that quality of delivery and support matches the business use of new technology.

8.5 SERVICE MEASUREMENT

Key elements of a service measurement framework:

- Integrated into business planning
- Focused on business and IT goals and objectives
- Cost-effective
- Balanced in its approach on what is measured
- Able to withstand change.

Performance measures that:

- Are accurate and reliable
- Are well defined, specific and clear
- Are relevant to meeting the objectives
- Do not create a negative behaviour
- Lead to improvement opportunities.

Performance targets that are SMART.

Defined roles and responsibilities:

- Who defines the measures and targets?
- Who monitors and measures?
- Who gathers the data?
- Who processes and analyses the data?
- Who prepares the reports?
- Who presents the reports?

8.5.1 Baselines

An important beginning point for highlighting improvement is to establish baselines as markers or starting points for later comparison. Baselines are also used to establish an initial data point to determine if a service or process needs to be improved. As a result, it is important that baselines are documented, recognized and accepted.

If a baseline is not initially established, the first measurement efforts will become the baseline. That is why it is essential to collect data at the outset, even if the integrity of the data is in question. It is better to have data to question than to have no data at all.

8.5.2 Why do we measure?

- To validate – monitoring and measuring to validate previous decisions
- To direct – monitoring and measuring to set direction for activities in order to meet set targets.
It is the most prevalent reason for monitoring and measuring.
- To justify – monitoring and measuring to justify, with factual evidence or proof, that a course of action is required
- To intervene – monitoring and measuring to identify a point of intervention, including subsequent changes and corrective actions.

The four basic reasons to monitor and measure lead to three key questions: 'Why are we monitoring and measuring?', 'When do we stop?' and 'Is anyone using the data?' To answer these questions, it is important to identify which of the above reasons is driving the measurement effort. Too often, we continue to measure long after the need has passed. Every time you produce a report you should ask: 'Do we still need this?'

8.6 CONTINUAL SERVICE IMPROVEMENT PROCESSES

The Seven-step Improvement Process is illustrated in Figure 8.2.

Figure 8.2 Seven-step Improvement Process: identify the vision, strategy, tactical goals and operational goals, then 1. Define what you should measure; 2. Define what you can measure; 3. Gather the data (Who? How? When? Integrity of data?); 4. Process the data (Frequency? Format? System? Accuracy?); 5. Analyse the data (Relations? Trends? According to plan? Targets met? Corrective action?); 6. Present and use the information (assessment summary, action plans etc.); 7. Implement corrective action.

8.6.1 Step 1 – Define what you should measure

Keep it simple. The number of things you should measure can grow quite rapidly. So too can the number of metrics and measurements. Identify and link the following items:

- Corporate vision, mission, goals and objectives
- IT vision, mission, goals and objectives
- Critical success factors
- Service Level targets
- Job descriptions for IT staff.

Inputs:

- Corporate, divisional and departmental goals and objectives
- Legislative requirements
- Governance requirements
- Budget cycle
- Balanced Scorecard.

8.6.2 Step 2 – Define what you can measure

Every organization may find that they have limitations on what can actually be measured.
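The comparison at the heart of Step 2 is essentially a set difference between two inventories. A minimal sketch (the metric names are hypothetical, for illustration only):

```python
# Hypothetical inventories from Step 1 ("should measure") and Step 2 ("can measure").
should_measure = {"incident resolution time", "change success rate",
                  "end-to-end availability", "customer satisfaction"}
can_measure = {"incident resolution time", "change success rate",
               "server availability"}

# The gap: items we should measure but currently cannot. These feed the
# gap analysis report back to the business, the customers and IT management.
gap = sorted(should_measure - can_measure)
print(gap)  # ['customer satisfaction', 'end-to-end availability']
```

Each item in the gap either requires new or reconfigured tooling, or must be kept out of the SLA until it can actually be measured.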
If you cannot measure something then it should not appear in an SLA.

Perform a gap analysis between the two lists. Report this information back to the business, the customers and IT management. It is possible that new tools are required, or that configuration or customization is required, to be able to measure what is required.

Inputs:

- List of what you should measure
- Process flows
- Procedures
- Work instructions
- Technical and user manuals from existing tools
- Existing reports.

8.6.3 Step 3 – Gathering the data

Gathering data requires having some form of monitoring in place. Monitoring could be executed using technology such as application, system and component monitoring tools, or could even be a manual process for certain tasks.

Quality is the key objective of monitoring for Continual Service Improvement. Monitoring will therefore focus on the effectiveness of a service, process, tool, organization or Configuration Item (CI). The emphasis is not on assuring real-time service performance; rather it is on identifying where improvements can be made to the existing level of service, or IT performance. Monitoring for CSI will therefore tend to focus on detecting exceptions and resolutions. For example, CSI is not as interested in whether an Incident was resolved, but in whether it was resolved within the agreed time, and whether future Incidents can be prevented.

CSI is not only interested in exceptions, though. If a Service Level Agreement is consistently met over time, CSI will also be interested in determining whether that level can be sustained with the same or improved performance. CSI may therefore also need access to regular performance reports. However, since CSI is unlikely to need, or be able to cope with, the vast quantities of data that are produced by all monitoring activity, they will most likely focus on a specific subset of monitoring at any given time. This could be determined by input from the business or improvements to technology.

When a new service is being designed or an existing one changed, this is a perfect opportunity to ensure that what CSI needs to monitor is designed into the service requirements (see Service Design publication). This has two main implications:

- Monitoring for CSI will change over time. They may be interested in monitoring the e-mail service one quarter, and then move on to look at HR systems in the next quarter
- This means that Service Operation and CSI need to build a process which will help them to agree on what areas need to be monitored and for what purpose.

It is important to remember that there are three types of metrics that an organization will need to collect to support CSI activities as well as other process activities. The types of metrics are:

- Technology metrics – these metrics are often associated with component and application-based metrics such as performance, availability etc.
- Process metrics – these metrics are captured in the form of CSFs, KPIs and activity metrics for the service management processes. These metrics can help determine the overall health of a process. The four key questions that KPIs can help answer concern quality, performance, value and compliance.
e monitoring allows weak areas to be identified, so medial action can be taken (if there is a justifiable ss case), thus improving future service quality. e monitoring also can show where customer actions using the fault and thus lead to identifying where g efficiency and/or training can be improved.  Quality – How well are the processes working? Monitor the individual or key activities as they relate to the objectives of the end-to-end process  Performance – How fast or slow? Monitor the process efficiency such as throughput or cycle times  Value – Is this making a difference? Monitor the effectiveness and perceived value of the process to the stakeholders and the IT staff executing the process activities. Monitoring is often associated with automated monitoring of infrastructure components for performance such as availability or capacity, but monitoring should also be used for monitoring staff behaviour such as adherence to process activities, use of authorized tools as well as project schedules and budgets. Exceptions and alerts need to be considered during the monitoring activity as they can serve as early warning indicators that services are breaking down. Sometimes the exceptions and alerts will come from tools, but they will often come from those who are using the service or service management processes. We don’t want to ignore these alerts. Inputs to the gather-the-data activity:  New business requirements e monitoring should also address both internal and al suppliers since their performance must be ted and managed as well.  Existing SLAs e management monitoring helps determine the and welfare of service management processes in lowing manner:  Service Improvement Plans cess compliance – Are the processes being owed? 
Process compliance seeks to monitor the  Existing monitoring and data capture capability  Availability and Capacity Plans  Previous trend analysis reports  List of what you should measure  List of what you can measure  Gap analysis report 28/8/07 14:01 Page 133 Continual Service Improvement | Step 4 – Processing the data data is gathered, the next step is to process the data e required format. Report-generating technologies are y used at this stage as various amounts of data are nsed into information for use in the analysis activity. ta is also typically put into a format that provides an -end perspective on the overall performance of a. This activity begins the transformation of raw data ckaged information. Use the information to develop into the performance of the service and/or processes. s the data into information (i.e. create logical ngs) which provides a better means to analyse the the next activity step in CSI. utput of logical groupings could be in spreadsheets, s generated directly from the service management ite, system monitoring and reporting tools, or ony tools such as an automatic call distribution tool. sing the data is an important CSI activity that is often oked. While monitoring and collecting data on a single ucture component is important, it is also important to tand that components impact on the larger ucture and IT service. Knowing that a server was up % of the time is one thing, knowing that no one could the server is another. An example of processing the taking the data from monitoring of the individual nents such as the mainframe, applications, WAN, LAN, etc., and processing this into a structure of an end-torvice from the customer’s perspective. estions that need to be addressed in the processing y are: at is the frequency of processing the data? This uld be hourly, daily, weekly or monthly. When oducing a new service or service management 133  What format is required for the output? 
This is also driven by how the analysis is done and ultimately how the information is used
- What tools and systems can be used for processing the data?
- How do we evaluate the accuracy of the processed data?

There are two aspects to data gathering. One is automated and the other is manual. While both are important and contribute greatly to the measuring process, accuracy is a major differentiator between the two types. The accuracy of automated data gathering and processing is not the issue here; the vast majority of CSI-related data will be gathered by automated means. Human data gathering and processing is the issue. It is important for staff to properly document their compliance activities and to update logs and records. Common excuses are that people are too busy, that this is not important, or that it is not their job. Ongoing communication about the benefits of performing administrative tasks is of the utmost importance. Tying these administrative tasks to job performance is one way to alleviate this issue.

Inputs to the processing-the-data activity:

- Data collected through monitoring
- Reporting requirements
- SLAs
- OLAs
- Service Catalogue
- List of metrics, KPIs, CSFs, objectives and goals
- Report frequency
- Report template.

8.6.5 Step 5 – Analysing the data
It is not sufficient to simply ce graphs of various types but to document the ations and conclusions looking to answer the ng questions: there any clear trends? they positive or negative trends? changes required? we operating according to plan? we meeting targets? corrective actions required? there underlying structural problems? at is the cost of the service gap? eresting to note the number of job titles for IT sionals that contain the word ‘analyst’ and even urprising to discover that few of them actually e anything. This step takes time. It requires ntration, knowledge, skills, experience etc. One of ajor assumptions is that the automated processing, ng, monitoring tool has actually done the analysis. ten people simply point at a trend and say ‘Look,  Is this bad?  Is this expected?  Is this in line with targets? Combining multiple data points on a graph may look nice but the real question is, what does it actually mean. ‘A picture is worth a thousand words’ goes the saying. In analysing the data an accurate question would be, ‘Which thousand words?’ To transform this data into knowledge, compare the information from Step 3 against both the requirements from Step 1 and what could realistically be measured from Step 2. Be sure to also compare the clearly defined objectives against the measurable targets that were set in the Service Design, Transition and Operations lifecycle stages. Confirmation needs to be sought that these objectives and the milestones were reached. If not, have improvement initiatives been implemented? If so, then the CSI activities start again from the gathering data, processing data and analysing data steps to identify if the desired improvement in service quality has been achieved. At the completion of each significant stage or milestone, a review should be conducted to ensure the objectives have been met. It is possible here to use the Post-Implementation Review (PIR) from the Change Management process. 
The PIR will include a review of supporting documentation and the general awareness amongst staff of the refined processes or service. A comparison is required of what has been achieved against the original goals.

During the analysis activity, but after the results are compiled and analysis and trend evaluation have occurred, it is recommended that internal meetings be held within IT to review the results and collectively identify improvement opportunities. It is important to have these internal meetings before you begin presenting and using the information externally, and before determining how the results and any action items are communicated to the business. This puts IT in a better position to formulate a plan for presenting the results and any action items to the business and senior IT management.

Throughout this publication the terms 'service' and 'service management' have been used extensively. IT is too often focused on managing the various systems used by the business, often (but incorrectly) equating service and system. A service is actually made up of systems. Therefore, if IT wants to be perceived as a key player, then IT must move from a systems-based organization to a service-based organization. This transition will force the improvement of communication between the different IT silos that exist in many IT organizations.

Performing proper analysis on the data also places the business in a position to make strategic, tactical and operational decisions about whether there is a need for service improvement. Unfortunately, the analysis activity is often not done. Whether this is due to a lack of resources with the right skills and/or simply a lack of time is unclear. What is clear is that without proper analysis, errors will continue to occur and mistakes will continue to be repeated. There will be little improvement.

Data analysis transforms the information into knowledge of the events that are affecting the organization. As an example, a sub-activity of Capacity Management is workload management.
This can be viewed as analysing the data to determine which customers use what resource, how they use the resource, when they use the resource, and how this impacts the overall performance of the resource. You will also be able to see if there is a trend in the usage of the resource over a period of time. From an incremental improvement perspective, this could lead to some targeted improvement opportunities.

Consideration must be given to the skills required to analyse, both from a technical viewpoint and from an interpretation viewpoint. When analysing data, it is important to seek answers to questions such as:

- Are operations running according to plan? This could be a project plan, financial plan, availability plan, capacity plan or even an IT Service Continuity Management plan
- Are targets defined in SLAs or the Service Catalogue being met?
- Are there underlying structural problems that can be identified?
- Are corrective actions required?
- Are there any trends? If so, what are the trends showing? Are they positive trends or negative trends? What is leading to or causing the trends?

Reviewing trends over a period of time is another important task. It is not good enough to see a snapshot of a data point at a specific moment in time; look at the data points over a period of time. How did we do this month compared to last month, this quarter compared to last quarter, this year compared to last year?

8.6.6 Step 6 – Presenting and using the information

The sixth step is to take our knowledge and present it, that is, turn it into wisdom by utilizing reports, monitors, action plans, reviews, evaluations and opportunities. Consider the target audience; make sure that you identify exceptions to the service, and benefits that have been revealed or can be expected. Data gathering occurs at the operational level of an organization.
Format this data into knowledge that all levels can appreciate and gain insight from. A report should highlight exceptions to service, identify benefits that were revealed during the time period, and allow those receiving the information to make strategic, tactical and operational decisions. In other words, present the information in the manner that makes it the most useful for the target audience.

Creating reports and presenting information is an activity done in most organizations to some extent or another; however, it often is not done well. For many organizations this activity is simply taking the gathered raw data (often straight from the tool) and reporting this same data to everyone. There has been no processing and analysis of the data.

There are usually three distinct audiences:

- The business – Their real need is to understand whether IT delivered the service they promised at the levels they promised and, if not, what corrective actions are being implemented to improve the situation
- Senior (IT) management – This group is often focused on the results surrounding CSFs and KPIs, such as customer satisfaction, actual vs. plan, and costing and revenue targets. Information provided at this level helps determine strategic and tactical improvements on a larger scale. Senior (IT) management often wants this type of information provided in the form of a Balanced Scorecard or IT scorecard format, to see the whole picture at one glance
- Internal IT – This group is often interested in KPIs and activity metrics that help them plan, coordinate, schedule and identify incremental improvement opportunities.

Now more than ever, IT must invest the time to understand specific business goals and translate IT metrics into the business benefits of a well-run IT support group. The starting point is a new perspective on goals, measures and reporting, and on how IT actions affect business results.
You will then be prepared to answer the question: 'How does IT help to generate value for your company?'

Although most reports tend to concentrate on areas where things are not going as well as hoped, do not forget to report the good news as well. A report showing improvement trends is IT services' best marketing vehicle. It is vitally important that reports show whether CSI has actually improved the overall service provision and, if it has not, the actions taken to rectify the situation.

8.6.7 Step 7 – Implementing corrective action

Use the knowledge gained to optimize, improve and correct services. Managers need to identify issues and present solutions, explaining how the corrective actions to be taken will improve the service.

CSI identifies many opportunities for improvement; however, organizations cannot afford to implement all of them. Based on goals, objectives and the types of service breaches, an organization needs to prioritize improvement activities. Improvement initiatives can also be externally driven by regulatory requirements, changes in competition, or even political decisions.

After a decision to improve a service and/or service management process is made, the Service Lifecycle continues. A new Service Strategy may be defined, Service Design builds the changes, Service Transition implements the changes into production, and then Service Operation manages the day-to-day operations of the service and/or service management processes. Keep in mind that CSI activities continue through each phase of the Service Lifecycle. Each phase may involve potential new technology or modifications to existing technology, potential changes to KPIs and other metrics, and possibly even new or modified OLAs/UCs to support them. Communication, training and documentation are needed to transition a new or improved service, tool or service management process into production.
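Since not every identified opportunity can be funded, the prioritization described above might be sketched as a simple ranking. This is a hypothetical scoring scheme (the initiatives and figures are invented, and ITIL does not prescribe any particular formula); a real organization would also weigh goals, objectives and the types of service breaches being experienced:

```python
# Hypothetical improvement initiatives: (name, estimated benefit, estimated cost).
initiatives = [
    ("Automate password resets", 80_000, 20_000),
    ("Upgrade monitoring tooling", 150_000, 120_000),
    ("Formalize Problem Management", 200_000, 40_000),
]

# Rank by simple ROI, highest first, as one possible prioritization input.
ranked = sorted(initiatives, key=lambda i: (i[1] - i[2]) / i[2], reverse=True)
for name, benefit, cost in ranked:
    print(f"{name}: ROI {(benefit - cost) / cost:.0%}")
```

Running this ranks "Formalize Problem Management" first; in practice a low-ROI initiative might still rank highest if it addresses a regulatory requirement or a recurring service breach.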
8.6.8 Monitoring and data collection throughout the Service Lifecycle

Service Strategy is responsible for monitoring the progress of strategies, standards, policies and architectural decisions that have been made and implemented.

Service Design monitors and gathers data associated with creating and modifying (the design efforts of) services and service management processes. This part of the Service Lifecycle also measures the effectiveness of, and the ability to measure, the CSFs and KPIs that were defined when gathering business requirements. Service Design also defines what should be measured. This includes monitoring project schedules, progress against project milestones, and project results against goals and objectives.

Service Transition develops the monitoring procedures and criteria to be used during and after implementation. Service Transition monitors and gathers data on the actual release into production of services and service management processes. It is the responsibility of Service Transition to ensure that the services and service management processes are embedded in a way that can be managed and maintained according to the strategies and design efforts.

Service Operation is responsible for the actual monitoring of services in the production environment, identifying what data is to be measured and processed into logical groupings, as well as doing the actual processing of the data. Service Operation is also responsible for taking the component data and processing it into a format that provides a better end-to-end perspective of the service achievements.

8.6.9 Role of other processes in monitoring and data collection

Service Level Management

SLM plays a key role in the data-gathering activity, as SLM is responsible for defining not only business requirements but also IT's capabilities to achieve them:

- One of the first items in defining IT's capabilities is to identify what monitoring and data collection activities are currently taking place
- SLM then needs to look at what is happening with the monitoring data.
 Is the monitoring taking place only at a component level and, if so, is anyone looking at multiple components to provide an end-to-end service performance perspective?
 SLM should also identify who gets the data, whether any analysis takes place on the data before it is presented, and whether any trend evaluation is undertaken to understand the performance over a period of time. This information will be helpful in the CSI activities that follow.
 Through the negotiation process with the business, SLM would define what to measure and which aspects to report. This would in turn drive the monitoring and data collection requirements. If there is no capability to monitor and/or collect data on an item then it should not appear in the SLA.
 SLM should be a part of the review process to monitor results.
 SLM is responsible for developing and getting agreement on OLAs and UCs that require internal or external monitoring.

Availability and Capacity Management

Availability and Capacity Management:
 Provide significant input into existing monitoring and data collection capabilities, tool requirements to meet new data collection requirements, and ensure that the Availability and Capacity Plans are updated to reflect new or modified monitoring and data collection requirements
 Are accountable for the actual infrastructure monitoring and data collection activities that take place; therefore roles and responsibilities need to be defined and the roles filled with properly skilled and trained staff
 Are accountable for ensuring tools are in place to gather data
 Are accountable for ensuring that the actual monitoring and data collection activities are consistently performed.
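SLM’s rule above, that no item should appear in the SLA unless it can actually be monitored, can be expressed as a simple pre-check. This is a minimal sketch; the target names and the `monitored_items` set are invented for the example.

```python
# Illustrative pre-check: only SLA targets for which monitoring and
# data collection actually exist should survive into the agreement.
monitored_items = {"availability", "incident_resolution_time"}

proposed_sla_targets = [
    "availability",
    "incident_resolution_time",
    "transaction_throughput",  # no monitoring exists yet
]

accepted = [t for t in proposed_sla_targets if t in monitored_items]
rejected = [t for t in proposed_sla_targets if t not in monitored_items]

print("Include in SLA:", accepted)
print("Needs monitoring capability first:", rejected)
```

Rejected items are not necessarily dropped for good; they become candidates once the monitoring and data collection capability is put in place.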
Incident Management and Service Desk

 Incident Management can define monitoring requirements to support event and incident detection through automation, and also has the ability to automatically open Incident tickets and/or auto-escalate Incident tickets
 Event and Incident monitoring can identify abnormal situations and conditions, which helps with predicting and pre-empting situations and conditions, thereby avoiding possible service and component failures
 Monitor response times, repair times, resolution times and Incident escalations
 As a single point of contact, it is important for the Service Desk to monitor telephony items such as call volumes, average speed of answer, call abandonment rates etc., so that immediate action can be taken when a problem arises.

Security Management

Security Management contributes to monitoring and data collection in the following manner:
 Defines security monitoring and data collection requirements
 Monitors, verifies and tracks the levels of security according to the organizational security policies and guidelines
 Assists in determining effects of security measures on data monitoring and collection from the perspectives of confidentiality (accessible only to those who should have access), integrity (data is accurate and not corrupted or not corruptible) and availability (data is available when needed).

Financial Management

Financial Management is responsible for monitoring and collecting data associated with actual expenditure versus budget, and is able to provide input on questions such as: are costing or revenue targets on track? Financial Management should also monitor the ongoing cost per service. In addition, Financial Management will provide the necessary templates to assist CSI in creating the budget and expenditure reports for the various improvement initiatives, as well as providing the means to compute the ROI of the improvements.

8.6.10 Metrics and measurement

In general, a metric is a scale of measurement defined in terms of a standard, i.e. in terms of a well-defined unit.
The quantification of an event through the process of measurement relies on the existence of an explicit or implicit metric, which is the standard to which measurements are referenced.

Table 8.1 Service metric examples

Measure | Metric | Quality goal | Lower limit | Upper limit
Schedule | % variation against revised plan | Within 7.5% of estimate | Not to be less than 7.5% of estimate | Not to exceed 7.5% of estimate
Effort | % variation against revised plan | Within 10% of estimate | Not to be less than 10% of estimate | Not to exceed 10% of estimate
Cost | % variation against revised plan | Within 10% of estimate | Not to be less than 10% of estimate | Not to exceed 10% of estimate
Defects | % variation against planned defects | Within 10% of estimate | Not to be less than 10% of estimate | Not to exceed 10% of estimate
Productivity | % variation against productivity goal | Within 10% of estimate | Not to be less than 10% of estimate | Not to exceed 10% of estimate
Customer satisfaction | Customer satisfaction survey result | Greater than 8.9 on a range of 1 to 10 | Not to be less than 8.9 on a range of 1 to 10 | –

Metrics are a system of parameters or ways of quantitative assessment of a process that is to be measured, along with the processes to carry out such measurement. Metrics define what is to be measured. Metrics are usually specialized by the subject area, in which case they are valid only within a certain domain and cannot be directly benchmarked or interpreted outside it. Generic metrics, however, can be aggregated across subject areas or business units of an enterprise.

Interpreting metrics

Before beginning to interpret the results it is important to understand the data elements that make up the results, the purpose of producing the results and the expected normal ranges of the results. Simply looking at some results and declaring a trend is dangerous. Figure 8.3 shows a trend in which the Service Desk is opening fewer incident tickets over the last few months. One could believe that this is because there are fewer Incidents, or perhaps it is because the customers are not happy with the service that is being provided, so they go elsewhere for their support needs.
Perhaps the organization has implemented a self-help knowledge base and some customers are now using this service instead of contacting the Service Desk. Some investigation is required to understand what is driving these metrics.

Figure 8.3 Number of incident tickets opened over time (monthly counts, January to June)

8.7 SERVICE REPORTING

A significant amount of data is collated and monitored by IT in the daily delivery of quality service to the business; however, only a small subset is of real interest and importance to the business. The majority of data and its meaning are more suited to the internal management needs of IT. (Figure 8.4 shows the service reporting process.)

The business likes to see a historical representation of the past period’s performance that portrays their experience; however, it is more concerned with those historical events that continue to be a threat going forward, and with how IT intends to militate against such threats.

It is not satisfactory simply to present reports which depict adherence (or otherwise) to SLAs, which in themselves are prone to statistical ambiguity. IT needs to build an actionable approach to reporting, i.e. this is what happened, this is what we did, this is how we will ensure it doesn’t impact you again, and this is how we are working to improve the delivery of IT services generally.

Figure 8.4 Service reporting process: define reporting policies and rules; collate; translate and apply business views; publish; supported by IT liaison, education and communication
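The caution above about declaring trends from raw counts can be illustrated numerically. The monthly figures below are invented, loosely echoing the shape of Figure 8.3; the point of the sketch is that arithmetic alone only quantifies the decline. It cannot say whether fewer tickets means fewer Incidents, dissatisfied customers going elsewhere, or successful self-help uptake.

```python
# Invented monthly ticket counts, January to June.
tickets = {"Jan": 15500, "Feb": 15100, "Mar": 14600,
           "Apr": 13900, "May": 13100, "Jun": 12400}

counts = list(tickets.values())
overall_change = (counts[-1] - counts[0]) / counts[0] * 100
print(f"Change Jan-Jun: {overall_change:.1f}%")

# A sustained decline is a prompt for investigation, not a conclusion.
if overall_change < -10:
    print("Investigate: fewer Incidents, unhappy customers, "
          "or self-help uptake?")
```

This is the actionable-reporting point in miniature: the report should carry the investigation’s findings and the resulting actions, not just the number.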
