Chapter 6: Operational Risk Tools - Operational Risk Indicators
The Institute of Risk Management

Summary
This document provides an overview of operational risk tools, specifically operational risk indicators. It discusses the definitions, roles, and different types of indicators, including risk indicators, control indicators, and performance indicators. It also covers challenges concerning operational risk indicators.
Learning outcomes and assessment criteria

6. Understand the fundamentals of the use of operational risk indicators as applied to operational risk management.
6.1 Explain the role and purpose of different forms of operational risk indicators.
6.2 Examine the nature and use of operational risk indicators.
6.3 Describe the challenges surrounding operational risk indicators.

Key themes

The key themes of this chapter are as follows:
- The definitions and roles of different types of indicator.
- Principles for selecting, replacing and monitoring indicators.
- Setting thresholds, limits and escalation triggers.
- The differences between lagging and leading indicators.
- Techniques for risk indicator aggregation and reporting.

Introduction to Chapter 6

Risk and control indicators are useful tools to support the monitoring and control of operational risk in a number of areas, including risk identification, risk and control assessments, and the implementation of effective risk appetite and governance frameworks. However, it is important not to over-estimate their importance. An indicator is just that – it provides an indication of risk exposure, control effectiveness or performance, but not necessarily a fully accurate one. Indicators may at times provide a valuable early warning of changes in operational risk exposure or control effectiveness. However, they can also provide a misleading picture and false alarms. Some indicators may move coincidentally with the risk or control they are tracking – and may only do so for a short period of time. They may not actually be indicators of the causes of operational loss events and control failures.
Ideally we would use more 'causal' indicators of that kind, but they can be much harder to uncover. It is also important to understand that not all risks can be measured and tracked via risk indicators, and one of the hardest parts of establishing risk indicators is finding good ones. It is almost better to have no indicator than a bad one, which would give false comfort or send misleading signals to senior management. In this chapter, we will cover some of the key considerations involved in selecting, monitoring and reporting risk and control indicators, as well as setting thresholds, limits and escalation triggers. There is a close relationship with Chapter 3, Operational Risk Appetite, so it is important to have read and understood that chapter before starting here.

6.1 Explain the role and purpose of different forms of operational risk indicators

In general terms, an indicator is a metric that tells a firm something about the changing nature of its risk environment; here we restrict the scope to operational risk. This could mean a change in the underlying (inherent/gross) exposure to the risk: e.g. the risk of internal or external fraud might increase during an economic recession. Or it could mean a change in the effectiveness of a control: e.g. a reduction in the frequency with which data backups are performed. Or it could mean a wider change in the firm's business which could have implications for the management of operational risk: e.g. rapid growth in assets or employee numbers could put existing systems and controls under significant pressure. The focus here is on two main types of indicator: risk and control indicators. A third type, performance indicators, is mentioned briefly for completeness but is not covered in detail.
6.1.1 Risk indicators and key risk indicators

Risk indicators are metrics used to monitor identified risk exposures over time. Therefore, any piece of data that can perform this function may be considered a risk indicator. An indicator becomes 'key' when it tracks a significant risk exposure for the organisation in question (a 'key risk'). In this chapter we refer mainly to risk indicators (RIs), but in different organisations you will also frequently see references to 'key risk indicators' and KRIs. There is no particular distinction intended between the two. In theory, the term KRI should only be applied to a few important (or 'key') operational risks. In practice, though, the term KRI often ends up being applied to all risk indicator metrics, regardless of how critical the underlying risk is. Consequently, no distinction is drawn between the two in this chapter: the broader term 'risk indicator' is used most of the time, but it is accepted that for many practitioners the terms risk indicator and KRI are synonymous.

In an operational risk context, the term RI (KRI) is usually taken to mean a metric that provides information on the level of exposure to a given operational risk which an organisation faces at a particular point in time. RIs often provide an indication of changes in the level of inherent (gross) risk, i.e. the level of risk present in the absence of controls. In this regard, RIs can be seen as metrics of the underlying drivers or causes of operational risk exposures, announcing possible trouble ahead.

Risk indicator: Staff turnover.
Potential linked operational risks: Internal fraud, process errors, data entry errors, employee relations.
Comment: Staff turnover could signal low levels of staff morale. Depending on the quality and seniority of leavers and the degree of compensating recruitment and training, staff turnover could be a leading indicator of higher operational risks to come.
Risk indicator: Number of attempted hacking attacks.
Potential linked operational risks: External fraud, systems failure.
Comment: Attempts to breach firewalls via Trojan horses, etc., indicate the general level of hacking activity and whether the organisation is currently a target for hackers.

Risk indicator: Systems availability.
Potential linked operational risks: Systems failure.
Comment: Systems availability, calculated as a percentage or in absolute terms, is a common measure of the reliability of IT systems.

Risk indicator: Credit rating of key third party contractors.
Potential linked operational risks: Failure of a key supplier or outsourcing partner.
Comment: The weaker a third party contractor is in financial terms, the more likely they are to fail, causing a disruption in service and reputational damage.

Risk indicator: Staff satisfaction surveys.
Potential linked operational risks: Dissatisfied staff are more likely to commit fraud or behave in a negligent way. There is also the potential for employee relations issues and high levels of staff absence.
Comment: Many organisations measure staff satisfaction in their annual staff surveys.

6.1.2 Control indicators

Control indicators (CIs or, sometimes, Key Control Indicators) are metrics that provide information on the extent to which a given control is meeting its intended objectives at a specific point in time. In essence, they provide an indication of whether or not a control is working as intended. As with the term Key Risk Indicators described previously, Key Control Indicators is generally applied to a few important control indicators: either controls which apply to key risks or controls which apply to a number of risks.
Consider the objective the firm is trying to achieve and identify what absolutely must happen to ensure this is achieved. This can help you identify which control indicators may be 'key'.

Control indicator: Frequency of data backups.
Potential linked operational risk controls: Systems failures, damage to physical assets.
Comment: A lack of frequent data backups and/or missed scheduled backups increases the chance that important data will be non-recoverable.

Control indicator: Frequency of the testing of fire alarms, building evacuation, etc.
Potential linked operational risk controls: Damage to physical assets.
Comment: Low levels of testing increase the chance that controls will fail.

Control indicator: Delays in the patching of IT systems.
Potential linked operational risk controls: Systems failure, external fraud.
Comment: Systems that do not receive software patches in a timely fashion are more prone to failure and/or hacking attacks.

Control indicator: Speed with which customer complaints are resolved.
Potential linked operational risk controls: Reputational risk, other customer account management related risks.
Comment: Customer complaints that are not resolved in a timely fashion signal potential weaknesses in the complaints handling and resolution process.

Control indicator: Number of audit actions that have not been completed within the agreed timescale.
Potential linked operational risk controls: This is a potential measure of the general effectiveness of an organisation's internal control environment and the willingness of managers to correct weaknesses. It is often a sign that the status of internal audit is not considered as important as it should be.
Comment: Agreed audit actions are typically tracked and any delays in completion recorded. It would be possible to track both the number of delayed actions and the total/average length of delay for greater granularity.

6.1.3 Performance indicators

Performance indicators are metrics that measure organisational performance or the achievement of business targets (e.g. sales targets or keeping within financial budgets).
These are usually considered more relevant to finance, accounting and general business management than to operational risk. Nevertheless, it is possible that you will come across performance indicators that have implications for the management of operational risk within firms, for example asset growth, cost to income ratio or customer retention.

Performance indicator: Asset growth.
Relevance for operational risk: Fast growing organisations are usually more vulnerable to operational risk events. This may be because existing systems and controls cannot keep up with such a fast level of growth, or because management and employees become over-loaded.
Comment: Growth in assets, usually expressed in percentage terms.

Performance indicator: Cost to income ratio.
Relevance for operational risk: An increasing cost to income ratio shows increasing costs versus income. This may result in pressure to cut costs and control environments being relaxed with a view to saving money, which may in itself lead to an operational risk event.
Comment: Cost to income ratio, usually expressed in percentage terms.

Performance indicator: Customer retention.
Relevance for operational risk: Low levels of customer retention may be a result of competitors offering a more attractive product; however, this may also indicate failings in operational processes resulting in poor customer service.
Comment: Customer retention, usually expressed in terms of customer balances.

6.1.4 Overlaps between Risk Indicators, Control Indicators and Performance Indicators

It follows that, in practice, there can be overlaps between these different types of metric. Some metrics can be both performance and risk indicators, or both control and risk indicators, or may have elements of all three, especially where 'failed' performance indicators or control indicators become potential sources of increased exposure to certain types of operational risk. Ultimately the distinction is not what matters:
If a metric is indicative of operational risk it should be used, regardless of how it is labelled. For example, a high level of IT system downtime may be very relevant for business performance (perhaps as a result of the exceptional success of a new product) and thus a performance indicator, but may also be an important risk indicator with regard to data entry/integrity risks. Other examples of overlapping metrics that might be relevant for operational risk as well as business performance include:

- Poor levels of call centre performance (e.g. a big increase in the number of unanswered calls), which could increase the risks associated with customer complaints and lost business.
- An inability to stick to pre-agreed budgets, which could increase the potential for internal fraud if over-spending is not properly challenged.
- A reduction in operational efficiency (e.g. staff productivity), which could in turn signal weaknesses in certain processes and systems.
- Overdue deal confirmations in the back office of a financial trading firm, which might indicate poor performance in the back office, an increased risk of legal disputes, processing errors and rogue trading events, or control weaknesses in the processing of transactions.

Workplace reflection
Look for examples of risk indicators and control indicators in your organisation. See if you can find at least ten for each category. Then look for some example indicators that may be primarily business performance indicators but may also be relevant for operational risk.

6.2 Examine the nature and use of operational risk indicators

As stated in the introduction to this chapter, risk and control indicators are useful tools to support the monitoring and control of operational risk in a number of areas, including risk identification, risk and control assessments, and the implementation of effective risk appetite and governance frameworks. An indicator is just that – it provides an indication of risk exposure, control effectiveness or performance.
However, they may provide a valuable early warning of changes in operational risk exposure or control effectiveness. There are many potential operational risk and control indicators, and it is clearly not practical to collect and monitor data on them all. Collecting data on indicators takes time and expense. In addition, if an organisation tries to monitor very large numbers of indicators, it is unlikely that management will be able to make sense of all this data. So care is needed to ensure that the right set of indicators is selected. It should also be stressed that this set of indicators will change – so there will be a need to add, change or remove indicators from time to time.

6.2.1 The desirable features of operational risk indicators

Below is a summary of the desirable features of effective operational risk and control indicators.

Relevant: The indicator in question must be relevant to the element that is being monitored (e.g. a particular operational risk exposure or control). In short, it should be a strong indicator of likelihood, impact or control effectiveness.

Measurable: Indicators should ideally be measurable with a reasonable degree of accuracy, to support the identification of trends and to facilitate more objective monitoring. Indicators that are described by text alone can be misinterpreted and subject to manipulation through the structure of the text. Various types of measure can be used, including counts (number of days, employees, etc.), monetary values, percentages, ratios, time durations or a value from some pre-defined rating set (such as that used by a credit rating agency).

Preventative (leading): Effective indicators should help organisations to predict future changes in risk exposure or control effectiveness, rather than simply indicating what has happened already. This means that they should be leading rather than lagging. In this way, indicators can be used to help reduce the likelihood of future operational risk loss events and/or reduce their impact. Note that this does not mean that historical information is of no use – past trends can be a useful predictor of future events (e.g. increasing levels of customer complaints could signal the potential for greater numbers of customer-related operational risk events such as mis-selling).

Easy to monitor: As with any form of risk reporting, the benefits associated with monitoring an operational risk indicator must exceed the cost of collection and monitoring. The harder it is to collect data, or to process it into meaningful indicators, the less value there is likely to be in monitoring that indicator. Notably, indicators that require a lot of manual human input for collecting and processing data will generally be harder and more expensive to monitor. Equally, creating lots of new indicators can also increase costs.

Comparable: A single indicator rarely gives enough information to really understand the operational risk exposure it relates to. Accordingly, it is desirable to use some form of benchmark, either built up over time (time series) or across peer organisations (cross-sectional). To do this effectively, the data collected should be comparable.

Auditable: The methods used to collect and process data for a particular indicator should be easy to verify. For good governance, independent validation of the operational risk indicator monitoring process, including how data is sourced, aggregated and delivered to management, should be carried out periodically.
This helps ensure that the selected indicators are being monitored in a statistically robust and accurate way.

Workplace reflection
Ask your line manager or Head of Operational Risk for examples of the operational risk and control indicators that are monitored by your organisation. Assess the effectiveness of these indicators using the desirable features outlined above.

6.2.2 Top-down versus bottom-up

There are two main approaches that organisations can use to select the operational risk or control indicators they want to monitor: top-down or bottom-up, comparable to the approaches explained in Chapter 5, RCSAs. The top-down approach starts with senior management or directors, who choose indicators to be monitored across the business. The bottom-up approach allows business entity or other managers to choose and monitor their own sets of indicators. Neither approach is intrinsically better than the other. A top-down approach can facilitate aggregation and senior management understanding, while a bottom-up approach ensures that business entity managers can select and monitor those indicators that are most relevant to their particular situation. In practice, many organisations employ a combination of the two. In either case the selection process must take into account the results of the RCSA, ensuring clear linkage between identified operational risk exposures and the metrics chosen to gauge the changing levels of these exposures. For example, depending on the approach used to develop the RCSA, as described in section 5.4, the workshop or questionnaire approach could be extended to consider the identification of appropriate indicators. Many firms possess existing indicators or management information, which may already be tracked for other purposes, but which meet the criteria in section 6.2.1 and, most importantly, clearly align with the firm's risk profile. For example, a number of firms track staff turnover.
As demonstrated in section 6.1.1, this may also be a valid risk indicator. An initial step may be to identify such existing indicators, consider whether they are sufficient to monitor the risk profile, and identify any gaps.

6.2.3 Adding or changing indicators

The set of indicators used will change over time. Hence, to ensure good governance, procedures for adding, changing or removing specific indicators should be agreed and evidenced by proper documentation. This will ensure that the rationale for any changes is recorded and that changes are approved in an appropriate way. These procedures would typically include the following elements:

- Review frequency.
- Who has the authority to approve the addition, change or removal of selected indicators.
- When removing a previously monitored indicator, what will happen to the data that has been collected: will it be retained or deleted?
- When replacing an existing indicator with another similar indicator, whether past data should be recalculated or amended and applied to the new indicator.
- The introduction of indicators relating to new products or business activities, including how long such indicators should be monitored post implementation.
- The introduction of indicators following recommendations by department managers, regulators and/or auditors (both internal and external).

In the case of geographically dispersed groups it is also important to track and oversee the activities of remote entities to ensure that their indicator selection processes are in line with the group as a whole. If one part of the group adopts a different set of indicators, this can make consistent group-wide operational risk monitoring much more difficult.

6.2.4 Monitoring indicators: How many and how often?

There is no right or wrong answer for how many indicators should be selected.
Too few indicators may not deliver a clear picture of the risk exposures, controls or performance elements being monitored. Too many may present an overly confusing picture. When deciding the number of operational risk indicators to be monitored it is helpful to consider the following:

- The number and nature of the key operational risks that have been identified.
- The availability of the data required to monitor the selected indicators.
- The cost of processing or storing data for the selected indicators.
- The intended audience, for example local management, executive management or the governing body.

In relation to the intended audience, it is usually appropriate to collect a more detailed set of metrics for the local management of a specific business area/entity than for executive management or the governing body. In terms of the frequency of monitoring, this will typically follow the cycle of the activity that relates to the indicator in question. For example, indicators that relate to real-time market transactions will usually be monitored on a continuous basis (e.g. limit breaches), while indicators such as staff turnover might only be assessed monthly or even quarterly. It should be noted that the frequency of indicator monitoring (in terms of data capture) is not the same as the frequency of reporting, which will depend on management need (for more on reporting, see below).

6.2.5 Setting thresholds, limits and escalation triggers

Operational risk indicators add little value without thresholds, limits or escalation triggers. While trends may highlight potential increases or decreases in operational risk exposures, control effectiveness or organisational performance, they do not, on their own, provide a basis for effective decision making. For example, an indicator might suggest an increasing area of operational risk exposure, but so what?
Is it worth investing valuable time and money trying to correct the increase? When should we intervene? By setting thresholds, limits or triggers, firms can make more effective use of their limited risk management resources, because these clarify when action may be required. Note that there is a close link here between operational risk indicators and operational risk appetite (see Chapter 3). The limits and thresholds assigned to operational risk indicators are a widely used mechanism for expressing an organisation's appetite for operational risk. It would be worth re-familiarising yourself with the elements of Chapter 3 that covered the use of limits and thresholds.

Defining targets, limits and thresholds

A target implies that the organisation is aiming for a specific value. For example, a firm might set itself a target of reducing its operational losses below a specified amount (say 3% of its operating income, or less than £1m per year). Equally, a firm might set a target level for staff turnover, since too little staff turnover may be just as problematic as too much. Where the indicator deviates from this target, action may be required.

A limit implies some kind of maximum absolute value (or maximum tolerable value, which is slightly less severe) for a particular indicator. For example, a firm might want to set a limit on the delay permitted in relation to overdue audit actions which relate to operational risk systems and controls. It might also set limits relating to delays in the scheduled maintenance of key controls (e.g. electrical safety checks).

Finally, a threshold implies a value which, when exceeded – whether above or below the threshold value – requires some kind of management action, either to investigate the cause or implications of the information reported, or to correct the situation and reduce (or increase, as appropriate) the value of the indicator.
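A minimal sketch can make the three concepts concrete. Everything here is hypothetical – the indicator, the figures and the message wording are invented for illustration, and Python is used only for convenience:

```python
# Illustrative sketch only: evaluate one indicator reading against a
# target, a hard limit, and Amber/Red thresholds. All names and numbers
# are hypothetical, not taken from the chapter.

def assess_indicator(value, target=None, limit=None, amber=None, red=None):
    """Return a list of findings for a single indicator observation."""
    findings = []
    if target is not None and value != target:
        findings.append(f"off target (actual {value}, target {target})")
    if limit is not None and value > limit:
        findings.append(f"LIMIT BREACH: {value} exceeds maximum {limit}")
    # Thresholds: Red is checked first because it is the more severe level.
    if red is not None and value >= red:
        findings.append("RED threshold breached - urgent management action")
    elif amber is not None and value >= amber:
        findings.append("AMBER threshold breached - investigate cause")
    return findings

# Example: monthly staff turnover (%) with a target of 5%, Amber at 8%, Red at 12%.
print(assess_indicator(9.5, target=5.0, amber=8.0, red=12.0))
```

A real implementation would sit inside the firm's monitoring platform and route findings to the appropriate owners rather than printing them.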
Thresholds are frequently encountered within firms, even if they are sometimes called limits. It is common to assign two levels of threshold – Amber and Red – each relating to a different level of intensity and urgency in terms of management response. This was discussed in Chapter 3.

Setting thresholds

As thresholds are the most common mechanism encountered in organisations, the remainder of this section will focus on them. First, it is important to consider the organisation's appetite for operational risk when setting thresholds for indicators. In general, the greater the organisation's appetite for risk, the less conservative it should be when setting Red or Amber thresholds, and vice versa. Note that it is not always possible to determine an objective relationship between risk appetite and the thresholds set for specific indicators. However, it should be possible to check for any obvious inconsistencies. Setting thresholds is never easy and will typically require an element of judgement. It is rarely possible to determine these thresholds in a fully scientific and objective way. You should not assume that the thresholds chosen will be 100% perfect, or that they should never change over time.

One good way to set Red and Amber thresholds is to review the historical values of an indicator to identify potential spikes. Where these spikes occur close to risk events (whether an actual loss or a near miss) this might also help in determining the right level of the threshold, though clear correlations between indicators and identified events are rare. Looking at historical norms allows us to set variances to these norms and hence start with tolerances that are more relevant.
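One way to turn this historical-norms idea into a provisional starting point is to take high percentiles of past observations as candidate Amber and Red thresholds, then adjust them with judgement. The sketch below assumes this percentile convention; the 90th/99th choices and the complaint counts are illustrative assumptions, not guidance from the text:

```python
# Sketch: derive provisional Amber/Red thresholds from an indicator's history.
# Using high percentiles of past observations is one possible convention;
# the 90th/99th percentile choices here are illustrative only.

def provisional_thresholds(history, amber_pct=0.90, red_pct=0.99):
    """Return (amber, red) as empirical percentiles of past values."""
    ordered = sorted(history)
    def percentile(p):
        # Nearest-rank style: value below which roughly p of observations fall.
        idx = min(len(ordered) - 1, int(p * len(ordered)))
        return ordered[idx]
    return percentile(amber_pct), percentile(red_pct)

# 24 months of (hypothetical) open-complaint counts, including two spikes.
history = [52, 55, 61, 58, 49, 63, 70, 66, 59, 95, 57, 62,
           54, 60, 68, 64, 58, 61, 110, 65, 56, 59, 63, 60]
amber, red = provisional_thresholds(history)
print(amber, red)  # the spikes push the Red level well above the routine range
```

The output is only a first cut: as the text notes, judgement is still needed, especially for the Red level, which historical data alone rarely pins down.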
Often, there will be enough spikes to help set an Amber threshold, but judgement is required to set a Red threshold that is not so low that it is breached under relatively mundane operating conditions. As a general rule, if a Red threshold is breached relatively often and significant action is not required to correct this, then it has probably been set too low.

Another approach is to use benchmark data – i.e. to compare the values of an indicator with those of peer organisations. It is common, for example, for organisations to share staff turnover data to permit the calculation of sector averages. Firms whose indicators are significantly above or below these averages might have a problem that requires investigation and/or action. Even in the absence of public data sharing, benchmarking may still be possible by referring to accepted good practice (common in areas like information security) or formal standards such as International Standards. Guidance may also be provided by regulators and professional institutes. Finally, information on publicly reported operational risk events may also refer to certain indicators and the values involved, which might be used to set Red or Amber thresholds.

If trend or benchmark data is not available, then seeking expert judgement (from experienced professionals inside or outside the firm) is an option. Care is needed here, however, as expert opinions can differ significantly, depending on the risk attitudes and perceptions of these experts.

It should be expected that over time, as a firm becomes more risk aware and the benefits of proactive risk management deliver value, indicator thresholds will be tightened. This implies that the organisation should periodically review not just the indicators it is using, but also its thresholds. However, if the thresholds are too tight, they will generate false alerts, meaning that over time people start ignoring the alerts altogether.
Too wide, on the other hand, and the organisation learns too late that a major issue has emerged, with potentially adverse consequences.

Changing thresholds

Changing the values of Red or Amber thresholds is quite common, especially, as indicated above, after a firm has monitored an indicator for some time and gained a deeper understanding of its movements. Firms may also alter thresholds in response to changes in their appetite for operational risk. Although it may be normal to change thresholds, this must not be done on a whim. Hence, just as when indicators are added or removed, there should be procedures in place to ensure that the rationale for any change in threshold is recorded and to specify who can recommend and sign off the change.

Using thresholds

Where an indicator's value breaches a threshold, this suggests either that the organisation's exposure to the associated operational risk is too high or, in the case of a control indicator, that the associated control is not as effective as it should be. As with risk appetite, the degree of response will depend on whether a Red or Amber threshold is breached:

Figure 6.2.5: RAG thresholds

Actions should always have a specific owner, be allocated to a specific individual to execute and have a definitive target completion date. Agreed actions should then be monitored on an ongoing basis to ensure the appropriate corrective action is implemented in a timely way. Indicator thresholds may also be used as triggers to escalate significant increases in operational risk exposure, or declines in control performance, to senior management and even the governing body. Escalation to governing body level will generally only occur for indicators of particularly important operational risks (i.e. those that could affect the achievement of strategic objectives). Different levels of escalation triggers may also be agreed, with increasingly senior management being alerted as the value of an indicator passes each trigger.

For example, assume that the level of open customer complaints is deemed an important operational risk indicator for the firm, and that historically this metric has ranged between 50 and 100 per month. On this basis the firm could establish the following escalation thresholds:

- 100 open customer complaints: local management threshold.
- 120 open customer complaints: further local management threshold, accompanied by a divisional threshold. This implies that the local management responsible for managing the open complaints log is informed and needs to take action, and that the divisional management directly above that unit is also informed, so that they can monitor the situation and take action where necessary.
- 150 open customer complaints: general alert threshold. In addition to local management and divisional management, senior head office management is also informed so that they can monitor the situation and take action where necessary.

Just as different boundary thresholds can be applied to different indicators, different trigger conditions can also be established. The most basic is a 'touch' trigger: as soon as the boundary is reached, the trigger is initiated and alerts are generated as appropriate. Other trigger conditions include 'repetitive touch', where nothing happens when the boundary is first reached, but if the boundary is still breached in the next data submission period, the alert is triggered. Such triggers should be linked to risk appetite and to the degree of sophistication required in the warning system.
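As a sketch, the tiered open-complaints boundaries above (100/120/150) can be combined with a 'repetitive touch' trigger condition. The audience labels are illustrative, and the two-consecutive-periods rule is just one possible trigger:

```python
# Sketch of tiered escalation with a 'repetitive touch' trigger: an alert
# fires only when a tier's boundary is breached in two consecutive periods.
# Tier values follow the open-complaints example in the text (100/120/150);
# the recipient labels are invented for illustration.

TIERS = [(150, "senior head office management"),
         (120, "divisional management"),
         (100, "local management")]

def escalation(previous, current):
    """Return the audiences to alert, given the last two periodic readings."""
    alerts = []
    for boundary, audience in TIERS:
        # Repetitive touch: both readings must be at or above the boundary.
        if previous >= boundary and current >= boundary:
            alerts.append(audience)
    return alerts

print(escalation(95, 130))   # first breach only -> no alert yet
print(escalation(130, 125))  # sustained breach -> local and divisional alerted
```

A plain 'touch' trigger would simply test `current` alone, which shows how small the implementation difference between the two conditions is compared with the monitoring-overhead difference.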
They must also take into account the resource overhead (people, systems and cost) necessary to implement more sophisticated systems.

The next few paragraphs look at lagging versus leading indicators, the similarities and differences between them, and some of the challenges involved in identifying truly leading indicators.

Lagging indicators

Lagging indicators rely on historical data and provide information on what has happened in the past. Operational loss event data is a good example of this – it only provides information on losses that have already occurred. The problem with lagging indicators is that they may not provide much insight into what may happen in the future. That does not mean that they provide no insight: it depends on the extent to which history repeats itself. This is the point of lagging indicators: they can help to model what may happen based on past events, focusing in particular on the size of impact given that an event has happened.

Historical trends can be a valuable indicator of the future. For example, if operational loss levels show an upward trend over an extended period, then such a trend might be expected to continue unless action is taken to address the situation. However, care needs to be taken when making such inferences. Just because an indicator has shown a worsening trend in the past does not mean that such a trend will necessarily continue – the internal and external environment is continually changing. Similarly, the absence of a worsening historical trend does not mean that all will be well in the future.

Possible lagging indicators include:

Risk indicator: Value of operational loss events.
Potential linked operational risks: All.
Comment: Recorded values of reported operational loss events over a given time horizon (e.g. monthly or on a moving annual basis). This provides a good lagging indicator for all operational risk event types.

Risk indicator: Number of closed customer complaints.
Potential linked operational risks: All risks relating to customers (e.g. processing errors, mis-selling, etc.).
Comment: Total number of customer complaints that have been resolved over a given time period (e.g. monthly). This provides an overview of historical customer satisfaction levels.

Risk indicator: Number of days lost due to health and safety incidents.
Potential linked operational risks: Health and safety.
Comment: Information on historical health and safety incidents. The length of time taken off work by injured staff members provides an indication of the severity of these events.

Risk indicator: Number of bank cards reported stolen.
Potential linked operational risks: External fraud.
Comment: Total number of bank cards reported stolen over a given time period.

Leading indicators

Leading indicators provide information on potential future changes in risk exposure or control effectiveness. As such they are more predictive than lagging indicators and potentially more valuable, helping firms to prevent increased risk exposure or reduced control effectiveness before those things actually occur – or at least to mitigate them quickly should they occur. To be truly effective, leading indicators should provide information about the causes of a particular operational risk event.

Here, it is important to remember the difference between cause and correlation. Just because an indicator tracks known operational losses – i.e. they trend together – does not mean that there is a direct causal relationship between them, only an incidental relationship (or correlation). In reality the observed correlation may turn out to be spurious (incidental or false). However, if an indicator can be found which helps to track a potential cause of a given operational loss event, then any adverse move in the value of this indicator is more likely to signal future losses.
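The first lagging indicator above – the value of operational loss events on a moving annual basis – is simply a rolling twelve-month sum of monthly loss figures. A minimal sketch, in which the monthly loss values are invented purely for illustration:

```python
# Moving annual total of operational loss values: a rolling
# twelve-month sum over monthly figures. Values are illustrative.
monthly_losses = [120, 95, 210, 80, 150, 60, 300, 110, 90, 175, 130, 200,
                  250, 140]  # fourteen months of reported loss values

def moving_annual_total(losses, window=12):
    """Rolling sum over the trailing `window` months; the first
    result covers months 1-12, the next months 2-13, and so on."""
    return [sum(losses[i - window + 1:i + 1])
            for i in range(window - 1, len(losses))]

print(moving_annual_total(monthly_losses))
```

A rising sequence of these rolling totals is exactly the kind of worsening historical trend the text cautions should prompt action, while remembering that past trends need not continue.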
True leading indicators are rare, because even a forward-looking indicator may rely to some degree on historical trends. That said, it is possible to find some indicators that are linked to the causes of certain operational loss events, as illustrated in the table below:

Risk indicator: Percentage of product suitability approvals outstanding.
Potential linked operational risks: Mis-selling.
Comment: Errors in the recommendation of suitable products could lead to mis-selling claims. This metric reports the number of outstanding suitability approvals as a percentage of the total.

Risk indicator: Percentage of IT staff who are not up to date with important hardware/software training.
Potential linked operational risks: Systems failure and hacking attacks.
Comment: Technical IT training is constantly being updated, and where staff members lag behind in maintaining their skills, errors could occur.

Risk indicator: Number of best execution exceptions.
Potential linked operational risks: Customer compensation claims.
Comment: Where trades on behalf of a customer are not made on the best possible contractual terms or at the best possible price, customers may complain and demand compensation. Note: these are risk events in their own right, which need individual resolution, as well as being a leading indicator.

Learning activity

Find five examples of leading operational risk indicators within your organisation. Be sure to justify why you believe that they are leading indicators.
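The first leading indicator in the table above is a simple ratio. As a rough sketch (the function name and the example figures are invented for illustration):

```python
# Outstanding product suitability approvals as a percentage of the
# total, per the leading indicator described in the text. Figures
# below are invented for illustration.

def pct_outstanding(outstanding, total):
    """Outstanding approvals as a percentage of all approvals."""
    if total == 0:
        return 0.0  # no approvals in the period: nothing outstanding
    return 100.0 * outstanding / total

# 30 of 400 suitability approvals still outstanding.
print(pct_outstanding(30, 400))
```

Expressing the indicator as a percentage rather than a raw count keeps it comparable across business units of different sizes, which matters later when values are aggregated.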
6.3 Describe the challenges surrounding operational risk indicators

In this section we explore some of the implementation challenges you might encounter when setting up a risk indicator monitoring framework, in particular to do with reporting. A poor reporting framework can provide misleading information on operational risk exposures, including false comfort to senior managers and governing bodies. So it is important to think carefully about the various challenges associated with reporting.

6.3.1 Levels of reporting

An effective reporting process is essential for managing operational risk. It can assist in the early detection and correction of emerging operational risk issues. It can also serve as a basis for assessing operational risk and related mitigation strategies, and for creating incentives to improve operational risk management throughout the firm. Hence risk and control indicators are of little use if they are not reported to a firm's decision makers. Operational risk indicator reports may be produced at all levels of decision making, from governing body level to the managers of individual business lines and functions.

Figure 6.3.1: Main levels of operational risk reporting that relate to operational risk indicators

6.3.2 Frequency of reporting

There is no single answer to the question of how frequently to report. It depends on the nature of the risks, indicators and environment.
Reporting should, however, be linked to decision-making processes, and reports of different frequencies will be required to suit specific audiences. The table below outlines some of the more common frequencies that can be used for indicator reporting.

Interval: Daily.
Benefits: Allows instant indication of risk issues in day-to-day business.
Suitable to: Volume, control breaks, fails etc. for routine business activities.
Usual audience: Line management, business management.
Drawbacks: Lack of in-depth analysis.

Interval: Weekly.
Benefits: Good for tracking the status of common issues over a short period of time.
Suitable to: Volume, control breaks, fails etc. for routine business activities.
Usual audience: Line management, business management.
Drawbacks: Have to assess whether it covers the events happening during the week or is just a snapshot.

Interval: Monthly.
Benefits: Aligns with other monthly MIS reporting and regarded as good timing to meet with management.
Suitable to: The above, but also other non-transaction related activities.
Usual audience: Line management, business management, group management.
Drawbacks: Lack of sense of urgency.

Interval: Quarterly.
Benefits: Aligns with the announcement of quarterly results and preparation for committee meetings.
Suitable to: Can also consider specific indicators to meet regulatory requirements.
Usual audience: Line management, business management, group management, Audit Committee.
Drawbacks: Too high level to review granularities; lack of sense of urgency.

Interval: Yearly.
Benefits: Aligns with year-end financial results.
Suitable to: Indicators mainly for presenting the high-level picture.
Usual audience: Line management, business management, group management, Executive Board.
Drawbacks: Too high level to review granularities; lack of sense of urgency.

Line management relates to functional management within a specific business entity, so it might refer to management of the IT function or the compliance function. Business management refers to the central, and often more senior, management and directors of a specific business entity (e.g. a national retail bank that is part of a wider banking group).
Group management relates to the managers, senior managers and directors who oversee the operations of the entire group. The Audit Committee and governing body are self-explanatory.

6.3.3 Presenting indicator reports

The presentation of a report will depend on the requirements of the intended audience and, where possible, reports should be developed in conjunction with them. However, central co-ordination can help to ensure that a consistent view of information is delivered, so that reports can be compared across business lines and functions and/or aggregated for senior management. In larger organisations, documented procedures for indicator reporting may also be necessary to ensure consistency.

An indicator report should be presented in a user-friendly manner, with appropriate visual aids and clear, simple language. This presentation could take a variety of forms, for example: country or regional view reports; firm-wide reports; business unit specific reports; or special theme reports (i.e. focusing on a specific control topic such as fraud or information security).

Reports may cover the full agreed set of indicators, or they may be exception-based, focusing on those indicators that have breached agreed thresholds/limits or are trending in an adverse way and are expected to breach agreed thresholds/limits soon. Judgement on the part of the operational risk manager about what to include in or exclude from a report may also be necessary to help the intended audience reach the right conclusions. However, both the recipients of a report and internal auditors should be able to access the underlying data used to produce the report, so they can satisfy themselves that the most appropriate indicators have been presented.
The presentation of indicator reports can vary significantly, but broadly speaking there are two main variables: complexity and objectivity.

Figure 6.3.3(a): Presenting indicator reports

Objective reports rely on quantitative methods to present data, such as bar charts or trend diagrams. Full details on all indicators may be provided, or a more simplified dashboard of the indicators that currently warrant the most attention. An example is provided below:

Figure 6.3.3(b): A dashboard report

Judgment-based reports rely more on commentary and the use of simplified data presentation techniques, such as trend arrows. An example is provided below:

Figure 6.3.3(c): Example of a judgment-based report

Even where there is greater emphasis on statistical methods of presentation, a suitably detailed narrative is often needed to support these statistics, to ensure that the intended recipients are able to interpret the reports that they receive and use them to support decision making. In particular, brief and relevant commentary should be provided to explain abnormal items and data trends.

Workplace reflection

Collect examples of operational risk indicator reports that are used within your firm – preferably ones that are used by different types and levels of management (be sure to ask permission first, both from your line manager and the report owners). Examine and explain the differences between these reports in terms of detail and presentation – are different types of report needed for different audiences?
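As a rough illustration of the trend-arrow technique used in judgment-based reports, the sketch below derives an arrow from the two most recent readings of each indicator. The indicator names, values and tolerance are invented for illustration, and real reports would of course add the supporting commentary the text calls for.

```python
# Derive a simple trend arrow from the two most recent readings of
# an indicator, as might appear in a judgment-based report. The
# indicator names and values below are invented for illustration.

def trend_arrow(previous, current, tolerance=0.05):
    """Return an up/down/steady arrow; moves within `tolerance`
    (as a fraction of the previous value) count as steady."""
    if previous and abs(current - previous) / abs(previous) <= tolerance:
        return "→"
    return "↑" if current > previous else "↓"

readings = {
    "Open customer complaints": (95, 130),
    "Staff turnover (%)": (3.2, 3.1),
    "Systems downtime (hours)": (12, 6),
}

for name, (prev, curr) in readings.items():
    print(f"{name}: {curr} {trend_arrow(prev, curr)}")
```

Note that whether "↑" is good or bad depends on the indicator: rising complaints are adverse, while a rising control pass rate would be favourable, so arrows still need interpretation by the reader.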
6.3.4 The aggregation challenge

Aggregating risk indicator data is never easy. For example, where different business units monitor the same indicator, it might seem logical to calculate an average of these values for a group-level report. However, a key problem with this is that extreme values (sometimes called 'outliers') can be averaged out. An example is provided below:

Figure 6.3.4: The averaging problem

In the above example, different departments have different levels of turnover, though for the most part these levels cluster around 2-4%, a level of staff turnover that most organisations would be comfortable with. However, the Finance department has a much higher level of staff turnover. It could, of course, be that this is because Finance has a very small number of staff and only one or two have left, thus having a disproportionately large effect. At the very least, in such circumstances, additional commentary will be required to ensure that such extreme values are not ignored by management.

Workplace reflection

Look for an example of an indicator which, when aggregated, may suffer from averaging effects. Staff turnover might be a good place to start – compare turnover rates in different departments/business units to see if there are any outliers. Be sure to ask permission to review this information before proceeding.

Summary

Risk and control indicators form an important element of most operational risk management frameworks. However, it can be challenging both to find good indicators and then to use them effectively.
No single indicator, or even a carefully selected set of indicators, will ever provide a fully accurate and up-to-date picture of the entire range of your operational risk exposures or the controls surrounding them. Remember also that managing the selection, collection and reporting of risk and control indicators is a complex exercise. It is essential to do this properly, with appropriate documentation, consultation and governance. A poor set of indicators can do more harm than good and may provide a very misleading picture.

Key learning

You will be ready to move to the next chapter when you can confidently answer the following questions:

1. What is the difference between a risk indicator and a control indicator?
2. What is the purpose of tracking risk and control indicators?
3. What does a "good" risk indicator look like?
4. How do you select risk indicators?
5. What is the benefit of leading indicators versus lagging indicators?
6. How can the results of risk indicators be reported?
7. What do "threshold" and "limit" mean in relation to risk indicators?
8. How do risk indicators link to risk appetite?