FRM 2024 Part I Schweser Secret Sauce PDF

Summary

This PDF is a study guide for the 2024 FRM Part I exam, providing a concise overview of foundational risk management concepts, quantitative analysis, and financial markets, including how risk is measured and the major types of risk. It is intended to supplement other study materials.

Full Transcript

Schweser’s Secret Sauce® FRM Part I 2024

SCHWESER’S SECRET SAUCE®: 2024 FRM® PART I
©2024 Kaplan, Inc. All rights reserved.
Published in 2024 by Kaplan, Inc.
ISBN: 978-1-0788-4259-4

Required Disclaimer: GARP® does not endorse, promote, review, or warrant the accuracy of the products or services offered by Kaplan Schweser of FRM® related information, nor does it endorse any pass rates claimed by the provider. Further, GARP® is not responsible for any fees or costs paid by the user to Kaplan Schweser, nor is GARP® responsible for any fees or costs of any person or entity providing any services to Kaplan Schweser. FRM®, GARP®, and Global Association of Risk Professionals™ are trademarks owned by the Global Association of Risk Professionals, Inc.

These materials may not be copied without written permission from the author. The unauthorized duplication of these notes is a violation of global copyright laws. Your assistance in pursuing potential violators of this law is greatly appreciated.

Disclaimer: Schweser study tools should be used in conjunction with the original readings as set forth by GARP®. The information contained in these books is based on the original readings and is believed to be accurate. However, their accuracy cannot be guaranteed nor is any warranty conveyed as to your ultimate exam success.

CONTENTS

Foreword

Foundations of Risk Management
Study Session 1: Risk Management Overview
Study Session 2: Pricing Models and Enterprise Risk Management
Study Session 3: Case Studies and Code of Conduct

Quantitative Analysis
Study Session 4: Probability and Statistics
Study Session 5: Sample Moments and Hypothesis Testing
Study Session 6: Regression Analysis
Study Session 7: Forecasting, Correlation, and Machine Learning

Financial Markets and Products
Study Session 8: Financial Institutions, Markets, and Central Clearing
Study Session 9: Forwards, Futures, and Foreign Exchange
Study Session 10: Options
Study Session 11: Interest Rates, Fixed Income Securities, and Swaps

Valuation and Risk Models
Study Session 12: Measuring Risk and Volatility
Study Session 13: Credit Risk, Country Risk, Operational Risk, and Stress Testing
Study Session 14: Fixed Income Valuation
Study Session 15: Option Valuation

Essential Exam Strategies
Index

FOREWORD

This review book is a valuable addition to the study tools of any FRM exam candidate. It offers concise coverage of exam topics to enhance your retention of the FRM curriculum. We suggest that you use this book as a companion to your other, more comprehensive study materials. It is easier to carry with you and will allow you to study these key concepts, definitions, and techniques over and over, which is a crucial part of mastering the material. For a majority of you, there are no shortcuts to learning the broad array of subject matter covered by the FRM curriculum, but this book should be a very valuable tool for learning and reviewing the material as you progress in your studies over the weeks leading up to exam day.

Previous Part I exam pass rates have been slightly below 50%, and many FRM candidates have commented on the high difficulty level of the exam. This is an indication that you should not underestimate the task at hand. Our SchweserNotes, Mock Exams, SchweserPro™ QBank, OnDemand Class, and Schweser’s Secret Sauce are all designed to help you study as efficiently as possible, grasp and retain the material, and apply your knowledge with confidence on the exam.
As a reminder, the 2024 FRM Part I topic area coverage and weightings assigned by GARP are as follows: FOUNDATIONS OF RISK MANAGEMENT Study Sessions 1–3 STUDY SESSION 1: RISK MANAGEMENT OVERVIEW THE BUILDING BLOCKS OF RISK MANAGEMENT Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 1. The Concept of Risk In an investing context, risk is the uncertainty surrounding outcomes. Investors are generally more concerned about negative outcomes (unexpected investment losses) than they are about positive surprises (unexpected investment gains). Additionally, there is an observed natural trade-off between risk and return; opportunities with high risk have the potential for high returns and those with lower risk also have lower return potential. The concept of risk taking refers to the active acceptance of incremental risk in the pursuit of incremental gains. In this context, risk taking can be thought of as an opportunistic action. The Risk Management Process The risk management process is a formal series of actions designed to determine if the perceived reward justifies the expected risks. A related query is whether the risks could be reduced and still provide an approximately similar reward. There are several core building blocks in the risk management process. They are as follows: 1. Identify risks. 2. Measure and manage risks. 3. Distinguish between expected and unexpected risks. 4. Address the relationships among risks. 5. Develop a risk mitigation strategy. 6. Monitor the risk mitigation strategy and adjust as needed. The risk management process involves a four-way decision. The company might decide to avoid risk directly by selling a product line, avoiding certain markets or jurisdictions, or offshoring production. They also might decide to retain risk, depending on the expected rewards relative to the probability and frequency of any expected losses. Another option is to mitigate risk by reducing either the magnitude or the frequency of exposure to a given risk factor. Finally, risk managers could transfer risk to a third party using derivatives or structured products. They could also purchase insurance to outsource risk to an insurance company. One of the challenges in ensuring that risk management will be beneficial to the economy is that risk must be sufficiently dispersed among willing and able participants in the economy. Another challenge of the risk management process is that it has failed to consistently assist in preventing market disruptions or preventing financial accounting fraud (due to corporate governance failures). In addition, the use of derivatives as complex trading strategies assisted in overstating the financial position (i.e., net assets on balance sheet) of many entities and complicating the level of risk assumed by many entities. Finally, risk management may not be effective on an overall economic basis because it only involves risk transferring by one party and risk assumption by another party. Measuring and Managing Risk Value at risk (VaR) calculates an estimated loss amount given a certain probability of occurrence. For example, a financial institution may have a one-day VaR of $2.5 million at the 95% confidence level. That would be interpreted as having a 5% chance that there will be a loss greater than $2.5 million on any given day. VaR is a useful measure for liquid positions operating under normal market circumstances over a short period of time. 
It is less useful and potentially dangerous when attempting to measure risk in non-normal circumstances, in illiquid positions, and over a long period of time. Economic capital is the amount of liquid capital necessary to cover unexpected losses. For example, if one-day VaR is $2.5 million and the entity holds $2.5 million in liquid reserves, then they have sufficient economic capital (i.e., they are unlikely to go bankrupt in a one-day expected tail risk event). Scenario analysis is a process that considers potential future risk factors and the associated alternative outcomes. Stress testing is a form of scenario analysis that examines a financial outcome based on a given “stress” on the entity. In practice, the term enterprise risk management (ERM) refers to a general process by which risk is managed within an organization. An ERM system is highly integrative in that it is deployed at the enterprise level and not siloed at the department level. The value in this top-down approach is that risk is not considered independently, but rather in relation to its potential impact on multiple divisions of a company. Expected and Unexpected Loss Expected loss (EL) considers how much an entity expects to lose in the normal course of business. These losses can be calculated through statistical analysis with relative reliability over short time horizons. The EL of a portfolio can generally be calculated as a function of: (1) the probability of a risk occurring; (2) the dollar exposure to the risk event; and (3) the expected severity of the loss if the risk event does occur. In a banking context, EL could be modeled as the product of a borrower’s probability of default (PD), the bank’s exposure at default (EAD), and the magnitude of the loss given default (LGD). Unexpected loss (UL) considers how much an entity could lose in excess of their average (expected) loss scenarios. There is considerable challenge involved with predicting unexpected losses because they are, by definition, unexpected. The Relationship Between Risk and Reward There is a natural trade-off between risk and reward. In general, the greater the risk taken, the greater the potential reward. However, one must consider the variability of the potential reward. The portion of the variability that is measurable as a probability function could be thought of as risk (EL) whereas the portion that is not measurable could be thought of as uncertainty (UL). One of the biggest structural concerns is the potential for conflicts of interest. Those in the position to be most aware of the presence, probability, and potential impact of various risk factors are sometimes the ones who try to profit from its presence. This reality could be seen in the actions of rogue traders. It may also be seen from managers who conceal knowledge of a risk factor to maximize short-term stock price movements to enhance personal compensation through stock-based remuneration structures. Types of Risk All firms face risks. These risks can be subcategorized as market risks, credit risks, liquidity risks, operational risks, legal and regulatory risks, business and strategic risks, and reputation risks. Market risk refers to the fact that market prices and rates are continually in a state of change. The four key subtypes of market risk are interest rate risk, equity price risk, foreign exchange risk, and commodity price risk. The key to mitigating these risks is to understand the relationship between positions. 
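A minimal numeric sketch may help tie the VaR and expected loss definitions above together. The figures below (portfolio size, volatility, PD, EAD, and LGD) are hypothetical illustrations, not values from the curriculum, and the VaR calculation assumes normally distributed daily returns:

```python
from statistics import NormalDist

# Parametric (delta-normal) one-day VaR, assuming normally distributed daily returns.
portfolio_value = 100_000_000      # $100 million portfolio (hypothetical)
daily_vol = 0.015                  # 1.5% daily return volatility (hypothetical)
z_95 = NormalDist().inv_cdf(0.95)  # ~1.645 for a 95% confidence level

one_day_var_95 = portfolio_value * daily_vol * z_95
print(f"One-day 95% VaR ≈ ${one_day_var_95:,.0f}")   # losses should exceed this on about 5% of days

# Expected loss using the PD × EAD × LGD decomposition described above.
pd_default = 0.02      # probability of default (2%)
ead = 5_000_000        # exposure at default ($5 million)
lgd = 0.45             # loss given default (45% of the exposure)

expected_loss = pd_default * ead * lgd
print(f"Expected loss = ${expected_loss:,.0f}")       # $45,000; losses beyond this are "unexpected"
```

Economic capital would then be sized against the unexpected (tail) portion of the loss distribution rather than against this expected amount.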
As these relationships change, risk management methods need to change as well. Credit risk refers to a loss suffered by a party whereby the counterparty fails to meet its contractual obligations. Credit risk may arise if there is an increasing risk of default by the counterparty throughout the duration of the contract. There are four subtypes of credit risk: (1) default risk, (2) bankruptcy risk, (3) downgrade risk, and (4) settlement risk. Liquidity risk is subdivided into two parts: (1) funding liquidity risk and (2) market liquidity risk. If liquidity risk becomes systemic, it could lead to elevated credit risk (e.g., a potential default scenario). Operational risk refers to potential losses flowing from inadequate (or failed) internal processes, human error, or an external event.1 The details of operational risk could relate to factors such as inadequate computer systems (technology risk), insufficient internal controls, incompetent management, fraud (e.g., losses due to intentional falsification of information), employee mistakes (e.g., losses due to incorrect data entry or accidental deletion of a file), natural disasters, cyber security risks, or rogue traders. Legal risk is the potential for litigation to create uncertainty for a firm. Regulatory risk refers to uncertainty surrounding actions by a governmental entity. Business risk refers to variability in inputs that influence either revenues (e.g., customer demand trends, product pricing policies, etc.) or cost structures (e.g., the cost of production inputs, supplier negotiations, etc.). Diverse business elements such as new product innovations, shipping delays, and production cost overruns could also be labeled as business risks. Strategic risk involves long-term decision-making about fundamental business strategy. These long-term strategic initiatives may involve large capital investments in either equipment or human capital. Reputation risk is the danger that a firm will suffer a loss in public perception (or consumer acceptance) due to either: (1) a loss of confidence in the firm’s financial soundness or (2) a perception of a lack of fair dealing with stakeholders. Reputation risk is often one of the outcomes of experiencing a loss in another risk category. Risk Factor Interactions A significant danger in risk management occurs when independent risk factors are correlated. For example, a granular factor that leads to default risk for a loan could ultimately spill over into credit risk, operational risk, business risk, and reputation risk. This is most dangerous with unexpected losses. Realizing the potential for correlation between risks will help a risk manager measure and manage unexpected losses with marginally more certainty. For example, a risk manager could consider historical correlations between identified risk factors and forecast the nature of these relationships to measure the risk planning process. VaR and the associated economic capital measurement are both useful metrics that provide risk managers information. A risk-adjusted return on capital (RAROC) can be calculated for comparison purposes, but VaR should not be considered as a stand-alone risk metric because it makes certain assumptions, can be adjusted by input parameters, and there are different types of VaR measurements. 
However, VaR, economic capital, and RAROC can be useful for helping risk managers better understand the aggregate risk exposure of a firm. HOW DO FIRMS MANAGE FINANCIAL RISK? Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 2. Strategies for Risk Management At a high level, a firm can pick from four different risk management strategies. Senior management and the board of directors are ultimately responsible for strategy selection, but risk managers can help inform the decision-making process. The risk management strategies are as follows: 1. Accept the risk. 2. Avoid the risk. 3. Mitigate the risk. 4. Transfer the risk. Risk acceptance could be done to actively include a risk factor in company performance or because the risk is being passed through to customers. Risk could also be avoided. If risk is retained, then it may be desirable to mitigate it through deal enhancement (i.e., more collateral on a loan or investing in new technology to offset a known risk). Risk can also be transferred to a third party, but this introduces counterparty risk into the equation. Risk Appetite Relative to Risk Decision-Making Risk appetite refers to the level (and types) of risk that a firm is willing to retain. There are two very important subcomponents: risk willingness and risk ability. Risk willingness relates to a firm’s desire to accept risk in pursuit of its business goals, while risk ability can put a cap on risk willingness for various reasons. The most common reasons for reduced risk ability are internal risk controls (to keep risk in a desired range) and regulatory constraints. After a firm establishes its risk appetite, it should assemble an inventory of all known risks. This process is called risk mapping and it is the next logical step in the risk management process. This robust approach systematically considers any risk with a known (or potential) cash impact on the firm. Every type of risk (i.e., market risk, credit risk, liquidity risk, operational risk, legal and regulatory risk, business and strategic risk, and reputation risk) is considered. Risk managers should incorporate any known interactions between risk factors in terms of correlation risk or the possibility that one risk might cancel out the cash impact of another risk (i.e., there might be a risk netting that occurs). Hedging Risk Exposures Some of the benefits of deploying a hedging strategy include reduced costs, smoother operating performance, enhanced business planning, and the ability to lock in positive results in the short term. Some of the disadvantages include the potential for managerial focus to be shifted away from core operations, compliance costs, the possibility that new risks might be introduced in an attempt to minimize other risks, and the high level of complexity associated with many hedging strategies. Common challenges in the risk management process include misunderstanding or mismapping risk exposures, managing changes with risk variables in dynamic markets, and internal communication breakdowns. Hedging Operational and Financial Risks Hedging operational risk covers a firm’s activities in production and sales (i.e., expenses and revenue). 
These operational risks can be considered as income statement risks. However, financial risk relates to a firm’s balance sheet (i.e., assets and liabilities). By making the realistic assumption that there are some imperfections in the financial markets, a firm could benefit from hedging financial risk. Hedging activities should cover both the firm’s assets and liabilities to fully account for the risks. Pricing risk could be thought of as a type of operational risk, requiring the hedging of revenues and costs. Foreign currency risk refers to the risk of economic loss due to unfavorable changes in the foreign currency exchange rate; to the extent that there is production and sales activity in the foreign currency, pricing risk would exist simultaneously. Interest rate risk refers to the risk inherent in a firm’s net exposure to unfavorable interest rate fluctuations. The Impact of Risk Management Tools A firm needs to decide if its hedging strategy is a one-off event or if it is part of a broader risk management need. This decision is sometimes referred to as rightsizing a risk management program. The financial markets are very dynamic, and a broadly applied risk management strategy requires investment in complex systems and hiring experienced traders. There are several risk limits that need to be understood and potentially controlled depending on the results of the risk mapping process (e.g., stop-loss limit, notional limit). Derivative instruments could be used to physically manage risk, including forward contracts, futures contracts, swap contracts, call option contracts, put option contracts, exotic option contracts, and swaption contracts. Financial instruments used to hedge risks can be classified as exchange traded or over the counter (OTC). Exchange-traded instruments cover only certain underlying assets and are quite standardized (e.g., maturities and strike prices) in order to promote liquidity in the marketplace. OTC instruments are privately traded between a bank and a firm and thus can be customized to suit the firm’s risk management needs. In exchange for the customization, OTC instruments are less liquid and more difficult to price than exchange-traded instruments. In addition, there is credit risk from either of the counterparties (e.g., default risk) that would generally not exist with exchange-traded instruments. THE GOVERNANCE OF RISK MANAGEMENT Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 3. Governance After the Global Financial Crisis The financial crisis of 2007–2009 has been linked to several risk management failures. The following is a list of some of the key lessons learned from risk management failures during the financial crisis, with respect to the banking industry: The needs of all of the firms’ stakeholders must be considered. The board needs to have competent and independent directors. The board needs to take a highly proactive role in the firm’s risk management process. The firm’s risk appetite needs to be clearly articulated by the board. Compensation should be structured to better align management behavior with long-term stakeholder priorities as determined by the board. Basel III and the Dodd-Frank Act were also issued in response to the financial crisis of 2007–2009. Their goals are to focus banks on capital adequacy measures and to prevent commercial banks from engaging in proprietary trading (among other things). 
Governance of Risk Management Best Practices Best practices in corporate governance include factors like board member independence, competency standards for board members, consideration of all stakeholders, and structuring managerial compensation packages to flow out of risk management goals. There should also be separation between the CEO and the chairperson of the board so that there is true accountability (i.e., there needs to be two different individuals, not one). One of the duties of the board is to supervise the risk management process. Best practices for risk management include adequately mapping risks and specifying an enterprise-level risk appetite, which needs to be communicated throughout the organization. Risk Governance The board of directors has ultimate responsibility for enterprise-level risk management. If the board does not have sufficient expertise to adequately understand, map, and manage the firm’s risk exposures, then they need to recruit a risk advisory director (an independent expert in industry-specific risk factors) to the board and to the risk management committee. The risk management committee will make all risk appetite decisions and then bring these discussions back to the full board for their awareness. The compensation committee is charged with aligning managerial compensation with long-term stakeholder needs. Risk Appetite vs. Business Strategy A firm’s risk appetite reflects its tolerance (especially willingness) to accept risk. The subsequent implementation of the risk appetite into defining the firm’s risk limits sets some bounds to its business strategy and to its ability to exploit business opportunities. The board needs to develop/approve the firm’s risk appetite as well as assist management in developing the firm’s overall strategic plan. Interdependence of Functional Units The various functional units within a firm are dependent on each other when it comes to risk management and reporting. Senior management, business units, finance and operation functions, and risk management all work together to conduct the firm’s risk management process. Frontline managers are vital in this process and the CRO communicates progress to senior management and the risk committee on a very regular basis. Audit Committee The audit committee is a subcommittee of the full board. Members traditionally monitor compliance with accounting standards, but they also have a role to play in supervision of risk management policies. They need to verify that policies are being followed and offer opinions on the variables used in testing exposures, as well as the functional value of the current risk management systems. These opinions are informed by internal auditors and are collected and transferred to the full board for further consideration. CREDIT RISK TRANSFER MECHANISMS Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 4. Types of Credit Derivatives Credit risk, the risk of a borrower defaulting, is the core risk exposure held by a bank. Three derivative products helped to transfer credit risk leading up to financial crisis of 2007–2009: credit default swaps (CDSs), collateralized debt obligations (CDOs), and collateralized loan obligations (CLOs). Credit default swaps (CDSs) are financial derivatives that pay off when the issuer of a reference instrument (e.g., a corporate bond or a securitized fixed-income instrument) defaults. This is a very direct way to measure and transfer credit risk. 
These derivatives function like an insurance contract in which a buyer makes regular (quarterly) premium payments, and in return, they receive a payment in the event of a default. A collateralized debt obligation (CDO) is a structured product that banks can use to unburden themselves of credit risk. These financial assets are repackaged loans, which are then sold to investors on the secondary markets. A CDO could include some combination of asset-backed securities (ABSs) which could include mortgages (commercial or residential), auto loans, credit card debt, or some other loan product. Typically, the loans included in a CDO are heavily biased toward mortgage debt through a securitized basket of mortgages called a mortgage-backed security (MBS). When a CDO consists only of mortgage loans, it is technically known as a collateralized mortgage obligation (CMO). A collateralized loan obligation (CLO) is a structured product that is extremely similar to a CDO. Like a CDO, it is a bundle of repackaged loans that are organized into tranches. However, a CLO’s constituent loans are predominantly bank loans, which have typically been exposed to a rigorous underwriting process. CLOs did not experience the same level of defaults that plagued the CDO market (largely due to heavy exposure to mortgages in the CDO space). For this reason, CLOs continued to attract investor interest in the wake of the financial crisis of 2007–2009, while investor interest in CDOs faded quickly. Reducing Credit Risk Exposure Beyond the direct use of credit derivatives, banks have several different traditional approaches that can be used to transfer credit risk. These mechanisms include purchasing third-party insurance, exposure netting, marking-to-market, requiring collateral, adding termination clauses, and possibly loan reassignment. Another option is to syndicate a loan. In this approach, a lead bank will retain some of the loan and find other banks to hold the remainder of the desired loan amount. These approaches may involve credit derivatives as a part of the risk mitigation strategy. Credit Derivatives in the Global Financial Crisis The existence of credit derivatives did not cause the financial crisis of 2007–2009, but the misuse of these products certainly did. Investors used CDS contracts for speculation rather than risk mitigation. Collateralized debt obligations also held a very complex mixture of mortgages that included both subprime loans and adjustable-rate loans. There was a perfect storm when the Federal Reserve began raising rates, adjustable-rate loans reached their reset dates and produced unaffordable payments, and the housing market declined, causing home prices to drop. This confluence of factors led to massive defaults that rippled through the MBS and CDO markets. Banks then became reluctant to lend to each other while some were going bankrupt. As typically happens after a crisis, new regulation was created. Dodd-Frank was enacted to better regulate the credit derivatives space and to keep bank trading in check. The SEC also added Section 15G to further protect investors. Securitization and Special Purpose Vehicles Securitization is the general process of repackaging loans into a bundled new product that can be sold to investors on the secondary markets. This process involves four key steps: 1. Create a special purpose vehicle (SPV), which is an off-balance sheet legal entity that functions as a semi-hidden subsidiary of the issuing parent company. 
An SPV will hold financial assets in such a way that is opaque for investors to analyze. 2. The SPV will use borrowed funds to purchase loan assets from one bank or possibly several banks to create structured products (e.g., CMO, CDO, or CLO). 3. The SPV’s constituent loans will be arranged by either seniority or credit rating and structured into tranches to form risk layers within the SPV. 4. The various tranches are then sold to investors on the secondary markets. When sourcing loans, banks can choose between two high-level business models. The traditional model is referred to as the buy-and-hold strategy. In this approach, banks will source a loan and then retain it on their books. They enjoy periodic interest payments to compensate for holding credit risk. The innovation enabled by securitization is the originate-to-distribute (OTD) model. The OTD model involves banks sourcing loans with the explicit intention to securitize them and sell the structured products to investors. With this model, banks do not retain credit risk and they are paid a fee for sourcing the loans that feed into the securitized products rather than receiving interest payments, which belong to the investors in the structured products. The incentive in the OTD model is to generate high loan volume, not high- quality loans, which is the incentive in the buy-and-hold model. STUDY SESSION 2: PRICING MODELS AND ENTERPRISE RISK MANAGEMENT MODERN PORTFOLIO THEORY AND THE CAPITAL ASSET PRICING MODEL Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 5. Modern Portfolio Theory One of the most notable market risk researchers was Harry Markowitz. He laid the foundation for modern portfolio theory in the early 1950s. Markowitz’s portfolio theory makes the following assumptions: Returns are normally distributed. This means that, when evaluating utility, investors only consider the mean and the variance of return distributions. Investors are rational and risk-averse. Markowitz defines a rational investor as someone who seeks to maximize utility from investments. Capital markets are perfect. This implies that investors do not pay taxes or commissions. Rational investors maximize portfolio return per unit of risk. Plotting all those maximum returns for various risk levels produces the efficient frontier, which is represented by the blue curve passing through C-D-E-F-G, shown in Figure 1.1. Figure 1.1: Efficient Frontier In general, any portfolio below the efficient frontier is, by definition, inefficient, whereas any portfolio above the efficient frontier is unattainable. In the absence of a risk-free asset, the only efficient portfolios are the portfolios on the efficient frontier. Investors choose their position on the efficient frontier depending on their relative risk aversion. The Capital Market Line (CML) In the presence of riskless lending and borrowing, the efficient frontier transforms from a curve to a line tangent to the previous curve. Investors will choose to invest in some combination of their tangency portfolio and the risk-free asset. Assuming investors have identical expectations regarding expected returns, standard deviations, and correlations of all assets, there will be only one tangency line, which is referred to as the capital market line (CML). The equation of the CML is: The Capital Asset Pricing Model (CAPM) The capital asset pricing model (CAPM) was developed by William Sharpe and John Lintner in the 1960s. 
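In its standard form, the CML referenced above expresses the expected return of an efficient portfolio as the risk-free rate plus a reward per unit of total risk:

```latex
E(R_P) = R_F + \left[ \frac{E(R_M) - R_F}{\sigma_M} \right] \sigma_P
```

Here R_F is the risk-free rate, E(R_M) and σ_M are the expected return and standard deviation of the market (tangency) portfolio, and σ_P is the standard deviation of the portfolio. The CAPM, introduced above, extends this logic from efficient portfolios to individual assets.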
It builds on the ideas of modern portfolio theory and the CML in that investors are assumed to hold some combination of the risk-free asset and the market portfolio. Its key assumptions are: Information is freely available. Frictionless markets. Fractional investments are possible. Perfect competition. Investors make their decisions solely based on expected returns and variances. Market participants can borrow and lend unlimited amounts at the risk-free rate. Homogenous expectations. Estimating and Interpreting Systematic Risk The expected returns of risky assets in the market portfolio are assumed to only depend on their relative contributions to the market risk of the portfolio. The systematic risk of each asset represents the sensitivity of asset returns to the market return and is referred to as the asset’s beta. Beta is computed as follows: Deriving the CAPM A straightforward CAPM derivation recognizes that expected return only depends on beta (company-specific risk can be diversified away) and is a linear function of beta. The CAPM equation is: This implies that the expected return of an investment depends on the risk-free rate RF, the MRP, [RM − RF], and the systematic risk of the investment, β. The expected return, E(Ri), can be viewed as the minimum required return, or the hurdle rate, that investors demand from an investment, given its level of systematic risk. Estimating hurdle rates accurately is very important. If investors use an inflated hurdle rate, they may incorrectly forgo valuable investment opportunities. If, on the other hand, the rate used is too low, investors may purchase overvalued assets. The graphical depiction of the above equation is known as the security market line (SML). EXAMPLE: Expected return on a stock Assume you are assigned the task of evaluating the stock of Sky-Air, Inc. To evaluate the stock, you calculate its required return using the CAPM. The following information is available: Using CAPM, calculate and interpret the expected return for Sky-Air. Answer: The expected return for Sky-Air is: Measures of Performance The Sharpe measure is equal to the risk premium divided by the standard deviation, or total risk: The Treynor measure is equal to the risk premium divided by beta, or systematic risk. The Jensen measure (or Jensen’s alpha or just alpha) is the asset’s excess return over the return predicted by the CAPM: In all three cases, for a given portfolio, the higher, the better. The two that are most similar are the Treynor and Sharpe measures. They both normalize the risk premium by dividing by a measure of risk. Investors can apply the Sharpe measure to all portfolios because it uses total risk, and it is more widely used than the other two measures. The Treynor measure is more appropriate for comparing well-diversified portfolios. Jensen’s alpha is the most appropriate for comparing portfolios that have the same beta. Tracking error is the term used to describe the standard deviation of the difference between the portfolio return and the benchmark return. This source of variability is another source of risk to use in assessing the manager’s success. The information ratio (IR) divides the portfolio expected return in excess of the benchmark expected return by the tracking error: The Sortino ratio is reminiscent of the Sharpe measure except for two changes. First, we replace the risk-free rate with a minimum acceptable return, denoted RMIN. 
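The relationships described in this passage have standard textbook forms, shown here for reference:

```latex
\beta_i = \frac{\operatorname{Cov}(R_i, R_M)}{\sigma_M^2}
\qquad\qquad
E(R_i) = R_F + \beta_i \left[ E(R_M) - R_F \right]

\text{Sharpe} = \frac{E(R_P) - R_F}{\sigma_P}
\qquad
\text{Treynor} = \frac{E(R_P) - R_F}{\beta_P}
\qquad
\alpha_P = E(R_P) - \left\{ R_F + \beta_P \left[ E(R_M) - R_F \right] \right\}

IR = \frac{E(R_P) - E(R_B)}{\text{tracking error}}
```

Here E(R_B) is the benchmark expected return, and tracking error is the standard deviation of the portfolio-minus-benchmark return difference, as defined above.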
This return could be determined by the needs of the investor or it can sometimes be set equal to the risk-free rate. Second, we replace standard deviation with downside deviation: THE ARBITRAGE PRICING THEORY AND MULTIFACTOR MODELS OF RISK AND RETURN Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 6. Arbitrage Pricing Theory The capital asset pricing model (CAPM) measures the expected return of a financial asset with respect to the broad market only. Arbitrage pricing theory (APT) is a type of multifactor model that expands upon the CAPM to consider any number of macroeconomic factors that may add additional explanatory power to the expected returns of a financial asset. There is not a set series of macroeconomic factors to consider, which presents analysts with a great deal of flexibility. APT also has simplified assumptions relative to the CAPM. According to arbitrage pricing theory, the expected return for security i can be modeled as shown here. The idea is to model systematic risk on a more granular level using a series of risk factors. Multifactor Model Inputs The first input is the expected return for the stock in question. This type of multifactor model will then offer a series of adjustments that attempt to capture known variables that would influence the returns of a stock (or portfolio). A beta (factor sensitivity) is needed for each variable included in the model, and a value is needed for each factor as well. The error term (ei) represents firm-specific return that is otherwise unexplained by the model. Calculating Expected Returns A single-factor model will only consider the impact of one factor on a dependent variable (a stock’s return). This leaves the potential for either company-specific risk or uncaptured systematic risk to influence asset returns. A multifactor model enables analysts to better model the impact of all systematic risk exposures to improve forecasting ability. Accounting for Correlation The part of an individual security’s risk that is uncorrelated with the volatility of the market portfolio is that security’s nonsystematic risk (or diversifiable risk). The part of an individual security’s risk that arises because of the positive covariance of that security’s returns with overall market returns is called its systematic risk. As the number of securities in a portfolio becomes large, the portfolio’s nonsystematic risk approaches zero. In other words, portfolio risk reduction through diversification comes from reducing nonsystematic risk. Therefore, when a risky security is added to a well- diversified (efficient) portfolio, the portfolio’s risk is only affected by the systematic risk of that security. Hedging Exposure to Multiple Factors Consider an investor who manages a portfolio with the following factor betas: GDP beta = 0.50 consumer sentiment beta = 0.30 Assume the investor wishes to pursue strategies to hedge exposure to GDP risk, or to consumer sentiment risk, or to both factor risks. The following explanation makes use of factor portfolios, which are well-diversified portfolios with betas equal to one for a single risk factor and betas equal to zero on all remaining factors. Now, assume the investor wishes to hedge away GDP factor risk yet maintain the 0.30 exposure to consumer sentiment. To do so, the investor should combine the original portfolio with a 50% short position in the GDP factor portfolio. 
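The hedge just described can be checked with simple beta arithmetic; a minimal sketch using the factor betas from the example above:

```python
# Factor betas of the original portfolio (from the example above).
original = {"GDP": 0.50, "consumer_sentiment": 0.30}

# A factor portfolio has a beta of 1 on its own factor and 0 on every other factor.
gdp_factor_portfolio = {"GDP": 1.0, "consumer_sentiment": 0.0}

# Combine the original portfolio with a 50% short position in the GDP factor portfolio.
hedge_weight = -0.50
combined = {f: original[f] + hedge_weight * gdp_factor_portfolio[f] for f in original}

print(combined)   # {'GDP': 0.0, 'consumer_sentiment': 0.3} -> GDP risk hedged, sentiment exposure retained
```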
The GDP factor beta on the 50% short position in the GDP factor portfolio equals –0.50, which perfectly offsets the 0.50 GDP factor beta on the original portfolio. The combined long and short positions hedge away GDP risk but retain the consumer sentiment exposure. The Fama-French Three-Factor Model A major weakness of APT is that it provides no guidance on which other factors to include in a multifactor model. In 1996, economists Eugene Fama and Kenneth French famously specified a multifactor model with three factors: (1) a risk premium for the market, (2) a factor exposure for “small minus big,” and (3) a factor exposure for “high minus low”.2 Small minus big (SMB) is the difference in returns between small firms and large firms. This factor adjusts for the size of the firm because smaller firms often have higher returns than larger firms. High minus low (HML) is the difference between the return on stocks with high book-to-market metrics and ones with low book-to-market values. A high book-to-market value means that the firm has a low price-to-book metric (book-to-market and price-to-book are inverses). This last factor basically means that firms with lower starting valuations are expected to potentially outperform those with higher starting valuations. The Fama-French three-factor model is as follows: PRINCIPLES FOR EFFECTIVE DATA AGGREGATION AND RISK REPORTING Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 7. Benefits of Risk Data Aggregation According to the Basel Committee on Banking Supervision, risk data aggregation means “defining, gathering and processing risk data according to the bank’s risk reporting requirements to enable the bank to measure its performance against its risk tolerance/appetite.” The aggregation process includes breaking down, sorting, and merging data and datasets. Risk management reports should reflect risks in a reliable way. Benefits that accrue from effective risk data aggregation and reporting include (1) an increased ability of managers and the board to anticipate problems, (2) enhanced ability to identify alternative routes to restore financial health in times of financial stress, (3) improved resolvability in the event of bank stress or failure, and (4) an enhanced ability to make strategic decisions, increasing the bank’s efficiency, reducing the chance of loss, and ultimately increasing bank profitability. Financial models are used by banks for everything from analyzing risk exposures to guiding daily operations. Even small errors that occur in the model development process may result in serious consequences for a bank. Models rely on data, so data acquisition is an important component of model risk, specifically input risk. Model developers must demonstrate that the data used in model development is consistent with the theory and methodologies behind the model. Models must be vetted and validated. Governance The governance principle (Principle 1) suggests that risk data aggregation should be part of the bank’s overall risk management framework. The board and senior management should assure that adequate resources are devoted to risk data aggregation and reporting. 
Data Architecture and Infrastructure The data architecture and IT infrastructure principle (Principle 2) states that a bank should design, build, and maintain data architecture and IT infrastructure that fully supports its risk data aggregation capabilities and risk reporting practices not only in normal times but also during times of stress or crisis, while still meeting the other principles. It stresses that banks should devote considerable financial and human resources to risk data aggregation and reporting. Risk Data Aggregation Capabilities Principles 3–6 specify standards and requirements for effective risk data aggregation. Banks should ensure that the data is accurate and has integrity (Principle 3), is complete (Principle 4), is timely (Principle 5), and is adaptable to the end user (Principle 6). In addition, the bank should not have high standards for one principle at the expense of another. Aggregated risk data should exhibit all of the features together, not in isolation. Effective Risk Reporting Practices Principles 7–11 specify standards and requirements for effective risk reporting practices. Risk reports should be accurate (Principle 7), comprehensive (Principle 8), and clear and useful (Principle 9). Principle 10 states that reports should be “appropriately frequent” (i.e., frequency depends on the role of the recipient—board members need reports less frequently than risk committee members). Reports should be distributed to relevant parties in a timely fashion while maintaining confidentiality (Principle 11). ENTERPRISE RISK MANAGEMENT (ERM) AND FUTURE TRENDS Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 8. Enterprise Risk Management An integrated and centralized approach under enterprise risk management (ERM) is significantly more effective in managing a company’s risks than the traditional silo approach of managing risks separately within each risk/business unit. ERM is a comprehensive and integrated framework for managing a firm’s key risks to meet business objectives, minimize unexpected earnings volatility, and maximize firm value. ERM Motivations There are three primary motivations for a firm to implement an ERM initiative: (1) integration of risk organization, (2) integration of risk transfer, and (3) integration of business processes. The respective benefits are better organizational effectiveness, better risk reporting, and improved business performance. However, implementation of an integrated firm-wide initiative is costly (both capital and labor intensive) and time-consuming. This process could last several years and requires ongoing senior management and board support. ERM Best Practices Corporate governance is critical in the implementation of a successful ERM program and ensures that senior management and the board have the requisite organizational practices and processes to adequately control risks. A successful corporate governance framework requires that senior management and the board adequately define the firm’s risk appetite and risk and loss tolerance levels. In addition, management should remain committed to risk initiatives and ensure that the firm has the required risk management skills and organizational structure to successfully implement the ERM program. ERM Program Dimensions ERM is organized around the following five important dimensions: 1. Targets. Banks should set the correct risk targets. Targets include the following: a. Risk appetite. b. Strategic goals in light of the firm’s risk appetite. 2. 
Structure. As part of the ERM structure, the roles of relevant parties are defined (i.e., chief risk officer, global risk committee, other risk committees) along with a description of the firm’s governance structure. 3. Identification and metrics. Enterprise risks must be measured in terms of the impact on the firm, the severity of the risks, and, ideally, the frequency of occurrence. 4. ERM strategies. Firms must articulate the methods and strategies that will be used to manage risks at the whole-firm and business-line levels. 5. Culture. A firm must instill in its employees the importance of risk management through the goals, practices, and behaviors of those in top management positions on down through the ranks of the firm. Risk Culture Characteristics and Challenges The risk culture of a firm is the goals, customs, values, and beliefs (both implicit and explicit) that influence the behaviors of employees. These corporate norms guide individuals in their understanding and responses to risk. Firms need methods to measure progress in terms of risk culture. One method is to identify the key risk culture indicators of the firm. The Financial Stability Board (FSB) has specified four risk indicators: 1. Tone from the top of the organization. 2. Effective communication and challenge. 3. Incentives. 4. Accountability. Scenario Analysis and Stress Testing Sensitivity analysis involves changing one variable at a time and assessing the sensitivity of the model (e.g., assessing the impact on net income) to that one variable. Scenario analysis, on the other hand, looks at multiple variables at once and involves developing a narrative to explain why variables change and the effects of those changes. Sophisticated financial models are developed to assess the impact of various scenarios on the risks and performance of the enterprise. Since the financial crisis of 2007–2009, regulators have required banks to use scenario analysis and stress testing in capital planning. U.S. stress testing of banks began in 2009 with the initial Supervisory Capital Assessment Program (SCAP). Since 2011, the Federal Reserve has conducted annual stress tests. In addition, the Dodd-Frank Act required stress testing (Dodd-Frank Act stress tests or DFAST) and the Comprehensive Capital Analysis and Reviews (CCAR) are conducted at year-end for banks with $50 billion or more in assets. While the scenarios for DFAST and CCAR are the same (devised by supervisors), DFAST is more prescriptive, requires less reporting, and has limited capital action assumptions. Results from stress testing are used to help banks in capital planning and maintaining capital adequacy. STUDY SESSION 3: CASE STUDIES AND CODE OF CONDUCT LEARNING FROM FINANCIAL DISASTERS Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 9. Interest Rate Risk Interest rate risk is the potential for loss due to fluctuations in interest rate levels. The degree of sensitivity to interest rate risk is classically measured with duration. The magnitude of this risk can be illustrated using an example of the savings and loan (S&L) industry in the 1980s. All commercial banks, S&Ls included, accept short-term demand deposits from customers and use those funds to make long-term loans. Their goal is to capture the spread between the rate paid for short-term deposits (liabilities from the bank’s perspective) and the rate received on longer-term loans (assets from the bank’s perspective). 
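Duration, mentioned above as the classic sensitivity measure, translates a rate move into an approximate change in value; a minimal sketch with hypothetical figures:

```python
# Standard duration approximation: change in value ≈ -duration × value × change in yield.
asset_value = 100_000_000   # $100 million of long-term fixed-rate loans (hypothetical)
duration = 5.0              # effective duration in years (hypothetical)
rate_change = 0.01          # +100 bps rise in rates

value_change = -duration * asset_value * rate_change
print(f"Approximate change in asset value: ${value_change:,.0f}")   # about -$5,000,000
```

Because short-term deposit liabilities have far lower duration than long-term loan assets, a rate rise of this kind hits the asset side much harder, which is exactly the mismatch that hurt the S&Ls.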
When short-term interest rates were raised by the Federal Reserve (in response to elevated inflation), S&Ls lost their profit center. Many entered into riskier loans to make up the difference. The result was a collapse of their industry that required a federal bailout. Banks have risk mitigation tools in the form of duration matching between assets and liabilities and various derivatives products. Liquidity risk is the risk that an entity might not be able to meet short-term cash requirements. This risk can materialize from external market conditions, from internal operational issues, from structural (i.e., balance sheet) challenges, or from a mix of these three. The collapses of Lehman Brothers, Continental Illinois, and Northern Rock all illustrated the danger inherent with this risk. Each of these banks funded long-term assets (i.e., loans) with short-term funding sources. This created financial disasters when the short-term funding was no longer available due to external events. Banks must balance the need to reduce liquidity risk with the cost of doing so. Hedging Strategies Devising an effective hedging strategy is a challenging and potentially rewarding undertaking. It requires access to relevant data, access to appropriate statistical tools, and the right model for the analysis task at hand. Once a firm decides that it wants to hedge a known risk, it needs to decide if it wants to deploy a static or a dynamic strategy. A static hedging strategy involves buying a hedging instrument that closely matches the position to be hedged. A dynamic hedging strategy deploys a hedging instrument and then rebalances the hedged position on a frequent basis (e.g., daily, monthly, quarterly). In 1991, Metallgesellschaft Refining and Marketing (MGRM), an American subsidiary of Metallgesellschaft (MG), an international trading, engineering, and chemicals conglomerate, implemented a marketing strategy designed to insulate customers from price volatility in the petroleum markets for a fee. MGRM offered customers contracts to buy fixed amounts of heating oil and gasoline at a fixed price over a 5- or 10-year period. The fixed price was set at a $3 to $5 per barrel premium over the average futures price of contracts expiring over the next 12 months. Customers were given the option to exit the contract if the spot price rose above the fixed price in the contract, in which case MGRM would pay the customer half of the difference between the futures price and contract price. A customer might exercise this option if she did not need the product or if she were experiencing financial difficulties. In later contracts, the customer could receive the entire difference in exchange for a higher fixed contract price. The customer contracts effectively gave MGRM a short position in long-term forward contracts. MGRM hedged this exposure with long positions in near-term futures using a stack-and-roll hedging strategy. Gains and losses on forward contracts are realized at the agreement’s expiration, whereas futures contracts are marked to market such that the gains and losses are realized on a daily basis. In MGRM’s case, gains and losses on its customer contracts were realized if and when the customers took delivery, which would occur over a 5- to 10-year period. During 1993, oil prices dropped from a high of about $21 per barrel to about $14 per barrel, resulting in losses of $900 million on MGRM’s long positions, which were realized immediately as the futures contracts were marked to market. 
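Rough arithmetic on the figures just quoted shows the scale of the position and the cash-flow timing problem; this sketch ignores basis and roll effects and simply backs the hedged volume out of the stated loss:

```python
# Implied by the figures above: a roughly $7 per-barrel decline and $900 million of
# marked-to-market losses on the long futures stack.
price_drop = 21.0 - 14.0        # per-barrel decline during 1993
futures_loss = 900e6            # realized immediately through daily marking to market

implied_barrels = futures_loss / price_drop
print(f"Implied hedged volume ≈ {implied_barrels / 1e6:.0f} million barrels")

# The same price decline makes the fixed-price customer contracts more valuable by a
# comparable amount, but those gains are realized only as deliveries occur over 5 to 10
# years, so today's margin outflows are not matched by cash inflows today.
```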
The offsetting gains on their customer contracts, however, would not be realized for years to come, which created potential short-term cash outflows, and resulted in funding liquidity risk. Declining oil prices also created margin calls that exacerbated the cash flow problem. Due to these losses, MG ordered MGRM to close out its customer contracts. This forced the firm to unwind its positions on very unfavorable terms. The cash outflows might have been tolerable and possibly balanced out by cash inflows over the life of the hedge were it not for the sheer size of MGRM’s position, which would have taken 10 days to liquidate. To liquidate without affecting market prices would have taken 20 to 55 days. As a result, the company lacked liquidity to unwind its positions, if necessary, without significant market impact, and was therefore subject to trading liquidity risk. To make matters worse, MGRM was carrying a heavy debt load and had little equity to withstand losses and cash flow problems on positions of this size. Model Risk Sophisticated financial products use mathematical models to determine their current value. These models could be theoretical (e.g., capital asset pricing model [CAPM]) or statistically based (e.g., term structure of interest rates). The use of models introduces model risk, which potentially involves the following: 1. Using the wrong model for estimation 2. Incorrectly specifying a model 3. Using incomplete data 4. Deploying the wrong estimators 5. Making the wrong assumptions The Niederhoffer Case Victor Niederhoffer was a very successful hedge fund trader. He developed what he thought was a low-risk strategy to harvest put option premiums. He would write very large quantities of deeply out-of-the-money (OTM) put options on the S&P 500 Index. In October 1997, a crisis in Asia spilled over to the U.S. markets and produced a 7% drop in a single trading session. The result was a $50 million margin call, which Niederhoffer could not meet. His fund’s brokers liquidated all put contracts, which locked in substantial losses and wiped out the entire fund’s equity position. The Long-Term Capital Management (LTCM) Case LTCM was founded in 1994. The hedge fund’s principals included former Federal Reserve Board Vice-Chairman David Mullins, Nobel laureates Robert Merton and Myron Scholes, and a collection of highly experienced traders from Salomon Brothers’ bond arbitrage trading desk. Before LTCM’s collapse in the late 1990s, it had $4.8 billion in equity and $125 billion in assets. This translated into a leverage ratio of approximately 25:1. A 1% return from its core strategy (i.e., spread normalization) would feel like a 25% gain for the levered fund. This balance sheet leverage does not account for the true underlying economic leverage. The notional value of LTCM’s assets was over $1 trillion at this time! The staggering use of leverage was possible because financial institutions often waived initial margin requirements based on the reputation of the principals, freeing up capital to take on even more leverage. Long-Term Capital Management’s downfall was triggered by an action of the Russian government in August 1998. In a surprise move, the Russians defaulted on their own debt and devalued their currency. This created a flight to quality (i.e., an extreme movement to assets perceived as safe) where investors rushed to buy the exact assets that LTCM had been shorting (i.e., U.S. Treasuries and German bunds). 
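The leverage figures quoted above show why a relatively small move against the fund could be devastating. In the sketch below, the 1.6% asset decline is an illustrative number chosen to correspond to the roughly $2 billion loss described next, not a figure from the text:

```python
# Leverage implied by the LTCM balance sheet figures quoted above.
equity = 4.8e9       # $4.8 billion
assets = 125e9       # $125 billion

leverage = assets / equity
print(f"Balance sheet leverage ≈ {leverage:.0f}:1")     # roughly 26:1, in line with the ~25:1 cited

# At this leverage, a small percentage move in asset values is a large fraction of equity.
asset_decline = 0.016                                   # illustrative 1.6% decline in asset values
equity_hit = asset_decline * assets
print(f"Equity impact ≈ ${equity_hit / 1e9:.1f} billion, or {equity_hit / equity:.0%} of equity")
```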
The result was a decline in the value of LTCM’s equity by just over 40% ($2 billion of its $4.8 billion in equity) in one month. The failure of LTCM was due to model error. Management did not properly anticipate increased correlations in the event of a global crisis. They actually adjusted correlations higher in their models, but the adjustment did not go anywhere close to the actual correlation spike caused by the cascading external economic shocks. They also did not properly forecast the volatility that actually appeared in the markets. The model risk led to a liquidity risk crisis for LTCM, which ultimately destroyed the company. The London Whale Case JPMorgan is one of the largest financial holding companies in the United States. It is also one of the largest derivatives dealers (particularly credit derivatives) in the world. In early 2012, its chief investment office (CIO) was tasked with managing $350 billion in excess demand deposits. It used this money to make massive bets on synthetic credit derivatives that ultimately cost the bank $6.2 billion in trading losses and temporarily disrupted global markets. The London Whale case highlighted that when risk limits are breached or trades look unprofitable, risk managers should never adjust assumptions or valuation models to make bad decisions look better. The Barings Bank Case The bank was founded in London in 1762, and it was the world’s second-oldest merchant bank. In 1992, an employee named Nick Leeson moved to Singapore to become the local head of operations. His mission was to execute client trades on the Singapore stock exchange. From an accounting perspective, Leeson’s trading actions looked like they were making a large return for Barings Bank. The reality was that Leeson also controlled the back-office accounting of his own trades, and he managed the reporting through a hidden reconciliation account that was never reported to the home office. What appeared to be a £102 million profit in 1994 was actually a £200 million loss. This could have been prevented with better internal controls flowing out of a healthy skepticism about reported results that differed from what should have been expected given the types of trades placed. Financial Engineering The building blocks for financial engineering are forwards, futures, swaps, options, and securitized products. By using these tools, a risk manager could hedge either a granular risk exposure or a basket of risk exposures. Risk managers need to be careful about which goal a hedging strategy is pursuing. In its purest sense, a hedging strategy can be used for risk mitigation. Alternatively, some firms have used hedging strategies to enhance returns. This second strategy usually adds more layers of risk rather than mitigating current exposures. From considering cases on Bankers Trust, Orange County, and Sachsen Landesbank, risk managers should clearly see the need to fully understand hedging tools before deploying them. Reputation Risk A company’s reputation is a public perception of its fairness, commitment to ethical behavior, and treatment of stakeholders (i.e., customers, suppliers, counterparties). One trending area with growing reputational influence is environmental, social, and governance (ESG) monitoring. Reputation risk is the potential for negative operational outcomes due to a poor public perception (ESG or otherwise). In September 2015, the U.S. Environmental Protection Agency (EPA) announced that Volkswagen (VW) had been unethical in its environmental responsibilities. 
It violated the ESG ethos by programming the software on its vehicles to only control emissions during regulatory tests. The reputational damage to VW was fast and furious. Its share price was cut by one- third as the scandal unfolded. Volkswagen faced billions of dollars in potential fines on top of decreased sales as consumers responded to the allegations by switching brand loyalty to other vendors. Corporate Governance Corporate governance is a system of policies and procedures that direct how a firm is operated. In 1985, the highly leveraged merger of InterNorth and Houston Natural Gas gave birth to Enron. A subsequent wave of deregulation moved Enron into the role of being a gas broker. The company would routinely purchase gas from various vendors and sell it to a network of customers at predetermined prices. To cover its risk exposure to gas prices, Enron created a new market for energy derivatives. Reality caught up with Enron in December 2001, which is when it became the largest bankruptcy in U.S. history. This was a direct result of massive corporate governance failures and a textbook example of agency risk. As is typically the case, the result of this crisis was a new piece of regulation. Enron’s failure was the fuel needed to bring Sarbanes-Oxley (SOX) to life in 2002. Cyber Risk Cyber risk is the risk of financial or reputational loss resulting from a breach in internal technology infrastructures. The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the global leader in electronically transferring funds between financial institutions. In February 2016, hackers accessed the SWIFT system and stole $81 million from the Bangladesh Bank (the central bank of Bangladesh). The money was never recovered because it was transferred from the bank in the Philippines to a series of casinos and promptly withdrawn. This was a sophisticated cyberattack and it illustrates the stakes involved in ensuring security for IT systems. ANATOMY OF THE GREAT FINANCIAL CRISIS OF 2007– 2009 Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 10. Financial Crisis Overview and Contributing Factors In the run-up to the financial crisis of 2007–2009, interest rates in the United States were kept at historically low levels. The cheap cost of money made it easier for people to borrow and acquire real estate property, thus fueling a rapid and unsustainable increase in house prices. Many banks, especially the ones with exposure to subprime, experienced large losses and liquidity issues. Institutions became overly cautious, hoarding excess reserves and unwilling to lend those reserves to other cash-strapped institutions. Governments around the world intervened by lowering interest rates and providing liquidity support to encourage lending in an effort to prop up failing financial entities. Banks increasingly financed their long-term assets through short-term liabilities. This gave rise to a maturity mismatch between the duration of the assets and the liabilities, which exposed banks to significant liquidity risk. When the crisis struck and house prices stalled, those short-term liabilities could not be rolled over. At the peak of the crisis in September 2008, the large U.S. investment bank, Lehman Brothers, declared bankruptcy, which triggered a massive loss of confidence and froze the interbank lending market. 
Two of the large mortgage-backed securities (MBS) issuers in the United States, Fannie Mae and Freddie Mac, were nationalized, and the large financial services and insurance company, American International Group (AIG), was bailed out to prevent further systemic issues. Subprime Mortgages and Collateralized Debt Obligations The reduction in lending standards partly resulted from the move to the so-called originate-to-distribute (OTD) model. Under this model, lenders no longer hold the mortgages on their balance sheet but move them into bankruptcy-remote structured investment vehicles (SIVs) through securitization. Securitization involves the pooling of assets together in order to sell claims against them. An example of such structure is the collateralized debt obligation (CDO) whereby the pool is sliced into multiple tranches (e.g., senior, junior, and equity). Cash flows and defaults are determined as per the waterfall structure whereby senior tranches receive cash flows first but absorb losses last. The senior tranches were considered very safe and structured to have a AAA rating, even though the underlying mortgages consisted of NINJA and liar loans. The junior tranches of multiple CDO structures were then often bundled together and repackaged as CDO-squared (a CDO whose cash flows are backed by other CDO tranches, rather than mortgages). It is clear that the structures were very opaque and complex to value, even during normal times and even for sophisticated investors who did not have the expertise to understand what they were buying. The fact that senior CDO tranches were given a AAA rating demonstrates that rating agencies provided unrealistically high ratings, which were often based on historical data for prime mortgages and did not take into account the increasingly speculative nature of the marketplace. Short-Term Funding and Systemic Risk Banks created SIVs, which increasingly financed their purchases of long-term assets, such as mortgages, through the issuance of short-term liabilities. The two instruments used for short-term funding were asset-backed commercial paper (ABCP) and repurchase agreements (i.e., repos). Commercial paper is a short-term, unsecured form of financing primarily used by high-quality issuers. ABCP is a special case whereby the commercial paper is backed by some form of collateral, such as credit card loans or mortgages. Due to the short-term nature of commercial paper, there is an inherent assumption that the issuer will be able to roll over the obligation at maturity. Repurchase agreements (i.e., repos) are another source of short-term funding used by many financial institutions. In a repo, a bank will sell an asset but will also simultaneously agree to buy back the asset in the future at a slightly higher price. The difference between the repurchase price and the sales price is the interest cost for the duration of the borrowing, known as the repo rate. Because SIVs holding mortgages were primarily funded short term through ABCP and repos, they relied heavily on their ability to roll over these obligations at maturity. This exposed the SIVs to significant funding liquidity risk in the event of crisis. As house and mortgage-backed security prices declined, lenders started questioning the quality of assets residing within the SIV structures and became reluctant to extend further short- term loans. This eventually led to a complete shutdown of the ABCP and repo market by August 2007. 
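To put a number on the repo mechanics described above, consider a hypothetical transaction (the figures are purely illustrative, not from the reading): a bank sells a bond for $10,000,000 and simultaneously agrees to repurchase it 30 days later for $10,025,000. The $25,000 difference is the interest on the 30-day borrowing, so the implied repo rate is 25,000 / 10,000,000 = 0.25% for the month, or roughly 3% on an annualized basis (0.25% × 360/30, assuming a 360-day money market convention).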
The lesson learned is that even when a bank believes it has sufficient capital, overreliance on short-term funding sources is very dangerous because this type of funding can disappear overnight during times of crisis. Central Bank Intervention To prevent further systemic issues, the Federal Reserve and other central banks around the world intervened by providing liquidity support and lowering interest rates. Some of the actions taken by the Federal Reserve included the following: Providing long-term loans secured by high-quality collateral Allowing investment banks and securities firms to borrow directly from the Fed via the discount window (this was unavailable to investment banks precrisis) Providing liquidity against high-quality illiquid assets Providing funding to purchase asset-backed commercial paper Acquiring assets issued by Fannie Mae and Freddie Mac GARP CODE OF CONDUCT Cross-reference to GARP FRM Part I Foundations of Risk Management, Chapter 11. 1. Professional Integrity and Ethical Conduct GARP Members: 1.1. shall act professionally, ethically and with integrity in all dealings with employers, existing or potential clients, the public, and other practitioners in the financial services industry. 1.2. shall exercise reasonable judgment in the provision of risk services while maintaining independence of thought and direction. GARP Members must not offer, solicit, or accept any gift, benefit, compensation, or consideration that could be reasonably expected to compromise their own or another’s independence and objectivity. 1.3. must take reasonable precautions to ensure that the Member’s services are not used for improper, fraudulent or illegal purposes. 1.4. shall not knowingly misrepresent details relating to analysis, recommendations, actions, or other professional activities. 1.5. shall not engage in any professional conduct involving dishonesty or deception or engage in any act that reflects negatively on their integrity, character, trustworthiness, or professional ability or on the risk management profession. 1.6. shall not engage in any conduct or commit any act that compromises the integrity of GARP, the FRM® designation, or the integrity or validity of the examinations leading to the award of the right to use the FRM designation or any other credentials that may be offered by GARP. 1.7. shall be mindful of cultural differences regarding ethical behavior and customs, and avoid any actions that are, or may have the appearance of being unethical according to local customs. If there appears to be a conflict or overlap of standards, the GARP Member should always seek to apply the highest standard. 2. Conflict of Interest GARP Members shall: 2.1. act fairly in all situations and must fully disclose any actual or potential conflict to all affected parties. 2.2. make full and fair disclosure of all matters that could reasonably be expected to impair independence and objectivity or interfere with respective duties to their employer, clients, and prospective clients. 3. Confidentiality GARP Members: 3.1. shall not make use of confidential information for inappropriate purposes and unless having received prior consent shall maintain the confidentiality of their work, their employer or client. 3.2. must not use confidential information for personal benefit. 4. Fundamental Responsibilities GARP Members shall: 4.1. 
comply with all applicable laws, rules, and regulations (including this Code) governing the GARP Members’ professional activities and shall not knowingly participate or assist in any violation of such laws, rules, or regulations. 4.2. have ethical responsibilities and cannot outsource or delegate those responsibilities to others. 4.3. understand the needs and complexity of their employer or client, and should provide appropriate and suitable risk management services and advice. 4.4. be diligent about not overstating the accuracy or certainty of results or conclusions. 4.5. clearly disclose the relevant limits of their specific knowledge and expertise concerning risk assessment, industry practices, and applicable laws and regulations. 5. Best Practices GARP Members shall: 5.1. execute all services with diligence and perform all work in a manner that is independent from interested parties. GARP Members should collect, analyze and distribute risk information with the highest level of professional objectivity. 5.2. be familiar with current generally accepted risk management practices and shall clearly indicate any departure from their use. 5.3. ensure that communications include factual data and do not contain false information. 5.4. make a distinction between fact and opinion in the presentation of analysis and recommendations. Violations of the Code of Conduct Violations of the Code of Conduct may result in temporary suspension or permanent removal from GARP membership. In addition, violations could lead to a revocation of the right to use the FRM designation. Sanctions would be issued after a formal investigation is conducted by GARP. 1 https://www.bis.org/publ/bcbs195.pdf, page 3, footnote 5. 2 E. F. Fama and K. R. French, “Multifactor Explanations of Asset Pricing Anomalies,” The Journal of Finance 51, no. 1 (1996): 55–84. QUANTITATIVE ANALYSIS Study Sessions 4–7 STUDY SESSION 4: PROBABILITY AND STATISTICS FUNDAMENTALS OF PROBABILITY Cross-reference to GARP FRM Part I Quantitative Analysis, Chapter 1. Events and Event Spaces An event is a single outcome or a combination of outcomes for a random variable. Consider a random variable that is the result of rolling a fair six-sided die. The outcomes with positive probability (those that may happen) are the integers 1, 2, 3, 4, 5, and 6. For the event x = 3, we can write P(3) = 1/6 = 16.7%. The event space for a random variable is the set of all possible outcomes and combinations of outcomes. Consider a flip of a fair coin. The event space is heads, tails, heads and tails, and neither heads nor tails. Independent and Mutually Exclusive Events Two events are independent events if knowing the outcome of one does not affect the probability of the other. When two events are independent, the following two probability relationships must hold: 1. P(A) × P(B) = P(AB). The probability that both A and B will happen is the product of their unconditional probabilities. 2. P(A|B) = P(A). The conditional probability of A given that B occurs is simply the unconditional probability of A occurring. This means B occurring does not change the probability of A. Two events are mutually exclusive events if they cannot both happen. Consider the possible outcomes of one roll of a die. The events “x = an even number” and “x = 3” are mutually exclusive; they cannot both happen on the same roll. When events A and B are mutually exclusive, P(AB) is zero, so P(A or B) is simply P(A) + P(B). 
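These relationships can be checked by brute-force enumeration. The following short Python sketch (not part of the GARP readings; the events are chosen only for illustration) verifies the mutually exclusive and independence rules for die rolls:

from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                      # one roll of a fair die
p = Fraction(1, 6)                                  # probability of each outcome

# Mutually exclusive events: "even number" and "x = 3" cannot both occur,
# so P(A and B) = 0 and P(A or B) = P(A) + P(B).
p_even = sum(p for x in outcomes if x % 2 == 0)     # 3/6
p_three = sum(p for x in outcomes if x == 3)        # 1/6
p_even_and_three = sum(p for x in outcomes if x % 2 == 0 and x == 3)
print(p_even_and_three)                             # 0
print(p_even + p_three)                             # 2/3 = P(even or 3)

# Independent events: two separate rolls do not affect each other, so
# P(first roll even AND second roll = 3) = P(even) x P(3) = 1/12.
joint = sum(p * p for x1 in outcomes for x2 in outcomes
            if x1 % 2 == 0 and x2 == 3)
print(joint, p_even * p_three)                      # 1/12  1/12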
Conditionally Independent Events
Two conditional probabilities, P(A|C) and P(B|C), may be independent or dependent regardless of whether the unconditional probabilities, P(A) and P(B), are independent or not. When two events are conditionally independent events, P(A|C) × P(B|C) = P(AB|C).
Discrete Probability Function
A discrete probability function is one for which there are a finite number of possible outcomes. The probability function gives us the probability of each possible outcome. For example, P(x) = x/15, defined over the outcomes {1, 2, 3, 4, 5}, so that the probabilities sum to 1.
Conditional and Unconditional Probabilities
Sometimes we are interested in the probability of an event, given that some other event has occurred. As mentioned earlier, we refer to this as a conditional probability, P(A|B). Given a conditional probability and the unconditional probability of the conditioning event, we can calculate the joint probability of both events using P(AB) = P(A|B) × P(B). Rearranging P(AB) = P(A|B) × P(B), we get:
P(A|B) = P(AB) / P(B)
Bayes' Rule
Bayes' rule allows us to use information about the outcome of one event to improve our estimates of the unconditional probability of another event. From our rules of probability, we know that P(A|B) × P(B) = P(AB) and that P(B|A) × P(A) = P(AB), so we can write P(A|B) × P(B) = P(B|A) × P(A). Rearranging these terms, we can arrive at Bayes' rule:
P(A|B) = [P(B|A) × P(A)] / P(B)
Given the unconditional probabilities of A and B and the conditional probability of B given A, we can calculate the conditional probability of A given B.
RANDOM VARIABLES
Cross-reference to GARP FRM Part I Quantitative Analysis, Chapter 2.
Random Variables and Probability Functions
A probability mass function (PMF), f(x) = P(X = x), gives us the probability that the outcome of a discrete random variable, X, will be equal to a given number, x. For a Bernoulli random variable for which P(x = 1) = p, the PMF is f(x) = p^x × (1 − p)^(1−x). This yields P(x = 1) = p and P(x = 0) = 1 − p. A cumulative distribution function (CDF) gives us the probability that a random variable will take on a value less than or equal to x [i.e., F(x) = P(X ≤ x)]. For the roll of a six-sided die, the CDF is F(x) = x/6, so that the probability of a roll of 3 or less is F(3) = 3/6 = 50%. This illustrates an important relationship between a PMF and its corresponding CDF; the probability of an outcome less than or equal to x is simply the sum of the probabilities of all the possible outcomes less than or equal to x. For the roll of a six-sided die: F(3) = f(1) + f(2) + f(3) = 1/6 + 1/6 + 1/6 = 3/6 = 50%.
Expectations
The expected value is the weighted average of the possible outcomes of a random variable, where the weights are the probabilities that the outcomes will occur. The mathematical representation for the expected value of random variable X is:
E(X) = ΣP(xi)xi = P(x1)x1 + P(x2)x2 + … + P(xn)xn
The following are two useful properties of expected values:
1. If c is any constant, then: E(cX) = cE(X)
2. If X and Y are any random variables, then: E(X + Y) = E(X) + E(Y)
The population moments most often used are mean; variance; skewness; and kurtosis. The first moment, the mean of a random variable, is its expected value, E(X), which we discussed previously. The mean can be represented by the Greek letter µ (mu). The second central moment of a random variable is its variance, σ², which is defined as:
σ² = E[(X − µ)²]
The third central moment of a distribution is E[(X − µ)³]. Skewness, a measure of a distribution's symmetry, is the standardized third moment. We standardize it by dividing it by the standard deviation cubed.
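As an illustration of these moment measures (including kurtosis, which is defined next), the following Python sketch computes the sample versions for a small set of hypothetical returns using standard library functions:

import numpy as np
from scipy.stats import skew, kurtosis

# Hypothetical sample of monthly returns (illustrative values only)
r = np.array([0.02, -0.01, 0.03, 0.015, -0.04, 0.01, 0.005, -0.02])

mean = r.mean()                      # first moment
var = r.var()                        # second central moment (population form)
skw = skew(r)                        # standardized third central moment
kurt = kurtosis(r, fisher=False)     # standardized fourth central moment (normal = 3)

print(mean, var, skw, kurt)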
The fourth central moment of a distribution is E[(X − µ)⁴]. Kurtosis is the standardized fourth moment (the fourth central moment divided by the standard deviation raised to the fourth power). Kurtosis is a measure of the shape of a distribution, in particular the total probability in the tails of the distribution relative to the probability in the rest of the distribution.
Probability Density Functions
A PMF is used to describe the probabilities of the possible outcomes for a discrete random variable. A continuous random variable can take on any of an infinite number of possible outcomes so that the probability of any single outcome is zero. We describe a continuous distribution function with a probability density function (PDF), rather than a PMF. A PDF allows us to calculate the probability of an outcome between two values (over an interval).
Quantile Functions
A quantile is the percentage of outcomes less than a given outcome. A quantile function, Q(x%), provides the value of an outcome that is greater than x% of all possible outcomes. Q(50%) is the median of a distribution. Fifty percent of the outcomes are greater than the median, and 50% of the outcomes are less than the median. The interquartile range is an interval that includes the central 50% of all possible outcomes.
Linear Transformations of Random Variables
A linear transformation of a random variable, X, takes the form Y = a + bX, where a and b are constants. The constant a shifts the location of the random variable, X, and b rescales the values of X. For a variable Y = a + bX (a linear transformation of X): the mean of Y is E(Y) = a + bE(X); the variance of Y is b²σX² and the standard deviation of Y is |b|σX; the skew of Y equals the skew of X for b > 0, and equals −(skew of X) for b < 0; and the kurtosis of Y equals the kurtosis of X.
COMMON UNIVARIATE RANDOM VARIABLES
Cross-reference to GARP FRM Part I Quantitative Analysis, Chapter 3.
The Uniform Distribution
The continuous uniform distribution is defined over a range that spans between some lower limit, a, and some upper limit, b, which serve as the parameters of the distribution. Outcomes can only occur between a and b, and because we are dealing with a continuous distribution, even if a < x < b, P(X = x) = 0. The mean and variance, respectively, of a uniform distribution are:
mean = (a + b) / 2
variance = (b − a)² / 12
The Bernoulli Distribution
A Bernoulli random variable only has two possible outcomes. The outcomes can be defined as either a success or a failure. A success, which occurs with probability p, may be denoted with the value 1, and a failure, which occurs with probability 1 − p, may be denoted with the value 0. Bernoulli distributed random variables are commonly used for assessing the probability of binary outcomes, such as the probability that a firm will default on its debt over some interval.
The Binomial Distribution
A binomial random variable may be defined as the number of successes in a given number of Bernoulli trials, whereby the outcome can be either success or failure. The probability of success, p, is constant for each trial and the trials are independent. Under these conditions, the binomial probability function defines the probability of exactly x successes in n trials. It can be expressed using the following formula:
P(X = x) = [n! / (x!(n − x)!)] × p^x × (1 − p)^(n−x)
For a given series of n trials, the expected number of successes, or E(X), is given by the following formula:
E(X) = np
The intuition is straightforward; if we perform n trials and the probability of success on each trial is p, we expect np successes. The variance of a binomial random variable is given by:
variance = np(1 − p)
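As a quick numerical check of the binomial formulas above, the following Python sketch compares the formula to scipy's built-in binomial distribution for a hypothetical portfolio of 10 independent credits, each with a 5% default probability:

from math import comb
from scipy.stats import binom

n, p = 10, 0.05          # hypothetical: 10 independent credits, each with a 5% default probability
x = 2                    # probability of exactly 2 defaults

manual = comb(n, x) * p**x * (1 - p)**(n - x)   # n!/(x!(n-x)!) * p^x * (1-p)^(n-x)
library = binom.pmf(x, n, p)                    # same value from scipy

print(round(manual, 6), round(library, 6))      # both approx 0.074635
print(binom.mean(n, p), binom.var(n, p))        # E(X) = np = 0.5, variance = np(1-p) = 0.475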
The Poisson Distribution
The Poisson distribution is a discrete probability distribution with a number of real-world applications. For example, the number of defects per batch in a production process or the number of 911 calls per hour are discrete random variables that follow a Poisson distribution. While the Poisson random variable X refers to the number of successes per unit, the parameter lambda (λ) refers to the average or expected number of successes per unit. The mathematical expression for the Poisson distribution for obtaining X successes, given that λ successes are expected, is:
P(X = x) = (λ^x × e^−λ) / x!
An interesting feature of the Poisson distribution is that both its mean and variance are equal to the parameter, λ.
The Normal Distribution
The normal distribution has the following key properties:
X is normally distributed with mean µ and variance σ².
Skewness = 0, meaning the normal distribution is symmetric about its mean, so that P(X ≤ µ) = P(µ ≤ X) = 0.5, and mean = median = mode.
Kurtosis = 3; this is a measure of how the distribution is spread out with an emphasis on the tails of the distribution.
A linear combination of normally distributed independent random variables is also normally distributed.
The probabilities of outcomes further above and below the mean get smaller and smaller but do not go to zero (the tails get very thin but extend infinitely).
Many of these properties are evident from examining the graph of a normal distribution's PDF as illustrated in Figure 2.1.
Figure 2.1: Normal Distribution PDF
In practice, we will not know the actual values for the mean and standard deviation of the distribution, but will have estimated them as x̄ and s. The three confidence intervals of most interest are given by the following:
The 90% confidence interval for X is x̄ − 1.65s to x̄ + 1.65s.
The 95% confidence interval for X is x̄ − 1.96s to x̄ + 1.96s.
The 99% confidence interval for X is x̄ − 2.58s to x̄ + 2.58s.
EXAMPLE: Confidence intervals
The average return of a mutual fund is 10.5% per year and the standard deviation of annual returns is 18%. If returns are approximately normal, what is the 95% confidence interval for the mutual fund return next year?
Answer: Here µ and σ are 10.5% and 18%, respectively. Thus, the 95% confidence interval for the return, R, is: 10.5 ± 1.96(18) = −24.78% to 45.78%. Symbolically, this result can be expressed as: P(−24.78 < R < 45.78) = 0.95 or 95%. The interpretation is that the annual return is expected to be within this interval 95% of the time, or 95 out of 100 years.
A standard normal distribution (i.e., z-distribution) is a normal distribution that has been standardized so it has a mean of zero and a standard deviation of 1 [i.e., N(0,1)]. To standardize an observation from a given normal distribution, the z-value of the observation must be calculated. The z-value represents the number of standard deviations a given observation is from the population mean. Standardization is the process of converting an observed value for a random variable to its z-value. The following formula is used to standardize a random variable:
z = (x − µ) / σ
The Lognormal Distribution
The lognormal distribution is generated by the function e^x, where x is normally distributed. Because the natural logarithm, ln, of e^x is x, the logarithms of lognormally distributed random variables are normally distributed, thus the name. The lognormal distribution is skewed to the right. The lognormal distribution is bounded from below by zero so that it is useful for modeling asset prices that never take negative values.
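A short simulation makes the lognormal properties above concrete; the parameters below are hypothetical and the code is only an illustrative sketch:

import numpy as np

rng = np.random.default_rng(seed=42)

mu, sigma = 0.08, 0.20                 # hypothetical annual log-return mean and volatility
x = rng.normal(mu, sigma, 100_000)     # x is normally distributed
prices = 100 * np.exp(x)               # e^x applied to a normal variable gives a lognormal result

print(prices.min() > 0)                # True: lognormal outcomes are bounded below by zero
print(np.log(prices / 100).mean(), np.log(prices / 100).std())   # approx 0.08 and 0.20: the logs are normal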
Student’s t-Distribution Student’s t-distribution is similar to a normal distribution, but has fatter tails (i.e., a greater proportion of the outcomes are in the tails of the distribution). It is the appropriate distribution to use when constructing confidence intervals based on small samples (n < 30) from a population with unknown variance and a normal, or approximately normal, distribution. It may also be appropriate to use the t-distribution when the population variance is unknown and the sample size is large enough that the central limit theorem will assure that the sampling distribution is approximately normal. Student’s t-distribution has the following properties: It is symmetrical. It is defined by a single parameter, the degrees of freedom (df), where the degrees of freedom are equal to the number of sample observations minus 1, n − 1, for sample means. It has a greater probability in the tails (fatter tails) than the normal distribution. As the degrees of freedom (the sample size) gets larger, the shape of the t- distribution more closely approaches a standard normal distribution. The Chi-Squared Distribution Hypothesis tests concerning population parameters and models of random variables that are always positive are often based on a chi-squared distribution, denoted χ2. The chi-squared distribution is asymmetrical, bounded below by zero, and approaches the normal distribution in shape as the degrees of freedom increase. The F-Distribution Hypotheses concerning the equality of the variances of two populations are tested with an F-distributed test statistic. An F-distributed test statistic is used when the populations from which samples are drawn are normally distributed and that the samples are independent. Mixture Distributions The distributions discussed, as well as other distributions, can be combined to create unique PDFs. It may be helpful to create a new distribution if the underlying data you are working with does not currently fit a predetermined distribution. In this case, a newly created distribution may assist with explaining the relevant data. MULTIVARIATE RANDOM VARIABLES Cross-reference to GARP FRM Part I Quantitative Analysis, Chapter 4. Probability Matrices A probability mass function (PMF) for a bivariate random variable describes the probability that two random variables each take a specific value. The PMF of a bivariate random variable is: A probability matrix illustrates the following properties of a PMF: The probability matrix describes the outcome probabilities as a function of the coordinates x1 and x2. All probabilities are positive or zero and are less than or equal to 1. The sum across all possible outcomes for X1 and X2 equals 1. EXAMPLE: Applying a probability matrix Suppose that a company’s common stock return is related to earnings announcements. Earnings announcements are either positive, neutral, or negative and are labeled as 1, 0, and −1, respectively. Assume that the company’s monthly stock return must be one of three possible outcomes, −3%, 0%, or 3%. An analyst estimates the probability matrix in Figure 2.2 for earnings announcements and stock returns. Compute the probability of a negative earnings announcement. Figure 2.2: Probability Matrix for Bivariate Random Variables Answer: The sum of all probabilities in the first row of the probability matrix states that there is a 40% probability of a negative announcement. 
Also, there is a 25% probability of a negative announcement and a −3% return, a 15% probability of a negative announcement and a 0% return, and a 0% probability of a negative announcement and a 3% return. Marginal and Conditional Distributions A marginal distribution defines the distribution of a single component of a bivariate random variable (i.e., a univariate random variable). Thus, the notation for the marginal PMF is the same notation for a univariate random variable: The computation of a marginal distribution can be shown using the previous example of earnings announcements and monthly stock returns. Summing across columns constructs the marginal distribution of the row variables in a probability matrix. Summing across rows constructs the marginal distribution for the column variables in a probability matrix. A conditional distribution sums the probabilities of the outcomes for each component conditional on the other component being a specific value. A conditional PMF is defined based on the conditional probability for a bivariate random variable X1 given X2 as: The numerator in this equation is the joint probability of two events occurring, and the denominator is the marginal probability that X2 = x2. Expectation of a Bivariate Random Function The first moment of a bivariate discrete random variable is referred to as an expectation of a function. The expectation of a bivariate random function g(X1,X2) is a probability weighted average of the function of the outcomes g(x1,x2). Covariance and Correlation Between Random Variables Covariance is the expected value of the product of the deviations of the two random variables from their respective expected values. Common notations for the covariance between random variables X and Y are Cov(X,Y) and σXY. Covariance measures how two variables move with each other or the dependency between the two variables. The covariance between X1 and X2 is calculated as: To make the covariance of two random variables easier to interpret, it may be divided by the product of the bivariate random variables’ standard deviations. The resulting value is called the correlation coefficient, or simply, correlation. Correlation measures the strength of the linear relationship between two variables and ranges from −1 to +1 for two variables (i.e., −1 ≤ Corr(X1, X2) ≤ +1). Linear Transformations The first effect of a linear transformation on the covariance of two random variables is that b determines the correlation between the components. The correlation between X1 and X2 will be 1 if b > 0, 0 if b = 0, and –1 if b < 0. A second effect of linear transformations on covariance is that the amount or scale of a has no effect on the variance, and the scale of b determines the scale or changes in the variance by b2. A third effect of linear transformations on covariance is that the scale of covariance is determined by two variables, b and d, as follows: The fourth effect of linear transformations on covariance between random variables relates to coskewness and cokurtosis. Variance of Weighted Sum of Bivariate Random Variables When measuring the variance of two random variables, the covariance or comovement between the two variables is a key component. 
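The role of covariance can be previewed with a small simulation (all values hypothetical) before the formula for the variance of a sum is stated next:

import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical pair of correlated variables
cov_matrix = np.array([[0.04, 0.006],
                       [0.006, 0.01]])            # var(X1) = 0.04, var(X2) = 0.01, cov = 0.006
x = rng.multivariate_normal([0.0, 0.0], cov_matrix, size=1_000_000)

var_sum = np.var(x[:, 0] + x[:, 1])               # simulated variance of X1 + X2
formula = 0.04 + 0.01 + 2 * 0.006                 # var(X1) + var(X2) + 2cov(X1,X2) = 0.062

print(round(var_sum, 4), formula)                 # both approximately 0.062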
The variance of the sum of two random variables, X1 and X2, is computed by summing the individual variances and adding two times the covariance:
Var(X1 + X2) = σ1² + σ2² + 2Cov(X1,X2)
If a and b represent the weight of investment in asset X1 and X2, respectively, then the variance of a two-asset portfolio is computed as follows:
Var(aX1 + bX2) = a²σ1² + b²σ2² + 2abCov(X1,X2)
In a two-asset portfolio context, this equation is most commonly written as:
σP² = a²σ1² + b²σ2² + 2abρ1,2σ1σ2
Conditional Expectations
In the context of portfolio risk management, a conditional expectation of a random variable is computed based on a specific event occurring. A conditional PMF is used to determine the conditional expectation based on weighted averages. A conditional distribution is defined based on the conditional probability for a bivariate random variable X1 given X2.
Independent and Identically Distributed Random Variables
Independent and identically distributed (i.i.d.) random variables are generated from a single univariate distribution such as the normal distribution. Features of an i.i.d. sequence of random variables include the following:
Variables are independent of all other components.
Variables are all from a single univariate distribution.
Variables all have the same moments.
Expected value of the sum of n i.i.d. random variables is equal to nµ.
Variance of the sum of n i.i.d. random variables is equal to nσ².
Variance of the sum of i.i.d. random variables grows linearly.
Variance of the average of multiple i.i.d. random variables decreases as n increases.
STUDY SESSION 5: SAMPLE MOMENTS AND HYPOTHESIS TESTING
SAMPLE MOMENTS
Cross-reference to GARP FRM Part I Quantitative Analysis, Chapter 5.
Mean and Variance
Measures of central tendency identify the center, or average, of a data set. This central point can then be used to represent the typical, or expected, value in the data set. The first moment of the distribution of data is the mean. To compute the population mean, µ, all the observed values in the population are summed and divided by the number of observations in the population, N. Note that the population mean is unique in that a given population has only one mean. The population mean is expressed as:
µ = (X1 + X2 + … + XN) / N
The population mean is unknown because not all of the values in the population are observable. Therefore, we create samples of data to estimate the true population mean. The hat notation above the µ denotes that the sample mean is an estimate of the true mean. The sample mean is an estimate based on a known data set where all data points are observable. Thus, the sample mean is simply an estimate of the true population mean. Note the use of n, the sample size, versus N, the population size. The mean and variance of a distribution are defined as the first and second moments of the distribution, respectively. The variance of a random variable is defined as:
σ² = E[(X − µ)²]
Point Estimates and Estimators
Sample parameters can be used to draw conclusions about true population parameters which are unknown. Point estimates are single (sample) values used to estimate population parameters, and the formula used to compute a point estimate is known as an estimator.
Biased Estimators
The bias of an estimator measures the difference between the expected value of the estimator and the true population value, θ. Therefore, the estimator bias is computed as:
bias = E(estimator) − θ
The sample mean is an unbiased estimator. Conversely, the sample variance is a biased estimator. When the sample size n is large, the bias is small.
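The bias in the naive sample variance is easy to demonstrate by simulation; the following Python sketch (hypothetical parameters) shows that dividing by n understates the true variance on average, which motivates the adjustment described next:

import numpy as np

rng = np.random.default_rng(seed=1)

true_var = 4.0          # hypothetical population variance (sigma = 2)
n = 10                  # small sample size
trials = 50_000

samples = rng.normal(0.0, 2.0, size=(trials, n))
biased = samples.var(axis=1, ddof=0).mean()     # divide by n: underestimates the true variance
unbiased = samples.var(axis=1, ddof=1).mean()   # divide by n - 1: approximately 4.0 on average

print(round(biased, 3), round(unbiased, 3))     # approx 3.6 vs approx 4.0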
The fact that the bias is known allows us to determine an unbiased estimator for the sample variance as: Best Linear Unbiased Estimator The best linear unbiased estimator (BLUE) is the best estimator of the population mean available because it has the minimum variance of any linear unbiased estimator. When data is i.i.d., the sample mean is considered to be BLUE. Law of Large Numbers If the law of large numbers (LLN) applies to estimators, then the estimators are consistent. The first property of a consistent estimator is that as the sample size increases, the finite sample bias is reduced to zero. The second property of a consistent estimator is as the sample size increases, the variance of the estimator approaches zero. Central Limit Theorem The central limit theorem (CLT) states that for simple random samples of size n from a population with a mean µ and a finite variance σ2, the sampling distribution of the sample mean, µ, approaches a normal probability distribution with mean µ and variance equal to σ2/n as the sample size becomes large. The CLT requires only one additional assumption from the LLN that the variance is finite. Skewness and Kurtosis Skewness, or skew, refers to the extent to which a distribution is not symmetrical. Nonsymmetrical distributions may be either positively or negatively skewed and result from the occurrence of outliers in the data set. Outliers are observations with extraordinarily large values, either positive or negative. A positively skewed distribution is characterized by many outliers in the upper region, or right tail. A positively skewed distribution is said to be skewed right because of its relatively long upper (right) tail. A negatively skewed distribution has a disproportionately large amount of outliers that fall within its lower (left) tail. A negatively skewed distribution is said to be skewed left because of its long lower tail. Kurtosis is a measure of the degree to which a distribution is spread out compared to a normal distribution. Leptokurtic describes a distribution that has fatter tails than a normal distribution, whereas platykurtic refers to a distribution that has thinner tails than a normal distribution. A distribution is mesokurtic if it has the same kurtosis as a normal distribution. A distribution is said to exhibit excess kurtosis if it has either more or less kurtosis than the normal distribution. The computed kurtosis for all normal distributions is three. Statisticians, however, sometimes report excess kurtosis, which is defined as kurtosis minus three. Thus, a normal distribution has excess kurtosis equal to zero, a leptokurtic distribution has excess kurtosis greater than zero, and platykurtic distributions will have excess kurtosis less than zero. Median and Quantile Estimates To determine the median and other quantiles, arrange the data from the highest to the lowest value, or lowest to highest value, and find the middle observation. The middle of the observations will depend on whether the total sample size is an odd or even number. The median is estimated when the total number of observations in the sample size is odd as: The median is estimated when the total number of observations in the sample size is even as: Estimating Quartiles In addition to the median, the two most commonly reported quantiles are the 25th and 75th quantiles. The estimation procedure for these quantiles is similar to the median process. The data is first sorted and then the α-quantile is estimated using the data point in location α × n. 
If this data value is not an integer value, then the general rule is to average the points immediately above and below α × n.
Covariance and Correlation Between Random Variables
The covariance between two random variables is a statistical measure of the degree to which the two variables move together. The covariance captures the linear relationship between one variable and another. A positive covariance indicates that the variables tend to move together; a negative covariance indicates that the variables tend to move in opposite directions. The sample covariance estimator can be calculated as:
Cov(X,Y) = [Σ(Xi − X̄)(Yi − Ȳ)] / (n − 1)
The correlation coefficient converts the covariance into a measure that is easier to interpret:
Corr(X,Y) = Cov(X,Y) / [σ(X) × σ(Y)]
EXAMPLE: Correlation
Using our previous example, compute and interpret the correlation of the returns for Stocks A and B, given that σ²(RA) = 0.0028 and σ²(RB) = 0.0124 and recalling that Cov(RA,RB) = 0.0058.
Answer: First, it is necessary to convert the variances to standard deviations.
σ(RA) = (0.0028)^(1/2) = 0.0529
σ(RB) = (0.0124)^(1/2) = 0.1114
Now, the correlation between the returns of Stock A and Stock B can be computed as follows:
Corr(RA,RB) = 0.0058 / (0.0529 × 0.1114) ≈ 0.98
A correlation this close to +1 indicates a very strong positive linear relationship between the two return series.
Coskewness and Cokurtosis
Coskewness measures the likelihood of large directional movements occurring for one variable when the other variable is large. Coskewness is zero when there is no relationship between the sign of one variable and large moves in the other variable. The cokurtosis of a bivariate normal depends on the correlation. Cokurtosis for the symmetric case, k(X,X,Y,Y), ranges between +1 and +3, with the smallest value of 1 occurring when the correlation is equal to zero and the cokurtosis increases as the correlation moves away from zero. Cokurtosis for the asymmetrical cases ranges from −3 to +3 and is a linear relationship that is upward sloping as the correlation increases from −1 to +1.
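The correlation example above can be reproduced in a few lines of Python as a quick check:

import math

var_a, var_b = 0.0028, 0.0124      # variances of the returns of Stocks A and B (from the example above)
cov_ab = 0.0058                    # covariance between the two return series

sd_a = math.sqrt(var_a)            # 0.0529
sd_b = math.sqrt(var_b)            # 0.1114
corr = cov_ab / (sd_a * sd_b)      # covariance scaled by the product of the standard deviations

print(round(sd_a, 4), round(sd_b, 4), round(corr, 2))   # 0.0529 0.1114 0.98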
