BE Fall 2024 - Lecture 3 - Decision Under Risk PDF

Summary

This lecture provides an overview of behavioral economics, focusing on decision-making under risk. It covers rational decision-making (expected value and expected utility), behavioral anomalies in decisions under risk and uncertainty, and Prospect Theory, an important descriptive framework in economics.

Full Transcript


BEHAVIORAL ECONOMICS LECTURE 3: DECISION UNDER RISK AND UNCERTAINTY
Dr Mickaël Mangot

LECTURE OUTLINE
Introduction
I. Rational decision-making under risk and uncertainty
II. Behavioral anomalies in decisions under risk and uncertainty
III. Explaining anomalies: Prospect Theory
Conclusion

Notes: three models build on one another: expected value (EV) -> expected utility (EU) -> Prospect Theory. The decision chain: form beliefs and judgments -> preferences -> translated into utility.
Difference between expected value and expected utility:
- Expected value: the product of probabilities and actual outcomes (utility plays no role).
- Expected utility: cross the probability estimates with the utility you get from each outcome (each outcome has a specific probability).
> Expected value is a special case of expected utility: it corresponds to a linear utility function, i.e. risk neutrality.
> As soon as you are not risk neutral, EV is no longer the relevant criterion.

INTRODUCTION
Decisions under risk are influenced by multiple psychological mechanisms:
▪ How people informally perceive (risk perception) or formally assess risks (risk assessment)
▪ How people insert their risk estimates into their decisions (risk weighting)
▪ How people personally feel about risk and value the outcomes of the different possible scenarios (risk preferences or risk tolerance)

I. RATIONAL DECISION-MAKING UNDER RISK AND UNCERTAINTY

RATIONAL DECISION-MAKING UNDER RISK
Risk is a type of situation in which there are different possible scenarios with known probabilities.
Decision under risk: different decision criteria
 Expected value: EV(gamble) = Σᵢ pᵢ·Oᵢ
 Expected utility
Risk preferences: people may like, dislike or be indifferent to risk.

EXERCISE: STRUCTURED FINANCIAL PRODUCTS
Your personal financial adviser proposes two structured financial products whose final return depends on the performance of two underlying assets A and B:
- Product P1 pays an annual return of 10% if neither asset A nor asset B crashes during the year (a crash being a decrease in price of more than 20%). Otherwise, it pays a return of -5%.
- Product P2 pays an annual return of 8% unless both assets A and B crash during the year. Otherwise, it pays a return of -10%.
P(crash A) = P(crash B) = 0.1.
What is the expected value of the two products? Consider two cases:
Case #1: assets A and B are independent.
Case #2: assets A and B are dependent, with P(crash A | crash B) = P(crash B | crash A) = 0.8.

Joint probabilities, independent case:
              B crash   B no crash   total
A crash       0.01      0.09         0.1
A no crash    0.09      0.81         0.9
total         0.1       0.9          1

Joint probabilities, dependent case:
              B crash   B no crash   total
A crash       0.08      0.02         0.1
A no crash    0.02      0.88         0.9
total         0.1       0.9          1

Solution, independent case:
EV of P1 = 0.81(10%) + 0.19(-5%) = 7.15%
EV of P2 = 0.99(8%) + 0.01(-10%) = 7.82%
Solution, dependent case:
EV of P1 = 0.88(10%) + 0.12(-5%) = 8.2%
EV of P2 = 0.92(8%) + 0.08(-10%) = 6.56%
Note: P1 depends on a disjunction (at least one crash) and P2 on a conjunction (both crash), so a risk-neutral decision maker switches choice depending on the correlation.

THE SAINT-PETERSBURG PARADOX
"A casino offers a game of chance for a single player in which a fair coin is tossed at each stage. The pot starts at 1 dollar and is doubled every time a head appears. The first time a tail appears, the game ends and the player wins whatever is in the pot.
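The expected-value calculations for this exercise can be checked with a short script (a sketch; the function and variable names are illustrative, not from the lecture):

```python
def ev_p1(p_neither_crashes):
    """P1: pays 10% if neither A nor B crashes, -5% otherwise."""
    return p_neither_crashes * 0.10 + (1 - p_neither_crashes) * (-0.05)

def ev_p2(p_both_crash):
    """P2: pays 8% unless both A and B crash, -10% otherwise."""
    return (1 - p_both_crash) * 0.08 + p_both_crash * (-0.10)

# Case #1: independent crashes, P(crash A) = P(crash B) = 0.1
p_both_ind = 0.1 * 0.1                         # 0.01
p_neither_ind = 0.9 * 0.9                      # 0.81
print(round(ev_p1(p_neither_ind), 4))          # 0.0715 -> 7.15%
print(round(ev_p2(p_both_ind), 4))             # 0.0782 -> 7.82%

# Case #2: dependent crashes, P(both) = P(crash B | crash A) * P(crash A)
p_both_dep = 0.8 * 0.1                         # 0.08
p_neither_dep = 1 - (0.1 + 0.1 - p_both_dep)   # 0.88 (inclusion-exclusion)
print(round(ev_p1(p_neither_dep), 4))          # 0.082  -> 8.2%
print(round(ev_p2(p_both_dep), 4))             # 0.0656 -> 6.56%
```

A risk-neutral investor therefore prefers P2 when the assets are independent but P1 when they are strongly dependent, exactly the switch the solution describes.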
Thus the player wins 1 dollar if a tail appears on the first toss, 2 dollars if a head appears on the first toss and a tail on the second, 4 dollars if a head appears on the first two tosses and a tail on the third, 8 dollars if a head appears on the first three tosses and a tail on the fourth, and so on."
What would be a fair price to pay the casino for entering the game?
The expected value of this gamble is +∞. But you are surely not disposed to pay +∞ for that… (Daniel Bernoulli, 1700-1782)
Note: with a logarithmic utility function, the maximum price you are willing to pay is finite.

EXPECTED UTILITY THEORY
Expected Utility Theory: proposed by Daniel Bernoulli in 1738, axiomatized by Von Neumann and Morgenstern in the 1940s, and some ten years later by Savage and Samuelson.
Expected utility: EU(gamble) = Σᵢ pᵢ·U(Oᵢ)
Rational choice under risk: choose the option with the highest expected utility.
EU helps to explain why people only accept to pay a low price for the Saint-Petersburg gamble: their utility is not proportional to gains (they are not risk-neutral). Instead the marginal utility of gains is generally decreasing (they are risk-averse).
 With a logarithmic utility: EU(SP gamble) = 1/2·ln(2) + 1/4·ln(4) + … ≈ 0.605 ≈ ln(1.83)
Von Neumann-Morgenstern: four axioms of expected utility theory define the preferences of a rational decision maker:
 completeness,
 transitivity,
 independence,
 continuity.
Notes: as long as preferences satisfy these axioms, choices can be represented by an expected utility function. EU is a good normative model but not a good descriptive model, notably because of the continuity axiom.

Rational decision-making rule under risk: choosing the option in the choice set that maximizes one's expected utility.

ASSUMPTIONS OF EU THEORY
To be rational, risky decisions in the EU framework imply:
1. Perfect risk assessment
2. Unbiased risk-weighting in the decision-making process
3. Consistent risk preferences
Note: if your beliefs do not change, your decision should not change.

RISK AVERSION
Risk aversion = concave utility function: for the same expected value, you choose the option with less risk.
Example: two investment options:
- option 1: 10,000 guaranteed
- option 2: 50% chance of 8,000 and 50% chance of 12,000
Both have the same expected value, but the utility of option 1, U(10,000), exceeds that of option 2, 0.5·U(8,000) + 0.5·U(12,000), so a risk-averse investor prefers option 1.
(Figure: concave utility curve over wealth, with U(8,000), U(10,000) and U(12,000) marked on the vertical axis.)
Notes:
- In the convex (risk-seeking) case you prefer the risky option; in the linear case you are risk neutral.
- When you are risk averse, the certainty equivalent of the gamble is always inferior to its expected value; the gap is the compensation you require for bearing the risk.

UTILITY FUNCTIONS AND RISK PREFERENCES
The richer you are, the better you feel: U' > 0 in all three cases. What differs is the curvature:
- Risk averse: marginal utility decreases (U'' < 0), concave utility.
- Risk neutral: no difference in marginal utility (U'' = 0), linear utility.
- Risk seeking: marginal utility increases (U'' > 0), accelerating (convex) utility.

THE CERTAINTY EQUIVALENT
The certainty equivalent of a gamble is the amount of money such that you are indifferent between playing the gamble and receiving the amount for sure. The certainty equivalent of a gamble G is the number CE that satisfies this equation: u(CE) = EU(G).
Remarks:
- When risk averse, the CE is inferior to the expected value of the gamble.
- When risk seeking, the CE is superior to the expected value of the gamble.

THE DEGREE OF RISK AVERSION
The degree of risk aversion can be measured in two ways, by calculating:
 the coefficient of absolute risk aversion, A(w) = -u''(w)/u'(w) (a measure of the degree of concavity of the utility function, i.e. the speed at which marginal utility is decreasing); or
 the coefficient of relative risk aversion, R(w) = -w·u''(w)/u'(w) (the rate at which marginal utility decreases when wealth is increased by one percent, i.e.
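The definition u(CE) = EU(G) can be made concrete with the 8,000/12,000 gamble from the risk-aversion slide. A minimal sketch, assuming a square-root utility function (the lecture does not fix a specific u):

```python
import math

def eu(gamble, u):
    """Expected utility of a gamble given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in gamble)

def certainty_equivalent(gamble, u, u_inv):
    """CE solves u(CE) = EU(G), so CE = u_inv(EU(G))."""
    return u_inv(eu(gamble, u))

gamble = [(0.5, 8000), (0.5, 12000)]          # same EV as 10,000 for sure
u = math.sqrt                                 # concave -> risk averse
u_inv = lambda y: y ** 2                      # inverse of sqrt on [0, inf)

ev = sum(p * x for p, x in gamble)            # 10,000
ce = certainty_equivalent(gamble, u, u_inv)   # about 9,899 < 10,000
print(ev, round(ce, 1))
```

The CE comes out below the expected value, as the remark on risk aversion predicts; the difference (roughly 100 here) is the risk premium the investor demands.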
the wealth elasticity of marginal utility).

UTILITY FUNCTIONS COMMONLY USED IN ECONOMICS AND FINANCE
 The quadratic functions (e.g., u(w) = w - a·w², with a > 0)
 The exponential functions (e.g., u(w) = -e^(-a·w), with a > 0)
 The power functions (e.g., u(w) = w^(1-γ)/(1-γ), including the logarithmic case)
(The exact functional forms were shown on the slide; the parameterizations above are the standard textbook ones.)
Note: depending on what assumptions you make about people's behaviour, you choose a different utility function.

Utility functions        Absolute risk aversion    Relative risk aversion
Quadratic                Increasing (A' > 0)       Increasing (R' > 0)
Exponential              Constant (A' = 0)         Increasing (R' > 0)
Logarithmic / power      Decreasing (A' < 0)       Constant (R' = 0)

Scenario #1 (the "Asian disease" problem): Facing a lethal disease that has infected 600 people, you, as a doctor, have to choose between two treatments (A vs B):
 if option A is chosen, then 200 people are saved
 if option B is chosen, then there is a 33% chance that all 600 people are saved and a 66% probability that no one is saved
Note: different framings -> different preferences.
Scenario #2: Facing a lethal disease that has infected 600 people, you, as a doctor, have to choose between two treatments (C vs D):
 if option C is chosen, then 400 people die
 if option D is chosen, then there is a 33% chance that no people will die and a 66% probability that all 600 will die
According to Kahneman and Tversky (1981):
In scenario 1, most people choose A (72%)
In scenario 2, most people choose D (78%)
Yet options A and C are identical, while B and D are also identical!

THE IMPORTANCE OF FRAMING: RISK AVERSION AND FEELING POOR
It is well known that poor people, who can least afford to play the lottery, are most likely to do so. Haisley et al. (2008) wanted to know whether manipulating people's perceptions of their income can affect their demand for lottery tickets. Half of the participants were made to feel rich by answering a question about their yearly income on a scale that topped out at "$60k," so that most incomes fell near the top of the scale. The other half were made to feel poor by answering the same question on a scale that went up to "$1M," so that most incomes fell near the bottom. At the conclusion of the study, participants who were made to feel relatively poor were more likely to choose lottery tickets than cash as a reward for their participation.
Remark: Framing effects should not be confused with wealth effects, which occur when people's risk aversion changes when they go from being poor to being rich (or the other way around), and which can be represented using a single utility function.
It would be normal, for instance, if your curve got flatter and flatter as your wealth increased, thereby making you less risk averse.

BEHAVIORAL ANOMALIES: STATUS-QUO BIAS AND EXTREMENESS AVERSION
Some options have been found to be more attractive than others:
 The status-quo option
 The median option
Example (four funds, two equally likely scenarios):
Fund              A      B      C      D
Good case (50%)   900    1100   1260   1380
Bad case (50%)    900    800    700    600
Hence, a mutual fund will be considered more attractive by investors when positioned as median (Benartzi and Thaler, 2002).
(Figure: proportion of people preferring fund C to fund B under the choice sets BCD, BC and ABC.)
Notes:
- You can manipulate choices so that people go for the middle option among risky options; preferences change based on context (a menu effect).
- Reference points shift with the scenario: in the good case people compare against the payoff of B, in the bad case against the payoff of D, so the medium option feels good either way and limits regret in any case.
- People tend to reframe the option so that it yields pleasure; the medium option allows this, while an extreme option does not.

III. EXPLAINING ANOMALIES: PROSPECT THEORY

FROM NORMATIVE TO DESCRIPTIVE MODELS OF DECISION-MAKING UNDER RISK
Note: in the 1970s, researchers built models to explain the empirical observations.
In front of the failures of the EU model to explain actual behaviors, researchers have proposed several descriptive models which outcompete the EU model. They imply departures from rationality in the form of inconsistent preferences and/or irrational risk-assessment or risk-weighting. The most used model is Prospect Theory by Kahneman and Tversky (1979, 1992), which can be considered as an extension of the EU model with some parametric restrictions relaxed.

THE PATH TO PROSPECT THEORY
In parallel to work in economics by Allais, psychologists began to explore the possibility that people might hold "subjective probabilities" that need not correspond to objective probabilities.
Preston and Baratta (1948) and Edwards (1955) used experiments to study how subjective probabilities compare to objective probabilities. Later, Edwards (1962) described a process by which objective probabilities were replaced by decision weights which need not respect EU's linearity requirements.
At the same time, economists were speculating on deviations from a globally concave utility function defined over final wealth states. Friedman and Savage (1948) pointed to the existence of decision makers who simultaneously purchase lottery tickets at less than fair odds and insurance for moderate risks. To explain this behavior, they suggested that the utility function over final wealth states might have both concave and convex regions. Motivated by Friedman and Savage (1948), Markowitz (1952) suggested an alternative solution: perhaps instead of utility being defined over final wealth states, utility is defined over gains and losses relative to present wealth. In their model of prospect theory, Kahneman and Tversky (1979) combined and built upon early thinking on both lines of inquiry.

THE TWO BUILDING BLOCKS OF PROSPECT THEORY
Prospect theory, originally proposed by Daniel Kahneman and Amos Tversky (1979), is made of two building blocks:
 A value function that is not constantly concave
 A non-linear probability weighting function
In the years that followed, the literature found promise in exploring both nonlinear probability weighting and a value function defined over gains and losses, although often independently.

THE VALUE FUNCTION
Notes: why can't the expected utility framework explain observed choices?
- Allais paradox: if we make a tiny switch (moving an outcome's probability very close to zero or to one), people's preferences change -> preference reversal.
- Ellsberg paradox: we have a preference for risk over uncertainty -> we like knowing the probabilities of the different scenarios; if we do not know them, we apply a discount.
- Asian disease problem: people take risk under one framing but not under another (lives gained vs. lives lost).

A prospect L is evaluated as the weighted sum of the values of its outcomes, measured as gains and losses relative to a reference point r. Kahneman and Tversky (1979) suggest that a natural candidate for the reference point, especially in simple experimental gambles, is one's initial wealth. Given that the xn's are defined as increments to wealth, this implies r = 0.
Kahneman and Tversky (1979) argue that the value function, v(·), is an increasing function with three key features:
1. Zero value at the reference point: v(0) = 0.
2. Diminishing sensitivity: v′′(x) < 0 for x > 0, but v′′(x) > 0 for x < 0.
3. Loss aversion: for x > 0, v(x) < −v(−x) and v′(x) < v′(−x).

THE VALUE FUNCTION IN PROSPECT THEORY (KAHNEMAN & TVERSKY, 1979)
(Figure: S-shaped value function, satisfaction over gains, dissatisfaction over losses.)
Notes:
- The value function is not a (concave) utility function: it describes how you value gains and losses relative to a reference point (the zero point).
- The reference point could be expectations, a benchmark comparison, a past situation, an aspiration level, a need (e.g., paying for food), or social comparison (comparing to the people around you). With this, you can explain many observations.
- Your perception of gains and losses is neither linear nor everywhere concave: the function is concave in the gain domain only, and convex in the loss domain (concave = risk averse; convex = risk seeking). This accounts for the Asian disease problem: we do not want to take risk in the gain domain, but we do in the loss domain.
- The value function is not symmetric: the feeling of a loss has a bigger magnitude than that of an equivalent gain. We do not like losing more than we like gaining.

APPLICATION TO RISKLESS CHOICE
Although prospect theory was initially formulated as a model for decision making under risk, it was quickly recognized that a reference-dependent value function would have a number of direct applications for riskless choice (Thaler, 1980). A simple modelling is that the decision maker evaluates bundle x according to
 U(x|r) ≡ u(x) + v(x|r)
where u(·) represents intrinsic utility and v(·) represents reference-dependent sensations of gain and loss.

LOSS AVERSION AMONG PGA GOLF PLAYERS
Note: loss aversion makes it rational (hedonically) to put more effort into a par putt than a birdie putt:
- par: if you miss the putt, you register a loss (a bogey relative to par);
- birdie: if you miss the putt, you merely forgo a gain.
Source: Pope and Schweitzer (2011)

BUNCHING AT MARATHONS
Note: runners sprint in the last part of the marathon to finish just below a salient round-number time.
Allen et al. (2017) investigate bunching behavior in the domain of marathon running. They posit that marathon runners have reference points at round-number finishing times, e.g., finishing in better than 4 hours (4:00:00), or better than 3.5 hours (3:30:00). Using data covering 9,789,093 finishing times from 6,888 marathons run over the period 1970-2013, they find clear evidence of bunching at finishing times just better than various salient round numbers. Indeed, a simple glance at their raw data reveals striking evidence of bunching. The most extreme bunching occurs in the minutes just before 4:00:00: 300,324 finishing times fall in the interval 3:57:00-3:59:59, whereas only 212,477 finishing times fall in the interval 4:00:00-4:02:59. Their analysis finds statistically significant bunching around most other 10-minute marks as well.

THE PROBABILITY WEIGHTING FUNCTION
The idea of probability weighting predates prospect theory. The real contribution of Kahneman and Tversky (1979) to the literature on probability weighting was to propose, based on their experimental evidence, several features for the function π(p). These features include:
 Overweighting (π(p) > p) of small probabilities and underweighting (π(p) < p) of large probabilities.
 Subcertainty: π(p) + π(1 − p) < π(1) (which is needed to accommodate the Allais common-consequence paradox).
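The three properties of v(·) can be illustrated with the standard piecewise-power specification. Note that the exponent (0.88) and loss-aversion coefficient (2.25) below are Tversky and Kahneman's (1992) median estimates, not parameters given in this lecture:

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function over gains/losses (reference point r = 0)."""
    if x >= 0:
        return x ** alpha              # concave in the gain domain
    return -lam * ((-x) ** alpha)      # convex in the loss domain, scaled up by loss aversion

print(v(0))                            # 0: zero value at the reference point
print(round(v(100), 1), round(v(-100), 1))  # a $100 loss hurts ~2.25x as much as a $100 gain pleases
print(v(200) - v(100) < v(100) - v(0))      # diminishing sensitivity: the second $100 adds less value
```

With these (assumed) parameters, v(100) ≈ 57.5 while v(-100) ≈ -129.5, reproducing the asymmetry of the S-shaped figure: losses loom larger than gains.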
Subproportionality: for any p, q, r ∈ (0,1), π(pq)/π(p) < π(pqr)/π(pr) (which is needed to accommodate the Allais common-ratio paradox).
Kahneman and Tversky (1979) did not provide a parameterized functional form, but rather provided a figure like that depicted in Panel A of the following slide that satisfies these properties. Note that this function is discontinuous at p = 0 and p = 1, suggesting that certain outcomes are treated in a discontinuously different way from probabilistic outcomes. The literature quickly abandoned this discontinuity and instead focused on inverse-S shaped weighting functions as depicted in Panel B.
Notes:
- The Panel A form was used in the past and explains the Allais paradox well (as soon as you create an extreme event with a small probability, the decision weight changes).
- Discontinuity problem: at p = 0 the weight is 0, but just above 0 the weight jumps. This is an issue, so the literature moved to the inverse-S shape, which is the form most used now. In that form, the overweighting of small-probability events only matters with very extreme outcomes.
- Two versions of the theory: (1) prospect theory, (2) cumulative prospect theory.
- This makes sense applied to lotteries: you process a one-in-a-million winning probability as if it were roughly 1/1000. Small improbable outcomes are overweighted; medium and large probabilities are underweighted.

Objective probabilities vs CPT's subjective decision weights (for unfavorable and favorable outcomes):
Objective   Unfavorable   Favorable
0.001%      0.04%         0.09%
0.01%       0.17%         0.36%
0.1%        0.84%         1.45%
1%          4.0%          5.5%
2%          6.2%          8.1%
5%          11.1%         13.2%
10%         17.0%         18.6%
15%         21.7%         22.7%
25%         29.4%         29.1%
33.33%      34.9%         33.6%
45%         42.3%         39.5%
50%         45.4%         42.1%
60%         51.8%         47.4%
66%         55.9%         50.9%
75%         62.6%         56.8%
90%         77.5%         71.2%
95%         85.0%         79.3%
99%         94.5%         91.2%

FROM PROSPECT THEORY TO CUMULATIVE PROSPECT THEORY
Cumulative prospect theory (CPT) is a descriptive model for decisions under risk and uncertainty which was introduced by Amos Tversky and Daniel Kahneman in 1992 (Tversky and Kahneman, 1992). It is a further development and variant of prospect theory.
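The inverse-S weighting function is commonly parameterized as in Tversky and Kahneman (1992). The γ value below (0.61, their estimate for gains) is an assumption used only to reproduce the qualitative pattern of the table above, not a figure from this lecture:

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman (1992) inverse-S probability weighting function."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

for p in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
    print(p, round(w(p), 4))
# Small probabilities are overweighted (w(p) > p);
# medium and large probabilities are underweighted (w(p) < p);
# the endpoints are respected: w(0) = 0 and w(1) = 1 (no discontinuity).
```

For instance w(0.01) ≈ 0.055 and w(0.99) ≈ 0.91, close to the "favorable" column of the table (5.5% and 91.2%), which suggests the slide's figures come from a parameterization of this kind.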
The difference between this version and the original version of prospect theory is that weighting is applied to the cumulative probability distribution function, as in Quiggin (1982)'s rank-dependent expected utility theory, not to the probabilities of individual outcomes. This leads to the overweighting of extreme events which occur with small probability, rather than to an overweighting of all small-probability events. In order to avoid violations of dominance, Quiggin (1982) proposed that instead of the weight ωn being a simple transformation of the probability pn, it is a transformation of the cumulative probability of obtaining at most xn. This is called rank-dependent probability weighting (RDPW). In CPT, probability weighting is applied differentially to gains and losses.

PROBABILITY WEIGHTING IN CPT

THE DECISION PROCESS IN PROSPECT THEORY
The theory describes the decision process in two stages:
▪ During an initial phase termed the editing phase, outcomes of a decision are ordered according to a certain heuristic. In particular, people decide which outcomes they consider equivalent, set a reference point and then consider lesser outcomes as losses and greater ones as gains. The editing process can be viewed as composed of coding, combination, segregation, cancellation, simplification and detection of dominance.
▪ Remark: the editing phase is a form of mental accounting.
▪ In the subsequent evaluation phase, people behave as if they would compute a value (utility), based on the potential outcomes and their respective probabilities, and then choose the alternative having the higher utility.

SEGREGATING OR BUNDLING OUTCOMES? HEDONIC EDITING
Note: you want to maximize your pleasure, and the rules of pleasure can be summarized by the value function. To maximize pleasure, should you realize all gains simultaneously or separately?
- Gains: separately, because of concavity in the gain domain.
- Losses: simultaneously, because of convexity in the loss domain.
- A large gain and a small loss: together (otherwise the loss looms larger than the gain).
- A large loss and a small gain: separately.

EXERCISE: APPLICATION TO TAXES
A) If you are a politician known for favoring high taxes, should you encourage voters to integrate or segregate? How should you shape your message?
B) If you are a politician known for favoring low taxes, should you encourage voters to integrate or segregate? How should you shape your message?
Notes:
- Raising taxes: make the loss less salient (increase bit by bit), communicate the benefits of the tax rise, focus on residual income after tax and how it is increasing.
- Cutting taxes: focus on the amount of taxes saved (make the gain salient).

APPLICATION: LOTTERIES AS REWARDS
Notes on insurance:
- You insure yourself against improbable risks and tend to overvalue the probability of their occurrence.
- If you paid attention only to the value function and not to the probability weighting function, you would not buy insurance: you would just feel small losses (premiums) every month. The probability weighting function explains why insurance is attractive.
Behavioral economists have found that using lotteries can be an effective way to incentivize behavioral change. Thus, a person may be more likely to fill in a survey or take a pill if offered a lottery ticket with a 1/1000 probability of winning $1000 than if offered a cash payment of $1. This might seem counterintuitive, given that people are often risk averse.
Question: use the probability-weighting function to explain why lottery tickets can be so appealing.

Questions: How does prospect theory help to account for the following anomalies, and why?
- The Allais paradox?
- The Asian disease problem?
- Extremeness aversion?

READING: FUTURE PROSPECTS
Notes on the reading:
1. Investing in the stock market earns higher returns than other assets like bonds.
- Prices are driven not only by demand but by how investors process the features of stocks.
- If you are loss averse and myopic (focused on short-term returns), you dislike the possibility of a very large short-term loss. People are afraid of participating in the stock market (e.g., when they remember episodes of stocks performing very badly over a short window, as during COVID); they pay attention to the short-term loss but not the long-term gain, so the equilibrium risk premium on stocks is high.
2. Lower-risk stocks earn higher returns than higher-risk stocks in the long run.
- According to the theory it should be the opposite: returns are compensation for everything you dislike, and people are risk averse.
- People dislike the fact that low-risk stocks can never deliver a very high return (they always exhibit low returns compared to the market), and they overweight the probability of getting a very high return from high-risk stocks.
Questions (in this framework, consider that markets are always in equilibrium; people prefer one big gain to many small gains):
1) The journalist claims that according to Expected Utility Theory gains and losses should have the same impact on decisions. Is it true? Think about the curvature of the utility function…
2) Based on prospect theory, how can one explain that stocks pay very high returns?
3) If people overweight small probabilities, which securities should exhibit the highest expected returns: stocks with positive skewness or stocks with negative skewness?
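For the lotteries-as-rewards question, one way to see the appeal is to compare the decision-weighted value of the ticket with the sure dollar. This sketch uses the Tversky-Kahneman (1992) parameterizations as assumptions; it is an illustration, not the lecture's official answer:

```python
def v(x, alpha=0.88, lam=2.25):
    """Assumed prospect-theory value function (TK 1992 median parameters)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def w(p, gamma=0.61):
    """Assumed TK (1992) inverse-S probability weighting function for gains."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

cash = v(1)                    # subjective value of $1 for sure
ticket = w(0.001) * v(1000)    # decision-weighted value of the 1/1000 ticket
print(round(w(0.001), 4))      # ~0.0144: the 0.1% chance is weighted ~14x too heavily
print(cash, round(ticket, 2))  # under these parameters the ticket beats the sure dollar
```

Because w(0.001) is roughly fourteen times the objective probability, the ticket's weighted value exceeds that of the cash payment even though both have the same expected value of $1.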
CONCLUSION
We have seen in lectures 2 and 3 that decisions under risk deviate from rationality axioms at three levels:
▪ Risk assessment (incorrect belief formation)
▪ Risk weighting (distorted probability weighting)
▪ Risk preferences (inconsistent preferences under risk)
Prospect Theory is so far the (descriptive) decision model that accounts for the largest number of empirical observations, but other models coexist (e.g., rank-dependent expected utility).
PT can be considered as a generalization of EU theory. PT outperforms EU theory in predicting choices because it adds features that seem to be crucial in human decision-making: (i) diminishing sensitivity (applied to probabilities) and (ii) loss aversion.

EU VS PROSPECT THEORY
Hey and Orme (1994) compare EU with a model closely related to cumulative prospect theory ('Rank Dependence with the Power Weighting Function') as rival explanations of a body of experimental data. On the basis of likelihood ratio tests, they conclude that the prospect theory model is among a set of generalisations of EU that perform significantly better than EU itself. The empirical success of prospect theory might suggest the interpretation that it represents how true preferences (the EU component of the theory) are distorted by biases (the non-EU component).
The EU model explains behavior pretty well because it picks up three effects that one would expect to have a strong influence on most people's choices:
 The utility function picks up the effect that, other things being equal, an uncertain prospect is more likely to be chosen, the better are the outcomes to which it can lead.
 By taking account of probabilities, EU picks up the effect that, other things being equal, an uncertain prospect is more likely to be chosen, the higher the probabilities of its better outcomes and the lower the probabilities of its worse outcomes.
 By allowing the utility function to be non-linear, EU can pick up the effect that, when expected value is held constant, people tend to prefer less risky prospects to more risky ones.
The EU representation combines these three effects in a simple functional form. If we ask why prospect theory performs better than EU, there is an equally obvious answer: it takes account not only of the psychological effects that are picked up by EU, but also of two others.
 One such effect is loss aversion: other things being equal, outcomes evoke different psychological responses if they are perceived as gains than if they are perceived as losses.
 The other effect is a stimulus-response effect of diminishing sensitivity: other things being equal, people are more responsive to given increments of probability when probabilities are closer to the end-points of the scale on which probability is measured.
According to Kahneman and Tversky, prospect theory rests on two fundamental properties of human psychology, both of which involve reference-dependence: loss aversion and diminishing sensitivity. Diminishing sensitivity explains both the inverse-S shape of the probability transformation function and the properties of the 'value function', the analogue of the utility function in EU.
