QTA 15 Multivariate Random Variables PDF

Summary

This document discusses multivariate random variables, including concepts such as expectations and moments, and applies them to problems such as computing the expected value of a function of two random variables and the variance of a weighted sum of two random variables.

Full Transcript


Reading 15: Multivariate Random Variables

After completing this reading, you should be able to:

- Explain how a probability matrix can be used to express a probability mass function (PMF).
- Compute the marginal and conditional distributions of a discrete bivariate random variable.
- Explain how the expectation of a function is computed for a bivariate discrete random variable.
- Define covariance and explain what it measures.
- Explain the relationship between the covariance and correlation of two random variables and how these are related to the independence of the two variables.
- Explain the effects of applying linear transformations on the covariance and correlation between two random variables.
- Compute the variance of a weighted sum of two random variables.
- Compute the conditional expectation of a component of a bivariate random variable.
- Describe the features of an iid sequence of random variables.
- Explain how the iid property is helpful in computing the mean and variance of a sum of iid random variables.

Multivariate Random Variables

Multivariate random variables capture the dependence between two or more random variables. The concepts involved (such as expectations and moments) are analogous to those for univariate random variables.

Multivariate Discrete Random Variables

Multivariate random variables are several random variables defined simultaneously on the same sample space. In other words, multivariate random variables are vectors of random variables. For instance, a bivariate random variable $X$ is a vector with two components $X_1$ and $X_2$, whose realizations are $x_1$ and $x_2$, respectively.

The PMF or PDF of a bivariate random variable gives the probability that the two random variables each take a certain value. Plotting such a function requires three axes: $x_1$, $x_2$, and the value of the PMF/PDF. The same applies to the CDF.

The Probability Mass Function (PMF)

The PMF of a bivariate random variable is the function giving the probability that the components of $X$ take the values $X_1 = x_1$ and $X_2 = x_2$. That is:

$$f_{X_1,X_2}(x_1, x_2) = P(X_1 = x_1, X_2 = x_2)$$

The PMF describes the probability of each realization as a function of $x_1$ and $x_2$, and it has the following properties:

1. $f_{X_1,X_2}(x_1, x_2) \ge 0$
2. $\sum_{x_1}\sum_{x_2} f_{X_1,X_2}(x_1, x_2) = 1$

Example: Trinomial Distribution

The trinomial distribution is the distribution of $n$ independent trials where each trial results in one of three outcomes (a generalization of the binomial distribution). The counts of the first, second, and third outcomes are $X_1$, $X_2$, and $n - X_1 - X_2$, respectively. The third count is redundant once we know $X_1$ and $X_2$. The trinomial distribution has three parameters:

1. $n$, the total number of trials;
2. $p_1$, the probability of the outcome counted in $X_1$;
3. $p_2$, the probability of the outcome counted in $X_2$.

Intuitively, the probability of the third outcome is $1 - p_1 - p_2$.

The PMF of the trinomial distribution, therefore, is given by:

$$f_{X_1,X_2}(x_1, x_2) = \frac{n!}{x_1!\,x_2!\,(n - x_1 - x_2)!}\, p_1^{x_1}\, p_2^{x_2}\, (1 - p_1 - p_2)^{\,n - x_1 - x_2}$$
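As an illustration, the minimal sketch below evaluates this PMF directly and confirms that it sums to one over all valid pairs $(x_1, x_2)$ (property 2 above). The function name and parameter values are our own, chosen for illustration.

```python
from math import factorial

def trinomial_pmf(x1, x2, n, p1, p2):
    """PMF of the trinomial distribution evaluated at (x1, x2)."""
    x3 = n - x1 - x2  # count of the third (redundant) outcome
    coeff = factorial(n) // (factorial(x1) * factorial(x2) * factorial(x3))
    return coeff * p1**x1 * p2**x2 * (1 - p1 - p2)**x3

# Hypothetical parameters: 10 trials, p1 = 0.5, p2 = 0.3
n, p1, p2 = 10, 0.5, 0.3
print(trinomial_pmf(4, 3, n, p1, p2))  # P(X1 = 4, X2 = 3)

# Property 2: the PMF sums to 1 over all valid (x1, x2) pairs
total = sum(trinomial_pmf(x1, x2, n, p1, p2)
            for x1 in range(n + 1) for x2 in range(n + 1 - x1))
print(round(total, 12))  # 1.0
```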
The Cumulative Distribution Function (CDF)

The CDF of a bivariate discrete random variable returns the total probability that each component is less than or equal to a given value. It is given by:

$$F_{X_1,X_2}(x_1, x_2) = P(X_1 \le x_1, X_2 \le x_2) = \sum_{\substack{t_1 \in R(X_1) \\ t_1 \le x_1}} \; \sum_{\substack{t_2 \in R(X_2) \\ t_2 \le x_2}} f_{X_1,X_2}(t_1, t_2)$$

In this equation, $t_1$ runs over the values that $X_1$ may take as long as $t_1 \le x_1$. Similarly, $t_2$ runs over the values that $X_2$ may take as long as $t_2 \le x_2$.

Probability Matrices

A probability matrix is a tabular representation of the PMF.

Example: Probability Matrix

In financial markets, market sentiment plays a role in determining the return earned on a security. Suppose the return earned on a bond is in part determined by the rating given to the bond by analysts. For simplicity, we are going to assume the following:

- There are only three possible returns: 10%, 0%, or -10%.
- Analyst ratings (sentiments) can be positive, neutral, or negative.

We can represent this in a probability matrix as follows:

| Analyst Rating ($X_2$) | $X_1 = -10\%$ | $X_1 = 0\%$ | $X_1 = 10\%$ |
|---|---|---|---|
| Positive ($+1$) | 5% | 5% | 30% |
| Neutral ($0$) | 10% | 10% | 15% |
| Negative ($-1$) | 20% | 5% | 0% |

Each cell represents the probability of a joint outcome. For example, there is a 5% probability of a negative return (-10%) occurring together with a positive analyst rating; in other words, there is a 5% probability that the bond declines in price while carrying a positive rating. Similarly, there is a 10% chance that the bond's price will not change (a zero return) while the rating is neutral.

The Marginal Distribution

The marginal distribution gives the distribution of a single variable in a joint distribution. In a bivariate distribution, the marginal PMF of $X_1$ is computed by summing the joint probabilities across all values in the support of $X_2$. The resulting PMF, denoted $f_{X_1}(x_1)$, is the marginal distribution of $X_1$:

$$f_{X_1}(x_1) = \sum_{x_2 \in R(X_2)} f_{X_1,X_2}(x_1, x_2)$$

Likewise, the marginal PMF of $X_2$ is given by:

$$f_{X_2}(x_2) = \sum_{x_1 \in R(X_1)} f_{X_1,X_2}(x_1, x_2)$$

Example: Computing the Marginal Distribution

Using the probability matrix we created above, we can derive the marginal distributions of both $X_1$ (return) and $X_2$ (analyst rating) as follows.

For $X_1$:

$$P(X_1 = -10\%) = 5\% + 10\% + 20\% = 35\%$$
$$P(X_1 = 0\%) = 5\% + 10\% + 5\% = 20\%$$
$$P(X_1 = +10\%) = 30\% + 15\% + 0\% = 45\%$$

For $X_2$:

$$P(X_2 = +1) = 5\% + 5\% + 30\% = 40\%$$
$$P(X_2 = 0) = 10\% + 10\% + 15\% = 35\%$$
$$P(X_2 = -1) = 20\% + 5\% + 0\% = 25\%$$

In summary, the marginal distribution of $X_1$ is:

| Return ($X_1$) | $-10\%$ | $0\%$ | $10\%$ |
|---|---|---|---|
| $P(X_1 = x_1)$ | 35% | 20% | 45% |

Appending both marginals to the probability matrix gives:

| Analyst Rating ($X_2$) | $X_1 = -10\%$ | $X_1 = 0\%$ | $X_1 = 10\%$ | $f_{X_2}(x_2)$ |
|---|---|---|---|---|
| Positive ($+1$) | 5% | 5% | 30% | 40% |
| Neutral ($0$) | 10% | 10% | 15% | 35% |
| Negative ($-1$) | 20% | 5% | 0% | 25% |
| $f_{X_1}(x_1)$ | 35% | 20% | 45% | |

As you may have noticed, the marginal distribution satisfies the properties of a probability distribution. That is:

$$\sum_{x_1} f_{X_1}(x_1) = 1 \quad \text{and} \quad f_{X_1}(x_1) \ge 0$$

This is true because the marginal PMF is itself a univariate distribution. We can, in addition, use the marginal PMF to compute the marginal CDF, $F_{X_1}(x_1) = P(X_1 \le x_1)$:

$$F_{X_1}(x_1) = \sum_{\substack{t_1 \in R(X_1) \\ t_1 \le x_1}} f_{X_1}(t_1)$$
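A minimal sketch of the same computation: it stores the bond probability matrix as a nested list and recovers both marginal PMFs by summing columns and rows. Variable names are ours.

```python
# Joint PMF of (rating, return): rows = ratings (+1, 0, -1),
# columns = returns (-10%, 0%, +10%)
joint = [
    [0.05, 0.05, 0.30],  # positive rating (+1)
    [0.10, 0.10, 0.15],  # neutral rating (0)
    [0.20, 0.05, 0.00],  # negative rating (-1)
]

# Marginal PMF of the return X1: sum each column over all ratings
f_x1 = [sum(row[j] for row in joint) for j in range(3)]
# Marginal PMF of the rating X2: sum each row over all returns
f_x2 = [sum(row) for row in joint]

print(f_x1)  # [0.35, 0.20, 0.45]
print(f_x2)  # [0.40, 0.35, 0.25]
```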
Independence of Random Variables

Recall that if two events A and B are independent, then:

$$P(A \cap B) = P(A)P(B)$$

This principle applies to bivariate random variables as well. If the components of the bivariate distribution are independent, then:

$$f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2)$$

Example: Independence of Random Variables

Now let's use our earlier example on the return earned on a bond. If the two variables (return and rating) were independent, we could recover the joint distribution by multiplying their marginal distributions. But are they really independent? Let's find out! We have already established the joint and marginal distributions, reproduced in the following table:

| Analyst Rating ($X_2$) | $X_1 = -10\%$ | $X_1 = 0\%$ | $X_1 = 10\%$ | $f_{X_2}(x_2)$ |
|---|---|---|---|---|
| Positive ($+1$) | 5% | 5% | 30% | 40% |
| Neutral ($0$) | 10% | 10% | 15% | 35% |
| Negative ($-1$) | 20% | 5% | 0% | 25% |
| $f_{X_1}(x_1)$ | 35% | 20% | 45% | |

Assuming that our two variables are independent, the joint distribution would be as follows:

| Analyst Rating ($X_2$) | $X_1 = -10\%$ | $X_1 = 0\%$ | $X_1 = 10\%$ |
|---|---|---|---|
| Positive ($+1$) | 14% | 8% | 18% |
| Neutral ($0$) | 12.25% | 7% | 15.75% |
| Negative ($-1$) | 8.75% | 5% | 11.25% |

We obtain the table above by multiplying the marginal PMF of the bond return by the marginal PMF of the rating. For example, the marginal probability that the bond return is 10% is 45% (the sum of the third column), and the marginal probability of a positive rating is 40% (the sum of the first row). Multiplying these two values gives the joint probability in the upper-right cell of the table:

$$45\% \times 40\% = 18\%$$

It is clear that the two variables are not independent, because multiplying their marginal PMFs does not reproduce the actual joint PMF.

The Conditional Distributions

A conditional distribution describes the probability of an outcome of one random variable conditioned on the other random variable taking a particular value. Recall that for any two events A and B:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$

This result can be applied to bivariate distributions. The conditional distribution of $X_1$ given $X_2$ is defined as:

$$f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2) = \frac{f_{X_1,X_2}(x_1, x_2)}{f_{X_2}(x_2)}$$

That is, the conditional distribution is the joint distribution divided by the marginal distribution of the conditioning variable.

Example: Calculating the Conditional Distribution

Using the joint probability matrix above, suppose we want to find the distribution of bond returns conditional on a positive analyst rating. The conditional distribution is:

$$f_{X_1 \mid X_2}(x_1 \mid X_2 = +1) = \frac{f_{X_1,X_2}(x_1, X_2 = +1)}{f_{X_2}(+1)} = \frac{f_{X_1,X_2}(x_1, +1)}{40\%}$$

With this, we can proceed to determine specific conditional probabilities:

| Return ($X_1$) | $-10\%$ | $0\%$ | $10\%$ |
|---|---|---|---|
| $P(X_1 = x_1 \mid X_2 = +1)$ | $\frac{5\%}{40\%} = 12.5\%$ | $\frac{5\%}{40\%} = 12.5\%$ | $\frac{30\%}{40\%} = 75\%$ |

What we have done is take the joint probabilities in the positive-rating row and divide each by the marginal probability of a positive rating (40%) to produce the conditional distribution.

Note that the conditional PMF obeys the laws of probability, i.e.:

1. $f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2) \ge 0$ (non-negativity)
2. $\sum_{x_1} f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2) = 1$
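Continuing the sketch from the previous block, the conditional PMF of the return given a positive rating is just the positive-rating row normalized by its marginal probability; this reproduces the 12.5% / 12.5% / 75% figures.

```python
joint = [
    [0.05, 0.05, 0.30],  # X2 = +1 (positive)
    [0.10, 0.10, 0.15],  # X2 = 0  (neutral)
    [0.20, 0.05, 0.00],  # X2 = -1 (negative)
]

row = joint[0]            # joint probabilities in the positive-rating row
p_pos = sum(row)          # marginal P(X2 = +1) = 0.40
cond = [p / p_pos for p in row]
print(cond)               # [0.125, 0.125, 0.75] -- sums to 1
```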
Conditional Distribution for a Set of Outcomes

A conditional distribution can also be computed for one variable while conditioning on a set of outcomes rather than a single value. For example, suppose we need the conditional distribution of the bond return given that the analyst rating is non-negative. Our conditioning set is therefore $S = \{+1, 0\}$, i.e., $X_2 \in \{+1, 0\}$.

The conditional PMF sums the joint PMF across all outcomes in the conditioning set $S$ and normalizes by the total probability of that set:

$$f_{X_1 \mid X_2}(x_1 \mid x_2 \in S) = \frac{\sum_{x_2 \in S} f_{X_1,X_2}(x_1, x_2)}{\sum_{x_2 \in S} f_{X_2}(x_2)}$$

The marginal probability that $X_2 \in \{+1, 0\}$ is the sum of the marginal probabilities of these two outcomes:

$$f_{X_2}(+1) + f_{X_2}(0) = 40\% + 35\% = 75\%$$

Reading the positive and neutral rows of the joint probability matrix, the conditional distribution is:

$$f_{X_1 \mid X_2}(x_1 \mid x_2 \in \{+1, 0\}) = \begin{cases} \dfrac{5\% + 10\%}{75\%} = 20\%, & x_1 = -10\% \\[2mm] \dfrac{5\% + 10\%}{75\%} = 20\%, & x_1 = 0\% \\[2mm] \dfrac{30\% + 15\%}{75\%} = 60\%, & x_1 = 10\% \end{cases}$$

Independence and Conditional Distributions

Recall that the conditional distribution is given by:

$$f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2) = \frac{f_{X_1,X_2}(x_1, x_2)}{f_{X_2}(x_2)}$$

This can be rewritten as:

$$f_{X_1,X_2}(x_1, x_2) = f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2)\, f_{X_2}(x_2)$$

or

$$f_{X_1,X_2}(x_1, x_2) = f_{X_2 \mid X_1}(x_2 \mid X_1 = x_1)\, f_{X_1}(x_1)$$

Also, if the components of the bivariate distribution are independent, then:

$$f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2)$$

Substituting this into the first result, we get:

$$f_{X_1}(x_1) f_{X_2}(x_2) = f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2)\, f_{X_2}(x_2) \;\Rightarrow\; f_{X_1}(x_1) = f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2)$$

Applying the same argument to $f_{X_1,X_2}(x_1, x_2) = f_{X_2 \mid X_1}(x_2 \mid X_1 = x_1) f_{X_1}(x_1)$, we get:

$$f_{X_2}(x_2) = f_{X_2 \mid X_1}(x_2 \mid X_1 = x_1)$$

Expectations

The expectation of a function of a bivariate random variable is defined in the same way as for a univariate random variable. Consider the function $g(X_1, X_2)$. The expectation is defined as:

$$E[g(X_1, X_2)] = \sum_{x_1 \in R(X_1)} \sum_{x_2 \in R(X_2)} g(x_1, x_2)\, f_{X_1,X_2}(x_1, x_2)$$

In general, $g(x_1, x_2)$ depends on both $x_1$ and $x_2$, although it may be a function of one component only. Just as with univariate random variables, $E[g(X_1, X_2)] \ne g(E[X_1], E[X_2])$ for a nonlinear function $g$.

Example: Calculating the Expectation

Consider the following probability mass function:

| | $X_1 = 1$ | $X_1 = 2$ |
|---|---|---|
| $X_2 = 3$ | 10% | 15% |
| $X_2 = 4$ | 70% | 5% |

Given that $g(x_1, x_2) = x_1^{x_2}$, calculate $E[g(X_1, X_2)]$.

Solution

Using the formula:

$$E[g(X_1, X_2)] = \sum_{x_1 \in \{1,2\}} \sum_{x_2 \in \{3,4\}} g(x_1, x_2)\, f_{X_1,X_2}(x_1, x_2)$$

$$= 1^3(0.10) + 1^4(0.70) + 2^3(0.15) + 2^4(0.05) = 0.10 + 0.70 + 1.20 + 0.80 = 2.80$$

Moments

Just as with univariate random variables, we use expectations to define moments. The first moment is defined as:

$$E[X] = [E(X_1), E(X_2)] = [\mu_1, \mu_2]$$

The second moment involves the covariance between the components $X_1$ and $X_2$, which appears in the variance of a sum:

$$\mathrm{Var}(X_1 + X_2) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + 2\,\mathrm{Cov}(X_1, X_2)$$

The covariance between $X_1$ and $X_2$ is defined as:

$$\mathrm{Cov}(X_1, X_2) = E[(X_1 - E[X_1])(X_2 - E[X_2])] = E[X_1 X_2] - E[X_1]E[X_2]$$

Note that $\mathrm{Cov}(X_1, X_1) = \mathrm{Var}(X_1)$, and that if $X_1$ and $X_2$ are independent, then $E[X_1 X_2] = E[X_1]E[X_2]$, so that:

$$\mathrm{Cov}(X_1, X_2) = E[X_1 X_2] - E[X_1]E[X_2] = E[X_1]E[X_2] - E[X_1]E[X_2] = 0$$

In practice, the correlation between $X_1$ and $X_2$ is usually reported rather than the covariance. Now let $\mathrm{Var}(X_1) = \sigma_1^2$, $\mathrm{Var}(X_2) = \sigma_2^2$, and $\mathrm{Cov}(X_1, X_2) = \sigma_{12}$. The correlation is then defined as:

$$\mathrm{Corr}(X_1, X_2) = \rho_{X_1 X_2} = \frac{\mathrm{Cov}(X_1, X_2)}{\sigma_1 \sigma_2} = \frac{\sigma_{12}}{\sqrt{\sigma_1^2}\sqrt{\sigma_2^2}}$$

Therefore, we can write the covariance in terms of the correlation:

$$\sigma_{12} = \rho_{X_1 X_2}\, \sigma_1 \sigma_2$$

Correlation measures the strength of the linear relationship between the two random variables, and it always lies between -1 and 1; that is, $-1 \le \rho_{X_1 X_2} \le 1$.
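The sketch below reproduces the $E[g(X_1, X_2)] = 2.80$ result and then computes the covariance and correlation of the same small PMF using $\mathrm{Cov} = E[X_1 X_2] - E[X_1]E[X_2]$. All names are illustrative.

```python
from math import sqrt

# PMF from the example: pmf[(x1, x2)] = probability
pmf = {(1, 3): 0.10, (2, 3): 0.15, (1, 4): 0.70, (2, 4): 0.05}

def expect(g):
    """E[g(X1, X2)] under the discrete joint PMF."""
    return sum(g(x1, x2) * p for (x1, x2), p in pmf.items())

print(expect(lambda x1, x2: x1 ** x2))   # 2.80

mu1 = expect(lambda x1, x2: x1)
mu2 = expect(lambda x1, x2: x2)
cov = expect(lambda x1, x2: x1 * x2) - mu1 * mu2
var1 = expect(lambda x1, x2: x1 ** 2) - mu1 ** 2
var2 = expect(lambda x1, x2: x2 ** 2) - mu2 ** 2
rho = cov / (sqrt(var1) * sqrt(var2))
print(cov, rho)   # about -0.10 and -0.577
```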
To see these bounds in action, suppose $X_2 = \alpha + \beta X_1$. Then:

$$\mathrm{Cov}(X_1, X_2) = \mathrm{Cov}(X_1, \alpha + \beta X_1) = \beta\, \mathrm{Var}(X_1)$$

But we know that $\mathrm{Var}(\alpha + \beta X_1) = \beta^2\, \mathrm{Var}(X_1)$. So:

$$\mathrm{Corr}(X_1, X_2) = \rho_{X_1 X_2} = \frac{\beta\, \mathrm{Var}(X_1)}{\sqrt{\mathrm{Var}(X_1)}\sqrt{\beta^2\, \mathrm{Var}(X_1)}} = \frac{\beta}{|\beta|}$$

It is now evident that if $\beta > 0$, then $\rho_{X_1 X_2} = +1$, and if $\beta < 0$, then $\rho_{X_1 X_2} = -1$. (If $\beta = 0$, $X_2$ is constant and the correlation is undefined.)

Similarly, consider two linearly transformed random variables $a + bX_1$ and $c + dX_2$. Then:

$$\mathrm{Cov}(a + bX_1, c + dX_2) = bd\, \mathrm{Cov}(X_1, X_2)$$

This implies that the scale factor in each random variable multiplicatively affects the covariance, while the additive shifts have no effect. Using the above results, the corresponding correlation of $aX_1$ and $bX_2$ is given by:

$$\mathrm{Corr}(aX_1, bX_2) = \frac{ab\, \mathrm{Cov}(X_1, X_2)}{\sqrt{a^2\, \mathrm{Var}(X_1)}\sqrt{b^2\, \mathrm{Var}(X_2)}} = \frac{ab}{|a||b|} \cdot \frac{\mathrm{Cov}(X_1, X_2)}{\sqrt{\mathrm{Var}(X_1)}\sqrt{\mathrm{Var}(X_2)}} = \frac{ab}{|a||b|}\, \rho_{X_1 X_2}$$

Application of Correlation: Portfolio Variance and Hedging

The variances of the underlying securities and their correlation are the necessary ingredients for determining the variance of a portfolio of securities. Assume we have two securities whose random returns are $X_A$ and $X_B$, with means $\mu_A$ and $\mu_B$, standard deviations $\sigma_A$ and $\sigma_B$, and correlation $\rho_{AB}$ between them. The variance of $X_A + X_B$ can then be computed as:

$$\sigma_{A+B}^2 = \sigma_A^2 + \sigma_B^2 + 2\rho_{AB}\,\sigma_A \sigma_B$$

If both securities have equal variance, $\sigma_A^2 = \sigma_B^2 = \sigma^2$, the equation simplifies to:

$$\sigma_{A+B}^2 = 2\sigma^2(1 + \rho_{AB})$$

If, in addition, the correlation between the two securities is zero, the standard deviation simplifies further:

$$\rho_{AB} = 0 \;\Rightarrow\; \sigma_{A+B} = \sqrt{2}\,\sigma$$

For any number of variables, with

$$Y = \sum_{i=1}^{n} X_i$$

we have:

$$\sigma_Y^2 = \sum_{i=1}^{n}\sum_{j=1}^{n} \rho_{ij}\,\sigma_i \sigma_j$$

In case all the $X_i$'s are uncorrelated and all variances are equal to $\sigma^2$, then:

$$\sigma_Y = \sqrt{n}\,\sigma \quad \text{if } \rho_{ij} = 0 \;\; \forall\, i \ne j$$

This is what is called the square-root rule for the addition of uncorrelated variables.

Now suppose that $Y$, $X_A$, and $X_B$ are such that $Y = aX_A + bX_B$. With our standard notation, we have:

$$\sigma_Y^2 = a^2\sigma_A^2 + b^2\sigma_B^2 + 2ab\,\rho_{AB}\,\sigma_A\sigma_B \qquad \text{(Eq. 1)}$$

Correlation is the crucial input in hedging. Suppose we hold \$1 of security A and hedge it with \$h of another security B; $h$ is the hedge ratio. The value of the hedged portfolio, $P$, is a new random variable, and its variance follows directly from Eq. 1:

$$P = X_A + hX_B$$
$$\sigma_P^2 = \sigma_A^2 + h^2\sigma_B^2 + 2h\rho_{AB}\,\sigma_A\sigma_B$$

The minimum-variance hedge ratio is found by taking the derivative of the portfolio variance with respect to $h$ and setting it equal to zero:

$$\frac{d\sigma_P^2}{dh} = 2h\sigma_B^2 + 2\rho_{AB}\,\sigma_A\sigma_B = 0 \;\Rightarrow\; h^* = -\rho_{AB}\,\frac{\sigma_A}{\sigma_B}$$

To determine the minimum variance achievable, we substitute $h^*$ into the original equation:

$$\min[\sigma_P^2] = \sigma_A^2\,(1 - \rho_{AB}^2)$$

The Covariance Matrix

The covariance matrix of a bivariate random variable $X$ is the 2x2 matrix that displays the variances and covariance of the components of $X$:

$$\mathrm{Cov}(X) = \begin{bmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{bmatrix}$$

The Variance of Sums of Random Variables

The variance of the sum of two random variables is given by:

$$\mathrm{Var}(X_1 + X_2) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) + 2\,\mathrm{Cov}(X_1, X_2)$$

If the random variables are independent, then $\mathrm{Cov}(X_1, X_2) = 0$ and thus:

$$\mathrm{Var}(X_1 + X_2) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2)$$

For a weighted sum of random variables, the variance is given by:

$$\mathrm{Var}(aX_1 + bX_2) = a^2\,\mathrm{Var}(X_1) + b^2\,\mathrm{Var}(X_2) + 2ab\,\mathrm{Cov}(X_1, X_2)$$
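Tying Eq. 1 and the minimum-variance hedge together, the sketch below evaluates the hedged-portfolio variance, solves for $h^*$, and checks the result against $\sigma_A^2(1 - \rho_{AB}^2)$. The parameter values are hypothetical.

```python
def portfolio_variance(h, sigma_a, sigma_b, rho):
    """Variance of P = X_A + h * X_B (Eq. 1 with a = 1, b = h)."""
    return sigma_a**2 + h**2 * sigma_b**2 + 2 * h * rho * sigma_a * sigma_b

# Hypothetical inputs: sigma_A = 20%, sigma_B = 15%, correlation 0.6
sigma_a, sigma_b, rho = 0.20, 0.15, 0.6

h_star = -rho * sigma_a / sigma_b          # minimum-variance hedge ratio
min_var = portfolio_variance(h_star, sigma_a, sigma_b, rho)

print(h_star)                     # -0.8
print(min_var)                    # 0.0256
print(sigma_a**2 * (1 - rho**2))  # 0.0256 -- matches the closed form
```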
Conditional Expectation

A conditional expectation is simply the mean calculated after a set of prior conditions has occurred. It is the value that a random variable takes "on average" over an arbitrarily large number of occurrences, given the occurrence of a certain set of conditions. A conditional expectation uses the same expression as any other expectation: it is a weighted average in which the probabilities come from a conditional PMF. For a discrete random variable, the conditional expectation is given by:

$$E(X_1 \mid X_2 = x_2) = \sum_i x_{1i}\, f_{X_1 \mid X_2}(x_{1i} \mid X_2 = x_2)$$

Example: Calculating the Conditional Expectation

In the bond return/rating example, we may wish to calculate the expected return on the bond given a positive analyst rating, i.e., $E(X_1 \mid X_2 = 1)$. If you recall, the conditional distribution is as follows:

| Return ($X_1$) | $-10\%$ | $0\%$ | $10\%$ |
|---|---|---|---|
| $P(X_1 = x_1 \mid X_2 = +1)$ | 12.5% | 12.5% | 75% |

The conditional expectation of the return is determined as follows:

$$E(X_1 \mid X_2 = 1) = -0.10 \times 0.125 + 0 \times 0.125 + 0.10 \times 0.75 = 0.0625 = 6.25\%$$

Conditional Variance

We can calculate the conditional variance by substituting the expectations in the variance formula with conditional expectations. We know that:

$$\mathrm{Var}(X_1) = E[(X_1 - E(X_1))^2] = E(X_1^2) - [E(X_1)]^2$$

The conditional variance of $X_1$ given $X_2$ is then:

$$\mathrm{Var}(X_1 \mid X_2 = x_2) = E(X_1^2 \mid X_2 = x_2) - [E(X_1 \mid X_2 = x_2)]^2$$

Returning to our example above, with $E(X_1 \mid X_2 = 1) = 0.0625$, we need to calculate:

$$E(X_1^2 \mid X_2 = 1) = (-0.10)^2 \times 0.125 + 0^2 \times 0.125 + 0.10^2 \times 0.75 = 0.00875$$

So that:

$$\mathrm{Var}(X_1 \mid X_2 = 1) = 0.00875 - (0.0625)^2 = 0.004844 \;(\approx 0.484\%)$$

If we wish to find the standard deviation of the returns, we just take the square root of the variance:

$$\sigma_{X_1 \mid X_2 = 1} = \sqrt{0.004844} = 0.0696 = 6.96\%$$
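A short sketch reproducing both conditional moments from the conditional PMF above:

```python
from math import sqrt

returns = [-0.10, 0.00, 0.10]
cond_pmf = [0.125, 0.125, 0.75]   # P(X1 = x1 | X2 = +1)

cond_mean = sum(r * p for r, p in zip(returns, cond_pmf))
cond_var = sum(r**2 * p for r, p in zip(returns, cond_pmf)) - cond_mean**2

print(cond_mean)        # 0.0625     (6.25%)
print(cond_var)         # 0.00484375 (text rounds to 0.004844)
print(sqrt(cond_var))   # ~0.0696    (6.96%)
```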
Continuous Random Variables

Before we continue, it is essential to note that continuous random variables use the same concepts and methodologies as discrete random variables. The main distinguishing factor is that continuous random variables use PDFs instead of PMFs.

The Joint PDF

The joint (bivariate) density gives the probability that the pair $(X_1, X_2)$ takes values in a stated region. For a rectangular region:

$$P(a < X_1 < b,\; c < X_2 < d) = \int_a^b \int_c^d f_{X_1,X_2}(x_1, x_2)\, dx_2\, dx_1$$

The joint PDF is always non-negative, and integrating it over the whole support yields 1. That is:

$$f_{X_1,X_2}(x_1, x_2) \ge 0 \quad \text{and} \quad \iint f_{X_1,X_2}(x_1, x_2)\, dx_1\, dx_2 = 1$$

Example: Calculating the Joint Probability

Assume that the random variables $X_1$ and $X_2$ are jointly distributed as:

$$f_{X_1,X_2}(x_1, x_2) = k(x_1 + 3x_2), \quad 0 < x_1 < 2,\; 0 < x_2 < 2$$

Calculate the probability $P(X_1 < 1, X_2 > 1)$.

Solution

We first need to calculate the value of $k$, using the requirement that the density integrates to 1:

$$\int_0^2 \int_0^2 k(x_1 + 3x_2)\, dx_1\, dx_2 = \int_0^2 k\left[\tfrac{1}{2}x_1^2 + 3x_1 x_2\right]_0^2 dx_2 = \int_0^2 k(2 + 6x_2)\, dx_2 = k\left[2x_2 + 3x_2^2\right]_0^2 = 16k = 1$$

$$\Rightarrow k = \frac{1}{16}, \quad \text{so} \quad f_{X_1,X_2}(x_1, x_2) = \frac{1}{16}(x_1 + 3x_2)$$

Therefore:

$$P(X_1 < 1, X_2 > 1) = \int_1^2 \int_0^1 \frac{1}{16}(x_1 + 3x_2)\, dx_1\, dx_2 = 0.3125$$

Joint Cumulative Distribution Function (CDF)

The joint cumulative distribution is given by:

$$F_{X_1,X_2}(x_1, x_2) = P(X_1 \le x_1, X_2 \le x_2) = \int_{-\infty}^{x_1} \int_{-\infty}^{x_2} f_{X_1,X_2}(t_1, t_2)\, dt_2\, dt_1$$

Note that the lower bound of each integral can be adjusted upward to the lower end of the support. Using the example above, we can calculate $F_{X_1,X_2}(1, 1)$ in a similar way.

The Marginal Distributions

For continuous random variables, the marginal distribution is given by:

$$f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1, x_2)\, dx_2$$

Similarly,

$$f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1, x_2)\, dx_1$$

Note that to find the marginal distribution of $X_1$ we integrate $X_2$ out, and vice versa.

Example: Computing the Marginal Distribution

Consider the example above. We have:

$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{16}(x_1 + 3x_2), \quad 0 < x_1 < 2,\; 0 < x_2 < 2$$

We wish to find the marginal distribution of $X_1$, which means we need to integrate out $X_2$. So:

$$f_{X_1}(x_1) = \int_0^2 \frac{1}{16}(x_1 + 3x_2)\, dx_2 = \frac{1}{16}\left[x_1 x_2 + \frac{3}{2}x_2^2\right]_0^2 = \frac{1}{16}(2x_1 + 6) = \frac{1}{8}(x_1 + 3)$$

Note that we can calculate $f_{X_2}(x_2)$ in a similar manner.

Conditional Distributions

The conditional distribution is defined analogously to the discrete case. That is:

$$f_{X_1 \mid X_2}(x_1 \mid X_2 = x_2) = \frac{f_{X_1,X_2}(x_1, x_2)}{f_{X_2}(x_2)}$$

Conditional distributions are applied in areas of finance such as risk management. For instance, we may wish to compute the conditional distribution of interest rates $X_1$ given that investors experience a large loss on some other variable $X_2$.
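As a quick numerical cross-check of the continuous example, the sketch below approximates the normalization integral and $P(X_1 < 1, X_2 > 1)$ with a midpoint-rule double integral (pure Python, no libraries assumed).

```python
def dbl_midpoint(f, a, b, c, d, n=400):
    """Midpoint-rule approximation of the integral of f(x1, x2)
    over the rectangle [a, b] x [c, d]."""
    hx, hy = (b - a) / n, (d - c) / n
    total = 0.0
    for i in range(n):
        x1 = a + (i + 0.5) * hx
        for j in range(n):
            x2 = c + (j + 0.5) * hy
            total += f(x1, x2)
    return total * hx * hy

f = lambda x1, x2: (x1 + 3 * x2) / 16  # density with k = 1/16

print(dbl_midpoint(f, 0, 2, 0, 2))     # ~1.0    -- confirms k = 1/16
print(dbl_midpoint(f, 0, 1, 1, 2))     # ~0.3125 -- P(X1 < 1, X2 > 1)
```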
Independent, Identically Distributed (IID) Random Variables

A collection of random variables is independent and identically distributed (iid) if each random variable has the same probability distribution as the others and all are mutually independent.

Example: Consider successive throws of a fair coin:

- The coin has no memory, so the throws are independent.
- The probability of heads versus tails is 50:50 on every throw, and the distribution from which every throw is drawn stays the same, so each outcome is identically distributed.

iid variables are widely used in time series analysis.

Mean and Variance of iid Variables

Consider iid variables generated by a normal distribution. They are typically written as:

$$X_i \overset{iid}{\sim} N(\mu, \sigma^2)$$

The expected value of the sum of these iid variables is given by:

$$E\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} E(X_i) = \sum_{i=1}^{n} \mu = n\mu$$

where $E(X_i) = \mu$. This result holds because the variables are independent and share the same moments. By the same reasoning, the variance of a sum of iid random variables is:

$$\mathrm{Var}\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} \mathrm{Var}(X_i) + 2\sum_{j=1}^{n}\sum_{k=j+1}^{n} \mathrm{Cov}(X_j, X_k) = \sum_{i=1}^{n} \sigma^2 + 0 = n\sigma^2$$

The independence property is important because there is a difference between the variance of a sum of multiple random variables and the variance of a multiple of a single random variable. If $X_1$ and $X_2$ are iid with variance $\sigma^2$, then:

$$\mathrm{Var}(X_1 + X_2) = \mathrm{Var}(X_1) + \mathrm{Var}(X_2) = \sigma^2 + \sigma^2 = 2\sigma^2$$

whereas for a multiple of a single variable $X_1$ with variance $\sigma^2$:

$$\mathrm{Var}(2X_1) = 4\,\mathrm{Var}(X_1) = 4\sigma^2 \ne \mathrm{Var}(X_1 + X_2)$$

Practice Question

A company is reviewing fire damage claims under a comprehensive business insurance policy. Let X be the portion of a claim representing damage to inventory and let Y be the portion of the same claim representing damage to the rest of the property. The joint density function of X and Y is:

$$f(x, y) = \begin{cases} 6[1 - (x + y)], & x > 0,\; y > 0,\; x + y < 1 \\ 0, & \text{elsewhere} \end{cases}$$

What is the probability that the portion of a claim representing damage to the rest of the property is less than 0.3?

A. 0.657
B. 0.450
C. 0.415
D. 0.752

The correct answer is A.

First, we should find the marginal PDF of Y:

$$f_Y(y) = \int_0^{1-y} 6[1 - (x + y)]\, dx = 6\left[x - \frac{x^2}{2} - xy\right]_0^{1-y}$$

Substituting the limits as usual gives:

$$6\left[(1-y) - \frac{(1-y)^2}{2} - y(1-y)\right]$$

At this point we can factor out $(1-y)$ and simplify what remains in the square bracket:

$$6(1-y)\left[1 - \frac{1-y}{2} - y\right] = 6(1-y)\left[\frac{1-y}{2}\right] = 3(1-y)^2 = 3 - 6y + 3y^2$$

So,

$$f_Y(y) = 3 - 6y + 3y^2, \quad 0 < y < 1$$

We need $P(Y < 0.3)$. So:

$$P(Y < 0.3) = \int_0^{0.3} (3 - 6y + 3y^2)\, dy = 0.9 - 0.27 + 0.027 = 0.657$$
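For completeness, a one-dimensional midpoint-rule check of the answer, in the same pure-Python style as the earlier sketch:

```python
def midpoint(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f_y = lambda y: 3 - 6 * y + 3 * y ** 2    # marginal PDF of Y
print(round(midpoint(f_y, 0.0, 0.3), 4))  # 0.657
```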
