Questions and Answers
A discrete random variable X can take on values $x_1, x_2, x_3,...$. Which of the following conditions must be satisfied by its probability density function (PDF), f(x)?
- f(x) ≥ 0 for all x, and the sum of f(x) over all possible values of x equals 1. (correct)
- f(x) > 0 for all x, and the sum of f(x) over all possible values of x is less than 1.
- f(x) can be negative for some x, as long as the sum of f(x) over all possible values of x equals 1.
- f(x) ≥ 1 for at least one x, and the sum of f(x) over all possible values of x equals infinity.
Consider a scenario where you are analyzing the number of heads obtained in two independent coin tosses. Let X be a random variable representing the number of heads. What constitutes the sample space Ω for X?
- Ω = {HH, HT, TH, TT}
- Ω = {0, 1, 2} (correct)
- Ω = {1/4, 1/2, 3/4, 1}
- Ω = {0, 1}
Which of the following scenarios is best modeled by a Binomial distribution?
- The probability of one team winning a sports tournament
- The number of cars passing a certain point on a highway in one hour.
- The time until the next customer arrives at a store.
- The number of defective items in a batch of 100, given a fixed probability of an item being defective. (correct)
Suppose X follows a Binomial distribution with parameters n (number of trials) and p (probability of success). If n = 1, what specific distribution does X follow?
Events arriving according to a Poisson Distribution have what key property?
The number of customers arriving at a help desk in one hour is modeled using a Poisson distribution. What key assumption must hold true?
Which of the following real-world scenarios can be appropriately modeled using a Poisson distribution?
If $Z_1, Z_2, ..., Z_n$ are independent Bernoulli random variables with the same probability of success p, what is the distribution of the sum $Z_1 + Z_2 + ... + Z_n$?
In the context of RVs (random variables) used for measuring durations, which field commonly uses them to analyze 'durations in unemployment'?
Which of the following is NOT a property of a cumulative distribution function (CDF), $F(x)$?
Given a continuous random variable $X$ with probability density function $f(x)$, how is the cumulative distribution function $F(x)$ defined?
For a discrete random variable, how is the cumulative distribution function (CDF) calculated?
Given a random variable $X$, the mean or expected value, $E[X]$, represents:
The variance of a random variable (RV) is described as:
Which of the following is true regarding the expected value (mean) and variance of a random variable?
For a continuous random variable X, how is the expected value E[X] calculated?
If a random variable Y has a moment generating function (MGF) denoted as $M_Y(\lambda)$, what does $\frac{d^i M_Y(\lambda)}{d\lambda^i}|_{\lambda=0}$ represent?
Suppose $Y$ is a random variable with moment generating function $M_Y(\lambda)$, and $g(Y) = aY + b$. What is the moment generating function of $g(Y)$, denoted as $M_{g(Y)}(\lambda)$?
If $X$ and $Y$ are independent random variables, and $Z = X + Y$, how is the moment generating function of $Z$, denoted as $M_Z(\lambda)$, related to the moment generating functions of $X$ and $Y$, $M_X(\lambda)$ and $M_Y(\lambda)$ respectively?
Random variables can be defined by which of the following?
Suppose Y follows an exponential distribution, $Y \sim Exp[\beta]$. Given its MGF, how can the mean and variance of Y be determined?
Given a random variable $X$ with PDF $f(x)$ for $a \le X \le b$, and a monotonic function $Y = h(X)$, what does the Change of Variable formula allow us to determine?
In the Change of Variable formula, $g(y) = f(x) \cdot |\frac{dx}{dy}|$, what is the significance of the absolute value?
Suppose a random variable $X$ has PDF $f(x)$ and distribution function $F(x)$. If $Y = h(X)$ where $h(X)$ is monotonic and increasing, how is the distribution function of $Y$, $G(y)$, related to $F(x)$?
If random variables W and Z are transformed to X = W − E[W] and Y = Z − E[Z], how does the correlation coefficient rWZ relate to rXY?
What key property characterizes the variance-covariance matrix of a random vector?
Given two random variables X and Y, if Cov(X, Y) = 0, what can be inferred about their relationship?
For random variables X and Y, if E[X] = 2, E[Y] = 3, E[XY] = 10, E[X^2] = 9 and E[Y^2] = 16, what is the covariance of X and Y?
Using the information from the previous question where E[X] = 2, E[Y] = 3, E[XY] = 10, E[X^2] = 9 and E[Y^2] = 16, what is the correlation coefficient between X and Y?
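The arithmetic behind this pair of questions can be checked directly; a minimal sketch using only the stated moments:

```python
import math

# Moments given in the question
EX, EY, EXY, EX2, EY2 = 2, 3, 10, 9, 16

cov = EXY - EX * EY              # Cov[X, Y] = E[XY] - E[X]E[Y] = 10 - 6
var_x = EX2 - EX ** 2            # V[X] = E[X^2] - E[X]^2 = 5
var_y = EY2 - EY ** 2            # V[Y] = 7
corr = cov / math.sqrt(var_x * var_y)

print(cov)              # 4
print(round(corr, 3))   # 0.676
```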
Given the joint PDF $f(x, y) = e^{-(x+y)}$ for $x ≥ 0, y ≥ 0$, what is the marginal PDF of X?
Given a joint PDF $f(x, y) = \frac{1}{(b-a)(d-c)}$ for $a ≤ X ≤ b$ and $c ≤ Y ≤ d$, describe the distribution of X and Y.
If the joint PDF of X and Y is given by $f(x, y) = \frac{3}{7} (x^2 + \frac{2}{3}xy)$ for $0 ≤ X ≤ 1$ and $1 ≤ Y ≤ 2$, are X and Y independent?
If random variables X and Y are statistically independent, which of the following statements is always true?
Given the joint PDF $f(x, y) = 8xy$ for $0 ≤ x ≤ 1$ and $0 ≤ y ≤ x$, which step is necessary to find the marginal density of X?
What does a covariance of zero between two random variables X and Y generally indicate?
If the joint PDF of random variables X and Y is given by $f(x, y)$, how is the conditional probability $Pr[c ≤ Y ≤ d | X = x]$ calculated?
Given the joint PDF $f(x,y)$, how do you determine if random variables X and Y are statistically independent?
Suppose E[X] = 2, E[Y] = 3, and E[XY] = 8. What is the covariance between X and Y, Cov[X, Y]?
What is the significance of the conditional PDF $fY|X(y|x)$?
Given random variables X and Y, and knowing that E[X] and E[Y] exist, which expression correctly represents Cov[X, Y]?
Which of the following is true regarding marginal PDFs?
Given a joint PDF $f(x, y)$, how is the marginal PDF of X, denoted as $f_X(x)$, calculated?
For a joint PDF $f(x, y)$, what does $\int_{a}^{b} f_X(x) dx$ represent?
If $f(x, y)$ is a joint PDF, which of the following relationships between marginal and joint PDFs is correct?
What condition must be met for the conditional PDF $f_{X|Y}(x|y)$ to be defined?
How is the conditional PDF $f_{Y|X}(y|x)$ defined in terms of the joint PDF $f(x, y)$ and the marginal PDF $f_X(x)$?
Which of the following statements is true regarding conditional PDFs?
Given the conditional PDF $f_{X|Y}(x|y)$, which of the following represents the probability that $a \le X \le b$ given that $Y = y$?
Flashcards
Bernoulli Random Variable
A random variable that can only take two values, typically 0 or 1.
Probability Density Function (PDF)
A function that gives the probability that a discrete random variable is exactly equal to some value.
Valid PDF Requirements
States that the PDF must be non-negative for all values and sum to 1 over all possible values.
Binomial Distribution
What does Binomial Distribution describe?
Assumptions of Binomial Distribution
Bernoulli RV as a Binomial Distribution
Poisson Distribution
Lifetime RVs
Cumulative Distribution Function (CDF)
CDF Properties
CDF Calculation
Interval Probability with CDF
Moments of a PDF
Expected Value (Mean)
Expected Value Calculation
Marginal PDFs
Calculating fX(x)
Calculating fY(y)
Purpose of Marginal PDF
Conditional PDFs
Formula for fX|Y(x|y)
Formula for fY|X(y|x)
Purpose of Conditional PDF
Moment Generating Function (MGF)
MGF of a Linear Transformation
MGF of Independent RV Sum
PDF and MGF Relationship
Change of Variable Formula
Change of Variable Formula (Equation)
Monotonic Function
CDF Transformation
rWZ = rXY
Variance-Covariance Matrix
Cov(X, Y)
Correlation Coefficient
Bivariate Uniform Distribution
Independent Random Variables
Joint PDF
Conditional Probability (Y|X)
Conditional PDF (fY|X(y|x))
Statistical Independence
Independence and Conditional PDF
Covariance (Cov[X, Y])
Covariance Formula
Independence and Covariance
Expected Value of Product (Independent RVs)
Study Notes
- Mathematical Economics and Econometrics (ECON1049) is the name of the course
- The course is in Semester 2
Probability: Random variables and their properties
- Introduction; Examples and Properties of Random Variables will be covered
- The normal family of distributions plus Chebychev's Inequality will be covered
- Bivariate Random Variables - Correlation and Independence are included in the syllabus
Statistics: Estimation and hypothesis testing
- Samples and Sampling Distributions are included
- Estimation and Hypothesis Testing will be covered
- Applications of Tests and Confidence Intervals form part of the syllabus
Econometrics - The linear regression model
- Introduction to the Linear Model and Ordinary Least Squares (OLS) will be discussed
- Properties of the OLS estimators will be covered
- Extensions to the multivariable model, including specification will be studied
Properties of Random Variables
- An experiment where the results vary each time it is repeated is called a random experiment
- A set with all possible outcomes of a random experiment is a sample space
- An event is a subset of the sample space
- A random (or stochastic) variable (RV) is a variable whose value is determined via a random experiment
- Random variables are either discrete or continuous
- Continuous variables can take any value in a given range
- Discrete variables are constrained to take particular values
Discrete Random Variables
- A Bernoulli random variable is the simplest example of a discrete RV
- The probability the variable takes on the value one describes the behavior of a Bernoulli random variable
- If the coin is fair, then the probability that X equals one (head) is one-half; Pr[X = 1] = 1/2
- The probability that a Bernoulli RV equals 1 is often denoted θ; in principle, θ can be any number between 0 and 1
- Pr[X = x_k] = f(x_k), k = 1, 2, ..., where X is a discrete random variable and x_k is a possible value
- The probability density function (PDF) is given by Pr[X = x] = f(x)
- For x = xk it becomes f (xk), while for other values of x, f(x) = 0
- For a discrete RV with sample space Ω, the PDF f(x) is valid if i) f(x) ≥ 0 for all x, and ii) ∑_{x∈Ω} f(x) = 1
The coin is tossed twice with the sample space S = {HH, HT,TH,TT}
- X represents the number of heads that can come up
- With HH (i.e., 2 heads ), X = 2 while for TH(1 head ), X = 1
- The objective is to find its PDF
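This PDF can be found by simple enumeration; a minimal sketch in Python:

```python
from itertools import product

# Enumerate the two-toss sample space and count heads in each outcome
outcomes = list(product("HT", repeat=2))  # HH, HT, TH, TT
pmf = {}
for o in outcomes:
    x = o.count("H")
    pmf[x] = pmf.get(x, 0) + 1 / len(outcomes)

print(pmf)  # {2: 0.25, 1: 0.5, 0: 0.25}
```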
Binomial Distribution if
- Pr[X = x] = f(x) = (n! / (x!(n − x)!)) p^x (1 − p)^{n−x}; Ω = {0, 1, 2, ..., n} and 0 < p < 1
- X ~ Bin [n, p] assigns probabilities to the number of successes in n independent trials with a prob of success p
- It is used in Decision/Game Theory and Economic phenomena where there is a discrete choice
- An important special case is the Bernoulli RV Bin [1, p]
- If each of Z₁, ..., Zₙ is Bin[1, p] (and independent of one another; independence is defined precisely later) then Z₁ + Z₂ + ... + Zₙ is Bin[n, p]
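A short sketch of the binomial PMF from the formula above (Python's `math.comb` supplies the binomial coefficient):

```python
from math import comb

def binom_pmf(x, n, p):
    # Pr[X = x] = n!/(x!(n-x)!) * p^x * (1-p)^(n-x)
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# Two fair coin tosses: X ~ Bin[2, 0.5]
pmf = [binom_pmf(x, 2, 0.5) for x in range(3)]
print(pmf)       # [0.25, 0.5, 0.25]
print(sum(pmf))  # 1.0 -- a valid PDF sums to one
```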
Poisson Distribution if
- Pr[X = x] = f(x) = e^{−μ} μ^x / x!; Ω = {0, 1, 2, ...} and μ > 0
- X ~ Po [μ] assigns probabilities to counts
- Examples: the number of people joining a queue in a given amount of time, or the number of people visiting a web page
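The Poisson PMF can be checked the same way; here μ = 3 is an arbitrary illustrative rate:

```python
import math

def poisson_pmf(x, mu):
    # Pr[X = x] = e^{-mu} * mu^x / x!
    return math.exp(-mu) * mu ** x / math.factorial(x)

mu = 3.0
probs = [poisson_pmf(x, mu) for x in range(50)]  # tail beyond 50 is negligible
print(round(sum(probs), 10))                     # 1.0
mean = sum(x * p for x, p in enumerate(probs))
print(round(mean, 6))                            # 3.0 -- E[X] = mu
```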
Continuous Random Variables
- With a discrete RV, the PDF governs how probabilities are assigned to individual values
- With a continuous RV X, it gives the probabilities that X lies between two different values
- If X is a continuous RV on Ω = (−∞, ∞), then the probability that X takes a value between a and b is given by its PDF f(x): Pr[a ≤ X ≤ b] = ∫_a^b f(x) dx for −∞ ≤ a ≤ b ≤ ∞
- The function f(x) has the properties: i) f(x) ≥ 0; ii) ∫_{−∞}^{∞} f(x) dx = 1
- probabilities for continuous RVs are only non-zero for intervals, not point outcomes
- Due to the number of possible values, a continuous random variable X can take on each value with probability zero
- Random variables that take on numerous values are best treated as continuous
Uniform Distribution if
- f (x) = 1/(b-a) ; Ω = [a, b]
- The notation is X ~ U[a, b]
Exponential Distribution if
- f(x) = (1/β) e^{−x/β}; Ω = [0, ∞) and β > 0
- The notation is X ~ Exp [β]
- Such RVs are used to measure lifetimes, such as durations in unemployment in Labour Economics or survival times in Health Economics
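A crude numerical check of the exponential density (β = 2 is an arbitrary illustrative value): the PDF integrates to 1 over its support, and the mean comes out equal to β.

```python
import math

beta = 2.0  # illustrative scale parameter

def exp_pdf(x):
    # f(x) = (1/beta) * e^{-x/beta} on [0, infinity)
    return (1.0 / beta) * math.exp(-x / beta)

# Left Riemann sums over [0, 50]; the tail beyond 50 is negligible for beta = 2
dx = 0.001
xs = [i * dx for i in range(int(50 / dx))]
total = sum(exp_pdf(x) * dx for x in xs)
mean = sum(x * exp_pdf(x) * dx for x in xs)
print(round(total, 3))  # ~1.0
print(round(mean, 2))   # ~2.0, i.e. E[X] = beta
```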
Cumulative Distribution Function
- The cumulative distribution function (CDF), or briefly the distribution function, for a random variable X is defined by F(x) = Pr(X ≤ x)
- x is any real number, i.e., -∞ < x < ∞
F(x) has the following properties
- F(x) is nondecreasing [i.e., F(x) ≤ F(y) if x ≤ y]
- lim_{x→−∞} F(x) = 0; lim_{x→∞} F(x) = 1
- F(x) is continuous from the right [i.e., lim_{h→0⁺} F(x + h) = F(x) for all x]
The CDF can be expressed as a function of the PDF, defined as
- F(x) = Pr[−∞ < X ≤ x] = ∫_{−∞}^{x} f(u) du (CONTINUOUS)
- F(x) = Pr[−∞ < X ≤ x] = ∑_{u ≤ x} f(u) (DISCRETE)
- For the continuous case, interval probabilities are Pr[a ≤ X ≤ b] = ∫_a^b f(x) dx = ∫_{−∞}^b f(x) dx − ∫_{−∞}^a f(x) dx = F(b) − F(a)
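For the exponential distribution this integral has the closed form F(x) = 1 − e^{−x/β}, so interval probabilities follow from F(b) − F(a); a small sketch (β, a, b are illustrative values):

```python
import math

beta = 2.0

def exp_cdf(x):
    # F(x) = integral of (1/beta) e^{-u/beta} from 0 to x = 1 - e^{-x/beta}
    return 1.0 - math.exp(-x / beta)

a, b = 1.0, 3.0
p = exp_cdf(b) - exp_cdf(a)   # Pr[a <= X <= b] = F(b) - F(a)
print(round(p, 4))            # 0.3834
```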
Moments
- Main characteristics of a PDF (and the RV it assigns probabilities for) can be summarized by moments
- The first moment is known as the mean/expected value
- The second moment relates to the spread or variance of the RV
- For a RV X its mean is a measure of the average value/location
- Variance is a measure of dispersion around that mean
- The mean and variance are not RVs, but fixed numbers that are functions of the PDF
Expected Value
- The expected value or mean of a RV X, E[X] (or μ), is defined as E[X] = ∫_{−∞}^{∞} x f(x) dx (CONTINUOUS); E[X] = ∑_x x f(x) (DISCRETE)
- If g(X) is a continuous function of X then we define E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx or E[g(X)] = ∑_x g(x) f(x)
- If a and b are constants: E[aX + b] = ∫_{−∞}^{∞} (ax + b) f(x) dx = a ∫_{−∞}^{∞} x f(x) dx + b ∫_{−∞}^{∞} f(x) dx = aE[X] + b, which is also true if X is a discrete RV
- If the continuous RV X has a PDF which is symmetric around a point a, then E [X] = a
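These definitions are easy to verify on the two-coin-toss RV from earlier; a minimal sketch of E[g(X)] for a discrete RV, confirming E[aX + b] = aE[X] + b:

```python
# PDF of the number of heads in two fair tosses
f = {0: 0.25, 1: 0.5, 2: 0.25}

def expect(g):
    # E[g(X)] = sum over x of g(x) * f(x)
    return sum(g(x) * p for x, p in f.items())

EX = expect(lambda x: x)
a, b = 3, 2
print(EX)                            # 1.0
print(expect(lambda x: a * x + b))   # 5.0 = a*E[X] + b
```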
Variance
- The variance, denoted V [X] (or σ²X), is defined as V [X] = E [(X – E [X])²]
- The variance is a nonnegative number
- The positive square root of the variance is called the standard deviation: σ_X = √Var[X] = √(E[(X − E[X])²])
- Note that the variance can be rewritten as E[(X − E[X])²] = E[X² − 2E[X]X + E[X]²] = E[X²] − 2E[X]E[X] + E[X]² = E[X²] − E[X]²
- If a and b are any constants, V(aX + b) = a² V(X)
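Both forms of the variance, and the rule V[aX + b] = a²V[X], can be confirmed on the same discrete example:

```python
f = {0: 0.25, 1: 0.5, 2: 0.25}  # heads in two fair tosses

def expect(g):
    return sum(g(x) * p for x, p in f.items())

EX = expect(lambda x: x)
var_def = expect(lambda x: (x - EX) ** 2)    # E[(X - E[X])^2]
var_alt = expect(lambda x: x * x) - EX ** 2  # E[X^2] - E[X]^2
print(var_def, var_alt)                      # 0.5 0.5

a, b = 3, 2
Eg = expect(lambda x: a * x + b)
var_g = expect(lambda x: (a * x + b - Eg) ** 2)
print(var_g)                                 # 4.5 = a^2 * V[X]
```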
Moments
- The rth moment of X about the origin, also called the rth raw moment (or just rth moment), is defined as μ′_r = E[X^r]
- The rth moment of a random variable X about the mean μ, also called the rth central moment, is defined as μ_r = E[(X − μ)^r]
- Expected value is the first moment while variance is the second central moment
- Higher order moments can be calculated to define other features of the distribution of a RV
- The third moment of the standardized version of a RV is viewed as a measure of skewness
- The fourth moment of the standardized version of a RV is called a measure of kurtosis
Moment Generating Functions
- Moments of even simple RVs can be difficult to calculate, especially the higher order moments
- The mean of the RV e^{λY}, which is a function of the RV Y (with PDF f(y)), is E[e^{λY}] = ∫_{−∞}^{∞} e^{λy} f(y) dy; this is a function of λ, which we denote M_Y(λ)
- Note the exponential series: e^{λy} = ∑_{i=0}^{∞} (λy)^i / i! = 1 + λy + (λy)²/2! + ... + (λy)^i/i! + ...
- M_Y(λ) = ∫_{−∞}^{∞} e^{λy} f(y) dy = ∫_{−∞}^{∞} (1 + λy + (λy)²/2! + ... ) f(y) dy = 1 + λE[Y] + (λ²/2!)E[Y²] + ... + (λ^i/i!)E[Y^i] + ...
- So if we take the ith derivative (every term to the left of the E[Y^i] term disappears) and evaluate at λ = 0 (every term to the right evaluates to zero), we get d^i M_Y(λ)/dλ^i |_{λ=0} = E[Y^i]
- The moment generating function (MGF) of Y is M_Y(λ)
The MGF satisfies two very important properties:
- (i) Let Y be a random variable having MGF M_Y(λ); then for g(Y) = aY + b, M_{g(Y)}(λ) = e^{bλ} M_Y(aλ)
- (ii) If X and Y are independent and Z = X + Y, then M_Z(λ) = M_X(λ) M_Y(λ), i.e. if we add independent RVs together the resulting MGF is the product of the MGFs of the individual RVs
- The MGF is a fundamental characteristic of an RV
- There is a one-to-one correspondence between the PDF and MGF
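The moment-generating property can be checked numerically. For Y ~ Exp[β] the MGF is M_Y(λ) = 1/(1 − βλ) for λ < 1/β (a standard result); finite-difference derivatives at λ = 0 then recover E[Y] = β and E[Y²] = 2β²:

```python
beta = 2.0
M = lambda lam: 1.0 / (1.0 - beta * lam)  # MGF of Exp[beta], valid for lam < 1/beta

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)              # ~ M'(0)  = E[Y]   = beta
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h ** 2  # ~ M''(0) = E[Y^2] = 2*beta^2
print(round(m1, 4))            # ~2.0
print(round(m2 - m1 ** 2, 2))  # ~4.0, the variance beta^2
```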
The Change of Variable Formula
- It is useful in properties of functions of RVs
- Suppose that a RV X has PDF f (x) for a ≤ X ≤ b and distribution function F (x) and let Y = h (X) where h (.) is a monotonic (whether increasing or decreasing) function
- Letting g(y) and G(y) denote the density and distribution functions of Y, the Change of Variable formula states: g(y) = f(x) · |dx/dy|, where x = h⁻¹(y)
- Assume first that h(·) is increasing (dh(x)/dx > 0); then h(a) ≤ Y ≤ h(b)
- The distribution function is G(y) = Pr[Y ≤ y] = Pr[h(X) ≤ y] = Pr[X ≤ h⁻¹(y)] = F(h⁻¹(y)) = F(x), where x = h⁻¹(y)
- To get the density, differentiate: g(y) = d/dy G(y) = d/dx F(x) · dx/dy = f(x) dx/dy
- If instead h(·) is decreasing, then Pr[h(X) ≤ y] = Pr[X ≥ h⁻¹(y)], so G(y) = 1 − F(x) and g(y) = d/dy G(y) = d/dx (1 − F(x)) · dx/dy = −f(x) dx/dy, which is nonnegative since dx/dy < 0
- Bringing both results together: g(y) = f(x) |dx/dy|
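A worked example of the formula: take X ~ U[0, 1] (so f(x) = 1) and the increasing transform Y = X². Then x = √y and dx/dy = 1/(2√y), so g(y) = 1/(2√y) on (0, 1]; a crude midpoint sum confirms that g integrates to 1:

```python
import math

def g(y):
    # Change of variable: f(x) * |dx/dy| with f = 1 and x = sqrt(y)
    return 1.0 / (2.0 * math.sqrt(y))

dy = 1e-6
total = sum(g((i + 0.5) * dy) * dy for i in range(1_000_000))
print(round(total, 3))  # close to 1.0
```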
The Normal Family of RVs
- A continuous RV X has a normal (or Gaussian) distribution if its PDF has the form f(x) = (1/√(2πσ²)) exp{−(x − μ)²/(2σ²)}; −∞ < x < ∞
- Then this RV X is a normal random variable, denoted by X ~ Ν(μ,σ²)
- The distribution is completely characterized by these two parameters, which satisfy E[X] = ∫_{−∞}^{∞} x f(x) dx = μ and V[X] = ∫_{−∞}^{∞} (x − μ)² f(x) dx = σ²
CDF is often written as
- Pr[X ≤ x] = Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−u²/2} du
- The density function for the normal distribution is bell shaped and symmetric about μ. The standard normal RV has mean 0 and variance 1, and is often denoted Z ~ N(0, 1).
Main properties:
- Pr[|X − μ| > σ] = 0.3173, Pr[|X − μ| > 2σ] = 0.0455, etc.
- For the standard normal RV Z, a useful expression for any c > 0: Pr[|Z| > c] = Pr[Z > c] + Pr[Z < −c] = 2Pr[Z > c] = 2[1 − Φ(c)]
- If X ~ N(μ, σ²) then aX + b is also normally distributed: aX + b ~ N(aμ + b, a²σ²)
- We already calculated E[aX + b] = aE[X] + b and V[aX + b] = a²V[X]; the result here is much stronger, in that it says a linear function of a normal RV is itself normal
- If X₁ ~ N(μ₁, σ₁²) and X₂ ~ N(μ₂, σ₂²) are independent, then Y = a₁X₁ + a₂X₂ ~ N(a₁μ₁ + a₂μ₂, a₁²σ₁² + a₂²σ₂²): linear combinations of independent normal RVs are normal
- If X ~ N(μ, σ²), then Z = (X − μ)/σ ~ N(0, 1), so Pr[a < X < b] = Pr[(a − μ)/σ < Z < (b − μ)/σ]
- In summary: probabilities for any normal RV can be computed from a table of N(0, 1) probabilities
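The tail probabilities quoted above can be reproduced without tables using Python's `math.erf`, since Φ(x) = (1 + erf(x/√2))/2:

```python
import math

def Phi(x):
    # Standard normal CDF in terms of the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Pr[|X - mu| > k*sigma] = 2*(1 - Phi(k)) by symmetry
print(round(2 * (1 - Phi(1)), 4))  # 0.3173
print(round(2 * (1 - Phi(2)), 4))  # 0.0455
```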
The Chi-Square Distribution
- Let Xᵢ ~ N(0, 1) for i = 1, ..., n, with the Xᵢ independent
- Define Q(n) = ∑_{i=1}^{n} Xᵢ²
- Then Q(n) has the chi-square distribution with n degrees of freedom, written Q(n) ~ χ²(n)
Properties of this Distribution:
- The distribution is not symmetric; there is a different PDF for each value of n (the degrees of freedom)
- E[Q(n)] = n
- Sums of independent chi-square RVs are chi-square: if Q(n₁) and Q(n₂) are independent, then Q(n₁) + Q(n₂) ~ χ²(n₁ + n₂)
- Tables of χ² probabilities are needed separately for each value of n
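The property E[Q(n)] = n is easy to confirm by simulation (n and the replication count here are arbitrary illustrative choices):

```python
import random

random.seed(0)
n, reps = 5, 100_000

# Q(n): sum of squares of n independent N(0,1) draws
mean = sum(
    sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(reps)
) / reps
print(round(mean, 1))  # ~5.0, i.e. E[Q(n)] = n
```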
The Student t Distribution
- Let Z ~ N(0, 1) and Q(n) ~ χ²(n), with Z and Q(n) independent
- Then T = Z / √(Q(n)/n) has the Student t distribution with n degrees of freedom, written T ~ t(n)
Properties:
- The density is symmetric about zero but has fatter tails than the N(0, 1) density
- As n → ∞, the t(n) distribution approaches the N(0, 1) distribution
- [Figure: the t density for various degrees of freedom, with a tail area shaded]
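The fatter tails show up in the variance: for T ~ t(n) with n > 2, V[T] = n/(n − 2) > 1 (a standard result). A simulation sketch with n = 10:

```python
import math
import random
import statistics

random.seed(1)
n, reps = 10, 50_000

ts = []
for _ in range(reps):
    z = random.gauss(0, 1)                              # Z ~ N(0, 1)
    q = sum(random.gauss(0, 1) ** 2 for _ in range(n))  # Q(n) ~ chi-square(n)
    ts.append(z / math.sqrt(q / n))                     # T ~ t(n)

print(round(statistics.variance(ts), 2))  # close to n/(n-2) = 1.25
```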
The F Distribution
- Let Q₁ ~ χ²(n₁) and Q₂ ~ χ²(n₂) be independent
- Then F = (Q₁/n₁) / (Q₂/n₂) has the F distribution with (n₁, n₂) degrees of freedom, written F ~ F(n₁, n₂)
Properties:
- F takes only nonnegative values and its density is skewed to the right; there is a different distribution for each pair of degrees of freedom (n₁, n₂)
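Similarly, E[F] = n₂/(n₂ − 2) for n₂ > 2 (a standard result) can be checked by simulating the ratio of scaled chi-squares:

```python
import random

random.seed(2)
n1, n2, reps = 4, 10, 50_000

def chi2(n):
    # Sum of n squared independent N(0,1) draws
    return sum(random.gauss(0, 1) ** 2 for _ in range(n))

fs = [(chi2(n1) / n1) / (chi2(n2) / n2) for _ in range(reps)]
mean = sum(fs) / reps
print(round(mean, 2))  # close to n2/(n2-2) = 1.25
```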
Chebyshev's Inequality
- For any RV X with PDF f(x), mean μ and variance σ², Pr[|X − μ| ≥ kσ] ≤ 1/k² for any k > 0
- The inequality holds whatever the distribution of X, so it gives bounds on probabilities when only the mean and variance are known
- This makes it particularly useful in the statistics sections that follow
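An empirical sketch of the bound, using an exponential RV (mean 1, variance 1) and k = 2; the observed tail frequency sits well under the Chebyshev bound of 1/k² = 0.25:

```python
import random

random.seed(3)
samples = [random.expovariate(1.0) for _ in range(100_000)]  # Exp with mean 1, var 1
mu, sigma, k = 1.0, 1.0, 2.0

frac = sum(abs(x - mu) > k * sigma for x in samples) / len(samples)
bound = 1 / k ** 2
print(round(frac, 3), bound)  # observed tail frequency vs 0.25
print(frac <= bound)          # True
```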
Bivariate RVs
- Let X and Y be continuous RVs with joint PDF f(x, y), so that Pr[a ≤ X ≤ b, c ≤ Y ≤ d] = ∫_a^b ∫_c^d f(x, y) dy dx
- Such multiple integrals can be evaluated by integrating with respect to one variable at a time
Marginal PDFs
- The marginal PDFs are obtained by integrating out the other variable: f_X(x) = ∫_{−∞}^{∞} f(x, y) dy and f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx
Statistical Independence
- X and Y are statistically independent if and only if f(x, y) = f_X(x) f_Y(y) for all x and y
Conditional PDFs
- The conditional PDF of Y given X = x is f_{Y|X}(y|x) = f(x, y) / f_X(x), defined wherever f_X(x) > 0
Covariance and Correlation
- The covariance is Cov[X, Y] = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y]
- The correlation coefficient is r_{XY} = Cov[X, Y] / (σ_X σ_Y)
- If X and Y are independent, then E[XY] = E[X]E[Y] and so Cov[X, Y] = 0; the converse does not hold in general
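A discrete sketch ties the bivariate definitions together. The joint PMF below is hypothetical, built as a product of marginals, so the code confirms both the independence factorization and the resulting zero covariance:

```python
# Hypothetical joint PMF on {0,1} x {0,1}; each cell equals fX(x)*fY(y) by construction
joint = {(0, 0): 0.12, (0, 1): 0.28, (1, 0): 0.18, (1, 1): 0.42}

# Marginals: sum the joint PMF over the other variable
fX = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
fY = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

# Independence: f(x, y) = fX(x) * fY(y) in every cell
indep = all(abs(joint[x, y] - fX[x] * fY[y]) < 1e-12 for (x, y) in joint)

EX = sum(x * p for x, p in fX.items())
EY = sum(y * p for y, p in fY.items())
EXY = sum(x * y * p for (x, y), p in joint.items())
cov = EXY - EX * EY

print(indep)             # True
print(abs(cov) < 1e-12)  # True: independence implies zero covariance
```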