Test

Questions and Answers

A discrete random variable X can take on values $x_1, x_2, x_3,...$. Which of the following conditions must be satisfied by its probability density function (PDF), f(x)?

  • f(x) ≥ 0 for all x, and the sum of f(x) over all possible values of x equals 1. (correct)
  • f(x) > 0 for all x, and the sum of f(x) over all possible values of x is less than 1.
  • f(x) can be negative for some x, as long as the sum of f(x) over all possible values of x equals 1.
  • f(x) ≥ 1 for at least one x, and the sum of f(x) over all possible values of x equals infinity.

Consider a scenario where you are analyzing the number of heads obtained in two independent coin tosses. Let X be a random variable representing the number of heads. What constitutes the sample space Ω for X?

  • Ω = {HH, HT, TH, TT}
  • Ω = {0, 1, 2} (correct)
  • Ω = {1/4, 1/2, 3/4, 1}
  • Ω = {0, 1}

Which of the following scenarios is best modeled by a Binomial distribution?

  • The probability of one team winning a sports tournament
  • The number of cars passing a certain point on a highway in one hour.
  • The time until the next customer arrives at a store.
  • The number of defective items in a batch of 100, given a fixed probability of an item being defective. (correct)

Suppose X follows a Binomial distribution with parameters n (number of trials) and p (probability of success). If n = 1, what specific distribution does X follow?

  • Bernoulli distribution (A)

Events arriving according to a Poisson Distribution have what key property?

  • The probability of two events occurring at the same time is zero. (D)

The number of customers arriving at a help desk in one hour is modeled using a Poisson distribution. What key assumption must hold true?

  • Customers arrive independently of each other. (C)

Which of the following real-world scenarios can be appropriately modeled using a Poisson distribution?

  • The number of raindrops falling on a specific area during a thunderstorm. (D)

If $Z_1, Z_2, ..., Z_n$ are independent Bernoulli random variables with the same probability of success p, what is the distribution of the sum $Z_1 + Z_2 + ... + Z_n$?

  • Binomial with parameters $n$ and $p$ (A)

In the context of RVs (random variables) used for measuring durations, which field commonly uses them to analyze 'durations in unemployment'?

  • Labour Economics (D)

Which of the following is NOT a property of a cumulative distribution function (CDF), $F(x)$?

  • $\lim_{x \to \infty} F(x) = 0$ (A)

Given a continuous random variable $X$ with probability density function $f(x)$, how is the cumulative distribution function $F(x)$ defined?

  • $F(x) = \int_{-\infty}^{x} f(u) du$ (C)

For a discrete random variable, how is the cumulative distribution function (CDF) calculated?

  • Summing the probabilities of all values less than or equal to x. (C)

Given a random variable $X$, the mean or expected value, $E[X]$, represents:

  • The average value or location of $X$. (C)

The variance of a random variable (RV) is described as:

  • A measure of dispersion around the mean of the RV. (A)

Which of the following is true regarding the expected value (mean) and variance of a random variable?

  • Both the expected value and variance are fixed numbers derived from the PDF. (D)

For a continuous random variable X, how is the expected value E[X] calculated?

  • $\int_{-\infty}^{\infty} x f(x) dx$ (C)

If a random variable Y has a moment generating function (MGF) denoted as $M_Y(\lambda)$, what does $\frac{d^i M_Y(\lambda)}{d\lambda^i}|_{\lambda=0}$ represent?

  • The $i^{th}$ raw moment of Y, $E[Y^i]$. (A)

Suppose $Y$ is a random variable with moment generating function $M_Y(\lambda)$, and $g(Y) = aY + b$. What is the moment generating function of $g(Y)$, denoted as $M_{g(Y)}(\lambda)$?

  • $e^{\lambda b}M_Y(a\lambda)$ (C)

If $X$ and $Y$ are independent random variables, and $Z = X + Y$, how is the moment generating function of $Z$, denoted as $M_Z(\lambda)$, related to the moment generating functions of $X$ and $Y$, $M_X(\lambda)$ and $M_Y(\lambda)$ respectively?

  • $M_Z(\lambda) = M_X(\lambda) \cdot M_Y(\lambda)$ (A)

Random variables can be defined by which of the following?

  • Probability Density Functions, Cumulative Distribution Functions, or Moment Generating Functions. (C)

Suppose Y follows an exponential distribution, $Y \sim Exp[\beta]$. Given its MGF, how can the mean and variance of Y be determined?

  • Mean and variance are found by differentiating the MGF and setting $\lambda = 0$. (D)

Given a random variable $X$ with PDF $f(x)$ for $a \le X \le b$, and a monotonic function $Y = h(X)$, what does the Change of Variable formula allow us to determine?

  • The density function $g(y)$ of the transformed variable $Y$. (A)

In the Change of Variable formula, $g(y) = f(x) \cdot |\frac{dx}{dy}|$, what is the significance of the absolute value?

  • It ensures that the density function $g(y)$ is always positive, regardless of whether h(.) is increasing or decreasing. (D)

Suppose a random variable $X$ has PDF $f(x)$ and distribution function $F(x)$. If $Y = h(X)$ where $h(X)$ is monotonic and increasing, how is the distribution function of $Y$, $G(y)$, related to $F(x)$?

  • $G(y) = F(h^{-1}(y))$ (D)

If random variables W and Z are transformed to X = W − E[W] and Y = Z − E[Z], how does the correlation coefficient rWZ relate to rXY?

  • rWZ = rXY (C)

What key property characterizes the variance-covariance matrix of a random vector?

  • It is symmetric. (A)

Given two random variables X and Y, if Cov(X, Y) = 0, what can be inferred about their relationship?

  • X and Y are uncorrelated. (A)

For random variables X and Y, if E[X] = 2, E[Y] = 3, E[XY] = 10, E[X^2] = 9 and E[Y^2] = 16, what is the covariance of X and Y?

  • 4 (A)

Using the information from the previous question where E[X] = 2, E[Y] = 3, E[XY] = 10, E[X^2] = 9 and E[Y^2] = 16, what is the correlation coefficient between X and Y?

  • 0.80 (D)

Given the joint PDF $f(x, y) = e^{-(x+y)}$ for $x ≥ 0, y ≥ 0$, what is the marginal PDF of X?

  • $e^{-x}$ (B)

Given a joint PDF $f(x, y) = \frac{1}{(b-a)(d-c)}$ for $a ≤ X ≤ b$ and $c ≤ Y ≤ d$, describe the distribution of X and Y.

  • X and Y have a bivariate uniform distribution. (A)

If the joint PDF of X and Y is given by $f(x, y) = \frac{3}{7} (x^2 + \frac{2}{3}xy)$ for $0 ≤ X ≤ 1$ and $1 ≤ Y ≤ 2$, are X and Y independent?

  • No, because the joint PDF cannot be factored into separate functions of x and y. (D)

If random variables X and Y are statistically independent, which of the following statements is always true?

  • The conditional PDF $f_{X|Y}(x|y)$ is equal to the marginal PDF $f_X(x)$. (A)

Given the joint PDF $f(x, y) = 8xy$ for $0 ≤ x ≤ 1$ and $0 ≤ y ≤ x$, which step is necessary to find the marginal density of X?

  • Integrate f(x, y) with respect to y over the interval [0, x]. (B)

What does a covariance of zero between two random variables X and Y generally indicate?

  • X and Y are uncorrelated. (D)

If the joint PDF of random variables X and Y is given by $f(x, y)$, how is the conditional probability $Pr[c ≤ Y ≤ d | X = x]$ calculated?

  • Integrating the conditional PDF $f_{Y|X}(y|x)$ from c to d. (A)

Given the joint PDF $f(x,y)$, how do you determine if random variables X and Y are statistically independent?

  • By checking if $f(x, y) = f_X(x) f_Y(y)$. (A)

Suppose E[X] = 2, E[Y] = 3, and E[XY] = 8. What is the covariance between X and Y, Cov[X, Y]?

  • 2 (D)

What is the significance of the conditional PDF $fY|X(y|x)$?

  • It describes the probability distribution of Y given a specific value of X. (D)

Given random variables X and Y, and knowing that E[X] and E[Y] exist, which expression correctly represents Cov[X, Y]?

  • E[(X - E[X])(Y - E[Y])] (B)

Which of the following is true regarding marginal PDFs?

  • The marginal PDF of Y assigns probabilities to a range of Y values irrespective of X. (B)

Given a joint PDF $f(x, y)$, how is the marginal PDF of X, denoted as $f_X(x)$, calculated?

  • $f_X(x) = \int_{-\infty}^{\infty} f(x, y) dy$ (D)

For a joint PDF $f(x, y)$, what does $\int_{a}^{b} f_X(x) dx$ represent?

  • The probability that $a \le X \le b$ regardless of the value of Y. (C)

If $f(x, y)$ is a joint PDF, which of the following relationships between marginal and joint PDFs is correct?

  • $f(x, y) = f_X(x) f_Y(y)$ only if X and Y are independent (A)

What condition must be met for the conditional PDF $f_{X|Y}(x|y)$ to be defined?

  • $f_Y(y) > 0$ (B)

How is the conditional PDF $f_{Y|X}(y|x)$ defined in terms of the joint PDF $f(x, y)$ and the marginal PDF $f_X(x)$?

  • $f_{Y|X}(y|x) = f(x, y) / f_X(x)$ (D)

Which of the following statements is true regarding conditional PDFs?

  • The conditional PDF $f_{X|Y}(x|y)$ describes the probability distribution of X given a value of Y. (D)

Given the conditional PDF $f_{X|Y}(x|y)$, which of the following represents the probability that $a \le X \le b$ given that $Y = y$?

  • $\int_{a}^{b} f_{X|Y}(x|y) dx$ (D)

Flashcards

Bernoulli Random Variable

A random variable that can only take two values, typically 0 or 1.

Probability Density Function (PDF)

A function that gives the probability that a discrete random variable is exactly equal to some value.

Valid PDF Requirements

States that the PDF must be non-negative for all values and sum to 1 over all possible values.

Binomial Distribution

A discrete random variable representing the number of successes in a sequence of n independent trials, each with probability p of success.

What does Binomial Distribution describe?

Describes the number of successes in n independent trials.

Assumptions of Binomial Distribution

Each trial is independent and has the same probability of success (p).

Bernoulli RV as a Binomial Distribution

A special case of the binomial distribution with only one trial (n=1).

Poisson Distribution

A discrete random variable that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event.

Lifetime RVs

RVs used to measure durations or survival times, like unemployment or health studies.

Cumulative Distribution Function (CDF)

The probability that a random variable X will take a value less than or equal to x: F(x) = P(X ≤ x).

CDF Properties

CDF values range from 0 to 1, and the function never decreases as x increases.

CDF Calculation

Continuous: Integral of PDF from -∞ to x. Discrete: Sum of PDF for all values ≤ x.

Interval Probability with CDF

For a ≤ X ≤ b, it's F(b) - F(a). The difference between CDF values at the interval's bounds.

Moments of a PDF

Summarize a PDF, including the mean (average value) and variance (spread). Fixed numbers, not random variables.

Expected Value (Mean)

Average value of a random variable X. Calculated differently for continuous and discrete RVs.

Expected Value Calculation

Continuous: Integral of xf(x). Discrete: Sum of xf(x) over all x. Weighted average by probability.

Marginal PDFs

PDFs derived from a joint PDF, describing the probability distribution of one variable regardless of the other.

Calculating fX(x)

The marginal PDF of X is found by integrating the joint PDF f(x, y) over all possible values of Y.

Calculating fY(y)

The marginal PDF of Y is found by integrating the joint PDF f(x, y) over all possible values of X.

Purpose of Marginal PDF

Assigns probabilities to a range of values for one variable, irrespective of the values of the other variable.

Conditional PDFs

PDFs that describe the probability distribution of one variable given a specific value of the other variable.

Formula for fX|Y(x|y)

fX|Y(x|y) = f(x, y) / fY(y). It describes the probability distribution of X given Y = y.

Formula for fY|X(y|x)

fY|X(y|x) = f(x, y) / fX(x). It describes the probability distribution of Y given X = x.

Purpose of Conditional PDF

Used to find the probability of X falling within a range, given a specific value for Y.

Moment Generating Function (MGF)

A function MY(λ) that generates the moments of a random variable Y. It's defined as MY(λ) = E[e^(λY)].

MGF of a Linear Transformation

If g(Y) = aY + b, then Mg(Y)(λ) = e^(λb) * MY(aλ).

MGF of Independent RV Sum

If X and Y are independent and Z = X + Y, then MZ(λ) = MX(λ) * MY(λ).

PDF and MGF Relationship

A one-to-one correspondence exists between the PDF and the MGF of a random variable.

Change of Variable Formula

A formula to find the PDF of a transformed random variable Y = h(X) given the PDF of X and a monotonic function h.

Change of Variable Formula (Equation)

g(y) = f(x) * |dx/dy|, where Y = h(X) and h is monotonic.

Monotonic Function

A function that is either always increasing or always decreasing.

CDF Transformation

G(y) = F(h−1(y)), where G is the CDF of Y, F is the CDF of X, and Y = h(X).

rWZ = rXY

Subtracting their means from W and Z (X = W − E[W], Y = Z − E[Z]) leaves the correlation coefficient unchanged.

Variance-Covariance Matrix

A matrix showing variances of random vector elements on the diagonal and covariances off-diagonal.

Cov(X, Y)

The covariance between random variables X and Y.

Correlation Coefficient

Measures the strength and direction of a linear relationship between two random variables.

Bivariate Uniform Distribution

A distribution where the probability of X and Y occurring is constant over a defined rectangular region.

Independent Random Variables

Random variables whose joint PDF can be factored into product of marginal PDFs.

Joint PDF

A function describing the joint behaviour of two continuous random variables.

Conditional Probability (Y|X)

Probability of Y falling within a range [c, d] given X = x.

Conditional PDF (fY|X(y|x))

The probability density function of Y given X = x.

Statistical Independence

RVs X and Y are independent if their joint PDF equals the product of their marginal PDFs.

Independence and Conditional PDF

fX|Y(x|y) = fX(x) and fY|X(y|x) = fY(y).

Covariance (Cov[X, Y])

A measure of how much two random variables change together.

Covariance Formula

Cov[X, Y] = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]E[Y]

Independence and Covariance

If X and Y are statistically independent, their covariance is zero.

Expected Value of Product (Independent RVs)

When X and Y are independent: E[XY] = E[X]E[Y]

Study Notes

  • Mathematical Economics and Econometrics (ECON1049) is the name of the course
  • The course is in Semester 2

Probability: Random variables and their properties

  • Introduction; Examples and Properties of Random Variables will be covered
  • The normal family of distributions plus Chebyshev's Inequality will be covered
  • Bivariate Random Variables - Correlation and Independence are included in the syllabus

Statistics: Estimation and hypothesis testing

  • Samples and Sampling Distributions are included
  • Estimation and Hypothesis Testing will be covered
  • Applications of Tests and Confidence Intervals form part of the syllabus

Econometrics - The linear regression model

  • Introduction to the Linear Model and Ordinary Least Squares (OLS) will be discussed
  • Properties of the OLS estimators will be covered
  • Extensions to the multivariable model, including specification will be studied

Properties of Random Variables

  • An experiment where the results vary each time it is repeated is called a random experiment
  • A set with all possible outcomes of a random experiment is a sample space
  • An event is a subset of the sample space
  • A random (or stochastic) variable (RV) is a variable whose value is determined via a random experiment
  • Random variables are either discrete or continuous
  • Continuous variables can take any value in a given range
  • Discrete variables are constrained to take particular values

Discrete Random Variables

  • A Bernoulli random variable is the simplest example of a discrete RV
  • The behaviour of a Bernoulli random variable is fully described by the probability that the variable takes the value one
  • If the coin is fair, then the probability that X equals one (a head) is one half: Pr[X = 1] = 1/2
  • More generally, the probability that a Bernoulli RV equals 1 is some number p; in principle, p can be any number between 0 and 1
  • For a discrete random variable X with possible values $x_1, x_2, \ldots$, $\Pr[X = x_k] = f(x_k)$, $k = 1, 2, \ldots$
  • The probability density function (PDF) is given by $\Pr[X = x] = f(x)$
  • For $x = x_k$ it equals $f(x_k)$; for all other values of x, $f(x) = 0$
  • For a discrete RV with sample space Ω, the PDF f(x) is valid if i) $f(x) \ge 0$ and ii) $\sum_{x \in \Omega} f(x) = 1$

Example: a coin is tossed twice, with sample space Ω = {HH, HT, TH, TT}

  • X represents the number of heads that come up
  • For HH (2 heads), X = 2, while for TH (1 head), X = 1
  • The objective is to find its PDF
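The PDF can be derived by enumerating the equally likely outcomes and counting heads; a minimal Python sketch (variable names are illustrative):

```python
from itertools import product
from fractions import Fraction

# The four equally likely outcomes of two fair coin tosses
outcomes = list(product("HT", repeat=2))

# X = number of heads; accumulate f(x) = Pr[X = x] over the outcomes
pdf = {}
for outcome in outcomes:
    x = outcome.count("H")
    pdf[x] = pdf.get(x, Fraction(0)) + Fraction(1, len(outcomes))

# Result: Pr[X=0] = 1/4, Pr[X=1] = 1/2, Pr[X=2] = 1/4
```

Exact rational arithmetic via `Fraction` makes the check that the probabilities sum to one exact rather than approximate.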

Binomial Distribution

  • X has a Binomial distribution if $\Pr[X = x] = f(x) = \frac{n!}{x!(n-x)!}\, p^x (1-p)^{n-x}$; Ω = {0, 1, 2, ..., n} and 0 < p < 1
  • X ~ Bin[n, p] assigns probabilities to the number of successes in n independent trials with probability of success p
  • It is used in Decision/Game Theory and in economic phenomena where there is a discrete choice
  • An important special case is the Bernoulli RV, Bin[1, p]
  • If each of $Z_1, \ldots, Z_n$ is Bin[1, p] (and independent of one another; independence is defined precisely later), then $Z_1 + Z_2 + \cdots + Z_n$ is Bin[n, p]
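The binomial PDF above can be sketched directly from the formula; `binomial_pmf` is an illustrative helper name, and the parameter values are assumptions for the example:

```python
from math import comb

def binomial_pmf(x: int, n: int, p: float) -> float:
    """Pr[X = x] for X ~ Bin[n, p]: C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Bin[1, p] is the Bernoulli PDF: Pr[X = 0] = 1 - p, Pr[X = 1] = p
bernoulli = [binomial_pmf(x, 1, 0.3) for x in (0, 1)]

# A valid PDF sums to 1 over the sample space {0, 1, ..., n}
total = sum(binomial_pmf(x, 10, 0.25) for x in range(11))
```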

Poisson Distribution

  • X has a Poisson distribution if $\Pr[X = x] = f(x) = \frac{e^{-\mu}\mu^x}{x!}$; Ω = {0, 1, 2, ...} and μ > 0
  • X ~ Po[μ] assigns probabilities to counts
  • Examples: the number of people joining a queue in a given amount of time, or the number of people visiting a web page
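The Poisson PDF translates directly into code; a minimal sketch with an assumed rate μ = 3 (the function name is illustrative):

```python
from math import exp, factorial

def poisson_pmf(x: int, mu: float) -> float:
    """Pr[X = x] for X ~ Po[mu]: e^(-mu) * mu^x / x!"""
    return exp(-mu) * mu**x / factorial(x)

# The sample space is infinite, but truncating far into the tail the
# probabilities total very nearly 1
total = sum(poisson_pmf(x, 3.0) for x in range(100))
```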

Continuous Random Variables

  • For a discrete RV, the PDF governs how probabilities are assigned to individual values
  • For a continuous RV X, it gives the probability that X lies between two values
  • If X is a continuous RV on Ω = (−∞, ∞), the probability that X takes a value between a and b is given by its PDF f(x) via $\Pr[a \le X \le b] = \int_a^b f(x)\,dx$ for $-\infty \le a \le b \le \infty$
  • The function f(x) has the properties: i) $f(x) \ge 0$; ii) $\int_{-\infty}^{\infty} f(x)\,dx = 1$
  • Probabilities for continuous RVs are only non-zero for intervals, not for individual point outcomes
  • Because of the uncountable number of possible values, a continuous RV X takes each individual value with probability zero
  • Random variables that take on very many values are best treated as continuous

Uniform Distribution

  • X has a uniform distribution if $f(x) = \frac{1}{b-a}$; Ω = [a, b]
  • The notation is X ~ U[a, b]

Exponential Distribution

  • X has an exponential distribution if $f(x) = \frac{1}{\beta}e^{-x/\beta}$; Ω = [0, ∞) and β > 0
  • The notation is X ~ Exp[β]
  • Such RVs are used to measure lifetimes, such as durations in unemployment in Labour Economics or survival times in Health Economics
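The requirement that a continuous PDF integrates to 1 can be checked numerically for the exponential density; a sketch assuming β = 2 and a simple midpoint rule:

```python
from math import exp

beta = 2.0  # assumed scale parameter for illustration, beta > 0

def exp_pdf(x: float) -> float:
    """PDF of X ~ Exp[beta]: (1/beta) * e^(-x/beta) on [0, infinity)."""
    return (1 / beta) * exp(-x / beta) if x >= 0 else 0.0

# Midpoint-rule integration over [0, 60]; the omitted tail beyond 60
# is negligible (order e^{-30}), so the total should be close to 1
n, upper = 100_000, 60.0
h = upper / n
total = sum(exp_pdf((i + 0.5) * h) for i in range(n)) * h
```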

Cumulative Distribution Function

  • The cumulative distribution function (CDF), or briefly the distribution function, of a random variable X is defined by F(x) = Pr[X ≤ x]
  • x is any real number, i.e., -∞ < x < ∞

F(x) has the following properties

  • F(x) is nondecreasing [i.e., F(x) ≤ F(y) if x ≤ y]
  • $\lim_{x \to -\infty} F(x) = 0$; $\lim_{x \to \infty} F(x) = 1$
  • F(x) is continuous from the right [i.e., $\lim_{h \to 0^+} F(x + h) = F(x)$ for all x]

The CDF can be expressed as a function of the PDF:

  • $F(x) = \Pr[-\infty \le X \le x] = \int_{-\infty}^{x} f(u)\,du$ (CONTINUOUS)
  • $F(x) = \Pr[-\infty \le X \le x] = \sum_{u \le x} f(u)$ (DISCRETE)
  • For the continuous case, interval probabilities are $\Pr[a \le X \le b] = \int_a^b f(x)\,dx = \int_{-\infty}^{b} f(x)\,dx - \int_{-\infty}^{a} f(x)\,dx = F(b) - F(a)$
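The interval-probability rule F(b) − F(a) can be illustrated with the exponential distribution, whose CDF is the standard result $F(x) = 1 - e^{-x/\beta}$; β = 2 and the interval [1, 3] are assumptions for the example:

```python
from math import exp

beta = 2.0  # illustration value

def F(x: float) -> float:
    """CDF of X ~ Exp[beta], from integrating its PDF: 1 - e^(-x/beta)."""
    return 1 - exp(-x / beta) if x >= 0 else 0.0

# Interval probability via the CDF: Pr[1 <= X <= 3] = F(3) - F(1)
prob = F(3.0) - F(1.0)
```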

Moments

  • Main characteristics of a PDF (and the RV it assigns probabilities for) can be summarized by moments
  • The first moment is known as the mean/expected value
  • The second moment relates to the spread or variance of the RV
  • For a RV X its mean is a measure of the average value/location
  • Variance is a measure of dispersion around that mean
  • The mean and variance are not RVs, but fixed numbers that are functions of the PDF

Expected Value

  • The expected value or mean of a RV X, E[X] (or μ), is defined as $E[X] = \int_{-\infty}^{\infty} x f(x)\,dx$ (CONTINUOUS) or $E[X] = \sum_{x} x f(x)$ (DISCRETE)
  • If g(X) is a continuous function of X, then we define $E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\,dx$ or $E[g(X)] = \sum_{x} g(x) f(x)$
  • If a and b are constants: $E[aX + b] = \int_{-\infty}^{\infty} (ax + b) f(x)\,dx = a\int_{-\infty}^{\infty} x f(x)\,dx + b\int_{-\infty}^{\infty} f(x)\,dx = aE[X] + b$, which is also true if X is a discrete RV
  • If the continuous RV X has a PDF that is symmetric around a point a, then E[X] = a
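The integral definition of E[X] can be approximated numerically; a sketch for X ~ U[2, 6] (assumed values), where the symmetry property predicts a mean of (a + b)/2 = 4:

```python
# E[X] for X ~ U[a, b] by midpoint-rule integration of x * f(x)
a, b = 2.0, 6.0
f = 1 / (b - a)          # uniform density, constant on [a, b]

n = 100_000
h = (b - a) / n
mean = sum((a + (i + 0.5) * h) * f for i in range(n)) * h
```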

Variance

  • The variance, denoted V[X] (or $\sigma_X^2$), is defined as $V[X] = E[(X - E[X])^2]$
  • The variance is a nonnegative number
  • The positive square root of the variance is called the standard deviation: $\sigma_X = \sqrt{V[X]} = \sqrt{E[(X - E[X])^2]}$
  • Note the equivalent expression: $E[(X - E[X])^2] = E[X^2 - 2E[X]X + E[X]^2] = E[X^2] - 2E[X]E[X] + E[X]^2 = E[X^2] - E[X]^2$
  • If a and b are any constants, $V[aX + b] = a^2 V[X]$
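The identity $V[X] = E[X^2] - E[X]^2$ can be verified exactly for a small discrete example, a fair six-sided die (an assumption for illustration):

```python
from fractions import Fraction

# X = outcome of a fair six-sided die, f(x) = 1/6 for x in {1, ..., 6}
values = range(1, 7)
f = Fraction(1, 6)

e_x = sum(x * f for x in values)         # E[X] = 7/2
e_x2 = sum(x**2 * f for x in values)     # E[X^2] = 91/6

# Definition and shortcut agree: E[(X - E[X])^2] = E[X^2] - E[X]^2
var_direct = sum((x - e_x) ** 2 * f for x in values)
var_shortcut = e_x2 - e_x**2
```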

Moments

  • The rth moment of X about the origin, also called the rth raw moment (or just the rth moment), is defined as $\mu'_r = E[X^r]$
  • The rth moment of X about the mean μ, also called the rth central moment, is defined as $\mu_r = E[(X - \mu)^r]$
  • Expected value is the first moment while variance is the second central moment
  • Higher order moments can be calculated to define other features of the distribution of a RV
  • The third moment of the standardized version of a RV is viewed as a measure of skewness
  • The fourth moment of the standardized version of a RV is called a measure of kurtosis

Moment Generating Functions

  • Moments of even simple RVs can be difficult to calculate, especially the higher-order moments
  • Consider the RV $e^{\lambda Y}$, a function of the RV Y (with PDF f(y)): its mean is $E[e^{\lambda Y}] = \int_{-\infty}^{\infty} e^{\lambda y} f(y)\,dy$, a function of λ that we denote $M_Y(\lambda)$
  • Recall the series definition of the exponential function: $e^{\lambda y} = \sum_{i=0}^{\infty} \frac{(\lambda y)^i}{i!} = 1 + \lambda y + \frac{(\lambda y)^2}{2!} + \cdots + \frac{(\lambda y)^i}{i!} + \cdots$
  • Hence $M_Y(\lambda) = \int_{-\infty}^{\infty} e^{\lambda y} f(y)\,dy = \int_{-\infty}^{\infty} \left(1 + \lambda y + \frac{(\lambda y)^2}{2!} + \cdots\right) f(y)\,dy = 1 + \lambda E[Y] + \frac{\lambda^2}{2!}E[Y^2] + \cdots + \frac{\lambda^i}{i!}E[Y^i] + \cdots$
  • Taking the ith derivative (so that every term to the left of $E[Y^i]$ disappears) and evaluating at λ = 0 (so that every term to the right of $E[Y^i]$ vanishes) gives $\left.\frac{d^i M_Y(\lambda)}{d\lambda^i}\right|_{\lambda=0} = E[Y^i]$
  • $M_Y(\lambda)$ is called the moment generating function (MGF) of Y
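For Y ~ Exp[β] the MGF is the standard result $M_Y(\lambda) = 1/(1 - \beta\lambda)$ for λ < 1/β. A minimal numerical sketch (β = 2 assumed) recovering the first two moments by finite-difference derivatives at λ = 0, which should give E[Y] = β and E[Y²] = 2β²:

```python
beta = 2.0  # illustration value

def mgf(lam: float) -> float:
    """MGF of Y ~ Exp[beta]: 1 / (1 - beta * lam), valid for lam < 1/beta."""
    return 1 / (1 - beta * lam)

# Central finite differences approximate the derivatives at lambda = 0:
# M'(0) = E[Y] = beta, and M''(0) = E[Y^2] = 2 * beta^2
h = 1e-5
m1 = (mgf(h) - mgf(-h)) / (2 * h)
m2 = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h**2
```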

The MGF satisfies two very important properties:

  • (i) Let Y be a random variable with MGF $M_Y(\lambda)$; then for g(Y) = aY + b, $M_{g(Y)}(\lambda) = e^{\lambda b} M_Y(a\lambda)$
  • (ii) If X and Y are independent and Z = X + Y, then $M_Z(\lambda) = M_X(\lambda) M_Y(\lambda)$; i.e., when we add independent RVs together, the resulting MGF is the product of the MGFs of the individual RVs
  • The MGF is a fundamental characteristic of an RV
  • There is a one-to-one correspondence between the PDF and MGF

The Change of Variable Formula

  • It is useful for deriving properties of functions of RVs
  • Suppose a RV X has PDF f(x) for a ≤ X ≤ b and distribution function F(x), and let Y = h(X), where h(·) is a monotonic (either always increasing or always decreasing) function
  • Writing g(y) and G(y) for the density and distribution functions of Y, the Change of Variable formula states: $g(y) = f(x) \left|\frac{dx}{dy}\right|$, where $x = h^{-1}(y)$
  • Assume first that h(·) is increasing ($dh(x)/dx > 0$); then h(a) ≤ Y ≤ h(b)
  • The distribution function is $G(y) = \Pr[Y \le y] = \Pr[h(X) \le y] = \Pr[X \le h^{-1}(y)] = F(h^{-1}(y)) = F(x)$, where $x = h^{-1}(y)$
  • Differentiating gives the density: $g(y) = \frac{d}{dy}G(y) = \frac{d}{dx}F(x) \cdot \frac{dx}{dy} = f(x)\frac{dx}{dy}$
  • If instead h(·) is decreasing, $G(y) = \Pr[X \ge h^{-1}(y)] = 1 - F(x)$, so $g(y) = \frac{d}{dx}(1 - F(x)) \cdot \frac{dx}{dy} = -f(x)\frac{dx}{dy}$, which is positive because $dx/dy < 0$
  • Bringing both results together: $g(y) = f(x)\left|\frac{dx}{dy}\right|$
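The formula can be checked on a concrete transformation; a sketch assuming X ~ U[0, 1] and the monotonic increasing map Y = X², which the formula says has density 1/(2√y):

```python
from math import sqrt

# X ~ U[0, 1] (f(x) = 1) and Y = h(X) = X^2, so x = h^{-1}(y) = sqrt(y)
# and dx/dy = 1 / (2 * sqrt(y)); the Change of Variable formula gives
# g(y) = f(x) * |dx/dy| = 1 / (2 * sqrt(y)) on (0, 1]
def g(y: float) -> float:
    return 1 / (2 * sqrt(y))

# Cross-check via the CDF route: G(y) = F(h^{-1}(y)) = sqrt(y),
# differentiated numerically at an interior point
def G(y: float) -> float:
    return sqrt(y)

y0, h = 0.25, 1e-6
numeric_density = (G(y0 + h) - G(y0 - h)) / (2 * h)
```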

The Normal Family of RVs

  • A continuous RV X has a normal (or Gaussian) distribution if its PDF has the form $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{1}{2\sigma^2}(x - \mu)^2\right\}$, $-\infty < x < \infty$
  • Such an X is a normal random variable, denoted X ~ N(μ, σ²)
  • The distribution is completely characterized by these two parameters, which satisfy $E[X] = \int_{-\infty}^{\infty} x f(x)\,dx = \mu$ and $V[X] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\,dx = \sigma^2$

The CDF is often written as

  • $\Pr[Z \le x] = \Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\,du$ for the standard normal
  • The density function of the normal distribution is bell-shaped and symmetric about μ; the standard normal RV has mean 0 and variance 1, often denoted Z ~ N(0, 1)

Main properties:

  • $\Pr[|X - \mu| > \sigma] = 0.3173$, $\Pr[|X - \mu| > 2\sigma] = 0.0455$, etc.
  • For a standard normal RV Z and any c > 0, a useful expression is $\Pr[|Z| > c] = \Pr[Z > c] + \Pr[Z < -c] = 2\Pr[Z > c] = 2[1 - \Phi(c)]$
  • If X ~ N(μ, σ²), then aX + b is also normally distributed: $aX + b \sim N(a\mu + b, a^2\sigma^2)$
  • We calculated E[aX + b] = aE[X] + b and V[aX + b] = a²V[X] for any RV; the result here is much stronger, since it says a linear function of a normal RV is itself normal
  • If $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$ are independent, then $Y = a_1 X_1 + a_2 X_2 \sim N(a_1\mu_1 + a_2\mu_2,\; a_1^2\sigma_1^2 + a_2^2\sigma_2^2)$: linear combinations of independent normal RVs are normal
  • If X ~ N(μ, σ²), then standardizing gives $Z = \frac{X - \mu}{\sigma} \sim N(0, 1)$, so $\Pr[a \le X \le b] = \Phi\left(\frac{b - \mu}{\sigma}\right) - \Phi\left(\frac{a - \mu}{\sigma}\right)$
  • In summary: probabilities for any normal RV can be read from a table of N(0, 1) probabilities
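The tail probabilities quoted above can be reproduced without tables, since Φ can be written in terms of the error function (a standard identity); a minimal sketch:

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Pr[|X - mu| > k*sigma] = 2 * (1 - Phi(k)) for any X ~ N(mu, sigma^2)
p1 = 2 * (1 - Phi(1.0))   # about 0.3173
p2 = 2 * (1 - Phi(2.0))   # about 0.0455
```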

The Chi-Square Distribution

  • Let $X_i \sim N(0, 1)$ for i = 1, ..., n, with the $X_i$ independent
  • Define $Q(n) = \sum_{i=1}^{n} X_i^2$
  • Then Q(n) has the Chi-Square distribution with n degrees of freedom, written $Q(n) \sim \chi^2(n)$

Properties of this distribution:

  • The density is not symmetric; it is skewed to the right, with Ω = [0, ∞)
  • It can be shown that E[Q(n)] = n
  • Sums of independent chi-square RVs are themselves chi-square, with the degrees of freedom adding
  • Probabilities require chi-square tables, with a different set of values for each number of degrees of freedom

The Student t Distribution

  • Let Z ~ N(0, 1) and let Q ~ χ²(n), with Z and Q independent

Equation

  • $T = \frac{Z}{\sqrt{Q/n}}$
  • T has the Student t distribution with n degrees of freedom, written T ~ t(n)

Properties

  • The density is bell-shaped and symmetric about 0, but has fatter tails than the N(0, 1) density
  • As the degrees of freedom n grow, t(n) approaches the N(0, 1) distribution
  • (A graph of the t density for several values of the degrees of freedom appeared here)

The F Distribution

  • Let $Q_1 \sim \chi^2(n_1)$ and $Q_2 \sim \chi^2(n_2)$ be independent chi-square RVs
  • Then $F = \frac{Q_1/n_1}{Q_2/n_2}$ has the F distribution with $(n_1, n_2)$ degrees of freedom, written $F \sim F(n_1, n_2)$
  • Probabilities require F tables indexed by both degrees-of-freedom parameters

Chebyshev's Inequality

  • For any RV X with PDF f(x), mean μ and variance σ², the inequality $\Pr[|X - \mu| \ge k\sigma] \le \frac{1}{k^2}$ holds for any k > 0
  • Because it holds whatever the distribution of X, it provides bounds on probabilities even when the PDF is unknown
  • This result will be useful in the statistics section
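The bound can be illustrated against a distribution whose tail probabilities we know exactly; a sketch comparing Chebyshev's 1/k² bound with the exact normal tails (the normal choice is an assumption, since the bound applies to any distribution):

```python
from math import erf, sqrt

def Phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Chebyshev guarantees Pr[|X - mu| >= k*sigma] <= 1/k^2 for ANY distribution;
# for the normal, the exact tail probability sits well below the bound
checks = []
for k in (1.5, 2.0, 3.0):
    exact = 2 * (1 - Phi(k))   # exact normal tail probability
    bound = 1 / k**2           # distribution-free Chebyshev bound
    checks.append(exact <= bound)
```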

Bivariate RVs

  • Two RVs X and Y can be described jointly by a bivariate (joint) PDF f(x, y), with $\Pr[a \le X \le b,\; c \le Y \le d] = \int_a^b \int_c^d f(x, y)\,dy\,dx$
  • A valid joint PDF satisfies $f(x, y) \ge 0$ and integrates to 1 over the whole sample space
  • Such multiple integrals can usually be evaluated easily, one variable at a time

Marginal PDFs

  • The marginal PDF of X integrates the joint PDF over all values of Y: $f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy$; similarly, $f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx$
  • A marginal PDF assigns probabilities to a range of values of one variable irrespective of the value of the other

Statistical Independence

  • X and Y are statistically independent if and only if their joint PDF factors into the product of the marginal PDFs: $f(x, y) = f_X(x) f_Y(y)$

Conditional PDFs

  • The conditional PDF of Y given X = x is $f_{Y|X}(y|x) = f(x, y) / f_X(x)$ (defined when $f_X(x) > 0$); similarly, $f_{X|Y}(x|y) = f(x, y) / f_Y(y)$ when $f_Y(y) > 0$

Covariance and Correlation

  • The covariance between X and Y is $\mathrm{Cov}[X, Y] = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]E[Y]$
  • The correlation coefficient $r_{XY} = \mathrm{Cov}[X, Y] / (\sigma_X \sigma_Y)$ measures the strength and direction of the linear relationship between X and Y
  • If X and Y are statistically independent, then $E[XY] = E[X]E[Y]$, so their covariance is zero
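The shortcut Cov[X, Y] = E[XY] − E[X]E[Y] is a one-liner in code; a sketch using the raw moments from one of the quiz questions above (E[X] = 2, E[Y] = 3, E[XY] = 8):

```python
# Covariance from raw moments: Cov[X, Y] = E[XY] - E[X]E[Y]
e_x, e_y, e_xy = 2.0, 3.0, 8.0
cov = e_xy - e_x * e_y   # 8 - 2*3 = 2
```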
