Questions and Answers
A set of all entities or elements under study can be referred to as a sample population.
False (B)
The process of obtaining a sample from the target population is called extracting.
False (B)
If a random sample $X_1, ..., X_n$ is selected from a population with density function $f(x)$, the samples must be mutually dependent on each other but should have varying distributions.
False (B)
A statistic is a function of an observable random variable, but is allowed to contain unknown parameters.
False
The joint distribution of a random sample $X_1, ..., X_n$ can be described as the deviation of a sample.
False
A random variable assigns a unique alphabetical value to each outcome of an experiment.
False
Discrete random variables take on values within a non-countable set.
False
For a function to be a valid probability mass function (pmf), $P_X(x_i)$ must be less than zero for all $x_i$.
False
The cumulative distribution function $F_X(x)$ must be decreasing.
False
The probability density function (pdf) can always be obtained from the distribution function, but the distribution function cannot always be derived from the pdf.
False
If $g(X)$ is a continuous random variable, its expected value $E[g(X)]$ can be found by summing $g(x_i)P[X = x_i]$ over all possible values $x_i$.
False
The first moment about a constant $c$ is calculated by $E[X + c]$.
False
If two random variables X and Y are independent, then $f_{XY}(x, y) = f_X(x) + f_Y(y)$.
False
Independent random variables are always uncorrelated.
True
For a discrete uniform distribution, each value of the random variable is equally likely, and the values are not uniformly distributed throughout some interval.
Flashcards
Target Population
A set of all entities or elements under study.
Sample
A selection of individuals taken from the population.
Sampling
The process of selecting a sample from the population.
Random Variable
A rule that assigns a numeric value to each outcome of an experiment.
Discrete Random Variable
A random variable that takes values in a countable set.
Continuous Random Variable
A random variable that can take any value over an interval.
Probability Mass Function (PMF)
Gives $P[X = x_i]$ for a discrete random variable; the probabilities are nonnegative and sum to 1.
Probability Density Function (PDF)
Describes a continuous random variable; $f_X(x) \ge 0$ for all $x$ and the density integrates to 1.
Distribution Function
$F_X(x) = P(X \le x)$, the probability that a random variable is less than or equal to $x$.
Variance
$E[(X - \mu)^2]$, the second central moment of a random variable, measuring spread about the mean.
Expectation
$E[X]$, the mean (first moment) of a random variable.
Discrete Uniform Distribution
Each of $N$ possible values is equally likely: $P[X = x] = 1/N$.
Bernoulli Distribution
A single trial with success probability $p$: $P[X = x] = p^x(1-p)^{1-x}$, $x = 0, 1$.
Binomial Distribution
The number of successes in $n$ independent Bernoulli trials, each with success probability $p$.
Exponential Distribution
The waiting time until the first event of a Poisson process, with constant rate $\lambda$.
Study Notes
- This module reviews statistical concepts, definitions, and theories from previous STAT courses, serving as a refresher for STAT 146.
- By the end of this unit, one should be able to define, compute, identify, describe, and explain various scenarios using random variables, expectations, and distributions.
Key Definitions
- Target Population: All entities or elements under study
- Sample: A subset of a target population
- Sampling: The process of selecting a target population sample
- Random Sample: A sample of size n ($X_1, ..., X_n$) from a population with density function $f(\cdot)$, where the observations are independent and identically distributed
- Sampled Population: Population from which a random sample is obtained
- Statistic: A function of observable random variables that contains no unknown parameters
- Distribution Function: Probability that a random variable is less than or equal to x
- Random Variable: Rule assigning a numeric value to each experiment outcome
- Discrete: countable values
- Continuous: any value over an interval
PMF and PDF Functions
- Probability Mass Function (PMF): used for discrete variables; $P_X(x_i) = P[X = x_i] \ge 0, \forall x_i$, and $\sum_i P_X(x_i) = 1$
- Probability Density Function (PDF): used for continuous variables; $f_X(x) \ge 0, \forall x$, and $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
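As a quick numerical check of both conditions, here is a minimal Python sketch (the specific PMF values and the exponential density are arbitrary examples chosen for illustration, not from the notes):

```python
import numpy as np
from scipy.integrate import quad

# PMF conditions: every probability is nonnegative and they sum to 1.
pmf = {1: 0.2, 2: 0.5, 3: 0.3}                 # hypothetical discrete distribution
assert all(p >= 0 for p in pmf.values())
assert np.isclose(sum(pmf.values()), 1.0)

# PDF conditions for f(x) = lam * exp(-lam * x) on x >= 0:
# nonnegative everywhere and integrating to 1 over the support.
lam = 2.0
f = lambda x: lam * np.exp(-lam * x)
total, _ = quad(f, 0, np.inf)                  # quad returns (integral, error estimate)
assert np.isclose(total, 1.0)
```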
Distribution Function Properties
- $F_X(x) = P[\omega : -\infty < X(\omega) \le x] = P(X \le x), \forall x \in \mathbb{R}$
- Properties: $0 \le F_X(x) \le 1, \forall x \in \mathbb{R}$; non-decreasing; right-continuous; $\lim_{x \to \infty} F_X(x) = 1$ and $\lim_{x \to -\infty} F_X(x) = 0$
- PMF/PDF relation to the distribution function:
  - Discrete: $F_X(x) = P[X \le x] = \sum_{x_i \le x} P[X = x_i]$; $P[X = x] = F(x) - F(x^-)$
  - Continuous: $F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt$; $f_X(x) = \frac{dF(x)}{dx}$
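Both relations are easy to verify numerically; the sketch below uses a binomial distribution as an arbitrary example:

```python
import numpy as np
from scipy.stats import binom

n, p = 5, 0.4
xs = np.arange(n + 1)

# Discrete case: F_X(x) is the running sum of the PMF up to x.
assert np.allclose(np.cumsum(binom.pmf(xs, n, p)), binom.cdf(xs, n, p))

# P[X = x] = F(x) - F(x^-): the PMF is the jump in the CDF at x.
jumps = binom.cdf(xs, n, p) - binom.cdf(xs - 1, n, p)
assert np.allclose(jumps, binom.pmf(xs, n, p))
```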
Expected Values
- $g(X)$ is a random variable
- If $g(X)$ is discrete: $E[g(X)] = \sum_i g(x_i)P[X = x_i]$
- If $g(X)$ is continuous: $E[g(X)] = \int_{-\infty}^{\infty} g(x)f_X(x)\,dx$
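A small worked example of each case (a fair die for the discrete sum and a standard normal for the integral, both chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Discrete: E[g(X)] = sum of g(x_i) * P[X = x_i]. Here g(x) = x^2 for a fair die.
xs = np.arange(1, 7)
probs = np.full(6, 1 / 6)
e_g_discrete = np.sum(xs**2 * probs)           # 91/6, about 15.17

# Continuous: E[g(X)] = integral of g(x) f(x) dx. Here g(x) = x^2 for X ~ N(0, 1),
# which equals the variance of a standard normal, i.e. 1.
e_g_continuous, _ = quad(lambda x: x**2 * norm.pdf(x), -np.inf, np.inf)
assert np.isclose(e_g_continuous, 1.0)
```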
Useful Expectations of a Random Variable
- $E[X]$, denoted $\mu$, is the first moment (mean) of $X$
- $E[X - c]$ is the first moment about $c$
- $E[(X - c)^k]$ is the $k$th moment about $c$
- $E[(X - \mu)^k]$ is the $k$th central moment of $X$
- $E[(X - \mu)^2]$ is the variance ($\sigma^2$)
- $E[(X - \mu)^3]$ is the third central moment ($\mu_3$), which measures the skewness of $X$
- $E[(X - \mu)^4]$ is the fourth central moment ($\mu_4$), which measures the kurtosis of $X$
- $E[X^k]$ is the $k$th (raw) moment of $X$ ($m_k$)
- $E[|X|^k]$ is the $k$th absolute moment of $X$
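These moments can be estimated from simulated data. The sketch below uses an exponential distribution with rate 1 (an arbitrary choice), whose central moments are known ($\sigma^2 = 1$, $\mu_3 = 2$, $\mu_4 = 9$):

```python
import numpy as np
from scipy.stats import expon

rng = np.random.default_rng(0)
x = expon.rvs(size=200_000, random_state=rng)  # Exponential(lambda = 1)

mu = x.mean()                                  # first moment (mean), approx 1
central = lambda k: np.mean((x - mu) ** k)     # k-th central moment estimate

print(central(2))                              # variance, approx 1
print(central(3))                              # third central moment, approx 2
print(central(4))                              # fourth central moment, approx 9
```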
Expectation Properties
- $E[c] = c$, where $c$ is a constant
- $E[cg(X)] = cE[g(X)]$
- $E[c_1 g(X) + c_2] = c_1 E[g(X)] + c_2$
- $E[c_1 g_1(X) + c_2 g_2(X)] = c_1 E[g_1(X)] + c_2 E[g_2(X)]$
- $E[g_1(X)] \le E[g_2(X)]$ if $g_1(X) \le g_2(X)$
- $E[g_1(X) \cdot g_2(X)] = E[g_1(X)]E[g_2(X)]$ if $g_1(X)$ and $g_2(X)$ are independent
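The linearity properties hold for any random variables; the product rule needs independence. A simulation sketch (with arbitrary choices of the functions and constants, and two independently drawn variables for the product rule):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = rng.exponential(size=100_000)              # drawn independently of x

c1, c2 = 3.0, -2.0
g1, g2 = x**2, np.abs(x)

# Linearity: E[c1*g1 + c2*g2] = c1*E[g1] + c2*E[g2]
assert np.isclose(np.mean(c1 * g1 + c2 * g2),
                  c1 * g1.mean() + c2 * g2.mean())

# Product rule for independent variables: E[XY] is close to E[X]E[Y]
print(np.mean(x * y), x.mean() * y.mean())     # the two values should nearly agree
```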
Variance
- $V(X) = \sigma^2 = E[(X - \mu)^2] = E[X^2] - (E[X])^2$
- $V[c] = 0$, where $c$ is a constant
- $V[cX] = c^2 V(X)$
- $V[c_1 X + c_2] = c_1^2 V[X]$
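The computational shortcut $E[X^2] - (E[X])^2$ and the scaling rule can be checked on simulated data (the particular normal distribution is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5, scale=3, size=100_000)

# V(X) by definition vs. the shortcut E[X^2] - (E[X])^2
var_def = np.mean((x - x.mean()) ** 2)
var_shortcut = np.mean(x**2) - x.mean() ** 2
assert np.isclose(var_def, var_shortcut)

# V[c1*X + c2] = c1^2 * V[X]: shifting adds nothing, scaling enters squared
c1, c2 = 4.0, 7.0
assert np.isclose(np.var(c1 * x + c2), c1**2 * np.var(x))
```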
Multivariate Expectations
- Multivariate expectations with joint density function $f_{X,Y}(x, y)$:
  - $E[X] = \int\int x\, f_{X,Y}(x, y)\,dy\,dx$
  - $E[Y] = \int\int y\, f_{X,Y}(x, y)\,dx\,dy$
  - $E[g(X)] = \int\int g(x)\, f_{X,Y}(x, y)\,dy\,dx$
  - $E[h(Y)] = \int\int h(y)\, f_{X,Y}(x, y)\,dx\,dy$
  - $E[g(X, Y)] = \int\int g(x, y)\, f_{X,Y}(x, y)\,dx\,dy$
- Variables $X$ and $Y$ are independent if $f_{XY}(x, y) = f_X(x)f_Y(y)$
- Independence implies $\mathrm{cov}(X, Y) = 0$ and $E[XY] = E[X]E[Y]$
- Independent random variables are uncorrelated, but uncorrelated random variables need not be independent
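Both directions of that last point can be illustrated by simulation (the particular distributions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Independent draws: covariance near 0 and E[XY] close to E[X]E[Y].
x = rng.uniform(size=100_000)
y = rng.uniform(size=100_000)                  # generated independently of x
print(np.cov(x, y)[0, 1])                      # near 0
print(np.mean(x * y), x.mean() * y.mean())     # nearly equal

# Uncorrelated but dependent: with Z symmetric about 0, cov(Z, Z^2) = E[Z^3] = 0,
# yet Z^2 is completely determined by Z.
z = rng.normal(size=100_000)
print(np.cov(z, z**2)[0, 1])                   # near 0 despite full dependence
```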
Discrete Distributions
Discrete Uniform
- Each value is equally likely, and the values are spread uniformly over the support
- P[X = x] = 1/N, x = 1, 2, ..., N
- E[X] = (N+1)/2
- V[X] = (N^2 -1)/12
Bernoulli
- P[X = x] = p^x(1-p)^(1-x), x = 0, 1
- E[X] = p
- V[X] = p(1-p)
Binomial
- The random variable represents the number of successes in n independent trials
- E[X] = np
- V[X] = np(1 – p)
Hypergeometric
- Represents the number of successes in n draws without replacement from a population of size N containing D successes
- E[X] = nD/N
- V[X] = n(D/N)((N-D)/N)((N-n)/(N-1))
Geometric
- Represents the number of trials until the first success
- E[X] = 1/p
- V[X] = (1-p)/(p^2)
Negative Binomial
- Represents the number of trials until the rth success
- E[X] = r/p
- V[X] = r(1-p)/(p^2)
Poisson
- Represents the number of events (successes/failures) occurring in an interval
- E[X] = lambda
- V[X] = lambda
Multivariate Hypergeometric
- (See document for equations)
Multinomial
- The random variables are defined as in the binomial distribution, but use this distribution when each trial has more than two possible outcomes and the sampling is done with replacement
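As a consistency check, the mean and variance formulas above can be compared against scipy.stats (the parameter values are arbitrary; note that scipy's geom, like these notes, counts trials until the first success, while its nbinom counts failures before the rth success, a different convention from the one above):

```python
import numpy as np
from scipy.stats import randint, bernoulli, binom, hypergeom, geom, poisson

N, p, n, lam = 10, 0.3, 7, 4.0
D, Npop = 5, 20                                # D successes in a population of Npop

# Discrete uniform on {1, ..., N}: scipy's randint excludes the upper endpoint.
assert np.isclose(randint(1, N + 1).mean(), (N + 1) / 2)
assert np.isclose(randint(1, N + 1).var(), (N**2 - 1) / 12)

assert np.isclose(bernoulli(p).mean(), p)
assert np.isclose(bernoulli(p).var(), p * (1 - p))

assert np.isclose(binom(n, p).mean(), n * p)
assert np.isclose(binom(n, p).var(), n * p * (1 - p))

# Hypergeometric: scipy's argument order is (population, successes, draws).
assert np.isclose(hypergeom(Npop, D, n).mean(), n * D / Npop)

assert np.isclose(geom(p).mean(), 1 / p)
assert np.isclose(geom(p).var(), (1 - p) / p**2)

assert np.isclose(poisson(lam).mean(), lam)
assert np.isclose(poisson(lam).var(), lam)
```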
Continuous Distributions
Continuous Uniform
- E[X] = (a+b)/2
- V[X] = (b-a)^2 / 12
- The random variable is evenly distributed over the interval [a, b]
Exponential
- The time until the first event of a Poisson process occurs
- Its only parameter is the rate, lambda, which is a constant failure rate
- E[X] = 1/lambda
- V[X] = 1/lambda^2
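A short sketch showing scipy's parameterization (its scale argument is 1/lambda, a common gotcha) and the memoryless property that follows from the constant rate; the values of lambda, s, and t are arbitrary:

```python
import numpy as np
from scipy.stats import expon

lam = 0.5
X = expon(scale=1 / lam)                       # scipy uses scale = 1/lambda

assert np.isclose(X.mean(), 1 / lam)           # E[X] = 1/lambda
assert np.isclose(X.var(), 1 / lam**2)         # V[X] = 1/lambda^2

# Memorylessness: P(X > s + t | X > s) = P(X > t), via the survival function.
s, t = 1.0, 2.0
assert np.isclose(X.sf(s + t) / X.sf(s), X.sf(t))
```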
Gamma
- The length of time until the rth Poisson event (success)
- (See document for equations)
Weibull
- Used for failure-time data (analyzing potential failures); unlike the exponential, it does not assume a constant failure rate
Beta
- (See document for equations)
Normal
- A symmetric, bell-shaped distribution that plays a central role in statistics
- E[X] = mu
- V[X] = sigma^2
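The continuous uniform and normal moments above can likewise be checked against scipy.stats (watch the parameterizations: uniform takes loc = a and scale = b - a, and norm's scale is sigma, not sigma squared; the numbers are arbitrary):

```python
import numpy as np
from scipy.stats import uniform, norm

# Continuous uniform on [a, b]
a, b = 2.0, 10.0
U = uniform(loc=a, scale=b - a)
assert np.isclose(U.mean(), (a + b) / 2)       # E[X] = (a + b)/2
assert np.isclose(U.var(), (b - a) ** 2 / 12)  # V[X] = (b - a)^2 / 12

# Normal with mean mu and variance sigma^2
mu, sigma = 5.0, 3.0
X = norm(loc=mu, scale=sigma)
assert np.isclose(X.mean(), mu)                # E[X] = mu
assert np.isclose(X.var(), sigma**2)           # V[X] = sigma^2
```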
Chi-Square
- Characterized by a single parameter, v, the degrees of freedom
Functions
- Gamma
- Beta