Asymptotic Theory in Econometrics Unit 6

Questions and Answers

What is the distribution of the LM statistic?

  • Uniform distribution
  • $LM \sim \chi^2_q$ (correct)
  • $\bar{LM} \sim \frac{1}{\chi^2_q}$
  • Normal distribution

Under which conditions is OLS considered asymptotically efficient?

  • When estimators are inconsistent
  • In the presence of heteroskedasticity
  • Under the Gauss-Markov assumptions (correct)
  • When errors are normally distributed

What should one expect regarding the results from an F test and an LM test with a large sample?

  • The F test will always be more powerful
  • The LM test is irrelevant in large samples
  • They should be similar (correct)
  • They will be identical

What happens to the OLS efficiency conclusion if the error term is not homoskedastic?

  • The conclusion about efficiency is invalid (correct)

What is used to choose a critical value in the LM test?

  • The $\chi^2$ distribution (correct)

What does the Central Limit Theorem imply about OLS estimators as the sample size increases?

  • They will be asymptotically normal (correct)

According to asymptotic normality, what is the distribution of the standardized OLS estimator $Z$?

  • $N(0, 1)$ (correct)

Which assumptions must hold for the OLS estimators to be asymptotically normal under the Gauss-Markov assumptions?

  • The MLR.1 to MLR.5 assumptions must be satisfied (correct)

In the context of asymptotic inference, what does $P(Z < z) \to \Phi(z)$ represent?

  • The proportion of Z values below z approaches the standard normal CDF (correct)

What does $\hat{\sigma}^2$ represent in the context of asymptotic normality?

  • A consistent estimator of the population variance (correct)

What does the asymptotic standard error for coefficient estimates depend on?

  • The sample size and constant terms (correct)

In the context of the LM statistic, what is the purpose of the auxiliary regression?

  • To test multiple exclusion restrictions (correct)

What does the Lagrange Multiplier (LM) statistic primarily rely on for inference?

  • Asymptotic normality (correct)

What does the model's null hypothesis generally state when testing with the LM statistic?

  • Certain coefficients are equal to zero (correct)

What effect does sample size (N) have on the standard errors in large sample theory?

  • Standard errors shrink as N grows (in proportion to 1/√N) (correct)

What does it mean for an estimator to be consistent?

  • The distribution of the estimator approaches the parameter value as sample size increases (correct)

Under which conditions is the OLS estimator considered to be BLUE?

  • When the Gauss-Markov assumptions MLR.1-MLR.5 hold true (correct)

What is the implication of having a variance tending to zero for an estimator?

  • The estimator will consistently provide values closer to the parameter as sample size increases (correct)

What is the purpose of taking the probability limit (plim) in establishing consistency?

  • To analyze the behavior of the estimator as sample size approaches infinity (correct)

What assumption regarding the matrix of independent variables X is made to establish the consistency of the OLS estimator?

  • The second moments of the X variables must be finite (correct)

In the context of the SLR model, how is the OLS estimator expressed?

  • As a sum of the products of deviations of X and Y from their means (correct)

What does the term 'asymptotic inference' refer to in statistical estimation?

  • Estimation of parameters as sample size approaches infinity (correct)

Which of the Gauss-Markov assumptions is critical for both unbiasedness and consistency of OLS estimators?

  • The expected value of the errors must be zero (correct)

What does the term plim w = 0 signify in the context of the document?

  • The random variable w converges in probability to zero as the sample size increases (correct)

Which of the following is NOT a condition required for unbiasedness in OLS?

  • The error term must follow a normal distribution (correct)

What does inconsistency in OLS imply in terms of sample size?

  • Inconsistency persists regardless of the amount of data added (correct)

What implication does the assumption of normality of errors have on sampling distributions?

  • It allows for deriving t and F distributions for hypothesis testing (correct)

Which statement about OLS being BLUE is correct?

  • It is guaranteed regardless of the normality assumption (correct)

Which of the following must be assumed about the variances for consistency in OLS?

  • Var($x_1$) should be less than infinity (correct)

What characteristic of a normally distributed error term is significant in a regression model?

  • It allows for the application of various statistical tests (correct)

What conclusion can be drawn if the distribution of the dependent variable is skewed?

  • The assumption of normality of errors is likely violated (correct)

What happens to the sample moments as N increases?

  • They converge in probability to their population counterparts (correct)

What does the equation $\operatorname{plim}\hat{\beta} = \beta + \operatorname{plim}\big[(\tfrac{1}{N}X'X)^{-1}\tfrac{1}{N}X'u\big]$ indicate about the OLS estimator?

  • It states that the OLS estimator converges to the true parameter in large samples (correct)

What condition must hold for Cov($x_1$, $u$) to ensure consistent estimation?

  • Cov($x_1$, $u$) must be zero (correct)

In the context of the MLR model, what does $\hat{\beta} = (X'X)^{-1}X'y$ denote?

  • The estimator derived from the relationship between predictors and response (correct)

What is the implication of $\operatorname{plim}\frac{1}{N}X'u = 0$?

  • It indicates that the predictor variables are uncorrelated with the errors (correct)

What does the term $Var(w) = E(Var(w|X)) + Var[E(w|X)]$ represent?

  • It shows the total variance of the errors in the OLS model (correct)

What is the role of OLS in estimating parameters?

  • It minimizes the sum of squared differences between observed and predicted values (correct)

Which mathematical concept is involved in the OLS estimator's calculation?

  • Matrix inversion and multiplication (correct)

Which of the following statements about variance in the context of the OLS model is true?

  • Variance of the errors should be constant for OLS assumptions (correct)

What does the notation $E[E(w|X)] = 0$ imply about the expected value of the error term?

  • The expected value of the error term is zero when conditioned on the predictors (correct)

Flashcards

Consistency of OLS Estimator

Under the Gauss-Markov assumptions (MLR.1-MLR.5), the OLS estimator (β̂j) is consistent for the true parameter (βj) for all explanatory variables (j from 1 to K).

Consistency

As the sample size (N) gets larger, the distribution of an estimator converges to the true parameter value. This means the estimator's average (mean) approaches the true parameter, and its variability diminishes.

Gauss-Markov assumptions

A set of assumptions (MLR 1-5) necessary for OLS to be both unbiased and efficient (BLUE) in linear regression models.

Probability Limit (plim)

A mathematical concept representing the value a statistic converges to as the sample size (N) approaches infinity.

Law of Large Numbers

A statistical theorem stating that the average of a large sample will approach the true population average as the sample size increases.

OLS estimator (β̂)

The estimated value of the regression coefficients in an Ordinary Least Squares regression model.

BLUE

Best Linear Unbiased Estimator. The OLS estimator is the best possible estimator among all other linear unbiased estimators.

SLR model

Simple linear regression model: a linear regression with a single explanatory variable, used here for illustrative purposes.

Consistency of OLS

The OLS estimators converge to the true population parameters as the sample size (N) increases.

plim β̂1

The probability limit of the OLS estimator of β1: the value that β̂1 converges to in probability as the sample size gets large.

plim β1

The true population parameter for β1.

β̂ = (X'X)^-1 X'y

The formula for the OLS estimator for the entire set of parameters.

MLR model (Multiple Linear Regression)

A statistical model where the dependent variable is linearly related to multiple independent variables.

Cov(x1, u) = 0

The condition where the independent variable (x1) is uncorrelated with the error term (u).

Var(x1)

The variance of the independent variable x1.

E(ui|xi) = 0

Conditional expectation of the error term given the independent variable is zero.

N

Sample size (number of observations).

plim

Probability Limit. The value a statistic approaches as the sample size increases.

LM Statistic

A test statistic for restrictions on regression coefficients (such as exclusion restrictions), computed from an auxiliary regression and relying on large-sample properties.

LM Statistic Distribution

The LM statistic follows a chi-squared distribution with q degrees of freedom.

F Test vs. LM Test

Both the F test and the LM test can be used to test exclusion restrictions. In large samples they give similar, though generally not identical, results.

Asymptotic Efficiency

OLS estimators have the smallest variance among all consistent estimators in large samples.

Gauss-Markov Assumptions

The assumptions (MLR.1-MLR.5) needed for OLS to give the best possible results (BLUE) in linear regression models, including the zero conditional mean and homoskedasticity of the errors.

Asymptotic Normality of OLS

Under specific conditions, the sampling distribution of the OLS estimators approaches a normal distribution as the sample size grows very large.

Central Limit Theorem (CLT)

The sample mean of a population with a finite variance will be approximately normally distributed for sufficiently large sample sizes, irrespective of the population's underlying distribution.

Asymptotic Standard Error

The estimated standard deviation of an estimator as the sample size gets very large.

OLS Estimator (β̂)

Estimate of a particular parameter in a linear regression model (β) obtained using the Ordinary Least Squares method.

Consistent Estimator

An estimator whose value approaches the true parameter value as the sample size grows very large.

Consistency

The OLS estimator's tendency to get closer to the true value as sample size increases.

Zero Conditional Mean

The error term's expected value is zero given all explanatory variables.

Zero Mean

The error term's expected value is zero overall.

Zero Correlation

No relationship between explanatory variables and the error term.

Asymptotic Inference

Using large sample properties to make inferences about a population.

Normality Assumption

Error terms follow a normal distribution.

Inconsistent Estimator

An estimator that does not converge to the true parameter value as sample size increases.

Probability Limit

The value a statistic approaches as the sample size tends to infinity.

Asymptotic Standard Error

The standard error of a coefficient in a regression model, used when the error term isn't normally distributed. It's an approximation for large samples.

LM Statistic

A test statistic used for testing multiple exclusion restrictions in regression models, relying on large sample properties.

LM Statistic (Auxiliary Regression)

A regression used to calculate the LM statistic. It involves regressing the residuals from a restricted model on all the original independent variables.

Restricted Model

A regression model estimated with some coefficients set to zero, i.e., with the null hypothesis imposed.

Null Hypothesis (H0)

A statement in hypothesis testing that proposes a situation of no effect or no difference, often stating coefficients are zero.

Study Notes

Unit 6: Asymptotic Theory and Properties

  • The unit covers large sample properties and asymptotic inference in econometrics.
  • Large sample properties include consistency.
  • Asymptotic inference is based on the Central Limit Theorem.

Large Sample Properties: Consistency

  • Ordinary Least Squares (OLS) is Best Linear Unbiased Estimator (BLUE) under Gauss-Markov assumptions, but not always in other cases.
  • In other cases, consistent estimators might be used.
  • Consistent estimators mean that, as the sample size (N) approaches infinity, the estimator's distribution collapses to the parameter value.
  • Consequently, the mean of the estimator approaches the parameter value, and the variance tends to zero as N tends to infinity.
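
A minimal simulation sketch (not from the original notes; the data-generating process and numbers are illustrative assumptions) of what this collapse looks like: across repeated samples, the OLS slope stays centred on the true value while its spread shrinks as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 2.0                   # true parameters (illustrative)

for n in [50, 500, 5000, 50000]:
    slopes = []
    for _ in range(200):                  # 200 repeated samples of size n
        x = rng.normal(size=n)
        u = rng.normal(size=n)            # MLR.1-MLR.5 hold by construction
        y = beta0 + beta1 * x + u
        # OLS slope = sample covariance / sample variance
        b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        slopes.append(b1)
    print(n, round(np.mean(slopes), 4), round(np.std(slopes), 4))
# The mean of the estimates stays near 2 while their standard deviation shrinks,
# i.e. the sampling distribution collapses on the true parameter (consistency).
```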

Consistency of the OLS Estimator

  • Under Gauss-Markov assumptions (MLR.1-MLR.5), the OLS estimator is consistent for each parameter.
  • The same assumptions ensuring unbiasedness also imply consistency.
  • The probability limit (plim) is used to establish consistency.
  • The second moments of the independent variables (X) must be finite.

Proving Consistency – The SLR Model

  • The OLS estimator can be written using sample moments.
  • Applying the law of large numbers, sample moments converge in probability to population counterparts as N increases.
  • This shows the OLS estimator converges to the true parameter value.
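
Written out for the SLR slope (a standard derivation, included here only to make the step explicit):

$$\hat{\beta}_1 \;=\; \beta_1 + \frac{\tfrac{1}{N}\sum_i (x_i - \bar{x})\,u_i}{\tfrac{1}{N}\sum_i (x_i - \bar{x})^2} \;\xrightarrow{\;p\;}\; \beta_1 + \frac{\operatorname{Cov}(x_1, u)}{\operatorname{Var}(x_1)} \;=\; \beta_1,$$

provided Cov(x₁, u) = 0 and 0 < Var(x₁) < ∞.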

Proving Consistency – The MLR Model

  • The OLS estimator is expressed using matrix notation.
  • Applying the law of large numbers, the estimator converges in probability to the true parameter value.
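
In matrix notation (again a standard statement of the result, not additional material):

$$\hat{\beta} \;=\; (X'X)^{-1}X'y \;=\; \beta + \Big(\tfrac{1}{N}X'X\Big)^{-1}\tfrac{1}{N}X'u \;\xrightarrow{\;p\;}\; \beta + A^{-1}\cdot 0 \;=\; \beta,$$

where A = plim (1/N)X'X is assumed finite and nonsingular, and plim (1/N)X'u = 0.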

Convergence – The Full Proof

  • The OLS estimator is presented in terms of sample moments and the population parameter.
  • The variance calculation demonstrates how the estimator's variance goes to zero as the sample size increases.
  • In conclusion, the estimator converges in mean square to the true parameter value, so its probability limit (plim) equals the true parameter.
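
For the SLR slope the variance step can be written explicitly (conditional on X):

$$\operatorname{Var}(\hat{\beta}_1 \mid X) \;=\; \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2} \;=\; \frac{\sigma^2 / N}{\tfrac{1}{N}\sum_i (x_i - \bar{x})^2} \;\longrightarrow\; 0 \quad \text{as } N \to \infty,$$

so an expected value equal to β₁ together with a vanishing variance gives convergence in mean square, which in turn implies plim β̂₁ = β₁.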

A Weaker Assumption

  • For unbiasedness, a zero conditional mean (E(u|X) = 0) of the error term was assumed.
  • Consistency only requires a zero mean of the error and zero correlation between each regressor and the error (E(u) = 0 and Cov(Xj, u) = 0).
  • If these conditions fail, OLS is inconsistent, and the inconsistency does not disappear as N grows (see the sketch below).
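
A small sketch (purely illustrative; the omitted-variable setup and coefficients are assumptions, not from the notes) showing that when Cov(x, u) ≠ 0 the estimate settles on the wrong value no matter how large N gets:

```python
import numpy as np

rng = np.random.default_rng(1)
beta1 = 2.0

for n in [100, 10_000, 1_000_000]:
    z = rng.normal(size=n)                # omitted variable
    x = z + rng.normal(size=n)            # x correlated with z, Var(x) = 2
    u = 1.5 * z + rng.normal(size=n)      # z sits in the error, so Cov(x, u) = 1.5
    y = beta1 * x + u
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    print(n, round(b1, 3))
# Estimates settle around beta1 + Cov(x, u) / Var(x) = 2 + 1.5/2 = 2.75,
# not around 2: adding more data does not remove the inconsistency.
```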

Asymptotic Inference

  • Under Classical Linear Model (CLM) assumptions, sampling distributions are normally distributed. This allows the derivation of t and F distributions for testing.
  • Normality assumption implies normal distribution of y given x's.

Asymptotic Inference (Continued)

  • Clearly skewed variables (e.g., wages, arrests) cannot be normally distributed since the normal distribution is symmetric.
  • Normality is not necessary for OLS to be BLUE, only for inference (statistical significance).
  • Using Central Limit Theorem, OLS estimator is asymptotically normally distributed.
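
A sketch of the CLT argument (the skewed error distribution and all parameter values are illustrative assumptions): even with strongly non-normal errors, the standardized OLS slope behaves approximately like a standard normal variable once N is moderately large.

```python
import numpy as np

rng = np.random.default_rng(2)
beta1, n, reps = 2.0, 400, 2000
z_vals = []
for _ in range(reps):
    x = rng.normal(size=n)
    u = rng.exponential(scale=1.0, size=n) - 1.0   # skewed, mean-zero errors
    y = beta1 * x + u
    xd = x - x.mean()
    b1 = (xd @ y) / (xd @ xd)                       # OLS slope
    resid = y - y.mean() - b1 * xd                  # OLS residuals
    sigma2_hat = (resid @ resid) / (n - 2)
    se = np.sqrt(sigma2_hat / (xd @ xd))            # usual OLS standard error
    z_vals.append((b1 - beta1) / se)

# For a standard normal variable, about 5% of draws fall outside +/- 1.96.
print(np.mean(np.abs(np.array(z_vals)) > 1.96))     # close to 0.05
```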

Asymptotic Normality - I

  • Under Gauss-Markov assumptions (MLR.1-MLR.5), the OLS estimators have asymptotic normal distributions.
  • The asymptotic variance of √N(β̂j − βj) is σ²/aj, where σ² is the error variance and aj is the probability limit of the corresponding sample second moment of the regressors (in the SLR model, plim (1/N)Σ(xi − x̄)²).
  • σ̂² is a consistent estimator of σ².
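
Stated compactly, in the same notation:

$$\sqrt{N}\,(\hat{\beta}_j - \beta_j) \;\xrightarrow{\;d\;}\; N\!\big(0,\; \sigma^2 / a_j\big), \qquad \frac{\hat{\beta}_j - \beta_j}{\operatorname{se}(\hat{\beta}_j)} \;\xrightarrow{\;d\;}\; N(0, 1),$$

where σ² may be replaced by σ̂² because the latter is consistent.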

Asymptotic Normality - II (More Generally)

  • Asymptotic normality holds with independent, identically distributed (iid) errors and finite variance.
  • The law of large numbers (for probability limits) and the Central Limit Theorem (for asymptotic normality) are used to derive these asymptotic results.

Asymptotic Normality - III

  • Since the t-distribution approximates normal distribution with large degrees of freedom, t-tests can be used asymptotically.
  • Homoskedasticity is still required for the asymptotic t test, even though normality is not.

Asymptotic Standard Errors

  • If the error term is not normally distributed, the standard error can be referred to as an asymptotic standard error.
  • The formula for the asymptotic standard error shows that it shrinks in proportion to 1/√N as the sample size grows (written out below).
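
In the standard textbook form (SSTj is the total sample variation in xj, Rj² is the R-squared from regressing xj on the other regressors, and cj is just a constant that does not depend on N):

$$\operatorname{se}(\hat{\beta}_j) \;=\; \frac{\hat{\sigma}}{\sqrt{SST_j\,(1 - R_j^2)}} \;\approx\; \frac{c_j}{\sqrt{N}},$$

since SSTj grows roughly in proportion to N.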

Lagrange Multiplier Statistic - I, II and III

  • The Lagrange multiplier (LM) statistic provides an alternative to the F-statistic for testing multiple exclusion restrictions.
  • It's based on an "auxiliary regression" and is sometimes called an NR² statistic.
  • The LM statistic can be calculated by running a restricted model, finding residuals from that model, and then regressing these residuals on all independent variables in the original model.
  • Results are similar to those from F tests in large samples; however, the two statistics are not identical, even when testing a single exclusion restriction (see the sketch below).
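
A sketch of the NR² recipe using statsmodels (the data-generating step and variable names are made-up illustrations; the point is the sequence of regressions):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
n = 500
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x1 + rng.normal(size=n)        # x2 and x3 truly irrelevant here

# H0: the coefficients on x2 and x3 are both zero (q = 2 restrictions)
X_restricted = sm.add_constant(np.column_stack([x1]))
X_full = sm.add_constant(np.column_stack([x1, x2, x3]))

# 1) Estimate the restricted model and keep its residuals
u_tilde = sm.OLS(y, X_restricted).fit().resid
# 2) Regress those residuals on ALL regressors of the unrestricted model
aux = sm.OLS(u_tilde, X_full).fit()
# 3) LM = N * R^2 from the auxiliary regression, compared with chi2(q)
LM = n * aux.rsquared
p_value = stats.chi2.sf(LM, df=2)
print(LM, p_value)
```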

Asymptotic Efficiency

  • Other estimators may also be consistent; however, under the Gauss-Markov assumptions, OLS has the smallest asymptotic variance.
  • The conclusion of OLS' asymptotic efficiency depends on the assumptions, particularly homoskedasticity.
