Questions and Answers
What does a positive autocorrelation in the residuals indicate?
- The residuals are independent of each other.
- There is a constant variance in the residuals.
- The residuals show a predictable pattern that can be utilized. (correct)
- The residuals are randomly distributed over time.
Which value represents the change in $y$ (Δyt) for 1990M02?
- -4.0
- 2.3
- 4.0 (correct)
- -1.7
In the context of time series, what does yt-1 represent?
- The value of y one time period before t. (correct)
- The maximum value of y in the dataset.
- The current value of y at time t.
- The average of all previous y values.
Which condition is violated if there is autocorrelation present in the residuals?
Which is the correct interpretation of the change in yt from 1989M10 to 1989M11?
What is the null hypothesis in the context of the Chow Test for this example?
How is the test statistic calculated in the Chow Test?
What does a test statistic value greater than the critical value from the F-distribution signify?
In the Chow Test example, what is the value of $T$ for the whole sample period from 1981 to 1992?
What is the role of $k$ in the formula for the test statistic in the Chow Test?
What is the null hypothesis in the Goldfeld-Quandt test for heteroscedasticity?
What are the null hypotheses in the Breusch-Godfrey Test for autocorrelation?
What does the test statistic from the Breusch-Godfrey Test resemble?
Which of the following describes the calculation of the GQ test statistic?
What is a consequence of ignoring autocorrelation in regression analysis?
What is the distribution of the GQ test statistic under the null hypothesis of homoscedasticity?
Which aspect of the Goldfeld-Quandt test is potentially problematic?
Which method can be used to correct for the known form of autocorrelation?
What does it mean for an estimator to be referred to as BLUE?
In White’s test for heteroscedasticity, what is done after obtaining the residuals?
Which of the following is a consequence of using OLS in the presence of heteroscedasticity?
What characterizes perfect multicollinearity in regression models?
How is the chi-squared statistic related to White’s test for heteroscedasticity?
How can we remove heteroscedasticity if its cause is known?
What typically happens to R² when near multicollinearity is present?
What assumption makes White’s test preferable for detecting heteroscedasticity?
What is a common problem that arises from high standard errors of individual coefficients due to multicollinearity?
What happens to the estimated standard errors if heteroscedasticity is present?
What does the term 'heteroscedasticity' refer to in regression analysis?
What is the relationship between the error variance and another variable in the context of heteroscedasticity?
What should be considered when applying a GLS procedure to correct autocorrelation?
What will be true about the disturbances in a regression equation after applying GLS?
Which assumption is NOT necessary for OLS to be considered BLUE?
What characteristic of OLS allows it to provide unbiased coefficient estimates?
What is a potential issue when the regression becomes very sensitive to small changes in specification?
Which method can be used to measure the extent of multicollinearity among variables?
Which of the following is NOT a suggested solution for multicollinearity problems?
What does the Ramsey’s RESET test help to identify?
If the RESET test statistic exceeds the critical value of a chi-squared distribution, what should be concluded?
Which of the following is true about high correlation between y and predictor variables?
What is one common approach to alleviate multicollinearity issues?
What higher-order terms does the RESET test incorporate for testing mis-specification?
Flashcards
Goldfeld-Quandt (GQ) Test
A statistical test used to determine the presence of heteroscedasticity in a regression model. It involves splitting the sample into two sub-samples and comparing the variances of the residuals from each sub-sample.
GQ Test Statistic (GQ)
The ratio of the two residual variances from the two sub-samples in the Goldfeld-Quandt test. The larger variance should be placed in the numerator.
Null Hypothesis in GQ Test
The null hypothesis in the Goldfeld-Quandt test, stating that the variances of the disturbances in the regression model are equal across different values of the independent variable.
Alternative Hypothesis in GQ Test
White's Test
Auxiliary Regression in White's Test
Test Statistic in White's Test
Degrees of Freedom in White's Test
What is a lagged value?
What is autocorrelation?
What is positive autocorrelation?
Why is autocorrelation a problem?
How can we detect autocorrelation?
Chi-Square (χ²) Test
Homoscedasticity
Heteroscedasticity
Ordinary Least Squares (OLS)
BLUE Estimator
Consequences of OLS with Heteroscedasticity
Generalized Least Squares (GLS)
GLS Example
Chow Test
Chow Test Statistic
Chow Test: Null Hypothesis
Chow Test: Alternative Hypothesis
Chow Test: Degrees of Freedom
Breusch-Godfrey Test
No Autocorrelation
Breusch-Godfrey Test Statistic
Consequences of Autocorrelation
Generalized Least Squares (GLS) procedure
Multicollinearity
Perfect Multicollinearity
Near Multicollinearity
Correlation Matrix (for Multicollinearity)
Variance Inflation Factor (VIF)
Ramsey's RESET Test
Dropping Variables (Multicollinearity)
Ratio Transformation (Multicollinearity)
Collecting More Data (Multicollinearity)
Autocorrelation
Study Notes
Classical Linear Regression Model Assumptions and Diagnostics
- Classical linear regression model assumes certain characteristics of the error term.
- The error term's expected value is zero (E(εt) = 0).
- The variance of the error term is constant (Var(εt) = σ²).
- Error terms are uncorrelated (cov(εi, εj) = 0 for i ≠ j).
- The X matrix is non-stochastic or fixed in repeated samples.
- Error terms follow a normal distribution (εt ~ N(0, σ²)).
Violation of Classical Linear Regression Model Assumptions
- Violations of these assumptions can lead to incorrect coefficient estimates, inaccurate standard errors, and inappropriate test statistics.
- Multiple violations of these assumptions are possible.
Investigating Violations of CLRM Assumptions
- How to test for violations.
- The likely causes of violations.
- The consequences of violations.
Assumption 1: E(εt) = 0
- The mean of the error terms should be zero.
- Residuals are used to test this assumption as the error terms cannot be observed directly.
- The mean of residuals will be zero when there is a constant (intercept) term in the regression model.
Assumption 2: Var(εt) = σ²
- Constant variance of error terms, known as homoscedasticity.
- Non-constant variance is heteroscedasticity.
- Detection methods like graphical or formal tests (e.g. Goldfeld-Quandt test and White's test) can be used.
Detection of Heteroscedasticity: The Goldfeld-Quandt Test
- Splits the dataset into two sub-samples.
- Calculates residual variances on each sub-sample.
- The ratio of the variances forms the test statistic.
- This statistic is F(T₁-k, T₂-k) distributed under the null hypothesis of equal variances.
- Choice of split point is arbitrary, affecting the test.
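The steps above can be sketched numerically. Below is a minimal NumPy illustration on simulated data whose error variance grows with the regressor (all names and the simulated series are invented for the example; this is a sketch, not a full implementation with p-values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data that is heteroscedastic by construction:
# the noise scale rises with x.
T = 200
x = np.linspace(1, 10, T)
y = 2.0 + 0.5 * x + rng.normal(0, x)

def rss(y, x):
    """Residual sum of squares from an OLS fit of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Split at the midpoint (the split point is an arbitrary choice, as noted above).
half = T // 2
k = 2  # parameters estimated in each sub-sample regression
s1 = rss(y[:half], x[:half]) / (half - k)   # residual variance, first sub-sample
s2 = rss(y[half:], x[half:]) / (half - k)   # residual variance, second sub-sample

# Larger variance in the numerator; GQ ~ F(T1-k, T2-k) under the null of equal variances.
gq = max(s1, s2) / min(s1, s2)
print(gq)
```

With this data the ratio comes out well above 1, consistent with the built-in heteroscedasticity; in practice the statistic would be compared against an F critical value.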
Detection of Heteroscedasticity: The White's Test
- Makes few assumptions about the likely form of the heteroscedasticity.
- Builds an auxiliary regression using squared and cross-products of predictor variables.
- The test statistic is the sample size multiplied by the R-squared from the auxiliary regression (TR²).
- Under the null, this statistic follows a χ²(m) distribution, where m is the number of regressors in the auxiliary regression.
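The auxiliary-regression mechanics can be sketched as follows, again on simulated data (variable names are invented; the heteroscedasticity is built in through the x1² term so that the test should reject):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two regressors; the error scale depends on x1^2, so the data are heteroscedastic.
T = 300
x1 = rng.normal(size=T)
x2 = rng.normal(size=T)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(size=T) * (0.5 + x1**2)

def ols_resid_r2(y, X):
    """OLS residuals and R-squared of a regression of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return resid, r2

# Step 1: residuals from the original regression.
X = np.column_stack([np.ones(T), x1, x2])
resid, _ = ols_resid_r2(y, X)

# Step 2: auxiliary regression of squared residuals on levels, squares,
# and cross-products of the regressors.
Z = np.column_stack([np.ones(T), x1, x2, x1**2, x2**2, x1 * x2])
_, r2_aux = ols_resid_r2(resid**2, Z)

# Step 3: TR^2 ~ chi-squared(m), m = 5 auxiliary regressors (excluding the constant).
m = 5
white_stat = T * r2_aux
print(white_stat)
```

The statistic would then be compared with the χ²(5) critical value (11.07 at the 5% level).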
Consequences of Using OLS in the Presence of Heteroscedasticity
- OLS still provides unbiased coefficient estimates, but it is no longer the best linear unbiased estimator (BLUE).
- Standard errors calculated via the traditional OLS formula are unreliable.
- Inference made from conclusions based on OLS is potentially incorrect.
- R-squared might be inflated for positively correlated residuals.
Remedies for Heteroscedasticity
- If the form of heteroscedasticity is known, generalized least squares (GLS) can correct for it.
- Dividing the equation through by the variable driving the error variance can yield a homoscedastic model.
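A small sketch of the dividing-through idea, assuming var(εₜ) = σ²zₜ² for some observable zₜ (the data and names are simulated for illustration): dividing every term by zₜ makes the transformed error homoscedastic, and OLS on the transformed equation is the GLS estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose var(eps_t) = sigma^2 * z_t^2 for an observable z_t (the assumed known form).
T = 500
z = rng.uniform(1, 5, size=T)
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + rng.normal(size=T) * z  # true b0 = 1, b1 = 2

# Divide every term by z_t: y/z = b0*(1/z) + b1*(x/z) + u, where u = eps/z
# now has constant variance.
y_w = y / z
X_w = np.column_stack([1 / z, x / z])
beta, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
print(beta)  # estimates of (b0, b1), close to (1, 2)
```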
Autocorrelation
- Models assume errors are uncorrelated (cov(εᵢ, εⱼ) = 0 for i ≠ j).
- Autocorrelation (serial correlation) occurs when errors at one period (εt) are correlated with error terms of a past period (εt-1).
- Visual inspection of the residuals plot can indicate autocorrelation.
- Positive autocorrelation produces cyclical residual patterns.
- Negative autocorrelation produces alternating residual patterns.
Detecting Autocorrelation: The Durbin-Watson Test
- Tests for first-order autocorrelation.
- The test statistic (DW) ranges from 0 to 4.
- Values near 2 suggest little evidence of autocorrelation.
- Critical values (d_L and d_U) define rejection and non-rejection regions for the null hypothesis of no autocorrelation.
- Values of DW falling between the critical values are inconclusive: the null can be neither rejected nor not rejected.
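The DW statistic itself is simple to compute; the sketch below (simulated residual series, invented names) contrasts white-noise residuals with positively autocorrelated ones:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); approximately 2*(1 - rho_hat)."""
    d = np.diff(resid)
    return (d @ d) / (resid @ resid)

rng = np.random.default_rng(3)
T = 400

# White-noise residuals: DW should sit near 2.
e_white = rng.normal(size=T)

# Positively autocorrelated residuals (AR(1) with rho = 0.8): DW well below 2.
e_ar = np.zeros(T)
for t in range(1, T):
    e_ar[t] = 0.8 * e_ar[t - 1] + rng.normal()

dw_white = durbin_watson(e_white)
dw_ar = durbin_watson(e_ar)
print(dw_white, dw_ar)
```

The first value lands near 2, the second well below it, matching the 2(1 − ρ̂) approximation.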
Another Test for Autocorrelation: The Breusch-Godfrey Test
- More general test for rth-order autocorrelation.
- The test runs a regression with residuals and lagged residuals.
- The test statistic (derived from the regression's R²) follows a χ²(r) distribution.
- Values exceeding the critical value lead to rejection of the null hypothesis of no autocorrelation.
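The regression-of-residuals step can be sketched as below on simulated AR(1) errors (names are invented; the (T − r)R² form of the statistic is one common variant):

```python
import numpy as np

rng = np.random.default_rng(4)
T, r = 300, 2  # sample size and autocorrelation order tested

# Data with AR(1) errors, so the null of no autocorrelation should be rejected.
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.7 * eps[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + eps

def ols_resid(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Step 1: residuals from the original regression.
X = np.column_stack([np.ones(T), x])
u = ols_resid(y, X)

# Step 2: regress residuals on X plus r lagged residuals
# (the first r observations are dropped to align the lags).
U_lags = np.column_stack([u[r - i: T - i] for i in range(1, r + 1)])
Xa = np.column_stack([X[r:], U_lags])
ua = u[r:]
v = ols_resid(ua, Xa)
r2 = 1 - (v @ v) / ((ua - ua.mean()) @ (ua - ua.mean()))

# Step 3: (T - r) * R^2 ~ chi-squared(r) under the null of no autocorrelation.
bg_stat = (T - r) * r2
print(bg_stat)
```

Here the statistic far exceeds the χ²(2) critical value (5.99 at the 5% level), so the null would be rejected.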
Consequences of Ignoring Autocorrelation
- Coefficient estimates remain unbiased but are inefficient (i.e., OLS is no longer BLUE, even in large samples).
- Incorrect standard errors can lead to invalid inferences.
- R-squared values might be inflated (especially for positive autocorrelation).
Remedies for Autocorrelation
- If the form of the autocorrelation is known, a GLS procedure (such as Cochrane-Orcutt) can account for it.
- However, these procedures rely on assumptions of the correlation's form.
- In some cases, modelling adjustments may be needed rather than simply correcting for autocorrelation.
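The Cochrane-Orcutt idea (quasi-differencing with an estimated ρ, iterated) can be sketched as follows on simulated AR(1)-error data; names, the fixed iteration count, and the simulated series are all choices made for the example:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.6 * eps[t - 1] + rng.normal()  # true rho = 0.6
y = 1.0 + 2.0 * x + eps                       # true b0 = 1, b1 = 2

def ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

X = np.column_stack([np.ones(T), x])
beta, u = ols(y, X)
for _ in range(10):  # iterate until rho stabilizes (fixed count kept simple here)
    # AR(1) coefficient of the current residuals.
    rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
    # Quasi-difference: y* = y_t - rho*y_{t-1}; intercept column becomes (1 - rho).
    ys = y[1:] - rho * y[:-1]
    Xs = np.column_stack([(1 - rho) * np.ones(T - 1), x[1:] - rho * x[:-1]])
    beta, _ = ols(ys, Xs)
    u = y - X @ beta  # residuals in the original (undifferenced) equation
print(rho, beta)
```

The recovered ρ and slope sit close to the true 0.6 and 2.0, illustrating why the procedure only works when the assumed AR(1) form is right.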
Multicollinearity
- Explanatory variables are highly correlated.
- Perfect multicollinearity prohibits estimating all coefficients.
- Near multicollinearity results in high R-squared but unreliable coefficient standard errors and potentially erroneous inferences.
Measuring Multicollinearity
- Method 1: Correlation matrix (looking at correlations between variables).
- Method 2: Variance Inflation Factor (VIF).
- High correlation between the dependent variable and an explanatory variable is not a violation of the model; multicollinearity concerns correlation among the explanatory variables themselves.
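The VIF computation can be sketched directly from its definition, VIFⱼ = 1/(1 − Rⱼ²), where Rⱼ² comes from regressing xⱼ on the other explanatory variables (data and names below are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
x1 = rng.normal(size=T)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=T)  # nearly collinear with x1 by construction
x3 = rng.normal(size=T)                    # unrelated to the others

def vif(j, X):
    """VIF_j = 1 / (1 - R_j^2), R_j^2 from regressing x_j on the other columns."""
    n, p = X.shape
    others = np.column_stack([np.ones(n)] + [X[:, i] for i in range(p) if i != j])
    target = X[:, j]
    beta, *_ = np.linalg.lstsq(others, target, rcond=None)
    resid = target - others @ beta
    r2 = 1 - (resid @ resid) / ((target - target.mean()) @ (target - target.mean()))
    return 1 / (1 - r2)

X = np.column_stack([x1, x2, x3])
print([round(vif(j, X), 1) for j in range(3)])
```

The first two VIFs are large (the usual rule of thumb flags values above about 10), while the VIF for the unrelated x3 stays near 1.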
Solutions to the Problem of Multicollinearity
- Traditional remedies (ridge regression, principal components) can introduce problems of their own.
- If model validity is otherwise sound, multicollinearity might be ignored.
- Transformations of highly correlated variables, collecting more data, or shifting to a higher frequency can potentially resolve issues.
Adopting the Wrong Functional Form
- A linear form is not always appropriate; the true relationship may be multiplicative, for example.
- Ramsey's RESET test is a general test to assess model specification.
- Adding higher-order terms of fitted values to the model can be a way to detect misspecifications.
- The test statistic, formed from that regression's R² (TR²), follows a χ²(p-1) distribution.
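One common layout of the RESET auxiliary regression (residuals on the original regressors plus powers of the fitted values) can be sketched as below; the true model is quadratic while the fitted one is linear, so the test should flag mis-specification. All names and data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(7)
T, p = 300, 3  # p = highest power of fitted values used

# True relationship is quadratic, but the fitted model is linear: mis-specified.
x = rng.uniform(-2, 2, size=T)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(size=T)

def ols(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

X = np.column_stack([np.ones(T), x])
beta, u = ols(y, X)
yhat = X @ beta

# Auxiliary regression: residuals on the original regressors plus yhat^2, ..., yhat^p.
Z = np.column_stack([X] + [yhat**i for i in range(2, p + 1)])
_, v = ols(u, Z)
r2 = 1 - (v @ v) / ((u - u.mean()) @ (u - u.mean()))

# TR^2 ~ chi-squared(p - 1) under correct specification.
reset_stat = T * r2
print(reset_stat)
```

The statistic far exceeds the χ²(2) critical value (5.99 at 5%), correctly signalling the neglected nonlinearity.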
Testing the Normality Assumption
- Normality is inherent to many hypothesis testing procedures.
- Bera-Jarque test tests for departures from normality in residuals (by testing whether skewness and excess kurtosis are jointly zero).
- Skewness and excess kurtosis are standardized third and fourth moments of a distribution.
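The Bera-Jarque statistic, BJ = T·(S²/6 + (K − 3)²/24), can be computed directly from the standardized moments; the sketch below (simulated residuals) contrasts normal with clearly non-normal residuals:

```python
import numpy as np

def bera_jarque(resid):
    """BJ = T * (S^2/6 + (K-3)^2/24); ~ chi-squared(2) under normality."""
    T = resid.size
    e = resid - resid.mean()
    s2 = (e @ e) / T
    S = np.mean(e**3) / s2**1.5   # standardized third moment (skewness)
    K = np.mean(e**4) / s2**2     # standardized fourth moment (kurtosis; 3 for a normal)
    return T * (S**2 / 6 + (K - 3) ** 2 / 24)

rng = np.random.default_rng(8)
normal_resid = rng.normal(size=1000)
skewed_resid = rng.exponential(size=1000)  # clearly non-normal (skewness 2)

bj_normal = bera_jarque(normal_resid)
bj_skewed = bera_jarque(skewed_resid)
print(bj_normal, bj_skewed)
```

Only the second value exceeds the χ²(2) critical values, so normality would be rejected for the exponential residuals alone.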
What if Non-Normality is Detected?
- A method that does not assume normality may be applied.
- One solution involves dummy variables to account for extreme residuals.
Omission of an Important Variable or Inclusion of an Irrelevant Variable
- Omitting a relevant variable generally leads to biased and inconsistent coefficient estimates; including an irrelevant variable leaves estimates unbiased, though inefficient.
- The bias from an omitted relevant variable disappears only if it is uncorrelated with every included variable.
Parameter Stability Tests
- Tests whether model parameters remain constant across different data periods.
- Splitting data into sub-samples and comparing the residual sum of squares (RSS) across models is used as a test.
- Chow test is an analysis of variance test that examines stability across sub-sample regressions.
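The Chow test's RSS comparison can be sketched as follows, on simulated data with a deliberate break in the slope at the midpoint (all names and values are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(9)
T1, T2, k = 60, 60, 2  # sub-sample sizes and number of parameters per regression
T = T1 + T2

# A structural break at the midpoint: the slope changes from 1.0 to 3.0.
x = rng.normal(size=T)
y = np.where(np.arange(T) < T1,
             1.0 + 1.0 * x,
             1.0 + 3.0 * x) + rng.normal(size=T)

def rss(y, x):
    """Residual sum of squares from an OLS fit of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return e @ e

rss_pooled = rss(y, x)                              # whole-sample regression
rss_split = rss(y[:T1], x[:T1]) + rss(y[T1:], x[T1:])  # sum over sub-samples

# Chow F-statistic ~ F(k, T - 2k) under the null of stable parameters.
chow = ((rss_pooled - rss_split) / k) / (rss_split / (T - 2 * k))
print(chow)
```

The statistic comfortably exceeds the F(2, 116) critical value (about 3.07 at 5%), so parameter stability would be rejected, as the built-in break requires.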