Questions and Answers
What does a positive autocorrelation in the residuals indicate?
Which value represents the change in $y$ ($\Delta y_t$) for 1990M02?
In the context of time series, what does $y_{t-1}$ represent?
Which condition is violated if there is autocorrelation present in the residuals?
Which is the correct interpretation of the change in $y_t$ from 1989M10 to 1989M11?
What is the null hypothesis in the context of the Chow Test for this example?
How is the test statistic calculated in the Chow Test?
What does a test statistic value greater than the critical value from the F-distribution signify?
In the Chow Test example, what is the value of $T$ for the whole sample period from 1981 to 1992?
What is the role of $k$ in the formula for the test statistic in the Chow Test?
What is the null hypothesis in the Goldfeld-Quandt test for heteroscedasticity?
What are the null hypotheses in the Breusch-Godfrey Test for autocorrelation?
What does the test statistic from the Breusch-Godfrey Test resemble?
Which of the following describes the calculation of the GQ test statistic?
What is a consequence of ignoring autocorrelation in regression analysis?
What is the distribution of the GQ test statistic under the null hypothesis of homoscedasticity?
Which aspect of the Goldfeld-Quandt test is potentially problematic?
Which method can be used to correct for the known form of autocorrelation?
What does it mean for an estimator to be referred to as BLUE?
In White’s test for heteroscedasticity, what is done after obtaining the residuals?
Which of the following is a consequence of using OLS in the presence of heteroscedasticity?
What characterizes perfect multicollinearity in regression models?
How is the chi-squared statistic related to White’s test for heteroscedasticity?
How can we remove heteroscedasticity if its cause is known?
What typically happens to R² when near multicollinearity is present?
What assumption makes White’s test preferable for detecting heteroscedasticity?
What is a common problem that arises from high standard errors of individual coefficients due to multicollinearity?
What happens to the estimated standard errors if heteroscedasticity is present?
What does the term 'heteroscedasticity' refer to in regression analysis?
What is the relationship between the error variance and another variable in the context of heteroscedasticity?
What should be considered when applying a GLS procedure to correct autocorrelation?
What will be true about the disturbances in a regression equation after applying GLS?
Which assumption is NOT necessary for OLS to be considered BLUE?
What characteristic of OLS allows it to provide unbiased coefficient estimates?
What is a potential issue when the regression becomes very sensitive to small changes in specification?
Which method can be used to measure the extent of multicollinearity among variables?
Which of the following is NOT a suggested solution for multicollinearity problems?
What does Ramsey’s RESET test help to identify?
If the RESET test statistic exceeds the critical value of a chi-squared distribution, what should be concluded?
Which of the following is true about high correlation between y and predictor variables?
What is one common approach to alleviate multicollinearity issues?
What higher-order terms does the RESET test incorporate for testing mis-specification?
Study Notes
Classical Linear Regression Model Assumptions and Diagnostics
- Classical linear regression model assumes certain characteristics of the error term.
- The error term's expected value is zero (E(εt) = 0).
- The variance of the error term is constant (Var(εt) = σ²).
- Error terms are uncorrelated (cov(εi, εj) = 0 for i ≠ j).
- The X matrix is non-stochastic or fixed in repeated samples.
- Error terms follow a normal distribution (εt ~ N(0, σ²)).
Violation of Classical Linear Regression Model Assumptions
- Violations of these assumptions can lead to incorrect coefficient estimates, inaccurate standard errors, and inappropriate test statistics.
- Multiple violations of these assumptions are possible.
Investigating Violations of CLRM Assumptions
- Studying how to test for violations
- Identifying causes of violations
- Defining the consequences of violations.
Assumption 1: E(εt) = 0
- The mean of the error terms should be zero.
- Residuals are used to test this assumption as the error terms cannot be observed directly.
- The mean of residuals will be zero when there is a constant (intercept) term in the regression model.
Assumption 2: Var(εt) = σ²
- Constant variance of error terms, known as homoscedasticity.
- Non-constant variance is heteroscedasticity.
- Detection methods like graphical or formal tests (e.g. Goldfeld-Quandt test and White's test) can be used.
Detection of Heteroscedasticity: The Goldfeld-Quandt Test
- Splits the dataset into two sub-samples.
- Calculates residual variances on each sub-sample.
- The ratio of the variances forms the test statistic.
- This statistic is F(T₁-k, T₂-k) distributed under the null hypothesis of equal variances.
- The choice of where to split the sample is arbitrary and can affect the outcome of the test.
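The mechanics of the GQ statistic can be sketched in a few lines of Python. This is an illustrative sketch, not a full implementation: `resid1` and `resid2` are hypothetical residual series assumed to come from OLS fits on the two sub-samples, and `k` is the number of regressors in each.

```python
def residual_variance(residuals, k):
    """Unbiased residual variance: RSS / (T - k) for T observations
    and k estimated parameters."""
    rss = sum(e * e for e in residuals)
    return rss / (len(residuals) - k)

def goldfeld_quandt(resid1, resid2, k):
    """GQ statistic: ratio of the two sub-sample residual variances,
    with the larger variance in the numerator. Under the null of
    equal variances it is F(T1 - k, T2 - k) distributed."""
    s1 = residual_variance(resid1, k)
    s2 = residual_variance(resid2, k)
    return max(s1, s2) / min(s1, s2)
```

A ratio far above 1 (relative to the F critical value) points towards heteroscedasticity.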
Detection of Heteroscedasticity: White's Test
- Makes very few assumptions about the form the heteroscedasticity may take.
- Regresses the squared residuals on the original explanatory variables, their squares, and their cross-products (the auxiliary regression).
- The test statistic is the sample size multiplied by the R² from the auxiliary regression (TR²).
- Under the null of homoscedasticity, TR² follows a χ²(m) distribution, where m is the number of regressors in the auxiliary regression (excluding the constant).
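The set of auxiliary regressors can be enumerated mechanically. A minimal sketch, assuming two hypothetical regressors named x2 and x3 (the constant is handled separately):

```python
from itertools import combinations

def white_auxiliary_terms(x_names):
    """Terms for White's auxiliary regression: the original variables,
    their squares, and all pairwise cross-products."""
    terms = list(x_names)
    terms += [f"{x}^2" for x in x_names]
    terms += [f"{a}*{b}" for a, b in combinations(x_names, 2)]
    return terms

# With two regressors there are 2 + 2 + 1 = 5 auxiliary terms,
# so TR² would be compared against a χ²(5) critical value.
```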
Consequences of Using OLS in the Presence of Heteroscedasticity
- OLS estimation provides unbiased coefficient estimates but is no longer Best Linear Unbiased Estimator (BLUE).
- Standard errors calculated via the traditional OLS formula are unreliable.
- Inferences drawn from these standard errors are therefore potentially misleading.
Remedies for Heteroscedasticity
- If the form of the heteroscedasticity is known, generalized least squares (GLS) can correct for it.
- Dividing the regression through by the variable driving the variance (e.g. by z_t when var(ε_t) = σ²z_t²) yields a homoscedastic model.
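As an illustration of the weighting idea, suppose var(ε_t) = σ²z_t² with z_t known — a hypothetical setup. Dividing every observation through by z_t gives a transformed model with constant error variance:

```python
def weight_by_z(y, X, z):
    """Divide each observation through by z_t, assuming
    var(e_t) = sigma^2 * z_t^2. X is a list of observation rows;
    after the transformation the error variance is constant."""
    y_star = [yt / zt for yt, zt in zip(y, z)]
    X_star = [[xj / zt for xj in row] for row, zt in zip(X, z)]
    return y_star, X_star
```

Note that the original intercept column becomes 1/z_t in the transformed regression.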
Autocorrelation
- The model assumes the errors are uncorrelated (cov(εᵢ, εⱼ) = 0 for i ≠ j).
- Autocorrelation (serial correlation) occurs when the error in one period (εt) is correlated with errors from previous periods (e.g. εt-1).
- Visual inspection of a plot of the residuals over time can indicate autocorrelation.
- Positive autocorrelation produces cyclical residual patterns (runs of residuals with the same sign).
- Negative autocorrelation produces alternating patterns (residuals flipping sign from one period to the next).
Detecting Autocorrelation: The Durbin-Watson Test
- Tests for first-order autocorrelation.
- The test statistic (DW) ranges from 0 to 4.
- Values near 2 suggest little evidence of autocorrelation.
- Critical values (d_L and d_U) define the regions for rejecting or not rejecting the null hypothesis of no autocorrelation.
- Intermediate values of DW fall in inconclusive regions, where the null can be neither rejected nor not rejected.
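The statistic itself is simple to compute from the residuals — a sketch, assuming `residuals` is a hypothetical series of OLS residuals:

```python
def durbin_watson(residuals):
    """DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2. DW is roughly
    2(1 - rho_hat), so values near 2 suggest little first-order
    autocorrelation, values near 0 positive, and near 4 negative."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den
```

For a perfectly alternating residual series the statistic lies well above 2, consistent with negative autocorrelation.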
Another Test for Autocorrelation: The Breusch-Godfrey Test
- A more general test, for autocorrelation up to rth order.
- The test regresses the residuals on the original explanatory variables and r lags of the residuals.
- The test statistic, (T − r)R² from this auxiliary regression, follows a χ²(r) distribution.
- Values exceeding the critical value lead to rejection of the null hypothesis of no autocorrelation.
Consequences of Ignoring Autocorrelation
- Coefficient estimates remain unbiased but are inefficient, i.e. OLS is no longer BLUE, even in large samples.
- Incorrect standard errors can lead to invalid inferences.
- R-squared values might be inflated (especially for positive autocorrelation).
Remedies for Autocorrelation
- If the form is known, GLS (like Cochrane-Orcutt) can account for it.
- However, these procedures rely on assumptions of the correlation's form.
- In some cases, modelling adjustments may be needed rather than simply correcting for autocorrelation.
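The core of a Cochrane-Orcutt-type transformation is quasi-differencing. A minimal sketch, assuming first-order autocorrelation εt = ρεt-1 + vt with a hypothetical known ρ (in practice ρ is estimated from the residuals and the procedure iterates):

```python
def quasi_difference(series, rho):
    """Quasi-differenced series y*_t = y_t - rho * y_{t-1}.
    Applied to y and to every regressor, this removes AR(1)
    autocorrelation from the errors; the first observation is lost."""
    return [series[t] - rho * series[t - 1] for t in range(1, len(series))]
```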
Multicollinearity
- Explanatory variables are highly correlated.
- Perfect multicollinearity prohibits estimating all coefficients.
- Near multicollinearity results in high R-squared but unreliable coefficient standard errors and potentially erroneous inferences.
Measuring Multicollinearity
- Method 1: examine the correlation matrix of the explanatory variables.
- Method 2: compute the Variance Inflation Factor (VIF) for each regressor.
- High correlation between the dependent variable and an explanatory variable is not multicollinearity and is not a violation of the model.
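Both measures are straightforward to compute. A sketch in plain Python: `pearson_corr` fills in one entry of the correlation matrix, and `vif` assumes the R² from regressing x_j on the other regressors is already available (a hypothetical input):

```python
from math import sqrt

def pearson_corr(x, y):
    """Sample Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def vif(r_squared_j):
    """Variance inflation factor for regressor j:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    x_j on the remaining explanatory variables."""
    return 1.0 / (1.0 - r_squared_j)
```

A common rule of thumb treats VIF values above 10 (R² above 0.9) as a sign of serious multicollinearity.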
Solutions to the Problem of Multicollinearity
- Traditional methods (ridge regression, principal components) can worsen issues.
- If model validity is otherwise sound, multicollinearity might be ignored.
- Transformations of highly correlated variables, collecting more data, or shifting to a higher frequency can potentially resolve issues.
Adopting the Wrong Functional Form
- A linear functional form is not always appropriate (the true relationship may, for example, be multiplicative).
- Ramsey's RESET test is a general test of model specification.
- It adds higher-order terms of the fitted values (ŷ², ŷ³, ..., ŷᵖ) to the regression to detect mis-specification.
- The test statistic, TR² from this auxiliary regression, follows a χ²(p − 1) distribution.
Testing the Normality Assumption
- Normality is inherent to many hypothesis testing procedures.
- The Bera-Jarque test checks for departures from normality in the residuals by testing whether their skewness and excess kurtosis are jointly zero.
- Skewness and kurtosis are the standardized third and fourth moments of a distribution; excess kurtosis is the kurtosis minus the value of 3 implied by the normal distribution.
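The statistic can be computed directly from the residual moments. A sketch, assuming `residuals` is a hypothetical OLS residual series:

```python
def bera_jarque(residuals):
    """BJ statistic W = T * (b1^2 / 6 + (b2 - 3)^2 / 24), where b1 is
    the skewness and b2 the kurtosis of the residuals. Under the null
    of normality, W is asymptotically chi-squared with 2 degrees of
    freedom."""
    t = len(residuals)
    mean = sum(residuals) / t
    m2 = sum((e - mean) ** 2 for e in residuals) / t
    m3 = sum((e - mean) ** 3 for e in residuals) / t
    m4 = sum((e - mean) ** 4 for e in residuals) / t
    b1 = m3 / m2 ** 1.5   # skewness (0 under normality)
    b2 = m4 / m2 ** 2     # kurtosis (3 under normality)
    return t * (b1 ** 2 / 6 + (b2 - 3) ** 2 / 24)
```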
What if Non-Normality is Detected?
- An estimation method that does not assume normality may be used instead.
- One pragmatic solution is to use dummy variables to remove the influence of extreme residuals (outliers).
Omission of an Important Variable or Inclusion of an Irrelevant Variable
- Omitting a relevant variable biases the remaining coefficient estimates and makes them inconsistent, unless the omitted variable is uncorrelated with all of the included ones.
- Including an irrelevant variable leaves the estimators unbiased and consistent, but they are no longer efficient.
Parameter Stability Tests
- Tests whether model parameters remain constant across different data periods.
- A common approach splits the data into sub-samples and compares the residual sums of squares (RSS) of the pooled and sub-sample regressions.
- The Chow test is an analysis-of-variance-style F-test of parameter stability based on this comparison.
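The Chow statistic follows directly from the three residual sums of squares. A sketch with hypothetical RSS inputs:

```python
def chow_statistic(rss_pooled, rss_1, rss_2, t_total, k):
    """Chow F-statistic:
        F = [(RSS - (RSS1 + RSS2)) / k] / [(RSS1 + RSS2) / (T - 2k)]
    where RSS is from the pooled regression, RSS1 and RSS2 from the
    two sub-samples, T the total number of observations and k the
    number of regressors (including the constant). Under the null of
    parameter stability, F ~ F(k, T - 2k)."""
    rss_unrestricted = rss_1 + rss_2
    num = (rss_pooled - rss_unrestricted) / k
    den = rss_unrestricted / (t_total - 2 * k)
    return num / den
```

A computed value above the F(k, T − 2k) critical value rejects the null that the coefficients are stable across the two sub-samples.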
Description
Test your understanding of key concepts in time series analysis, including autocorrelation, Chow Test, and heteroscedasticity. This quiz covers definitions, interpretations, and statistical tests relevant to the field. Perfect for students looking to solidify their knowledge in econometrics.