Questions and Answers
What does negative autocorrelation indicate about the residuals?
What is the null hypothesis (H0) in the Durbin-Watson test?
Which condition is NOT required for the Durbin-Watson test to be valid?
What range of values can the Durbin-Watson statistic (DW) take?
What does a DW statistic value near 2 indicate?
Which of the following components is part of the DW test statistic formula?
What does the presence of no pattern in residuals imply?
In the context of the Durbin-Watson test, what does H1 represent?
What does the assumption E(𝜀𝑡 ) = 0 indicate?
Which condition must be satisfied for errors to be considered homoscedastic?
What happens if the assumption of independence (cov(𝜀𝑖, 𝜀𝑗) = 0) is violated?
If errors exhibit heteroscedasticity, what might we need to consider in our analysis?
The assumption that the X matrix is non-stochastic means what in regression analysis?
What indicates a violation of the assumption that Var(𝜀𝑡 ) = 𝜎² < ∞?
How can one test for violations of the classical linear regression model assumptions?
What is a potential consequence of incorrect standard errors due to assumption violations?
What is one of the traditional approaches to address multicollinearity?
What does a high correlation between independent variables indicate?
Which test is used to formally check for mis-specification of functional form?
What happens if the test statistic from the RESET test is greater than the critical value?
What might be a consequence of high correlation between a dependent variable and one of the independent variables?
What is a possible solution if multicollinearity is identified in a model?
Which of the following statements is true regarding variance inflation factor?
What is a drawback of using traditional approaches like ridge regression for multicollinearity?
What is one remedy for the rejection of the test due to model mis-specification?
Why is normality assumed for hypothesis testing?
What does a normal distribution's coefficient of skewness and excess kurtosis indicate?
What does the Bera Jarque test statistic measure?
What is one potential cause of evidence of non-normality in residuals?
What transformation often helps in handling multiplicative models?
What should a researcher consider if evidence of non-normality is detected?
How is skewness represented in terms of residuals?
What is the null hypothesis in the Goldfeld-Quandt test?
Which statement describes the calculation of the GQ test statistic?
What is a potential issue when conducting the GQ test?
Which aspect is noteworthy about White's general test for heteroscedasticity?
What is the effect of omitting an important variable in a regression model?
In the auxiliary regression used for White's test, which variable is NOT typically included?
What distribution does the product of the number of observations and R² from White's test approximately follow?
What is a consequence of including an irrelevant variable in a regression analysis?
What is the main goal of detecting heteroscedasticity in regression analysis?
What is the primary purpose of parameter stability tests in regression analysis?
How is the variance of the residuals represented in the context of White's test?
Which of the following statements correctly describes the Chow test?
In the context of regression analysis, what does RSS stand for?
During a Chow test, which of the following steps is performed first?
What happens to the estimate of the coefficient on the constant term when an important variable is omitted?
What is the impact of using parameter stability tests on regression analysis?
Study Notes
Classical Linear Regression Model (CLRM) Assumptions and Diagnostics
- The CLRM disturbance terms are assumed to have specific properties:
- Expected value (E(εt)) = 0
- Variance (Var(εt)) = σ²
- Covariance (cov(εi, εj)) = 0 for i ≠ j
- X matrix is non-stochastic or fixed in repeated samples
- εt ~ N(0, σ²)
Violation of CLRM Assumptions
- Studying these assumptions further, including testing for violations, causes, and consequences.
- In general, violations of several assumptions could lead to:
  - Inaccurate coefficient estimates
  - Incorrect associated standard errors
  - Inappropriate distribution of test statistics
- Solutions include addressing the issue directly, or applying alternative estimation techniques.
Assumption 1: E(εt) = 0
- The mean of the disturbances is zero.
- Residuals are used as a proxy since disturbances are unobservable
- Residuals will always average to zero when the regression includes a constant term.
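The point above can be checked numerically. Below is a minimal Python sketch using simulated data and the statsmodels package (the tooling and variable names are assumptions for illustration, not part of the original notes): with a constant included, the OLS residuals average to essentially zero.

```python
# Minimal sketch: when the regression includes a constant, OLS residuals
# average to (numerically) zero. Data are simulated purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(size=200)

X = sm.add_constant(x)        # include the constant term
res = sm.OLS(y, X).fit()
print(res.resid.mean())       # ~0 by construction of OLS with an intercept
```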
Assumption 2: Var(εt) = σ²
- Homoscedasticity: the error variance is constant.
- If the error variance is not constant, the errors are heteroscedastic.
- Heteroscedasticity can often be spotted by plotting the residuals (εt) against the independent variables: an uneven scatter suggests heteroscedasticity.
- Detection of heteroscedasticity:
  - Graphical methods
  - Goldfeld-Quandt test
  - White's test
Goldfeld-Quandt test
- Splits the sample into two subsamples
- Calculates the residual variance for each subsample
- Compares the ratio of the two residual variances against an F-distribution
- Tests the null hypothesis that the variances of the disturbances are equal (see the sketch below).
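A rough illustration of the Goldfeld-Quandt test using statsmodels' het_goldfeldquandt; the data are simulated (an assumption for illustration) with an error variance that grows with x, so the test should tend to reject the null of equal variances.

```python
# Sketch of the Goldfeld-Quandt test on simulated heteroscedastic data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_goldfeldquandt

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(1, 10, size=200))
y = 2.0 + 0.5 * x + rng.normal(scale=0.3 * x)    # error variance grows with x

X = sm.add_constant(x)
f_stat, p_value, _ = het_goldfeldquandt(y, X)    # splits the sample into two halves
print(f_stat, p_value)                           # small p-value -> reject equal variances
```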
White's Test
- Assumes a general regression model and obtains the residuals
- Runs an auxiliary regression of the squared residuals on the original regressors, their squares, and their cross-products
- Tests for heteroscedasticity by comparing T·R² from the auxiliary regression with a chi-squared distribution (see the sketch below).
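A minimal sketch of White's test with statsmodels' het_white on simulated data (data and variable names are assumptions for illustration); the LM statistic it reports is T·R² from the auxiliary regression.

```python
# Sketch of White's test: fit the main regression, then pass its residuals
# and regressors (including the constant) to het_white.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(2)
x1 = rng.normal(size=300)
x2 = rng.normal(size=300)
y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.normal(scale=1 + x1**2)  # variance depends on x1

X = sm.add_constant(np.column_stack([x1, x2]))
res = sm.OLS(y, X).fit()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(res.resid, X)
print(lm_stat, lm_pvalue)   # small p-value -> evidence of heteroscedasticity
```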
Consequences of Heteroscedasticity
- OLS estimates are unbiased, but no longer BLUE.
- Standard errors calculated using usual formulas might be incorrect, potentially leading to misleading inferences.
- R2 is likely to be inflated for positive autocorrelation in residuals.
- Solutions to heteroscedasticity include Generalized Least Squares (GLS) if the form of the heteroscedasticity is known.
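If the form of the heteroscedasticity were known, GLS can be applied directly. The sketch below assumes, purely for illustration, that the error variance is proportional to x², in which case weighted least squares with weights 1/x² is the corresponding GLS estimator.

```python
# Minimal GLS/WLS sketch under the (illustrative) assumption Var(eps) ∝ x^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.4 * x)    # Var(eps_t) proportional to x^2

X = sm.add_constant(x)
wls_res = sm.WLS(y, X, weights=1.0 / x**2).fit() # down-weight high-variance observations
print(wls_res.params)
```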
Autocorrelation
- The CLRM assumes that the error terms exhibit no pattern, i.e. cov(εi, εj) = 0 for i ≠ j.
- The residuals from the regression are used as proxies for the unobservable errors.
- The presence of a pattern in the residuals indicates autocorrelation.
- Types of autocorrelation: positive, negative, and no autocorrelation, typically identified from residual plots.
Detecting Autocorrelation: Durbin-Watson Test
- Testing for first-order autocorrelation.
- Under the alternative hypothesis, each error is related to the previous error: εt = ρεt−1 + vt.
- The test statistic (DW) is calculated from the residuals:
  - DW = Σ(t=2 to T) (εt − εt−1)² / Σ(t=1 to T) εt²
- 0 ≤ DW ≤ 4. A value near 2 suggests no autocorrelation; values significantly different from 2 imply autocorrelation.
- Reject the null hypothesis of no autocorrelation if DW < dL (evidence of positive autocorrelation) or DW > 4 − dL (evidence of negative autocorrelation), where dL and dU are the lower and upper critical values from the DW tables; the test is inconclusive if DW falls between dL and dU or between 4 − dU and 4 − dL (see the sketch below).
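A short sketch of computing the DW statistic from OLS residuals with statsmodels; the errors are simulated (an assumption for illustration) as AR(1) with ρ = 0.7, so the statistic should fall well below 2.

```python
# Sketch: DW statistic from OLS residuals, with positively autocorrelated errors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(4)
T = 300
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.7 * eps[t - 1] + rng.normal()     # AR(1) errors, rho = 0.7
y = 1.0 + 0.5 * x + eps

res = sm.OLS(y, sm.add_constant(x)).fit()
print(durbin_watson(res.resid))                  # expect a value well below 2
```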
Breusch-Godfrey Test
- More general test for rth-order autocorrelation.
- Formulates null and alternative hypotheses about the autocorrelation coefficients (H0: ρ1 = ρ2 = ... = ρr = 0).
- Estimates the regression using OLS and obtains the residuals (εt).
- Regresses εt on εt−1, εt−2, ..., εt−r together with the regressors included in the original model.
- Examines R² from this auxiliary regression: (T − r)R² approximately follows a chi-squared distribution with r degrees of freedom (see the sketch below).
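A sketch of the Breusch-Godfrey test via statsmodels' acorr_breusch_godfrey, which takes the fitted OLS results and the order r to test; r = 4 and the simulated data are arbitrary choices for illustration.

```python
# Sketch of the Breusch-Godfrey test for autocorrelation up to order r = 4.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(5)
T = 300
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.5 * eps[t - 1] + rng.normal()     # autocorrelated errors
y = 1.0 + 0.5 * x + eps

res = sm.OLS(y, sm.add_constant(x)).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(res, nlags=4)
print(lm_stat, lm_pvalue)   # small p-value -> reject "no autocorrelation up to order 4"
```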
Consequences of Ignoring Autocorrelation
- Coefficient estimates using OLS are unbiased but inefficient (not BLUE) even in large samples
- Standard errors are likely to be incorrect.
- R2 can be inflated.
Remedies for Autocorrelation
- If the form of autocorrelation is known, GLS procedures such as Cochrane-Orcutt are available.
- However, these require an assumption about the form of the autocorrelation, and correcting for the wrong form can be worse than not correcting at all (see the sketch below).
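One feasible-GLS correction in the spirit of Cochrane-Orcutt is statsmodels' GLSAR, which iterates between estimating ρ and re-fitting the regression; the sketch below assumes (for illustration) that an AR(1) error structure is appropriate and uses simulated data.

```python
# Sketch of an iterative AR(1) GLS correction (Cochrane-Orcutt-style) via GLSAR.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 300
x = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = 0.6 * eps[t - 1] + rng.normal()     # AR(1) errors
y = 1.0 + 0.5 * x + eps

model = sm.GLSAR(y, sm.add_constant(x), rho=1)   # assume AR(1) error structure
res = model.iterative_fit(maxiter=10)            # alternate between estimating rho and beta
print(res.params, model.rho)                     # coefficient estimates and estimated rho
```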
Multicollinearity
- Occurs when explanatory variables are highly correlated.
- Perfect multicollinearity prevents estimation of all coefficients (e.g., if x3 = 2x2).
- Near multicollinearity results in:
  - A high R² even though individual coefficients may be insignificant
  - High standard errors for individual coefficients
  - Regressions that are sensitive to small changes in specification
- Solutions include eliminating a variable, transforming the variables, or collecting more data.
Measuring Multicollinearity
- Method 1: Correlation matrix.
- Method 2: Variance Inflation Factor (VIF)
  - VIF(xi) = 1 / (1 − Ri²), where Ri² is the R² from regressing xi on the other explanatory variables (see the sketch below).
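A minimal sketch of computing VIFs with statsmodels; x2 and x3 are simulated to be nearly collinear (an illustrative assumption), so their VIFs should come out large.

```python
# Sketch: variance inflation factors for each explanatory variable.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x2 + rng.normal(scale=0.05, size=200)       # near-collinear with x2

X = sm.add_constant(np.column_stack([x1, x2, x3]))
for i in range(1, X.shape[1]):                   # skip the constant's VIF
    print(i, variance_inflation_factor(X, i))    # large values flag multicollinearity
```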
Solutions to Multicollinearity
- Traditional approaches such as ridge regression are available but may introduce complexity
- In cases where the model is otherwise OK, ignoring the issue may be a reasonable approach
- Solutions include removing one of the collinear variables, transforming variables into ratios, increasing the sample size or switching to a higher frequency.
Adopting the Wrong Functional Form
- The CLRM assumes that the model's functional form is correctly specified as linear.
- Ramsey's RESET test is a general test for mis-specification of functional form.
- The technique augments the regression with higher-order powers of the fitted values (ŷ², ŷ³, ...).
- If the test statistic (TR²) is greater than the critical value from the chi-squared distribution χ²(p − 1), the null hypothesis that the functional form is correct should be rejected.
- A possible remedy is to transform the data (e.g., by taking logarithms) or to adopt a non-linear specification (see the sketch below).
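A hedged sketch of the RESET test: linear_reset is available in relatively recent versions of statsmodels, and here the true relationship is simulated as exponential (an illustrative assumption), so the linear specification should tend to be rejected.

```python
# Sketch of Ramsey's RESET test using statsmodels' linear_reset.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import linear_reset

rng = np.random.default_rng(8)
x = rng.uniform(0, 3, size=200)
y = np.exp(0.5 + 0.8 * x) + rng.normal(scale=0.5, size=200)   # true relationship is nonlinear

res = sm.OLS(y, sm.add_constant(x)).fit()
reset_result = linear_reset(res, power=3, use_f=True)         # adds powers of fitted values
print(reset_result)                                           # small p-value -> mis-specified form
```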
Testing for Normality
- Normality of the disturbances is assumed for hypothesis testing.
- A normal distribution has a coefficient of skewness of zero and a coefficient of excess kurtosis of zero.
- Departures from normality can be tested with the Bera-Jarque test (W test), which checks the joint significance of the skewness and excess kurtosis coefficients.
- W = T ( b1²/6 + b2²/24 ) ~ χ²(2)
- Skewness (b1) and excess kurtosis (b2) are calculated from the residuals (see the sketch below).
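A sketch of the Bera-Jarque test on regression residuals using statsmodels' jarque_bera; the errors are drawn from a skewed distribution (an illustrative assumption), so the test should tend to reject normality.

```python
# Sketch: Bera-Jarque normality test on OLS residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(9)
x = rng.normal(size=300)
y = 1.0 + 0.5 * x + rng.exponential(scale=1.0, size=300)   # skewed (non-normal) errors

res = sm.OLS(y, sm.add_constant(x)).fit()
jb_stat, jb_pvalue, skew, kurt = jarque_bera(res.resid)
print(jb_stat, jb_pvalue)    # small p-value -> reject normality of the residuals
```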
Omission of an Important Variable or Inclusion of an Irrelevant Variable
- Omitting an important variable generally biases the coefficient estimates of the remaining variables, as well as the estimate of the constant term.
- Including an irrelevant variable leaves the coefficient estimators unbiased and consistent, but reduces their efficiency (larger standard errors).
Parameter Stability Tests (Chow Test)
- The CLRM assumes that the parameters are constant over the entire sample period.
- In the Chow test, the sample is split into sub-periods and a separate regression is run for each sub-period, in addition to the whole-sample regression.
- The RSS from the whole-sample (restricted) regression is compared with the sum of the RSS from the individual sub-period (unrestricted) regressions:
  - F = [ (RSS − (RSS1 + RSS2)) / k ] / [ (RSS1 + RSS2) / (T − 2k) ]
- If the test statistic exceeds the critical value from an F-distribution with (k, T − 2k) degrees of freedom, the hypothesis of stable parameters is rejected (see the sketch below).
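Since the Chow test is just an F-test comparing restricted and unrestricted RSS, it can be computed by hand. The sketch below uses simulated data with a hypothetical break half-way through the sample; the split point and all variable names are illustrative assumptions.

```python
# Manual sketch of the Chow test: whole-sample (restricted) vs. two sub-period
# (unrestricted) regressions, F statistic with (k, T - 2k) degrees of freedom.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(10)
T, split = 200, 100
x = rng.normal(size=T)
# Simulated structural break: the slope changes half-way through the sample.
y = np.where(np.arange(T) < split, 1.0 + 0.5 * x, 1.0 + 1.5 * x) + rng.normal(size=T)

X = sm.add_constant(x)
k = X.shape[1]                                   # number of estimated parameters

rss_whole = sm.OLS(y, X).fit().ssr               # restricted: one regression over all T
rss_1 = sm.OLS(y[:split], X[:split]).fit().ssr   # unrestricted: sub-period regressions
rss_2 = sm.OLS(y[split:], X[split:]).fit().ssr

f_stat = ((rss_whole - (rss_1 + rss_2)) / k) / ((rss_1 + rss_2) / (T - 2 * k))
p_value = stats.f.sf(f_stat, k, T - 2 * k)
print(f_stat, p_value)                           # small p-value -> reject parameter stability
```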