Durbin-Watson Test Quiz
48 Questions

Questions and Answers

What does negative autocorrelation indicate about the residuals?

  • They are randomly distributed.
  • They occasionally increase and decrease.
  • They consistently rise over time.
  • They cross the time axis more frequently than random distribution. (correct)
What is the null hypothesis (H0) in the Durbin-Watson test?

  • The residuals are independent.
  • There is no autocorrelation. (correct)
  • There is a positive correlation between residuals.
  • There is significant negative autocorrelation.

Which condition is NOT required for the Durbin-Watson test to be valid?

  • Regressors are non-stochastic.
  • Constant term in the regression.
  • Regressors must be stochastic. (correct)
  • Model has normally distributed residuals.

What range of values can the Durbin-Watson statistic (DW) take?

    0 to 4.

    What does a DW statistic value near 2 indicate?

    Little evidence of autocorrelation.

    Which of the following components is part of the DW test statistic formula?

    (sum from t=2 to T) (εt − εt−1)²

    What does the presence of no pattern in residuals imply?

    There is no autocorrelation.

    In the context of the Durbin-Watson test, what does H1 represent?

    There is significant autocorrelation.

    What does the assumption E(εt) = 0 indicate?

    The mean of the disturbances is zero.

    Which condition must be satisfied for errors to be considered homoscedastic?

    The variance of the errors is constant and finite.

    What happens if the assumption of independence (cov(εi, εj) = 0) is violated?

    The standard errors may be improperly calculated.

    If errors exhibit heteroscedasticity, what might we need to consider in our analysis?

    Using a method that adjusts for varying error variances.

    The assumption that the X matrix is non-stochastic means what in regression analysis?

    The values of the independent variables are fixed during estimation.

    What indicates a violation of the assumption that Var(εt) = σ² < ∞?

    The variance of the residuals changes across the range of independent variables.

    How can one test for violations of the classical linear regression model assumptions?

    By analyzing the residuals from the regression.

    What is a potential consequence of incorrect standard errors due to assumption violations?

    Misleading hypothesis test results.

    What is one of the traditional approaches to address multicollinearity?

    Dropping one of the collinear variables

    What does a high correlation between independent variables indicate?

    Presence of multicollinearity

    Which test is used to formally check for mis-specification of functional form?

    Ramsey’s RESET test

    What happens if the test statistic from the RESET test is greater than the critical value?

    Reject the null hypothesis

    What might be a consequence of high correlation between a dependent variable and one of the independent variables?

    It signifies multicollinearity.

    What is a possible solution if multicollinearity is identified in a model?

    Transform correlated variables into a ratio.

    Which of the following statements is true regarding variance inflation factor?

    It is used to measure multicollinearity.

    What is a drawback of using traditional approaches like ridge regression for multicollinearity?

    They can introduce new problems.

    What is one remedy for the rejection of the test due to model mis-specification?

    Transform the data into logarithms

    Why is normality assumed for hypothesis testing?

    The properties of normal distributions are well understood

    What does a normal distribution's coefficient of skewness and excess kurtosis indicate?

    Both values should equal 0

    What does the Bera Jarque test statistic measure?

    The joint significance of the residuals' skewness and kurtosis

    What is one potential cause of evidence of non-normality in residuals?

    Presence of extreme residuals

    What transformation often helps in handling multiplicative models?

    Using logarithms

    What should a researcher consider if evidence of non-normality is detected?

    Consider using methods that do not assume normality

    How is skewness represented in terms of residuals?

    It is the standardized third moment of the residuals

    What is the null hypothesis in the Goldfeld-Quandt test?

    The variances of the disturbances are equal.

    Which statement describes the calculation of the GQ test statistic?

    It is the ratio of the two residual variances.

    What is a potential issue when conducting the GQ test?

    The choice of where to split the sample is arbitrary.

    Which aspect is noteworthy about White's general test for heteroscedasticity?

    It makes few assumptions about the form of heteroscedasticity.

    What is the effect of omitting an important variable in a regression model?

    Coefficients of the other variables will be biased and inconsistent.

    In the auxiliary regression used for White's test, which variable is NOT typically included?

    The dependent variable from the original regression.

    What distribution does the product of the number of observations and R² from White's test approximately follow?

    Chi-squared distribution

    What is a consequence of including an irrelevant variable in a regression analysis?

    The estimators remain consistent and unbiased.

    What is the main goal of detecting heteroscedasticity in regression analysis?

    To ensure uniform variance in residuals.

    What is the primary purpose of parameter stability tests in regression analysis?

    To confirm that parameters are constant across the sample period.

    How is the variance of the residuals represented in the context of White's test?

    Var(εt) = σ²

    Which of the following statements correctly describes the Chow test?

    It compares the RSS of a restricted regression to an unrestricted regression.

    In the context of regression analysis, what does RSS stand for?

    Residual Sum of Squares

    During a Chow test, which of the following steps is performed first?

    Estimate the regression for the whole period.

    What happens to the estimate of the coefficient on the constant term when an important variable is omitted?

    It is biased if the omitted variable is correlated with the included variables.

    What is the impact of using parameter stability tests on regression analysis?

    It allows for analysis of changes in parameters over time.

    Study Notes

    Classical Linear Regression Model (CLRM) Assumptions and Diagnostics

    • The CLRM disturbance terms are assumed to have specific properties:
      • Expected value (E(εt)) = 0
      • Variance (Var(εt)) = σ²
      • Covariance (cov(εi, εj)) = 0 for i ≠ j
      • X matrix is non-stochastic or fixed in repeated samples
      • εt ~ N(0, σ²)

    Violation of CLRM Assumptions

    • These assumptions are examined further below: how to test for violations of each, and the causes and consequences of those violations.

    • In general, violations of several assumptions could lead to:

      • Inaccurate coefficient estimates
      • Incorrect associated standard errors
      • Inappropriate distribution of test statistics
    • Solutions include addressing the issue directly, or applying alternative estimation techniques.

    Assumption 1: E(εt) = 0

    • The mean of the disturbances is zero.
    • Residuals are used as a proxy since disturbances are unobservable
    • Residuals will always average to zero when the regression includes a constant term.

    Assumption 2: Var(εt) = σ²

    • Homoscedasticity: Error variance is constant.

    • If the error variance is not constant: heteroscedasticity.

    • The variance of errors can be visualized by examining the residuals (εt) against the independent variables. An uneven scatter plot suggests heteroscedasticity.

    • Detection of Heteroscedasticity:

      • Graphical methods
      • Goldfeld-Quandt test
      • White's test

    Goldfeld-Quandt test

    • Splits the sample into two subsamples
    • Calculates the residual variance for each subsample from a separate regression
    • Compares the ratio of the two residual variances against an F-distribution
    • Tests the null hypothesis that the variances of the disturbances are equal
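The mechanics of forming the GQ statistic can be sketched as follows, assuming the two sub-sample regressions have already been run and only their residuals are at hand. This is an illustrative numpy-only sketch; the function name `gq_statistic` and the toy residual values are mine, not from the lesson.

```python
import numpy as np

def gq_statistic(resid1, resid2, k):
    """Goldfeld-Quandt statistic: ratio of the residual variances of two
    sub-sample regressions, each estimated with k parameters. It is compared
    with an F distribution; by convention the larger variance goes on top."""
    n1, n2 = len(resid1), len(resid2)
    s1 = np.sum(np.square(resid1)) / (n1 - k)
    s2 = np.sum(np.square(resid2)) / (n2 - k)
    if s1 >= s2:
        return s1 / s2, (n1 - k, n2 - k)
    return s2 / s1, (n2 - k, n1 - k)

# Toy example: the second sub-sample's residuals are twice as large,
# so the variance ratio is 4, with (3, 3) degrees of freedom.
stat, dof = gq_statistic([1.0, -1.0, 1.0, -1.0], [2.0, -2.0, 2.0, -2.0], k=1)
print(stat, dof)
```

A large ratio relative to the F critical value leads to rejecting the null of equal variances.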

    White's Test

    • Estimates the regression and collects the residuals
    • Runs an auxiliary regression of the squared residuals on the original variables, their squares and cross-products
    • Checks for heteroscedasticity by comparing T·R² from the auxiliary regression with a chi-squared distribution.
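The steps above can be sketched with numpy alone (in practice a library routine would be used). The data, seed, and coefficient values below are made up purely to manufacture heteroscedastic errors; with a single regressor there are no cross-product terms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
# Errors whose spread grows with |x|, i.e. heteroscedastic by construction.
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1.0 + np.abs(x))

# Step 1: estimate the original regression and keep the squared residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ beta) ** 2

# Step 2: auxiliary regression of the squared residuals on the regressor
# and its square.
Z = np.column_stack([np.ones(n), x, x ** 2])
gamma, *_ = np.linalg.lstsq(Z, u2, rcond=None)
r2 = 1.0 - np.sum((u2 - Z @ gamma) ** 2) / np.sum((u2 - u2.mean()) ** 2)

# Step 3: n * R^2 is compared with a chi-squared distribution whose degrees
# of freedom equal the number of auxiliary regressors (here 2).
white_stat = n * r2
print(white_stat)
```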

    Consequences of Heteroscedasticity

    • OLS estimates remain unbiased, but are no longer BLUE.
    • Standard errors calculated using the usual formulas can be wrong, potentially leading to misleading inferences.
    • Solutions to heteroscedasticity include Generalized Least Squares (GLS) if the form of the heteroscedasticity is known.

    Autocorrelation

    • Assumed that the error terms do not exhibit any pattern (Cov(εi, εj) = 0 if i ≠ j).
    • Residuals are used from the regression.
    • The presence of patterns in residuals indicates autocorrelation.
    • Types of autocorrelation: positive, negative, and no autocorrelation. Illustrations provided in slides.

    Detecting Autocorrelation: Durbin-Watson Test

    • Testing for first-order autocorrelation.
    • Assumes correlation between error & previous error term
    • Test statistic (DW) is calculated using residuals.
    • DW = (sum from t=2 to T) (ε̂t − ε̂t−1)² / (sum from t=1 to T) ε̂t²
    • 0 ≤ DW ≤ 4. A value near 2 suggests no autocorrelation; values well below 2 point to positive autocorrelation, values well above 2 to negative autocorrelation.
    • Using the tabulated lower and upper critical values dL and dU: reject the null hypothesis of no autocorrelation if DW < dL or DW > 4 − dL; do not reject if dU < DW < 4 − dU; the regions in between are inconclusive.
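The statistic can be computed directly from the residuals; a minimal numpy sketch (the function name is mine):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum_{t=2}^{T} (e_t - e_{t-1})^2 / sum_t e_t^2."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# A perfectly persistent residual series (extreme positive autocorrelation)
# gives DW = 0; a sign-alternating series pushes DW towards 4.
print(durbin_watson([1.0, 1.0, 1.0, 1.0]))              # 0.0
print(durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0]))  # well above 2
```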

    Breusch-Godfrey Test

    • More general test for rth-order autocorrelation.
    • Formulates a null and alternative hypotheses about the autocorrelation coefficients.
    • Estimates a regression using OLS and obtains residuals (εt)
    • Regresses ε̂t on ε̂t−1, ε̂t−2, ..., ε̂t−r together with the regressors from the original model
    • The statistic (T − r)R² from this auxiliary regression is compared with a chi-squared(r) distribution
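As an illustrative numpy-only sketch of these steps (the data, seed, and the AR(1) coefficient 0.7 are invented so the residuals actually carry autocorrelation; the padding of initial lags with zeros is one common convention):

```python
import numpy as np

rng = np.random.default_rng(1)
t = 300
x = rng.normal(size=t)
# Build AR(1) errors so the test has something to find.
e = np.zeros(t)
shocks = rng.normal(size=t)
for i in range(1, t):
    e[i] = 0.7 * e[i - 1] + shocks[i]
y = 0.5 + 1.5 * x + e

# Step 1: original regression by OLS; keep the residuals.
X = np.column_stack([np.ones(t), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
u = y - X @ beta

# Step 2: auxiliary regression of u_t on the original regressors and
# r lagged residuals (initial lags padded with zeros).
r = 2
lags = np.column_stack([np.r_[np.zeros(j), u[:-j]] for j in range(1, r + 1)])
Z = np.column_stack([X, lags])
g, *_ = np.linalg.lstsq(Z, u, rcond=None)
r2 = 1.0 - np.sum((u - Z @ g) ** 2) / np.sum((u - u.mean()) ** 2)

# Step 3: (T - r) * R^2 is compared with a chi-squared(r) distribution.
bg_stat = (t - r) * r2
print(bg_stat)
```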

    Consequences of Ignoring Autocorrelation

    • Coefficient estimates using OLS are unbiased but inefficient (not BLUE) even in large samples
    • Standard errors are likely to be incorrect.
    • R2 can be inflated.

    Remedies for Autocorrelation

    • If the form of autocorrelation is known, GLS procedures such as Cochrane-Orcutt are available.
    • However, these require assumptions about autocorrelation's form and correcting an invalid assumption could be worse.

    Multicollinearity

    • Occurs when explanatory variables are highly correlated.
    • Perfect multicollinearity prevents estimation of all coefficients (e.g., if x3 = 2x2).
    • Near multicollinearity results in:
      • High R² values
      • High standard errors for individual coefficients
      • Sensitive regression to small changes in specification.
    • Solutions include variable elimination; transformation of variables, or more data.

    Measuring Multicollinearity

    • Method 1: Correlation matrix.
    • Method 2: Variance Inflation Factor (VIF)
    • VIF for variable xi: VIF = 1 / (1 − Rxi²), where Rxi² comes from regressing xi on the other explanatory variables
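The VIF formula can be computed directly from the regressor matrix; a numpy-only sketch (the function name and toy columns are mine):

```python
import numpy as np

def vif(X, i):
    """Variance inflation factor for column i of the regressor matrix X:
    regress x_i on the remaining columns (plus a constant) and return
    1 / (1 - R^2) from that auxiliary regression."""
    X = np.asarray(X, dtype=float)
    y = X[:, i]
    Z = np.column_stack([np.ones(len(y)), np.delete(X, i, axis=1)])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r2 = 1.0 - np.sum((y - Z @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 / (1.0 - r2)

# Uncorrelated columns give a VIF close to 1; near-duplicated columns
# blow it up.
x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, -1.0, -1.0, 1.0])            # uncorrelated with x1
print(vif(np.column_stack([x1, x2]), 0))         # close to 1
x3 = x1 + np.array([0.01, -0.01, 0.01, -0.01])   # nearly collinear with x1
print(vif(np.column_stack([x1, x3]), 0))         # very large
```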

    Solutions to Multicollinearity

    • Traditional approaches such as ridge regression are available but may introduce complexity
    • In cases where the model is otherwise OK, ignoring the issue may be a reasonable approach
    • Solutions include removing one of the collinear variables, transforming the variables into a ratio, increasing the sample size, or switching to higher-frequency data.

    Adopting the Wrong Functional Form

    • Assumption that the model's functional form is linear.
    • Ramsey's RESET test is a formal check for misspecification of the functional form.
    • The technique augments the regression with higher-order powers of the fitted values (e.g., ŷ², ŷ³, ...).
    • If the test statistic TR² is greater than the critical value from the chi-squared(p − 1) distribution, the null hypothesis that the functional form is correct should be rejected.
    • To fix the situation, a transformation of the data may be needed (e.g., into logarithms).
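A numpy-only sketch of the RESET mechanics, using made-up quadratic data so the linear fit is genuinely mis-specified (the data, seed, and coefficients are illustrative, and the TR² ~ chi-squared(p − 1) form follows the notes above):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 250
x = rng.normal(size=t)
y = 1.0 + x + 0.5 * x ** 2 + rng.normal(size=t)  # true form is nonlinear

# Fit the (mis-specified) linear model; keep fitted values and residuals.
X = np.column_stack([np.ones(t), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
yhat = X @ b
u = y - yhat

# Auxiliary regression of the residuals on the original regressors plus
# powers of the fitted values (yhat^2 and yhat^3 here, so p = 3).
Z = np.column_stack([X, yhat ** 2, yhat ** 3])
g, *_ = np.linalg.lstsq(Z, u, rcond=None)
r2 = 1.0 - np.sum((u - Z @ g) ** 2) / np.sum((u - u.mean()) ** 2)

reset_stat = t * r2  # compared with chi-squared(p - 1) = chi-squared(2)
print(reset_stat)
```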

    Testing for Normality

    • Normality assumption for hypothesis testing.

    • A normal distribution's coefficient of skewness and coefficient of excess kurtosis are both zero.

    • Testing departures from normality using the Bera-Jarque test (W-test) that checks joint significance of the skewness & kurtosis coefficients in the distribution.

    • W = T (b1²/6 + b2²/24) ~ χ²(2)

    • Skewness (b1) and excess kurtosis (b2) are calculated from the residuals.
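The W statistic above can be computed from the residuals in a few lines; a numpy-only sketch (the function name and toy residuals are mine):

```python
import numpy as np

def bera_jarque(resid):
    """W = T * (b1^2/6 + b2^2/24), with b1 the skewness and b2 the excess
    kurtosis of the residuals; W ~ chi-squared(2) under normality."""
    resid = np.asarray(resid, dtype=float)
    t = len(resid)
    c = resid - resid.mean()
    s2 = np.mean(c ** 2)
    b1 = np.mean(c ** 3) / s2 ** 1.5       # coefficient of skewness
    b2 = np.mean(c ** 4) / s2 ** 2 - 3.0   # coefficient of excess kurtosis
    return t * (b1 ** 2 / 6.0 + b2 ** 2 / 24.0)

# Perfectly symmetric residuals contribute nothing through the skewness term,
# so only the excess-kurtosis term is left.
print(bera_jarque([-2.0, -1.0, 1.0, 2.0]))
```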

    Omission of an Important Variable or Inclusion of an Irrelevant Variable

    • Omitting an important variable biases the coefficient estimates on the remaining variables (and on the constant) whenever the omitted variable is correlated with the included ones. In contrast, including an irrelevant variable leaves the estimators unbiased and consistent, but less efficient.

    Parameter Stability Tests (Chow Test)

    • Assumes that parameters are consistent for the entire sample period.
    • In a Chow test, the regression is first estimated over the whole sample, then the sample is split into sub-periods and a regression is run on each.
    • The RSS of the restricted (whole-sample) regression is compared with the sum of the RSSs from the individual sub-period (unrestricted) regressions.
    • If the F-test statistic exceeds the critical value from an F(k, T − 2k) distribution, the hypothesis of stable parameters is rejected.
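Given the three residual sums of squares, the Chow F statistic is a one-line formula; a sketch with made-up RSS values (the function name and numbers are mine):

```python
def chow_f(rss_whole, rss1, rss2, t, k):
    """Chow test statistic:
    F = [(RSS - (RSS1 + RSS2)) / k] / [(RSS1 + RSS2) / (T - 2k)],
    compared with an F(k, T - 2k) distribution."""
    rss_ur = rss1 + rss2
    return ((rss_whole - rss_ur) / k) / (rss_ur / (t - 2 * k))

# Illustrative numbers: whole-sample RSS 20, sub-period RSSs 5 and 5,
# T = 22 observations, k = 2 estimated parameters.
print(chow_f(20.0, 5.0, 5.0, t=22, k=2))  # roughly 9.0
```

A statistic this far above the F(2, 18) critical value would lead to rejecting parameter stability.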


    Description

    Test your knowledge on the Durbin-Watson test with this quiz! Explore key concepts like negative autocorrelation, null hypotheses, and the implications of the test statistic. Perfect for students or professionals looking to reinforce their understanding of regression analysis.
