Durbin-Watson Test Quiz
48 Questions

Questions and Answers

What does negative autocorrelation indicate about the residuals?

  • They are randomly distributed.
  • They occasionally increase and decrease.
  • They consistently rise over time.
  • They cross the time axis more frequently than a random distribution would. (correct)

What is the null hypothesis (H0) in the Durbin-Watson test?

  • The residuals are independent.
  • There is no autocorrelation. (correct)
  • There is a positive correlation between residuals.
  • There is significant negative autocorrelation.

Which condition is NOT required for the Durbin-Watson test to be valid?

  • Regressors are non-stochastic.
  • Constant term in regression.
  • Regressors must be stochastic. (correct)
  • Model has normally distributed residuals.

What range of values can the Durbin-Watson statistic (DW) take?

  • 0 to 4. (correct)

    What does a DW statistic value near 2 indicate?

    Little evidence of autocorrelation.

    Which of the following components is part of the DW test statistic formula?

    $\sum_{t=2}^{T} (\varepsilon_t - \varepsilon_{t-1})^2$

    What does the presence of no pattern in residuals imply?

    There is no autocorrelation.

    In the context of the Durbin-Watson test, what does H1 represent?

    There is significant autocorrelation.

    What does the assumption E(εt) = 0 indicate?

    The mean of the disturbances is zero.

    Which condition must be satisfied for errors to be considered homoscedastic?

    The variance of the errors is constant and finite.

    What happens if the assumption of independence (cov(εi, εj) = 0) is violated?

    The standard errors may be improperly calculated.

    If errors exhibit heteroscedasticity, what might we need to consider in our analysis?

    Using a method that adjusts for varying error variances.

    The assumption that the X matrix is non-stochastic means what in regression analysis?

    The values of the independent variables are fixed during estimation.

    What indicates a violation of the assumption that Var(εt) = σ² < ∞?

    The variance of the residuals changes across the range of the independent variables.

    How can one test for violations of the classical linear regression model assumptions?

    By analyzing the residuals from the regression.

    What is a potential consequence of incorrect standard errors due to assumption violations?

    Misleading hypothesis test results.

    What is one of the traditional approaches to address multicollinearity?

    Dropping one of the collinear variables.

    What does a high correlation between independent variables indicate?

    Presence of multicollinearity.

    Which test is used to formally check for mis-specification of functional form?

    Ramsey's RESET test.

    What happens if the test statistic from the RESET test is greater than the critical value?

    Reject the null hypothesis.

    What might be a consequence of high correlation between a dependent variable and one of the independent variables?

    It signifies multicollinearity.

    What is a possible solution if multicollinearity is identified in a model?

    Transform the correlated variables into a ratio.

    Which of the following statements is true regarding the variance inflation factor?

    It is used to measure multicollinearity.

    What is a drawback of using traditional approaches like ridge regression for multicollinearity?

    They can introduce new problems.

    What is one remedy for the rejection of the test due to model mis-specification?

    Transform the data into logarithms.

    Why is normality assumed for hypothesis testing?

    The properties of normal distributions are well understood.

    What does a normal distribution's coefficient of skewness and excess kurtosis indicate?

    Both values should equal 0.

    What does the Bera-Jarque test statistic measure?

    The joint normality of the residuals' skewness and kurtosis.

    What is one potential cause of evidence of non-normality in residuals?

    The presence of extreme residuals.

    What transformation often helps in handling multiplicative models?

    Using logarithms.

    What should a researcher consider if evidence of non-normality is detected?

    Consider using methods that do not assume normality.

    How is skewness represented in terms of residuals?

    It is the standardized third moment of the residuals.

    What is the null hypothesis in the Goldfeld-Quandt test?

    The variances of the disturbances are equal.

    Which statement describes the calculation of the GQ test statistic?

    It is the ratio of the two residual variances.

    What is a potential issue when conducting the GQ test?

    The choice of where to split the sample is arbitrary.

    Which aspect is noteworthy about White's general test for heteroscedasticity?

    It makes few assumptions about the form of the heteroscedasticity.

    What is the effect of omitting an important variable in a regression model?

    The coefficients of the other variables will be biased and inconsistent.

    In the auxiliary regression used for White's test, which variable is NOT typically included?

    The dependent variable from the original regression.

    What distribution does the product of the number of observations and R² from White's test approximately follow?

    A chi-squared distribution.

    What is a consequence of including an irrelevant variable in a regression analysis?

    The estimators remain consistent and unbiased.

    What is the main goal of detecting heteroscedasticity in regression analysis?

    To ensure uniform variance in the residuals.

    What is the primary purpose of parameter stability tests in regression analysis?

    To confirm that the parameters are constant across the sample period.

    How is the variance of the residuals represented in the context of White's test?

    Var(εt) = σ²

    Which of the following statements correctly describes the Chow test?

    It compares the RSS of a restricted regression to an unrestricted regression.

    In the context of regression analysis, what does RSS stand for?

    Residual Sum of Squares.

    During a Chow test, which of the following steps is performed first?

    Estimate the regression for the whole period.

    What happens to the estimate of the coefficient on the constant term when an important variable is omitted?

    It is biased if the omitted variable is correlated with the included variables.

    What is the impact of using parameter stability tests on regression analysis?

    They allow for analysis of changes in the parameters over time.

    Flashcards

    E(𝜀𝑡 ) = 0

    The expected value (mean) of the error terms is zero. This assumption ensures that the errors are not systematically biased in one direction.

    Var(𝜀𝑡) = 𝜎² < ∞

    The variance of the error terms is constant across all observations and finite. This ensures consistent error variability across the data.

    cov(𝜀𝑖 , 𝜀𝑗 ) = 0

    The error terms are uncorrelated with each other. This assumption prevents the errors from influencing each other, ensuring independence.

    The X matrix is non-stochastic or fixed in repeated samples

    The explanatory variables (X) are not random and remain fixed in repeated samples. This allows us to treat them as constants when estimating the regression coefficients.

    𝜀𝑡 ~ N(0, 𝜎²)

    The error terms follow a normal distribution with a mean of zero and a constant variance (𝜎2). This assumption is important for hypothesis testing and confidence interval construction.

    Homoscedasticity

    A situation where the error terms have a constant variance across all observations. This means that the spread of the errors is consistent regardless of the value of the explanatory variables.

    Heteroscedasticity

    A situation where the error terms do not have a constant variance across all observations. This means that the spread of the errors varies depending on the value of the explanatory variables.

    Coefficient estimates are wrong

    The estimates of the regression coefficients are not accurate, meaning they might be biased and systematically different from the true values.

    Goldfeld-Quandt Test

    A statistical method used to detect heteroscedasticity, a violation of the assumption of equal variances in the error terms of a regression model.

    Residual

    A value representing the difference between the actual observed value of a dependent variable and its predicted value from a regression model.

    Splitting the Sample (in GQ Test)

    Splitting the data into two subsamples and estimating separate regressions for each subsample.

    Calculating Residual Variances

    Estimating the regression model using each subsample obtained from the data, and calculating the variances of the residuals for each subgroup.

    Null Hypothesis (GQ Test)

    The hypothesis, in the Goldfeld-Quandt test, that the variances of the error terms are equal across the two subsamples.

    GQ Test Statistic

    The ratio of the higher residual variance to the lower residual variance, used to assess heteroscedasticity in the Goldfeld-Quandt test.

    F Distribution (GQ Test)

    The distribution of the GQ test statistic under the assumption of homoscedasticity (equal variances); used to determine the p-value of the test.

    Splitting Point (GQ Test)

    The choice of where to split the sample in the GQ test, potentially affecting the outcome of the test.

    Negative Autocorrelation

    A pattern in residuals where they alternate above and below the time axis more frequently than if they were random. Suggests a relationship between an error term and its previous one.

    No Autocorrelation

    Absence of any systematic patterns in the residuals. This is the ideal scenario.

    Durbin-Watson Test

    A statistical test used to detect first-order autocorrelation, analyzing the relationship between an error term and its previous one.

    Durbin-Watson (DW) Statistic

    A statistical value calculated to assess the presence of autocorrelation. A value near 2 indicates no autocorrelation. Values significantly different from 2 suggest the presence of autocorrelation.

    H0: ρ = 0

    The null hypothesis of the DW test, stating that no autocorrelation is present. We aim to reject this hypothesis if sufficient evidence exists.

    H1: ρ ≠ 0

    The alternative hypothesis of the DW test, stating that autocorrelation is present. It is supported when the null hypothesis is rejected.

    Non-stochastic Regressors

    The condition where the explanatory variables are not random and stay fixed in repeated samples. This allows us to treat them as constants while estimating regression coefficients.

    Constant Term in Regression

    The condition where the regression equation includes a constant term. It's often represented by the 'intercept' in linear regressions.

    Logarithmic Transformation

    A transformation applied to data to convert multiplicative relationships into additive ones, simplifying analysis.

    Bera-Jarque Test

    A test designed to assess if the residuals in a regression model deviate from a normal distribution. It examines skewness and kurtosis.

    Skewness

    The standardized third moment of a probability distribution, measuring the asymmetry of the data.

    Kurtosis

    The standardized fourth moment of a probability distribution, measuring the 'tailedness' or 'peakedness' of the data.

    Bera-Jarque Statistic

    The Bera-Jarque test statistic, calculated based on skewness and kurtosis coefficients, used to assess normality of the data.

    Bera-Jarque Test Significance

    The Bera-Jarque test compares the calculated test statistic to a chi-squared distribution with 2 degrees of freedom to determine the significance of the deviation from normality.

    Outliers

    Influential data points that distort the regression model's fit and can significantly affect the results.

    Using Dummy Variables to Adjust for Outliers

    Using dummy variables to account for unusual observations, such as outliers identified in the residuals, thereby improving the model's fit.

    Multicollinearity

    A situation where two or more independent variables in a regression model are highly correlated. This can make it difficult to isolate the individual effects of each variable, affecting the accuracy and reliability of the regression results.

    How to detect multicollinearity: Correlation Matrix

    Inspect the correlation matrix of the explanatory variables; strong correlations between pairs of variables point to multicollinearity.

    Variance Inflation Factor (VIF)

    A statistical method that uses the variance of the regression coefficient to measure the amount of multicollinearity in a regression model.

    Drop a variable

    A technique to reduce or eliminate multicollinearity by dropping one of the highly correlated variables from the regression model.

    Transform Variables into a Ratio

    A way to handle multicollinearity by creating a new variable that represents the ratio of two highly correlated variables. This can help to capture the combined effect of both variables.

    Ramsey’s RESET Test

    A statistical test used to check if the chosen functional form of a regression model is appropriate. It essentially tests for the presence of misspecification of functional forms.

    Breusch-Pagan Test

    A test used to check for the presence of heteroscedasticity. It is usually done by regressing the squared residuals from the original regression onto the explanatory variables.

    Dummy Variable

    A technique used to create a new variable that takes on a value of 1 for a specific time period and 0 otherwise. This allows you to isolate the effect of a particular observation or time period on the regression results.

    Omission of an Important Variable

    The estimated coefficients of included variables in a regression model can be biased and inconsistent if an important variable is omitted, unless the omitted variable is uncorrelated with all the included variables.

    Inclusion of an Irrelevant Variable

    Including a variable that has no real relationship with the dependent variable in your regression model does not affect the consistency and unbiasedness of coefficient estimates, but it makes the estimation less efficient.

    Chow Test

    A statistical test used to determine if the regression coefficients are constant across different time periods. It compares the sum of squared residuals from regressions estimated separately for different time periods to a regression estimated over the entire period.

    Parameter Stability Tests

    Tests that assess whether the estimated regression coefficients are constant across different time periods; the Chow test is one example.

    Study Notes

    Classical Linear Regression Model (CLRM) Assumptions and Diagnostics

    • The CLRM disturbance terms are assumed to have specific properties:
      • Expected value (E(εt)) = 0
      • Variance (Var(εt)) = σ²
      • Covariance (cov(εi, εj)) = 0 for i ≠ j
      • X matrix is non-stochastic or fixed in repeated samples
      • εt ~ N(0, σ²)

    Violation of CLRM Assumptions

    • These assumptions are now examined in more detail, including how to test for violations and the causes and consequences of each.

    • In general, violations of several assumptions could lead to:

      • Inaccurate coefficient estimates
      • Incorrect associated standard errors
      • Inappropriate distribution of test statistics
    • Solutions include addressing the issue directly, or applying alternative estimation techniques.

    Assumption 1: E(εt) = 0

    • The mean of the disturbances is zero.
    • Residuals are used as a proxy, since the disturbances themselves are unobservable.
    • Residuals will always average to zero when the regression includes a constant term.

    Assumption 2: Var(εt) = σ²

    • Homoscedasticity: Error variance is constant.

    • If the error variance is not constant: heteroscedasticity.

    • The error variance can be inspected by plotting the residuals (εt) against the independent variables; an uneven spread in the scatter suggests heteroscedasticity.

    • Detection of Heteroscedasticity:

      • Graphical methods
      • Goldfeld-Quandt test
      • White's test

    Goldfeld-Quandt test

    • Splits the sample into two subsamples.
    • Calculates the residual variances for each subsample.
    • Compares the ratio of the residual variances against the F-distribution.
    • Tests the null hypothesis that the variances of the disturbances are equal. A sketch is given below.
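    A minimal sketch of these steps in Python with statsmodels; the data arrays y and X and the midpoint split are illustrative assumptions rather than part of the original notes.

```python
import statsmodels.api as sm

def goldfeld_quandt(y, X, split=None):
    """Ratio of residual variances from two subsample regressions (GQ test)."""
    T = len(y)
    split = split if split is not None else T // 2  # where to split is an arbitrary choice
    Xc = sm.add_constant(X)
    # Estimate a separate regression on each subsample
    res1 = sm.OLS(y[:split], Xc[:split]).fit()
    res2 = sm.OLS(y[split:], Xc[split:]).fit()
    s1, s2 = res1.mse_resid, res2.mse_resid  # residual variances of the two halves
    return max(s1, s2) / min(s1, s2)  # compare with an F critical value under H0

# statsmodels also provides a ready-made version:
# from statsmodels.stats.diagnostic import het_goldfeldquandt
```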

    White's Test

    • Estimates the assumed regression and obtains the residuals.
    • Runs an auxiliary regression of the squared residuals on the original variables, their squares, and their cross-products.
    • Tests for heteroscedasticity by comparing T·R² from the auxiliary regression against a chi-squared distribution, as sketched below.
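    As an illustration, statsmodels implements White's test directly; here y and X are assumed to hold the data from the original model.

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

Xc = sm.add_constant(X)            # X assumed: matrix of explanatory variables
resid = sm.OLS(y, Xc).fit().resid  # residuals from the original regression

# het_white runs the auxiliary regression (levels, squares, cross-products)
# internally and returns the LM statistic T*R^2 with its chi-squared p-value.
lm_stat, lm_pval, f_stat, f_pval = het_white(resid, Xc)
print(f"T*R^2 = {lm_stat:.3f}, p-value = {lm_pval:.3f}")  # small p => heteroscedasticity
```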

    Consequences of Heteroscedasticity

    • OLS estimates are unbiased, but no longer BLUE.
    • Standard errors calculated using the usual formulas might be incorrect, potentially leading to misleading inferences.
    • Solutions to heteroscedasticity include Generalized Least Squares (GLS) if the form of the heteroscedasticity is known.

    Autocorrelation

    • Assumed that the error terms do not exhibit any pattern (Cov(εi, εj) = 0 if i ≠ j).
    • Residuals are used from the regression.
    • The presence of patterns in residuals indicates autocorrelation.
    • Types of autocorrelation: positive, negative, and no autocorrelation.

    Detecting Autocorrelation: Durbin-Watson Test

    • Testing for first-order autocorrelation.
    • Tests for correlation between an error term and the immediately preceding error term.
    • The test statistic (DW) is calculated from the residuals:
    • DW = Σ(εt − εt−1)² / Σ εt², with the numerator summed over t = 2, …, T and the denominator over t = 1, …, T.
    • 0 ≤ DW ≤ 4. A value near 2 suggests little evidence of autocorrelation; values significantly different from 2 imply autocorrelation.
    • Reject the null hypothesis when DW < dL (evidence of positive autocorrelation) or DW > 4 − dU (evidence of negative autocorrelation), where dL and dU are the lower and upper critical values from statistical tables. The test is inconclusive when DW falls between dL and dU or between 4 − dU and 4 − dL. A sketch of the computation follows.
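    A short sketch, assuming resid holds the time-ordered residuals from an OLS fit:

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

# resid assumed: residuals from the estimated regression, ordered in time
dw_manual = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # the formula above

dw = durbin_watson(resid)         # statsmodels' built-in equivalent
assert np.isclose(dw, dw_manual)  # 0 <= DW <= 4; near 2 => little autocorrelation
```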

    Breusch-Godfrey Test

    • More general test for rth-order autocorrelation.
    • Formulates a null and alternative hypotheses about the autocorrelation coefficients.
    • Estimates the regression using OLS and obtains the residuals (εt).
    • Regresses εt on εt−1, εt−2, ..., εt−r together with the regressors from the original model.
    • Compares (T − r)·R² from this auxiliary regression against a chi-squared distribution with r degrees of freedom; a sketch follows below.
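    statsmodels bundles these steps; a sketch assuming results is a fitted OLS results object:

```python
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# results assumed: a fitted statsmodels OLS results object
# H0: no autocorrelation up to order r (r = 4 here purely as an example)
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(results, nlags=4)
print(f"LM = {lm_stat:.3f}, p-value = {lm_pval:.3f}")  # small p => autocorrelation
```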

    Consequences of Ignoring Autocorrelation

    • Coefficient estimates using OLS are unbiased but inefficient (not BLUE) even in large samples
    • Standard errors are likely to be incorrect.
    • R² can be inflated.

    Remedies for Autocorrelation

    • If the form of autocorrelation is known, GLS procedures such as Cochrane-Orcutt are available.
    • However, these require assumptions about autocorrelation's form and correcting an invalid assumption could be worse.

    Multicollinearity

    • Occurs when explanatory variables are highly correlated.
    • Perfect multicollinearity prevents estimation of all coefficients (e.g., if x3 = 2x2).
    • Near multicollinearity results in:
      • High R² values
      • High standard errors for individual coefficients
      • Sensitive regression to small changes in specification.
    • Solutions include variable elimination; transformation of variables, or more data.

    Measuring Multicollinearity

    • Method 1: Correlation matrix of the explanatory variables.
    • Method 2: Variance Inflation Factor (VIF); a sketch follows below.
    • VIF_i = 1 / (1 − R_i²), where R_i² is the R² from regressing x_i on the other explanatory variables.
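    A sketch computing the VIF of each regressor with statsmodels, assuming X is a NumPy array of explanatory variables:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

Xc = sm.add_constant(X)  # X assumed: NumPy array of explanatory variables

# VIF_i = 1 / (1 - R_i^2); values well above 10 are commonly read as problematic
for i in range(1, Xc.shape[1]):  # column 0 is the constant, so skip it
    print(f"VIF for regressor {i}: {variance_inflation_factor(Xc, i):.2f}")
```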

    Solutions to Multicollinearity

    • Traditional approaches such as ridge regression are available but may introduce complexity
    • In cases where the model is otherwise OK, ignoring the issue may be a reasonable approach
    • Solutions include removing one of the collinear variables, transforming variables into ratios, increasing the sample size or switching to a higher frequency.

    Adopting the Wrong Functional Form

    • Assumption that the model's functional form is linear.
    • Ramsey's RESET test checks nonlinear form misspecification.
    • The technique augments the regression with powers of the fitted values (e.g., ŷ², ŷ³, ...).
    • If the test statistic TR² is greater than the critical value from the chi-squared distribution χ²(p − 1), the null hypothesis that the functional form is correct is rejected.
    • To fix the problem, transformations of the data may be needed (e.g., taking logarithms). A sketch of the test follows.
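    A sketch of one common LM form of the RESET test with p = 3, assuming y and X hold the original data:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

Xc = sm.add_constant(X)  # y, X assumed: data from the original model
fit = sm.OLS(y, Xc).fit()
yhat = fit.fittedvalues

# Auxiliary regression: residuals on the original regressors plus powers of y-hat
X_aux = np.column_stack([Xc, yhat ** 2, yhat ** 3])
aux = sm.OLS(fit.resid, X_aux).fit()

p = 3                         # highest power of the fitted values used
stat = len(y) * aux.rsquared  # TR^2 ~ chi-squared(p - 1) under H0
pval = stats.chi2.sf(stat, p - 1)
print(f"TR^2 = {stat:.3f}, p-value = {pval:.3f}")  # small p => functional form rejected
```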

    Testing for Normality

    • Normality is assumed for hypothesis testing because the properties of normal distributions are well understood.

    • A normal distribution has a skewness coefficient of 0 and an excess kurtosis of 0 (i.e., a kurtosis of 3).

    • Departures from normality can be tested with the Bera-Jarque test, which checks the joint significance of the skewness and excess kurtosis coefficients of the residuals:

    • W = T (b₁²/6 + b₂²/24) ~ χ²(2)

    • Skewness (b₁) and excess kurtosis (b₂) are computed from the residuals; a sketch follows below.
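    A direct sketch of the W statistic, assuming resid holds the regression residuals:

```python
from scipy import stats

# resid assumed: residuals from the estimated regression
T = len(resid)
b1 = stats.skew(resid)                   # standardized third moment
b2 = stats.kurtosis(resid, fisher=True)  # excess kurtosis (0 under normality)

W = T * (b1 ** 2 / 6 + b2 ** 2 / 24)  # the statistic above
pval = stats.chi2.sf(W, df=2)         # W ~ chi-squared(2) under normality
print(f"W = {W:.3f}, p-value = {pval:.3f}")  # small p => non-normal residuals
# scipy.stats.jarque_bera(resid) is an equivalent built-in alternative
```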

    Omission of an Important Variable or Inclusion of an Irrelevant Variable

    • Omitting an important variable biases the coefficient estimates of the included variables and the constant, and the estimates are inconsistent unless the omitted variable is uncorrelated with all included regressors. In contrast, including an irrelevant variable leaves the estimators consistent and unbiased, but makes estimation less efficient.

    Parameter Stability Tests (Chow Test)

    • Parameter stability tests check the assumption that the parameters are constant over the entire sample period.
    • In a Chow test, the sample is split into sub-periods and a separate regression is run on each, alongside a regression over the whole period.
    • The test compares the RSS of the restricted model (the whole-period regression) with the total RSS of the individual sub-period regressions (the unrestricted model).
    • If the F-test statistic exceeds the critical value from an F(k, T − 2k) distribution, the hypothesis of stable parameters is rejected. A sketch is given below.
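    A sketch of the Chow F statistic following these steps; y, X, and the break point are assumptions for illustration.

```python
import statsmodels.api as sm
from scipy import stats

def chow_test(y, X, break_point):
    """F-test of parameter stability around a candidate break point (sketch)."""
    Xc = sm.add_constant(X)
    T, k = Xc.shape

    rss_whole = sm.OLS(y, Xc).fit().ssr  # restricted model: whole sample
    rss_sub = (sm.OLS(y[:break_point], Xc[:break_point]).fit().ssr
               + sm.OLS(y[break_point:], Xc[break_point:]).fit().ssr)  # unrestricted

    f_stat = ((rss_whole - rss_sub) / k) / (rss_sub / (T - 2 * k))
    pval = stats.f.sf(f_stat, k, T - 2 * k)
    return f_stat, pval  # large F (small p) => reject parameter stability
```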

    Description

    Test your knowledge on the Durbin-Watson test with this quiz! Explore key concepts like negative autocorrelation, null hypotheses, and the implications of the test statistic. Perfect for students or professionals looking to reinforce their understanding of regression analysis.
