Introductory Econometrics Test Bank PDF

Document Details

Uploaded by SmootherAllegory

Jeffrey M. Wooldridge

Tags

econometrics, economic relationships, statistical methods, economics

Summary

This document is a test bank for Introductory Econometrics: A Modern Approach, 5th Edition by Jeffrey M. Wooldridge. It contains multiple-choice and true/false questions, with answers, on econometrics, economic relationships, and the structure of economic data, covering topics such as dependent and explanatory variables and the different types of data sets.

Full Transcript


Test Bank – Introductory Econometrics: A Modern Approach, 5th Edition by Jeffrey M. Wooldridge

Chapter 1

1. Econometrics is the branch of economics that _____.
a. studies the behavior of individual economic agents in making economic decisions
b. develops and uses statistical methods for estimating economic relationships
c. deals with the performance, structure, behavior, and decision-making of an economy as a whole
d. applies mathematical methods to represent economic theories and solve economic problems
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: What is Econometrics?
Feedback: Econometrics is the branch of economics that develops and uses statistical methods for estimating economic relationships.

2. Nonexperimental data is called _____.
a. cross-sectional data
b. time series data
c. observational data
d. panel data
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: What is Econometrics?
Feedback: Nonexperimental data is sometimes called observational data or retrospective data.

3. Which of the following is true of experimental data?
a. Experimental data are collected in laboratory environments in the natural sciences.
b. Experimental data cannot be collected in a controlled environment.
c. Experimental data is sometimes called observational data.
d. Experimental data is sometimes called retrospective data.
Answer: a. Difficulty: Easy. Bloom's: Knowledge. A-Head: What is Econometrics?
Feedback: Experimental data are often collected in laboratory environments in the natural sciences.

4. An empirical analysis relies on _____ to test a theory.
a. common sense
b. ethical considerations
c. data
d. customs and conventions
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Steps in Empirical Economic Analysis
Feedback: An empirical analysis relies on data to test a theory.

5. The term 'u' in an econometric model is usually referred to as the _____.
a. error term / disturbance term
b. parameter
c. hypothesis
d.
dependent variable
Answer: a. Difficulty: Easy. Bloom's: Knowledge. A-Head: Steps in Empirical Economic Analysis
Feedback: The term u in an econometric model is called the error term or disturbance term.

6. The parameters of an econometric model _____.
a. include all unobserved factors affecting the variable being studied
b. describe the strength of the relationship between the variable under study and the factors affecting it
c. refer to the explanatory variables included in the model
d. refer to the predictions that can be made using the model
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: Steps in Empirical Economic Analysis
Feedback: The parameters of an econometric model describe the direction and strength of the relationship between the variable under study and the factors affecting it.

7. Which of the following is the first step in empirical economic analysis?
a. Collection of data
b. Statement of hypotheses
c. Specification of an econometric model
d. Testing of hypotheses
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Steps in Empirical Economic Analysis
Feedback: The first step in empirical economic analysis is the specification of the econometric model.

8. A data set that consists of a sample of individuals, households, firms, cities, states, countries, or a variety of other units, taken at a given point in time, is called a(n) _____.
a. cross-sectional data set
b. longitudinal data set
c. time series data set
d. experimental data set
Answer: a. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Structure of Economic Data
Feedback: A data set that consists of a sample of individuals, households, firms, cities, states, countries, or a variety of other units, taken at a given point in time, is called a cross-sectional data set.

9. Data on the income of law graduates collected at different times during the same year is _____.
a. panel data
b. experimental data
c. time series data
d.
cross-sectional data
Answer: d. Difficulty: Easy. Bloom's: Application. A-Head: The Structure of Economic Data. BUSPROG: Analytic.
Feedback: A data set that consists of a sample of individuals, households, firms, cities, states, countries, or a variety of other units, taken at a given point in time, is called a cross-sectional data set. Therefore, data on the income of law graduates collected during a particular year are an example of cross-sectional data.

10. A data set that consists of observations on a variable or several variables over time is called a _____ data set.
a. binary
b. cross-sectional
c. time series
d. experimental
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Structure of Economic Data
Feedback: A time series data set consists of observations on a variable or several variables over time.

11. Which of the following is an example of time series data?
a. Data on the unemployment rates in different parts of a country during a year.
b. Data on the consumption of wheat by 200 households during a year.
c. Data on the gross domestic product of a country over a period of 10 years.
d. Data on the number of vacancies in various departments of an organization in a particular month.
Answer: c. Difficulty: Easy. Bloom's: Application. A-Head: The Structure of Economic Data. BUSPROG: Analytic.
Feedback: A time series data set consists of observations on a variable or several variables over time. Therefore, data on the gross domestic product of a country over a period of 10 years is an example of time series data.

12. Which of the following refers to panel data?
a. Data on the unemployment rate in a country over a 5-year period.
b. Data on the birth rate, death rate and population growth rate in developing countries over a 10-year period.
c.
Data on the income of 5 members of a family in a particular year.
d. Data on the price of a company's share during a year.
Answer: b. Difficulty: Easy. Bloom's: Application. A-Head: The Structure of Economic Data. BUSPROG: Analytic.
Feedback: A panel data set consists of a time series for each cross-sectional member in the data set. Therefore, data on the birth rate, death rate and population growth rate in developing countries over a 10-year period refers to panel data.

13. Which of the following is a difference between panel and pooled cross-sectional data?
a. A panel data set consists of data on different cross-sectional units over a given period of time, while a pooled data set consists of data on the same cross-sectional units over a given period of time.
b. A panel data set consists of data on the same cross-sectional units over a given period of time, while a pooled data set consists of data on different cross-sectional units over a given period of time.
c. A panel data set consists of data on a single variable measured at a given point in time, while a pooled data set consists of data on the same cross-sectional units over a given period of time.
d. A panel data set consists of data on a single variable measured at a given point in time, while a pooled data set consists of data on more than one variable at a given point in time.
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Structure of Economic Data
Feedback: A panel data set consists of data on the same cross-sectional units over a given period of time, while a pooled data set consists of data on different cross-sectional units over a given period of time.

14. _____ has a causal effect on _____.
a. Income; unemployment
b. Height; health
c. Income; consumption
d.
Age; wage
Answer: c. Difficulty: Moderate. Bloom's: Application. A-Head: Causality and the Notion of Ceteris Paribus in Econometric Analysis. BUSPROG: Analytic.
Feedback: Income has a causal effect on consumption because an increase in income leads to an increase in consumption.

15. Which of the following is true?
a. A variable has a causal effect on another variable if both variables increase or decrease simultaneously.
b. The notion of 'ceteris paribus' plays an important role in causal analysis.
c. Difficulty in inferring causality disappears when studying data at fairly high levels of aggregation.
d. The problem of inferring causality arises if experimental data is used for analysis.
Answer: b. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Causality and the Notion of Ceteris Paribus in Econometric Analysis
Feedback: The notion of 'ceteris paribus' plays an important role in causal analysis.

16. Experimental data are sometimes called retrospective data.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: What is Econometrics?
Feedback: Nonexperimental data are sometimes called retrospective data.

17. An economic model consists of mathematical equations that describe various relationships between economic variables.
Answer: True. Difficulty: Easy. Bloom's: Knowledge. A-Head: Steps in Empirical Economic Analysis
Feedback: An economic model consists of mathematical equations that describe various relationships between economic variables.

18. A cross-sectional data set consists of observations on a variable or several variables over time.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Structure of Economic Data
Feedback: A time series data set consists of observations on a variable or several variables over time.

19. A time series data set is also called a longitudinal data set.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Structure of Economic Data
Feedback: A panel data set, not a time series data set, is also called a longitudinal data set.

20. The notion of ceteris paribus means "other factors being equal."
Answer: True. Difficulty: Easy. Bloom's: Knowledge. A-Head: Causality and the Notion of Ceteris Paribus in Econometric Analysis
Feedback: The notion of ceteris paribus means "other factors being equal."

Chapter 2

1. A dependent variable is also known as a(n) _____.
a. explanatory variable
b. control variable
c. predictor variable
d. response variable
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Definition of the Simple Regression Model
Feedback: A dependent variable is also known as a response variable.

2. If a change in variable x causes a change in variable y, variable x is called the _____.
a. dependent variable
b. explained variable
c. explanatory variable
d. response variable
Answer: c. Difficulty: Easy. Bloom's: Comprehension. A-Head: Definition of the Simple Regression Model
Feedback: If a change in variable x causes a change in variable y, variable x is called the independent variable or the explanatory variable.

3. In the equation y = β0 + β1x + u, β0 is the _____.
a. dependent variable
b. independent variable
c. slope parameter
d. intercept parameter
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Definition of the Simple Regression Model
Feedback: In the equation y = β0 + β1x + u, β0 is the intercept parameter.

4. In the equation y = β0 + β1x + u, what is the estimated value of β0?
a. ȳ − β̂1x̄
b. ȳ + β̂1x̄
c. Σ (xi − x̄)(yi − ȳ)
d. Σ xi yi
Answer: a. Difficulty: Easy. Bloom's: Knowledge. A-Head: Deriving the Ordinary Least Squares Estimates
Feedback: The estimated value of β0 is β̂0 = ȳ − β̂1x̄.

5. In the equation c = β0 + β1i + u, c denotes consumption and i denotes income. What is the residual for the 5th observation if c5 = $500 and ĉ5 = $475?
a. $975
b. $300
c. $25
d. $50
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Deriving the Ordinary Least Squares Estimates
Feedback: The formula for calculating the residual for the ith observation is ûi = yi − ŷi. In this case, the residual is û5 = c5 − ĉ5 = $500 − $475 = $25.

6. What does the equation ŷ = β̂0 + β̂1x denote if the regression equation is y = β0 + β1x1 + u?
a. The explained sum of squares
b. The total sum of squares
c. The sample regression function
d. The population regression function
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Deriving the Ordinary Least Squares Estimates
Feedback: The equation ŷ = β̂0 + β̂1x denotes the sample regression function of the given regression model.

7. Consider the following regression model: y = β0 + β1x1 + u. Which of the following is a property of Ordinary Least Squares (OLS) estimates of this model and their associated statistics?
a. The sum, and therefore the sample average of the OLS residuals, is positive.
b. The sum of the OLS residuals is negative.
c. The sample covariance between the regressors and the OLS residuals is positive.
d. The point (x̄, ȳ) always lies on the OLS regression line.
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Properties of OLS on Any Sample of Data
Feedback: An important property of the OLS estimates is that the point (x̄, ȳ) always lies on the OLS regression line. In other words, if x = x̄, the predicted value of y is ȳ.

8. The explained sum of squares for the regression function, yi = β0 + β1xi + ui, is defined as _____.
a. Σ (yi − ȳ)²
b. Σ (ŷi − ȳ)²
c. Σ ûi
d. Σ (ûi)²
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: Properties of OLS on Any Sample of Data
Feedback: The explained sum of squares is defined as Σ (ŷi − ȳ)².
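The OLS formulas and residual properties in questions 4 through 8 can be checked numerically. A minimal sketch in plain Python; the data values are made up for illustration and are not from the test bank:

```python
# Made-up sample data (illustrative only; not from the test bank).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# Question 4: OLS slope and intercept from the textbook formulas.
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
     / sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

# Question 6: sample regression function.  Question 5: residuals.
yhat = [b0 + b1 * xi for xi in x]
uhat = [yi - yh for yi, yh in zip(y, yhat)]

# Question 7: (xbar, ybar) lies on the OLS line; residuals sum to ~0.
assert abs((b0 + b1 * xbar) - ybar) < 1e-9
assert abs(sum(uhat)) < 1e-9

# Question 8: explained sum of squares, SSE = Σ (ŷi − ȳ)²,
# plus the decomposition SST = SSE + SSR.
sse = sum((yh - ybar) ** 2 for yh in yhat)
ssr = sum(u ** 2 for u in uhat)
sst = sum((yi - ybar) ** 2 for yi in y)
assert abs(sst - (sse + ssr)) < 1e-9
```

The assertions mirror the algebraic properties the questions test: they hold for any sample, not just this one.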
9. If the total sum of squares (SST) in a regression equation is 81, and the residual sum of squares (SSR) is 25, what is the explained sum of squares (SSE)?
a. 64
b. 56
c. 32
d. 18
Answer: b. Difficulty: Moderate. Bloom's: Application. A-Head: Properties of OLS on Any Sample of Data. BUSPROG: Analytic.
Feedback: The total sum of squares (SST) is the sum of the explained sum of squares (SSE) and the residual sum of squares (SSR). Therefore, in this case, SSE = 81 − 25 = 56.

10. If the residual sum of squares (SSR) in a regression analysis is 66 and the total sum of squares (SST) is equal to 90, what is the value of the coefficient of determination?
a. 0.73
b. 0.55
c. 0.27
d. 1.2
Answer: c. Difficulty: Moderate. Bloom's: Application. A-Head: Properties of OLS on Any Sample of Data. BUSPROG: Analytic.
Feedback: The formula for calculating the coefficient of determination is R² = 1 − SSR/SST. In this case, R² = 1 − 66/90 = 0.27.

11. Which of the following is a nonlinear regression model?
a. y = β0 + β1x^(1/2) + u
b. log y = β0 + β1 log x + u
c. y = 1/(β0 + β1x) + u
d. y = β0 + β1x + u
Answer: c. Difficulty: Moderate. Bloom's: Comprehension. A-Head: Properties of OLS on Any Sample of Data
Feedback: A regression model is nonlinear if the equation is nonlinear in the parameters. In this case, y = 1/(β0 + β1x) + u is nonlinear as it is nonlinear in its parameters.

12. Which of the following is assumed for establishing the unbiasedness of Ordinary Least Squares (OLS) estimates?
a. The error term has an expected value of 1 given any value of the explanatory variable.
b. The regression equation is linear in the explained and explanatory variables.
c. The sample outcomes on the explanatory variable are all the same value.
d. The error term has the same variance given any value of the explanatory variable.
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Expected Values and Variances of the OLS Estimators
Feedback: The error u has the same variance given any value of the explanatory variable.

13. The error term in a regression equation is said to exhibit homoskedasticity if _____.
a. it has zero conditional mean
b. it has the same variance for all values of the explanatory variable
c. it has the same value for all values of the explanatory variable
d. it has a value of one given any value of the explanatory variable
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: Expected Values and Variances of the OLS Estimators
Feedback: The error term in a regression equation is said to exhibit homoskedasticity if it has the same variance for all values of the explanatory variable.

14. In the regression of y on x, the error term exhibits heteroskedasticity if _____.
a. it has a constant variance
b. Var(y|x) is a function of x
c. x is a function of y
d. y is a function of x
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: Expected Values and Variances of the OLS Estimators
Feedback: Heteroskedasticity is present whenever Var(y|x) is a function of x because Var(u|x) = Var(y|x).

15. What is the estimated value of the slope parameter when the regression equation, y = β0 + β1x + u, passes through the origin?
a. Σ yi
b. Σ (yi − ȳ)
c. Σ xi yi / Σ xi²
d. Σ (yi − ȳ)²
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Regression through the Origin and Regression on a Constant
Feedback: The estimated value of the slope parameter when the regression equation passes through the origin is Σ xi yi / Σ xi².

16. A natural measure of the association between two random variables is the correlation coefficient.
Answer: True. Difficulty: Easy. Bloom's: Knowledge. A-Head: Definition of the Simple Regression Model
Feedback: A natural measure of the association between two random variables is the correlation coefficient.

17. The sample covariance between the regressors and the Ordinary Least Squares (OLS) residuals is always positive.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: Properties of OLS on Any Sample of Data
Feedback: The sample covariance between the regressors and the Ordinary Least Squares (OLS) residuals is zero.

18. R² is the ratio of the explained variation compared to the total variation.
Answer: True. Difficulty: Easy. Bloom's: Knowledge. A-Head: Properties of OLS on Any Sample of Data
Feedback: R² is the ratio of the explained variation to the total variation.

19. There are n − 1 degrees of freedom in Ordinary Least Squares residuals.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: Expected Values and Variances of the OLS Estimators
Feedback: There are n − 2 degrees of freedom in Ordinary Least Squares residuals.

20. The variance of the slope estimator increases as the error variance decreases.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: Expected Values and Variances of the OLS Estimators
Feedback: The variance of the slope estimator increases as the error variance increases.

Chapter 3

1. In the equation y = β0 + β1x1 + β2x2 + u, β2 is a(n) _____.
a. independent variable
b. dependent variable
c. slope parameter
d. intercept parameter
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Motivation for Multiple Regression
Feedback: In the equation y = β0 + β1x1 + β2x2 + u, β2 is a slope parameter.

2. Consider the following regression equation: y = β0 + β1x1 + β2x2 + u. What does β1 imply?
a. β1 measures the ceteris paribus effect of x1 on x2.
b. β1 measures the ceteris paribus effect of y on x1.
c.
β1 measures the ceteris paribus effect of x1 on y.
d. β1 measures the ceteris paribus effect of x1 on u.
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Motivation for Multiple Regression
Feedback: β1 measures the ceteris paribus effect of x1 on y.

3. If the explained sum of squares is 35 and the total sum of squares is 49, what is the residual sum of squares?
a. 10
b. 12
c. 18
d. 14
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Mechanics and Interpretation of Ordinary Least Squares. BUSPROG: Analytic.
Feedback: The residual sum of squares is obtained by subtracting the explained sum of squares from the total sum of squares: 49 − 35 = 14.

4. Which of the following is true of R²?
a. R² is also called the standard error of regression.
b. A low R² indicates that the Ordinary Least Squares line fits the data well.
c. R² usually decreases with an increase in the number of independent variables in a regression.
d. R² shows what percentage of the total variation in the dependent variable, Y, is explained by the explanatory variables.
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Mechanics and Interpretation of Ordinary Least Squares
Feedback: R² shows what percentage of the total variation in Y is explained by the explanatory variables.

5. The value of R² always _____.
a. lies below 0
b. lies above 1
c. lies between 0 and 1
d. lies between 1 and 1.5
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Mechanics and Interpretation of Ordinary Least Squares
Feedback: By definition, the value of R² always lies between 0 and 1.

6. If an independent variable in a multiple linear regression model is an exact linear combination of other independent variables, the model suffers from the problem of _____.
a. perfect collinearity
b. homoskedasticity
c. heteroskedasticity
d.
omitted variable bias
Answer: a. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Expected Value of the OLS Estimators
Feedback: If an independent variable in a multiple linear regression model is an exact linear combination of other independent variables, the model suffers from the problem of perfect collinearity.

7. The assumption that there are no exact linear relationships among the independent variables in a multiple linear regression model fails if _____, where n is the sample size and k is the number of parameters.
a. n > 2
b. n = k + 1
c. n > k
d. n < k + 1
… β2 > 0 and x1 and x2 are positively correlated.

10. Suppose the variable x2 has been omitted from the following regression equation: y = β0 + β1x1 + β2x2 + u. β̃1 is the estimator obtained when x2 is omitted from the equation. The bias in β̃1 is negative if _____.
a. β2 > 0 and x1 and x2 are positively correlated
b. β2 …

12. High (but not perfect) correlation between two or more independent variables is called _____.
a. heteroskedasticity
b. homoskedasticity
c. multicollinearity
d. micronumerosity
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Variance of the OLS Estimators
Feedback: High, but not perfect, correlation between two or more independent variables is called multicollinearity.

13. The term _____ refers to the problem of small sample size.
a. micronumerosity
b. multicollinearity
c. homoskedasticity
d.
heteroskedasticity
Answer: a. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Variance of the OLS Estimators
Feedback: The term micronumerosity refers to the problem of small sample size.

14. Find the degrees of freedom in a regression model that has 10 observations and 7 independent variables.
a. 17
b. 2
c. 3
d. 4
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Variance of the OLS Estimators. BUSPROG: Analytic.
Feedback: The degrees of freedom in a regression model are computed by subtracting the number of parameters from the number of observations. Since the number of parameters is one more than the number of independent variables, the degrees of freedom in this case are 10 − (7 + 1) = 2.

15. The Gauss-Markov theorem will not hold if _____.
a. the error term has the same variance given any values of the explanatory variables
b. the error term has an expected value of zero given any values of the independent variables
c. the independent variables have exact linear relationships among them
d. the regression model relies on the method of random sampling for collection of data
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Efficiency of OLS: The Gauss-Markov Theorem
Feedback: The Gauss-Markov theorem will not hold if the independent variables have exact linear relationships among them.

16. The term "linear" in a multiple linear regression model means that the equation is linear in parameters.
Answer: True. Difficulty: Easy. Bloom's: Knowledge. A-Head: Motivation for Multiple Regression
Feedback: The term "linear" in a multiple linear regression model means that the equation is linear in parameters.

17. The key assumption for the general multiple regression model is that all factors in the unobserved error term be correlated with the explanatory variables.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: Motivation for Multiple Regression
Feedback: The key assumption of the general multiple regression model is that all factors in the unobserved error term be uncorrelated with the explanatory variables.

18. The coefficient of determination (R²) decreases when an independent variable is added to a multiple regression model.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: Mechanics and Interpretation of Ordinary Least Squares
Feedback: The coefficient of determination (R²) never decreases when an independent variable is added to a multiple regression model.

19. An explanatory variable is said to be exogenous if it is correlated with the error term.
Answer: False. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Expected Value of the OLS Estimators
Feedback: An explanatory variable is said to be endogenous if it is correlated with the error term.

20. A larger error variance makes it difficult to estimate the partial effect of any of the independent variables on the dependent variable.
Answer: True. Difficulty: Easy. Bloom's: Knowledge. A-Head: The Variance of the OLS Estimators
Feedback: A larger error variance makes it difficult to estimate the partial effect of any of the independent variables on the dependent variable.

Chapter 4

1. The normality assumption implies that:
a. the population error u is dependent on the explanatory variables and is normally distributed with mean equal to one and variance σ².
b. the population error u is independent of the explanatory variables and is normally distributed with mean equal to one and variance σ.
c. the population error u is dependent on the explanatory variables and is normally distributed with mean zero and variance σ.
d. the population error u is independent of the explanatory variables and is normally distributed with mean zero and variance σ².
Answer: d. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Sampling Distributions of the OLS Estimators
Feedback: The normality assumption implies that the population error u is independent of the explanatory variables and is normally distributed with mean zero and variance σ².

2. Which of the following statements is true?
a. Taking a log of a nonnormal distribution yields a distribution that is closer to normal.
b. The mean of a nonnormal distribution is 0 and the variance is σ².
c. The CLT assumes that the dependent variable is unaffected by unobserved factors.
d. OLS estimators have the highest variance among unbiased estimators.
Answer: a. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Sampling Distributions of the OLS Estimators
Feedback: Transformations such as logs of nonnormal distributions yield distributions that are closer to normal.

3. A normal variable is standardized by:
a. subtracting off its mean from it and multiplying by its standard deviation.
b. adding its mean to it and multiplying by its standard deviation.
c. subtracting off its mean from it and dividing by its standard deviation.
d. adding its mean to it and dividing by its standard deviation.
Answer: c. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Sampling Distributions of the OLS Estimators
Feedback: A normal variable is standardized by subtracting off its mean and dividing by its standard deviation.

4. Which of the following is a statistic that can be used to test hypotheses about a single population parameter?
a. F statistic
b. t statistic
c. χ² statistic
d. Durbin-Watson statistic
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: Testing Hypotheses about a Single Population Parameter: The t Test
Feedback: The t statistic can be used to test hypotheses about a single population parameter.

5. Consider the equation, Y = β1 + β2X2 + u. A null hypothesis, H0: β2 = 0, states that:
a. X2 has no effect on the expected value of β2.
b.
X2 has no effect on the expected value of Y.
c. β2 has no effect on the expected value of Y.
d. Y has no effect on the expected value of X2.
Answer: b. Difficulty: Moderate. Bloom's: Comprehension. A-Head: Testing Hypotheses about a Single Population Parameter: The t Test
Feedback: In such an equation, a null hypothesis, H0: β2 = 0, states that X2 has no effect on the expected value of Y. This is because β2 is the coefficient associated with X2.

6. The significance level of a test is:
a. the probability of rejecting the null hypothesis when it is false.
b. one minus the probability of rejecting the null hypothesis when it is false.
c. the probability of rejecting the null hypothesis when it is true.
d. one minus the probability of rejecting the null hypothesis when it is true.
Answer: c. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Testing Hypotheses about a Single Population Parameter: The t Test
Feedback: The significance level of a test refers to the probability of rejecting the null hypothesis when it is in fact true.

7. The general t statistic can be written as:
a. t = hypothesized value / standard error
b. t = (estimate − hypothesized value) / …
c. t = (estimate − hypothesized value) / variance
d. t = (estimate − hypothesized value) / standard error
Answer: d. Difficulty: Easy. Bloom's: Knowledge. A-Head: Testing Hypotheses about a Single Population Parameter: The t Test
Feedback: The general t statistic can be written as t = (estimate − hypothesized value) / standard error.

8. Which of the following statements is true of confidence intervals?
a. Confidence intervals in a CLM are also referred to as point estimates.
b. Confidence intervals in a CLM provide a range of likely values for the population parameter.
c. Confidence intervals in a CLM do not depend on the degrees of freedom of a distribution.
d. Confidence intervals in a CLM can be truly estimated when heteroskedasticity is present.
Answer: b. Difficulty: Easy. Bloom's: Knowledge. A-Head: Confidence Intervals
Feedback: Confidence intervals provide a range of likely values for the population parameter and are not point estimates. Estimation of confidence intervals depends on the degrees of freedom of the distribution and cannot be truly done when heteroskedasticity is present.

9. Which of the following statements is true?
a. When the standard error of an estimate increases, the confidence interval for the estimate narrows down.
b. The standard error of an estimate does not affect the confidence interval for the estimate.
c. The lower bound of the confidence interval for a regression coefficient, say βj, is given by β̂j − [standard error(β̂j)].
d. The upper bound of the confidence interval for a regression coefficient, say βj, is given by β̂j + [critical value × standard error(β̂j)].
Answer: d. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Confidence Intervals
Feedback: The upper bound of the confidence interval for a regression coefficient, say βj, is given by β̂j + [critical value × standard error(β̂j)].

10. Which of the following tools is used to test multiple linear restrictions?
a. t test
b. z test
c. F test
d. Unit root test
Answer: c. Difficulty: Easy. Bloom's: Knowledge. A-Head: Testing Multiple Linear Restrictions: The F Test
Feedback: The F test is used to test multiple linear restrictions.

11. Which of the following statements is true of hypothesis testing?
a. The t test can be used to test multiple linear restrictions.
b. A test of a single restriction is also referred to as a joint hypotheses test.
c. A restricted model will always have fewer parameters than its unrestricted model.
d. OLS estimates maximize the sum of squared residuals.
Answer: c. Difficulty: Moderate. Bloom's: Knowledge. A-Head: Testing Multiple Linear Restrictions: The F Test
Feedback: A restricted model will always have fewer parameters than its unrestricted model.
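The t statistic in question 7 and the confidence-interval bounds in question 9 each amount to one line of arithmetic. A short sketch; the estimate, standard error, and critical value below are invented for illustration:

```python
# Hypothetical regression output (illustrative values only).
estimate = 0.52        # estimated coefficient, beta-j hat
hypothesized = 0.0     # value of beta-j under the null hypothesis
se = 0.21              # standard error of the estimate

# Question 7: t = (estimate - hypothesized value) / standard error.
t = (estimate - hypothesized) / se

# Question 9: confidence bounds = estimate +/- critical value * se.
critical = 1.96        # illustrative two-sided critical value
lower = estimate - critical * se
upper = estimate + critical * se
```

A larger standard error widens the interval (the opposite of option a in question 9), and the bounds are symmetric around the estimate.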
12. Which of the following correctly defines the F statistic, if SSRr represents the sum of squared residuals from the restricted model of hypothesis testing, SSRur represents the sum of squared residuals of the unrestricted model, and q is the number of restrictions placed?
a. F = [(SSRur − SSRr)/q] / [SSRur/(n − k − 1)]
b. F = [(SSRr − SSRur)/q] / [SSRur/(n − k − 1)]
c. F = [(SSRur − SSRr)/q] / [SSRr/(n − k − 1)]
d. F = [(SSRur − SSRr)/(n − k − 1)] / [SSRur/q]
Answer: b
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Testing Multiple Linear Restrictions: The F test
BUSPROG:
Feedback: The F statistic is given by F = [(SSRr − SSRur)/q] / [SSRur/(n − k − 1)].

13. Which of the following statements is true?
a. If the calculated value of the F statistic is higher than the critical value, we reject the alternative hypothesis in favor of the null hypothesis.
b. The F statistic is always nonnegative as SSRr is never smaller than SSRur.
c. The degrees of freedom of a restricted model are always less than the degrees of freedom of an unrestricted model.
d. The F statistic is more flexible than the t statistic for testing a hypothesis with a single restriction.
Answer: b
Difficulty: Moderate
Bloom's: Comprehension
A-Head: Testing Multiple Linear Restrictions: The F test
BUSPROG:
Feedback: The F statistic is always nonnegative as SSRr is never smaller than SSRur.

14. If R²ur = 0.6873, R²r = 0.5377, the number of restrictions = 3, and n − k − 1 = 229, the F statistic equals:
a. 21.2
b. 28.6
c. 36.5
d. 42.1
Answer: c
Difficulty: Hard
Bloom's: Application
A-Head: Testing Multiple Linear Restrictions: The F test
BUSPROG: Analytic
Feedback: The F statistic can be calculated as F = [(R²ur − R²r)/q] / [(1 − R²ur)/(n − k − 1)], where q represents the number of restrictions. In this case it equals [(0.6873 − 0.5377)/3] / [(1 − 0.6873)/229] = 0.04987/0.001366 ≈ 36.5.

15. Which of the following correctly identifies a reason why some authors prefer to report the standard errors rather than the t statistic?
a.
Having standard errors makes it easier to compute confidence intervals.
b. Standard errors are always positive.
c. The F statistic can be reported just by looking at the standard errors.
d. Standard errors can be used directly to test multiple linear restrictions.
Answer: a
Difficulty: Moderate
Bloom's: Comprehension
A-Head: Reporting Regression Results
BUSPROG:
Feedback: One of the advantages of reporting standard errors over t statistics is that confidence intervals can be easily calculated using standard errors.

16. Whenever the dependent variable takes on just a few values, it is close to a normal distribution.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Sampling Distribution of the OLS Estimators
BUSPROG:
Feedback: Whenever the dependent variable takes on just a few values, it cannot have anything close to a normal distribution. A normal distribution requires the dependent variable to take on a large range of values.

17. If the calculated value of the t statistic is greater than the critical value, the null hypothesis, H0, is rejected in favor of the alternative hypothesis, H1.
Answer: True
Difficulty: Easy
Bloom's: Knowledge
A-Head: Testing Hypotheses about a Single Population Parameter: The t Test
BUSPROG:
Feedback: If the calculated value of the t statistic is greater than the critical value, H0 is rejected in favor of H1.

18. H1: βj ≠ 0, where βj is a regression coefficient associated with an explanatory variable, represents a one-sided alternative hypothesis.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Testing Hypotheses about a Single Population Parameter: The t Test
BUSPROG:
Feedback: H1: βj ≠ 0, where βj is a regression coefficient associated with an explanatory variable, represents a two-sided alternative hypothesis.

19.
If β̂1 and β̂2 are estimated values of regression coefficients associated with two explanatory variables in a regression equation, then standard error(β̂1 − β̂2) = standard error(β̂1) − standard error(β̂2).
Answer: False
Difficulty: Easy
Bloom's: Comprehension
A-Head: Testing Hypotheses about a Single Linear Combination of the Parameters
BUSPROG:
Feedback: If β̂1 and β̂2 are estimated values of regression coefficients associated with two explanatory variables in a regression equation, then standard error(β̂1 − β̂2) ≠ standard error(β̂1) − standard error(β̂2).

20. Standard errors must always be positive.
Answer: True
Difficulty: Easy
Bloom's: Knowledge
A-Head: Testing Hypotheses about a Single Linear Combination of the Parameters
BUSPROG:
Feedback: Standard errors must always be positive since they are estimates of standard deviations.

Chapter 5

1. Which of the following statements is true?
a. The standard error of a regression, σ̂, is not an unbiased estimator for σ, the standard deviation of the error, u, in a multiple regression model.
b. In time series regressions, OLS estimators are always unbiased.
c. Almost all economists agree that unbiasedness is a minimal requirement for an estimator in regression analysis.
d. All estimators in a regression model that are consistent are also unbiased.
Answer: a
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: The standard error of a regression is not an unbiased estimator for the standard deviation of the error in a multiple regression model.

2. If β̂j, an unbiased estimator of βj, is consistent, then the:
a. distribution of β̂j becomes more and more loosely distributed around βj as the sample size grows.
b. distribution of β̂j becomes more and more tightly distributed around βj as the sample size grows.
c. distribution of β̂j tends toward a standard normal distribution as the sample size grows.
d.
distribution of β̂j remains unaffected as the sample size grows.
Answer: b
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: If β̂j, an unbiased estimator of βj, is consistent, then the distribution of β̂j becomes more and more tightly distributed around βj as the sample size grows.

3. If β̂j, an unbiased estimator of βj, is also a consistent estimator of βj, then when the sample size tends to infinity:
a. the distribution of β̂j collapses to a single value of zero.
b. the distribution of β̂j diverges away from a single value of zero.
c. the distribution of β̂j collapses to the single point βj.
d. the distribution of β̂j diverges away from βj.
Answer: c
Difficulty: Easy
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: If β̂j, an unbiased estimator of βj, is also a consistent estimator of βj, then when the sample size tends to infinity the distribution of β̂j collapses to the single point βj.

4. In a multiple regression model, the OLS estimator is consistent if:
a. there is no correlation between the dependent variable and the error term.
b. there is a perfect correlation between the dependent variable and the error term.
c. the sample size is less than the number of parameters in the model.
d. there is no correlation between the independent variables and the error term.
Answer: d
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: In a multiple regression model, the OLS estimator is consistent if there is no correlation between the explanatory variables and the error term.

5. If the error term is correlated with any of the independent variables, the OLS estimators are:
a. biased and consistent.
b. unbiased and inconsistent.
c. biased and inconsistent.
d. unbiased and consistent.
Answer: c
Difficulty: Easy
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: If the error term is correlated with any of the independent variables, then the OLS estimators are biased and inconsistent.
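The consistency results in questions 2 through 5 can be illustrated with a small simulation. The sketch below is not from the test bank; the data-generating process and coefficient values are assumed for illustration. It shows the simple-regression OLS slope landing near the true β when the error is uncorrelated with x, and near β + Cov(x, u)/Var(x) when it is not.

```python
# Minimal simulation (hypothetical setup): OLS is consistent when the error
# is uncorrelated with x, and inconsistent when it is correlated with x.
import random

random.seed(42)

def ols_slope(x, y):
    # Simple-regression OLS slope: sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

n, beta = 50_000, 2.0
x = [random.gauss(0, 1) for _ in range(n)]
e = [random.gauss(0, 1) for _ in range(n)]

# Exogenous case: u = e is independent of x, so the slope estimate is near beta.
y_good = [beta * xi + ei for xi, ei in zip(x, e)]
# Endogenous case: u = 0.8*x + e, so the slope estimate is near beta + 0.8 = 2.8.
y_bad = [beta * xi + (0.8 * xi + ei) for xi, ei in zip(x, e)]

print(round(ols_slope(x, y_good), 2))  # close to 2.0
print(round(ols_slope(x, y_bad), 2))   # close to 2.8
```

Increasing n tightens the first estimate around 2.0 (consistency) but only tightens the second around the wrong value 2.8 (inconsistency), which is the point of question 5.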
6. If δ1 = Cov(x1, x2)/Var(x1), where x1 and x2 are two independent variables in a regression equation, which of the following statements is true?
a. If x2 has a positive partial effect on the dependent variable, and δ1 > 0, then the inconsistency in the simple regression slope estimator associated with x1 is negative.
b. If x2 has a positive partial effect on the dependent variable, and δ1 > 0, then the inconsistency in the simple regression slope estimator associated with x1 is positive.
c. If x1 has a positive partial effect on the dependent variable, and δ1 > 0, then the inconsistency in the simple regression slope estimator associated with x1 is negative.
d. If x1 has a positive partial effect on the dependent variable, and δ1 > 0, then the inconsistency in the simple regression slope estimator associated with x1 is positive.
Answer: b
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: Given that δ1 = Cov(x1, x2)/Var(x1), where x1 and x2 are two independent variables in a regression equation, if x2 has a positive partial effect on the dependent variable, and δ1 > 0, then the inconsistency in the simple regression slope estimator associated with x1 is positive.

7. If OLS estimators satisfy asymptotic normality, it implies that:
a. they are approximately normally distributed in large enough sample sizes.
b. they are approximately normally distributed in samples with less than 10 observations.
c. they have a constant mean equal to zero and variance equal to σ².
d. they have a constant mean equal to one and variance equal to σ.
Answer: a
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: If OLS estimators satisfy asymptotic normality, it implies that they are approximately normally distributed in large enough sample sizes.

8. In a regression model, if the variance of the dependent variable, y, conditional on an explanatory variable, x, or Var(y|x), is not constant, _____.
a.
the t statistics are invalid and confidence intervals are valid for small sample sizes
b. the t statistics are valid and confidence intervals are invalid for small sample sizes
c. the t statistics and confidence intervals are valid no matter how large the sample size is
d. the t statistics and confidence intervals are both invalid no matter how large the sample size is
Answer: d
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: If the variance of the dependent variable conditional on an explanatory variable is not constant, the usual t statistics and confidence intervals are both invalid no matter how large the sample size is.

9. If β̂j is an OLS estimator of a regression coefficient associated with one of the explanatory variables, such that j = 1, 2, …, k, the asymptotic standard error of β̂j will refer to the:
a. estimated variance of β̂j when the error term is normally distributed.
b. estimated variance of a given coefficient when the error term is not normally distributed.
c. square root of the estimated variance of β̂j when the error term is normally distributed.
d. square root of the estimated variance of β̂j when the error term is not normally distributed.
Answer: d
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: The asymptotic standard error refers to the square root of the estimated variance of β̂j when the error term is not normally distributed.

10. A useful rule of thumb is that standard errors are expected to shrink at a rate that is the inverse of the:
a. square root of the sample size.
b. product of the sample size and the number of parameters in the model.
c. square of the sample size.
d. sum of the sample size and the number of parameters in the model.
Answer: a
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: Standard errors can be expected to shrink at a rate that is the inverse of the square root of the sample size.

11. An auxiliary regression refers to a regression that is used:
a. when the dependent variables are qualitative in nature.
b. when the independent variables are qualitative in nature.
c. to compute a test statistic but whose coefficients are not of direct interest.
d. to compute coefficients which are of direct interest in the analysis.
Answer: c
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: An auxiliary regression refers to a regression that is used to compute a test statistic but whose coefficients are not of direct interest.

12. The n-R-squared statistic is also referred to as the:
a. F statistic.
b. t statistic.
c. z statistic.
d. LM statistic.
Answer: d
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: The n-R-squared statistic is also referred to as the LM statistic.

13. The LM statistic follows a:
a. t distribution.
b. F distribution.
c. χ² distribution.
d. binomial distribution.
Answer: c
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: The LM statistic follows a χ² distribution.

14. Which of the following statements is true?
a. In large samples there are not many discrepancies between the outcomes of the F test and the LM test.
b. The degrees of freedom of the unrestricted model are necessary for using the LM test.
c. The LM test can be used to test hypotheses with single restrictions only and provides inefficient results for multiple restrictions.
d. The LM statistic is derived on the basis of the normality assumption.
Answer: a
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: In large samples there are not many discrepancies between the F test and the LM test because asymptotically the two statistics have the same probability of a Type I error.

15. Which of the following statements is true under the Gauss-Markov assumptions?
a. Among a certain class of estimators, OLS estimators are best linear unbiased, but are asymptotically inefficient.
b. Among a certain class of estimators, OLS estimators are biased but asymptotically efficient.
c. Among a certain class of estimators, OLS estimators are best linear unbiased and asymptotically efficient.
d. The LM test is independent of the Gauss-Markov assumptions.
Answer: c
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Asymptotic Efficiency of OLS
BUSPROG:
Feedback: Under the Gauss-Markov assumptions, among a certain class of estimators, OLS estimators are best linear unbiased and asymptotically efficient.

16. If the variance of an independent variable in a regression model, say x1, is greater than 0, or Var(x1) > 0, the inconsistency in β̂1 (the estimator associated with x1) is negative if x1 and the error term are positively related.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Consistency
BUSPROG:
Feedback: If the variance of an independent variable, say x1, is greater than 0, the inconsistency in β̂1 (the estimator associated with x1) is positive if x1 and the error term are positively related.

17. Even if the error terms in a regression equation, u1, u2, …, un, are not normally distributed, the estimated coefficients can be normally distributed.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: Even if the error terms in a regression equation, u1, u2, …, un, are not normally distributed, the estimated coefficients cannot be normally distributed.

18.
A normally distributed random variable is symmetrically distributed about its mean, it can take on any positive or negative value (but with zero probability), and more than 95% of the area under the distribution is within two standard deviations.
Answer: True
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: A normally distributed random variable is symmetrically distributed about its mean, it can take on any positive or negative value (but with zero probability), and more than 95% of the area under the distribution is within two standard deviations.

19. The F statistic is also referred to as the score statistic.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: The LM statistic is also referred to as the score statistic.

20. The LM statistic requires estimation of the unrestricted model only.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Asymptotic Normality and Large Sample Inference
BUSPROG:
Feedback: The LM statistic requires estimation of the restricted model only.

Chapter 6

1. A change in the unit of measurement of the dependent variable in a model does not lead to a change in:
a. the standard error of the regression.
b. the sum of squared residuals of the regression.
c. the goodness-of-fit of the regression.
d. the confidence intervals of the regression.
Answer: c
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Effects of Data Scaling on OLS Statistics
BUSPROG:
Feedback: Changing the unit of measurement of the dependent variable in a model does not lead to a change in the goodness-of-fit of the regression.

2. Changing the unit of measurement of any independent variable, where log of the independent variable appears in the regression:
a. affects only the intercept coefficient.
b. affects only the slope coefficient.
c. affects both the slope and intercept coefficients.
d.
affects neither the slope nor the intercept coefficient.
Answer: a
Difficulty: Moderate
Bloom's: Comprehension
A-Head: Effects of Data Scaling on OLS Statistics
BUSPROG:
Feedback: Changing the unit of measurement of any independent variable, where log of the independent variable appears in the regression, affects only the intercept. This follows from the property log(ab) = log(a) + log(b).

3. A variable is standardized in the sample:
a. by multiplying by its mean.
b. by subtracting off its mean and multiplying by its standard deviation.
c. by subtracting off its mean and dividing by its standard deviation.
d. by multiplying by its standard deviation.
Answer: c
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Effects of Data Scaling on OLS Statistics
BUSPROG:
Feedback: A variable is standardized in the sample by subtracting off its mean and dividing by its standard deviation.

4. Standardized coefficients are also referred to as:
a. beta coefficients.
b. y coefficients.
c. alpha coefficients.
d. j coefficients.
Answer: a
Difficulty: Easy
Bloom's: Knowledge
A-Head: Effects of Data Scaling on OLS Statistics
BUSPROG:
Feedback: Standardized coefficients are also referred to as beta coefficients.

5. If a regression equation has only one explanatory variable, say x1, its standardized coefficient must lie in the range:
a. −2 to 0.
b. −1 to 1.
c. 0 to 1.
d. 0 to 2.
Answer: b
Difficulty: Easy
Bloom's: Comprehension
A-Head: Effects of Data Scaling on OLS Statistics
BUSPROG:
Feedback: If a regression equation has only one explanatory variable, say x1, its standardized coefficient is the correlation coefficient between the dependent variable and x1, and must lie in the range −1 to 1.

6. In the following equation, gdp refers to gross domestic product, and FDI refers to foreign direct investment.
log(gdp) = 2.65 + 0.527 log(bankcredit) + 0.222 FDI
              (0.13)  (0.022)                (0.017)
Which of the following statements is then true?
a.
If gdp increases by 1%, bank credit increases by 0.527%, the level of FDI remaining constant.
b. If bank credit increases by 1%, gdp increases by 0.527%, the level of FDI remaining constant.
c. If gdp increases by 1%, bank credit increases by log(0.527)%, the level of FDI remaining constant.
d. If bank credit increases by 1%, gdp increases by log(0.527)%, the level of FDI remaining constant.
Answer: b
Difficulty: Moderate
Bloom's: Application
A-Head: More on Functional Form
BUSPROG:
Feedback: The equation suggests that if bank credit increases by 1%, gdp increases by 0.527%. This is known from the value of the coefficient associated with bank credit.

7. In the following equation, gdp refers to gross domestic product, and FDI refers to foreign direct investment.
log(gdp) = 2.65 + 0.527 log(bankcredit) + 0.222 FDI
              (0.13)  (0.022)                (0.017)
Which of the following statements is then true?
a. If FDI increases by 1%, gdp increases by approximately 22.2%, the amount of bank credit remaining constant.
b. If FDI increases by 1%, gdp increases by approximately 26.5%, the amount of bank credit remaining constant.
c. If FDI increases by 1%, gdp increases by approximately 24.8%, the amount of bank credit remaining constant.
d. If FDI increases by 1%, gdp increases by approximately 52.7%, the amount of bank credit remaining constant.
Answer: c
Difficulty: Hard
Bloom's: Application
A-Head: More on Functional Form
BUSPROG:
Feedback: The equation suggests that if FDI increases by 1%, gdp increases by 100(exp(0.222) − 1)%. This equals 100 × (1.24857 − 1) ≈ 24.8%.

8. Which of the following statements is true when the dependent variable, y > 0?
a. Taking log of a variable often expands its range.
b. Models using log(y) as the dependent variable will satisfy CLM assumptions more closely than models using the level of y.
c. Taking log of variables makes OLS estimates more sensitive to extreme values.
d.
Taking the logarithmic form of variables makes the slope coefficients more responsive to rescaling.
Answer: b
Difficulty: Moderate
Bloom's: Knowledge
A-Head: More on Functional Form
BUSPROG:
Feedback: Models using log(y) as the dependent variable will satisfy CLM assumptions more closely than models using the level of y. This is because taking log of a variable gets it closer to a normal distribution.

9. Which of the following correctly identifies a limitation of logarithmic transformation of variables?
a. Taking log of variables makes OLS estimates more sensitive to extreme values in comparison to variables taken in level.
b. Logarithmic transformations cannot be used if a variable takes on zero or negative values.
c. Logarithmic transformations of variables are likely to lead to heteroskedasticity.
d. Taking log of a variable often expands its range, which can cause inefficient estimates.
Answer: b
Difficulty: Moderate
Bloom's: Comprehension
A-Head: More on Functional Form
BUSPROG:
Feedback: Logarithmic transformations cannot be used if a variable takes on zero or negative values.

10. Which of the following models is used quite often to capture decreasing or increasing marginal effects of a variable?
a. Models with logarithmic functions
b. Models with quadratic functions
c. Models with variables in level
d. Models with interaction terms
Answer: b
Difficulty: Easy
Bloom's: Knowledge
A-Head: More on Functional Form
BUSPROG:
Feedback: Models with quadratic functions are used quite often to capture decreasing or increasing marginal effects of a variable.

11. Which of the following correctly represents the equation for the adjusted R²?
a. R̄² = 1 − [SSR/(n − 1)]/[SST/(n + 1)]
b. R̄² = 1 − [SSR/(n − k − 1)]/[SST/(n + 1)]
c. R̄² = 1 − [SSR/(n − k − 1)]/[SST/(n − 1)]
d. R̄² = 1 − [SSR]/[SST/(n − 1)]
Answer: c
Difficulty: Easy
Bloom's: Knowledge
A-Head: More on Goodness-of-Fit and Selection of Regressors
BUSPROG:
Feedback: R̄² = 1 − [SSR/(n − k − 1)]/[SST/(n − 1)].

12.
Which of the following correctly identifies an advantage of using the adjusted R² over R²?
a. The adjusted R² corrects the bias in R².
b. The adjusted R² is easier to calculate than R².
c. The penalty of adding new independent variables is better understood through the adjusted R² than R².
d. The adjusted R² can be calculated for models having logarithmic functions while R² cannot be calculated for such models.
Answer: c
Difficulty: Moderate
Bloom's: Knowledge
A-Head: More on Goodness-of-Fit and Selection of Regressors
BUSPROG:
Feedback: The penalty of adding new independent variables is better understood through the adjusted R² than R², since its calculation directly depends on the number of independent variables included.

13. Two equations form a nonnested model when:
a. one is logarithmic and the other is quadratic.
b. neither equation is a special case of the other.
c. each equation has the same independent variables.
d. there is only one independent variable in both equations.
Answer: b
Difficulty: Easy
Bloom's: Knowledge
A-Head: More on Goodness-of-Fit and Selection of Regressors
BUSPROG:
Feedback: Two equations form a nonnested model when neither equation is a special case of the other.

14. A predicted value of a dependent variable:
a. represents the difference between the expected value of the dependent variable and its actual value.
b. is always equal to the actual value of the dependent variable.
c. is independent of explanatory variables and can be estimated on the basis of the residual error term only.
d. represents the expected value of the dependent variable given particular values for the explanatory variables.
Answer: d
Difficulty: Easy
Bloom's: Knowledge
A-Head: Prediction and Residual Analysis
BUSPROG:
Feedback: A predicted value of a dependent variable represents the expected value of the dependent variable given particular values for the explanatory variables.

15. Residual analysis refers to the process of:
a.
examining individual observations to see whether the actual value of a dependent variable differs from the predicted value.
b. calculating the sum of squared residuals to draw inferences about the consistency of estimates.
c. transforming models with variables in level to logarithmic functions so as to understand the effect of percentage changes in the independent variable on the dependent variable.
d. sampling and collecting data in such a way as to minimize the sum of squared residuals.
Answer: a
Difficulty: Moderate
Bloom's: Knowledge
A-Head: Prediction and Residual Analysis
BUSPROG:
Feedback: Residual analysis refers to the process of examining individual observations to see whether the actual value of a dependent variable differs from the predicted value.

16. Beta coefficients are always greater than standardized coefficients.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Effects of Data Scaling on OLS Statistics
BUSPROG:
Feedback: Beta coefficients are the same as standardized coefficients.

17. If a new independent variable is added to a regression equation, the adjusted R² increases only if the absolute value of the t statistic of the new variable is greater than one.
Answer: True
Difficulty: Easy
Bloom's: Knowledge
A-Head: More on Goodness-of-Fit and Selection of Regressors
BUSPROG:
Feedback: If a new independent variable is added to a regression equation, the adjusted R² increases only if the t statistic of the new variable is greater than one in absolute value.

18. The F statistic can be used to test nonnested models.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: More on Goodness-of-Fit and Selection of Regressors
BUSPROG:
Feedback: The F statistic can be used only to test nested models.

19. Predictions of a dependent variable are subject to sampling variation.
Answer: True
Difficulty: Easy
Bloom's: Knowledge
A-Head: Prediction and Residual Analysis
BUSPROG:
Feedback: Predictions of a dependent variable are subject to sampling variation since they are obtained using OLS estimators.

20. To make predictions of logarithmic dependent variables, they first have to be converted to their level forms.
Answer: False
Difficulty: Easy
Bloom's: Knowledge
A-Head: Prediction and Residual Analysis
BUSPROG:
Feedback: It is possible to make predictions of dependent variables when they are in their logarithmic form. It is not necessary to convert them into their level forms.

Chapter 7

1. A _____ variable is used to incorporate qualitative information in a regression model.
a. dependent
b. continuous
c. binomial
d. dummy
Answer: d
Difficulty: Easy
Bloom's: Knowledge
A-Head: Describing Qualitative Information
BUSPROG:
Feedback: A dummy variable, or binary variable, is used to incorporate qualitative information in a regression model.

2. In a regression model, which of the following will be described using a binary variable?
a. Whether it rained on a particular day or it did not
b. The volume of rainfall during a year
c. The percentage of humidity in the air on a particular day
d. The concentration of dust particles in the air
Answer: a
Difficulty: Moderate
Bloom's: Comprehension
A-Head: Describing Qualitative Information
BUSPROG:
Feedback: A binary variable is used to describe qualitative information in a regression model. Therefore, such a variable will be used to describe whether it rained on a particular day or it did not.

3. Which of the following is true of dummy variables?
a. A dummy variable always takes a value less than 1.
b. A dummy variable always takes a value higher than 1.
c. A dummy variable takes a value of 0 or 1.
d. A dummy variable takes a value of 1 or 10.
Answer: c
Difficulty: Easy
Bloom's: Knowledge
A-Head: Describing Qualitative Information
BUSPROG:
Feedback: A dummy variable takes a value of 0 or 1.
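Questions 1 through 3 can be made concrete with a tiny numerical example. The sketch below uses made-up data; it relies on the standard result that, in a regression on a single 0/1 dummy and nothing else, the OLS intercept is the base-group mean and the dummy coefficient is the difference between the two group means.

```python
# Hypothetical illustration: with a single dummy regressor edu (0/1) in
# savings = b0 + d0*edu + u, the OLS estimate of d0 equals the mean savings
# of the edu=1 group minus the mean savings of the edu=0 group.
edu     = [1, 0, 1, 1, 0, 0, 1, 0]                  # 1 = educated, 0 = not (assumed data)
savings = [9.0, 5.0, 8.0, 10.0, 4.0, 6.0, 9.0, 5.0]

mean1 = sum(s for s, e in zip(savings, edu) if e == 1) / edu.count(1)
mean0 = sum(s for s, e in zip(savings, edu) if e == 0) / edu.count(0)

b0_hat = mean0           # intercept: average savings of the base (uneducated) group
d0_hat = mean1 - mean0   # dummy coefficient: educated-minus-uneducated gap

print(b0_hat, d0_hat)    # prints: 5.0 4.0
```

Adding a second dummy for "uneducated" would make the two dummies sum to the intercept column for every observation, which is the perfect collinearity behind the dummy variable trap in question 4 below.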
The following simple model is used to determine the annual savings of an individual on the basis of his annual income and education.
Savings = β0 + δ0Edu + β1Inc + u
The variable 'Edu' takes a value of 1 if the person is educated, and the variable 'Inc' measures the income of the individual.

4. Refer to the model above. The inclusion of another binary variable in this model that takes a value of 1 if a person is uneducated will give rise to the problem of _____.
a. omitted variable bias
b. self-selection
c. dummy variable trap
d. heteroskedasticity
Answer: c
Difficulty: Moderate
Bloom's: Application
A-Head: Describing Qualitative Information
BUSPROG: Analytic
Feedback: The inclusion of another dummy variable in this model would introduce perfect collinearity and lead to a dummy variable trap.

The following simple model is used to determine the annual savings of an individual on the basis of his annual income and education.
Savings = β0 + δ0Edu + β1Inc + u
The variable 'Edu' takes a value of 1 if the person is educated, and the variable 'Inc' measures the income of the individual.

5. Refer to the model above. The benchmark group in this model is _____.
a. the group of educated people
b. the group of uneducated people
c. the group of individuals with a high income
d. the group of individuals with a low income
Answer: b
Difficulty: Moderate
Bloom's: Application
A-Head: A Single Dummy Independent Variable
BUSPROG: Analytic
Feedback: The benchmark group is the group against which comparisons are made. In this case, the savings of an educated person are being compared to the savings of an uneducated person; therefore, the group of uneducated people is the base group or benchmark group.

The following simple model is used to determine the annual savings of an individual on the basis of his annual income and education.
Savings = β0 + δ0Edu + β1Inc + u
The variable 'Edu' takes a value of 1 if the person is educated, and the variable 'Inc' measures the income of the individual.

6.
Refer to the above model. If δ0 > 0, _____.
a. uneducated people have higher savings than those who are educated
b. educated people have higher savings than those who are not educated
c. individuals with lower income have higher savings
d. individuals with higher income have higher savings
Answer: b
Difficulty: Moderate
Bloom's: Application
A-Head: A Single Dummy Independent Variable
BUSPROG: Analytic
Feedback: The coefficient δ0 measures the impact of education on an individual's annual savings. Since it is positive in this case, educated people should have higher savings.

7. The income of an individual in Budopia depends on his ethnicity and several other factors which can be measured quantitatively. If there are 5 ethnic groups in Budopia, how many dummy variables should be included in the regression equation for income determination in Budopia?
a. 1
b. 5
c. 6
d. 4
Answer: d
Difficulty: Moderate
Bloom's: Application
A-Head: Using Dummy Variables for Multiple Categories
BUSPROG: Analytic
Feedback: If a regression model is to have different intercepts for, say, g groups or categories, we need to include g − 1 dummy variables in the model along with an intercept. In this case, the regression equation should include 5 − 1 = 4 dummy variables since there are 5 ethnic groups.

8. The quarterly increase in an employee's salary depends on the rating of his work by his employer and several other factors as shown in the model below:
Increase in salary = β0 + δ0Rating + other factors
The variable 'Rating' is a(n) _____ variable.
a. dependent variable
b. ordinal variable
c. continuous variable
d. Poisson variable
Answer: b
Difficulty: Moderate
Bloom's: Application
A-Head: Using Dummy Variables for Multiple Categories
BUSPROG: Analytic
Feedback: The value of the variable 'Rating' depends on the employer's rating of the worker. Therefore, it incorporates ordinal information and is called an ordinal variable.

9. Which of the following is true of the Chow test?
a.
It is a type of t test.
b. It is a type of sign test.
c. It is only valid under homoskedasticity.
d. It is only valid under heteroskedasticity.
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Interactions Involving Dummy Variables
BUSPROG:
Feedback: Since the Chow test is just an F test, it is only valid under homoskedasticity.

10. Which of the following is true of dependent variables?
a. A dependent variable can only have a numerical value.
b. A dependent variable cannot have more than 2 values.
c. A dependent variable can be binary.
d. A dependent variable cannot have a qualitative meaning.
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: A Binary Dependent Variable: The Linear Probability Model
BUSPROG:
Feedback: A dependent variable can be binary when it describes a qualitative outcome.

11. In the following regression equation, y is a binary variable:
y = β0 + β1x1 + … + βkxk + u
In this case, the estimated slope coefficient, β̂1, measures _____.
a. the predicted change in the value of y when x1 increases by one unit, everything else remaining constant
b. the predicted change in the value of y when x1 decreases by one unit, everything else remaining constant
c. the predicted change in the probability of success when x1 decreases by one unit, everything else remaining constant
d. the predicted change in the probability of success when x1 increases by one unit, everything else remaining constant
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: A Binary Dependent Variable: The Linear Probability Model
BUSPROG:
Feedback: A binary dependent variable is used when a regression model is used to explain a qualitative event. The dependent variable takes a value of 1 when the event takes place (success) and a value of zero when the event does not take place. The coefficient of an independent variable in this case measures the predicted change in the probability of success when the independent variable increases by one unit.

12.
Consider the following regression equation:
y = β0 + β1x1 + … + βkxk + u
In which of the following cases is the dependent variable binary?
a. y indicates the gross domestic product of a country
b. y indicates whether an adult is a college dropout
c. y indicates household consumption expenditure
d. y indicates the number of children in a family
Answer: b
Difficulty: Easy
Bloom’s: Application
A-Head: A Binary Dependent Variable: The Linear Probability Model
BUSPROG: Analytic
Feedback: The dependent variable y is binary if it is used to indicate a qualitative outcome.

13. Which of the following Gauss-Markov assumptions is violated by the linear probability model?
a. The assumption of constant variance of the error term.
b. The assumption of zero conditional mean of the error term.
c. The assumption of no exact linear relationship among independent variables.
d. The assumption that none of the independent variables are constants.
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: A Binary Dependent Variable: The Linear Probability Model
BUSPROG:
Feedback: The linear probability model violates the assumption of constant variance of the error term.

14. Which of the following problems can arise in policy analysis and program evaluation using a multiple linear regression model?
a. There exists homoskedasticity in the model.
b. The model can produce predicted probabilities that are less than zero and greater than one.
c. The model leads to the omitted variable bias as only two independent factors can be included in the model.
d. The model leads to an overestimation of the effect of independent variables on the dependent variable.
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: More on Policy Analysis and Program Evaluation
BUSPROG:
Feedback: The model can produce predicted probabilities that are less than zero and greater than one.

15. Consider the following regression equation:
y = β0 + β1x1 + … + βkxk + u
In which of the following cases is ‘y’ a discrete variable?
a.
y indicates the gross domestic product of a country
b. y indicates the total volume of rainfall during a year
c. y indicates household consumption expenditure
d. y indicates the number of children in a family
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Interpreting Regression Results with Discrete Dependent Variables
BUSPROG:
Feedback: The number of children in a family can only take a small set of integer values. Therefore, y is a discrete variable if it measures the number of children in a family.

16. A binary variable is a variable whose value changes with a change in the number of observations.
Answer: False
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Describing Qualitative Information
BUSPROG:
Feedback: A binary variable is one whose value depends on the event taking place.

17. A dummy variable trap arises when a single dummy variable describes a given number of groups.
Answer: False
Difficulty: Easy
Bloom’s: Knowledge
A-Head: A Single Dummy Independent Variable
BUSPROG:
Feedback: A dummy variable trap arises when too many dummy variables describe a given number of groups.

18. The dummy variable coefficient for a particular group represents the estimated difference in intercepts between that group and the base group.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Using Dummy Variables for Multiple Categories
BUSPROG:
Feedback: The dummy variable coefficient for a particular group represents the estimated difference in intercepts between that group and the base group.

19. The multiple linear regression model with a binary dependent variable is called the linear probability model.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: A Binary Dependent Variable: The Linear Probability Model
BUSPROG:
Feedback: The multiple linear regression model with a binary dependent variable is called the linear probability model.

20.
A problem that often arises in policy and program evaluation is that individuals (or firms or cities) choose whether or not to participate in certain behaviors or programs.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: More on Policy Analysis and Program Evaluation
BUSPROG:
Feedback: A problem that often arises in policy and program evaluation is that individuals (or firms or cities) choose whether or not to participate in certain behaviors or programs, and their choice depends on several other factors. It is not possible to control for these factors while examining the effect of the programs.

Chapter 8

1. Which of the following is true of heteroskedasticity?
a. Heteroskedasticity causes inconsistency in the Ordinary Least Squares estimators.
b. Population R² is affected by the presence of heteroskedasticity.
c. The Ordinary Least Squares estimators are not the best linear unbiased estimators if heteroskedasticity is present.
d. It is not possible to obtain F statistics that are robust to heteroskedasticity of an unknown form.
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Consequences of Heteroskedasticity for OLS
BUSPROG:
Feedback: The Ordinary Least Squares estimators are no longer the best linear unbiased estimators if heteroskedasticity is present in a regression model.

2. Consider the following regression model: yi = β0 + β1xi + ui. If the first four Gauss-Markov assumptions hold true, and the error term contains heteroskedasticity, then _____.
a. Var(ui|xi) = 0
b. Var(ui|xi) = 1
c. Var(ui|xi) = σi²
d. Var(ui|xi) = σ²
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Heteroskedasticity-Robust Inference after OLS Estimation
BUSPROG:
Feedback: If the first four Gauss-Markov assumptions hold and the error term contains heteroskedasticity, then Var(ui|xi) = σi².

3. The general form of the t statistic is _____.
a. t = (estimate − hypothesized value)/standard error
b. t = (hypothesized value − estimate)/standard error
c. t = standard error/(estimate − hypothesized value)
d. t = estimate − hypothesized value
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Heteroskedasticity-Robust Inference after OLS Estimation
BUSPROG:
Feedback: The general form of the t statistic is t = (estimate − hypothesized value)/standard error.

4. Which of the following is true of the OLS t statistics?
a. The heteroskedasticity-robust t statistics are justified only if the sample size is large.
b. The heteroskedasticity-robust t statistics are justified only if the sample size is small.
c. The usual t statistics do not have exact t distributions if the sample size is large.
d. In the presence of homoskedasticity, the usual t statistics do not have exact t distributions if the sample size is small.
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Heteroskedasticity-Robust Inference after OLS Estimation
BUSPROG:
Feedback: The heteroskedasticity-robust t statistics are justified only if the sample size is large.

5. The heteroskedasticity-robust _____ is also called the heteroskedasticity-robust Wald statistic.
a. t statistic
b. F statistic
c. LM statistic
d. z statistic
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Heteroskedasticity-Robust Inference after OLS Estimation
BUSPROG:
Feedback: The heteroskedasticity-robust F statistic is also called the heteroskedasticity-robust Wald statistic.

6. Which of the following tests helps in the detection of heteroskedasticity?
a. The Breusch-Pagan test
b. The Breusch-Godfrey test
c. The Durbin-Watson test
d. The Chow test
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Testing for Heteroskedasticity
BUSPROG:
Feedback: The Breusch-Pagan test is used for the detection of heteroskedasticity in a regression model.

7. What will you conclude about a regression model if the Breusch-Pagan test results in a small p-value?
a. The model contains homoskedasticity.
b. The model contains heteroskedasticity.
c. The model contains dummy variables.
d.
The model omits some important explanatory factors.
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Testing for Heteroskedasticity
BUSPROG:
Feedback: The Breusch-Pagan test results in a small p-value if the regression model contains heteroskedasticity.

8. A test for heteroskedasticity can be significant if _____.
a. the Breusch-Pagan test results in a large p-value
b. the White test results in a large p-value
c. the functional form of the regression model is misspecified
d. the regression model includes too many independent variables
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Testing for Heteroskedasticity
BUSPROG:
Feedback: A test for heteroskedasticity can be significant if the functional form of the regression model is misspecified.

9. Which of the following is a difference between the White test and the Breusch-Pagan test?
a. The White test is used for detecting heteroskedasticity in a linear regression model while the Breusch-Pagan test is used for detecting autocorrelation.
b. The White test is used for detecting autocorrelation in a linear regression model while the Breusch-Pagan test is used for detecting heteroskedasticity.
c. The number of regressors used in the White test is larger than the number of regressors used in the Breusch-Pagan test.
d. The number of regressors used in the Breusch-Pagan test is larger than the number of regressors used in the White test.
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Testing for Heteroskedasticity
BUSPROG:
Feedback: The White test includes the squares and cross products of all independent variables. Therefore, the number of regressors is larger for the White test.

10. Which of the following is true of the White test?
a. The White test is used to detect the presence of multicollinearity in a linear regression model.
b. The White test cannot detect forms of heteroskedasticity that invalidate the usual Ordinary Least Squares standard errors.
c.
The White test can detect the presence of heteroskedasticity in a linear regression model even if the functional form is misspecified.
d. The White test assumes that the square of the error term in a regression model is uncorrelated with all the independent variables, their squares and cross products.
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Testing for Heteroskedasticity
BUSPROG:
Feedback: The White test assumes that the square of the error term in a regression model is uncorrelated with all the independent variables, the squares of the independent variables, and all the cross products.

11. Which of the following is true?
a. In ordinary least squares estimation, each observation is given a different weight.
b. In weighted least squares estimation, each observation is given an identical weight.
c. In weighted least squares estimation, less weight is given to observations with a higher error variance.
d. In ordinary least squares estimation, less weight is given to observations with a lower error variance.
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Weighted Least Squares Estimation
BUSPROG:
Feedback: In weighted least squares estimation, less weight is given to observations with a higher error variance.

12. Weighted least squares estimation is used only when _____.
a. the dependent variable in a regression model is binary
b. the independent variables in a regression model are correlated
c. the error term in a regression model has a constant variance
d. the functional form of the error variances is known
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Weighted Least Squares Estimation
BUSPROG:
Feedback: Weighted least squares estimation is used only when the functional form of the error variances is known.

13. Consider the following regression equation: y = β0 + β1x1 + u. Which of the following indicates a functional form misspecification in E(y|x)?
a. Ordinary Least Squares estimates equal Weighted Least Squares estimates.
b.
Ordinary Least Squares estimates exceed Weighted Least Squares estimates by a small magnitude.
c. Weighted Least Squares estimates exceed Ordinary Least Squares estimates by a small magnitude.
d. Ordinary Least Squares estimates are positive while Weighted Least Squares estimates are negative.
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Weighted Least Squares Estimation
BUSPROG:
Feedback: If Ordinary Least Squares estimates are positive while Weighted Least Squares estimates are negative, the functional form of the regression equation is said to be misspecified.

14. Which of the following tests is used to compare the Ordinary Least Squares (OLS) estimates and the Weighted Least Squares (WLS) estimates?
a. The White test
b. The Hausman test
c. The Durbin-Watson test
d. The Breusch-Godfrey test
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Weighted Least Squares Estimation
BUSPROG:
Feedback: The Hausman test can be used to formally compare the OLS and WLS estimates to see if they differ by more than sampling error suggests they should.

15. The linear probability model contains heteroskedasticity unless _____.
a. the intercept parameter is zero
b. all the slope parameters are positive
c. all the slope parameters are zero
d. the independent variables are binary
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: The Linear Probability Model Revisited
BUSPROG:
Feedback: The linear probability model contains heteroskedasticity unless all the slope parameters are zero.

16. The interpretation of goodness-of-fit measures changes in the presence of heteroskedasticity.
Answer: False
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Consequences of Heteroskedasticity for OLS
BUSPROG:
Feedback: The interpretation of goodness-of-fit measures is unaffected by the presence of heteroskedasticity.

17. Multicollinearity among the independent variables in a linear regression model causes the heteroskedasticity-robust standard errors to be large.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Heteroskedasticity-Robust Inference after OLS Estimation
BUSPROG:
Feedback: Multicollinearity among the independent variables in a linear regression model causes the heteroskedasticity-robust standard errors to be large.

18. If the Breusch-Pagan test for heteroskedasticity results in a large p-value, the null hypothesis of homoskedasticity is rejected.
Answer: False
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Testing for Heteroskedasticity
BUSPROG:
Feedback: If the Breusch-Pagan test for heteroskedasticity results in a large p-value, the null hypothesis of homoskedasticity is not rejected.

19. The generalized least squares estimators for correcting heteroskedasticity are called weighted least squares estimators.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Weighted Least Squares Estimation
BUSPROG:
Feedback: The generalized least squares estimators for correcting heteroskedasticity are called weighted least squares estimators.

20. The linear probability model always contains heteroskedasticity when the dependent variable is a binary variable unless all of the slope parameters are zero.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: The Linear Probability Model Revisited
BUSPROG:
Feedback: The linear probability model always contains heteroskedasticity when the dependent variable is a binary variable unless all of the slope parameters are zero.

Chapter 9

1. Consider the following regression model: log(y) = β0 + β1x1 + β2x1² + β3x3 + u. This model will suffer from functional form misspecification if _____.
a. β0 is omitted from the model
b. u is heteroskedastic
c. x1² is omitted from the model
d. x3 is a binary variable
Answer: c
Difficulty: Easy
Bloom’s: Comprehension
A-Head: Functional Form Misspecification
BUSPROG:
Feedback: The model suffers from functional form misspecification if x1² is omitted from the model since it is a function of x1, which is an observed explanatory variable.

2.
A regression model suffers from functional form misspecification if _____.
a. a key variable is binary.
b. the dependent variable is binary.
c. an interaction term is omitted.
d. the coefficient of a key variable is zero.
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Functional Form Misspecification
BUSPROG:
Feedback: A regression model suffers from functional form misspecification if an interaction term is omitted.

3. Which of the following is true?
a. A functional form misspecification can occur if the level of a variable is used when the logarithm is more appropriate.
b. A functional form misspecification occurs only if a key variable is uncorrelated with the error term.
c. A functional form misspecification does not lead to bias in the ordinary least squares estimators.
d. A functional form misspecification does not lead to inconsistency in the ordinary least squares estimators.
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Functional Form Misspecification
BUSPROG:
Feedback: A functional form misspecification can occur if the level of a variable is used when the logarithm is more appropriate.

4. Which of the following is true of the Regression Specification Error Test (RESET)?
a. It tests whether the functional form of a regression model is misspecified.
b. It detects the presence of dummy variables in a regression model.
c. It helps in the detection of heteroskedasticity when the functional form of the model is correctly specified.
d. It helps in the detection of multicollinearity among the independent variables in a regression model.
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Functional Form Misspecification
BUSPROG:
Feedback: RESET tests whether the functional form of a regression model is misspecified.

5. A proxy variable _____.
a. increases the error variance of a regression model
b. cannot contain binary information
c. is used when data on a key independent variable is unavailable
d.
is detected by running the Davidson-MacKinnon test
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Using Proxy Variables for Unobserved Explanatory Variables
BUSPROG:
Feedback: A proxy variable is used when data on a key independent variable is unavailable.

6. Which of the following assumptions is needed for the plug-in solution to the omitted variables problem to provide consistent estimators?
a. The error term in the regression model exhibits heteroskedasticity.
b. The error term in the regression model is uncorrelated with all the independent variables.
c. The proxy variable is uncorrelated with the dependent variable.
d. The proxy variable has zero conditional mean.
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Using Proxy Variables for Unobserved Explanatory Variables
BUSPROG:
Feedback: The error term in the regression model must be uncorrelated with all the independent variables, including the proxy variable.

7. Which of the following is a drawback of including proxy variables in a regression model?
a. It leads to misspecification analysis.
b. It reduces the error variance.
c. It increases the error variance.
d. It exacerbates multicollinearity.
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Using Proxy Variables for Unobserved Explanatory Variables
BUSPROG:
Feedback: The inclusion of a proxy variable in a regression model exacerbates multicollinearity.

8. Consider the following equation for household consumption expenditure:
Consmptn = β0 + β1Inc + β2Consmptn−1 + u
where ‘Consmptn’ measures the monthly consumption expenditure of a household, ‘Inc’ measures household income, and ‘Consmptn−1’ is the consumption expenditure in the previous month. Consmptn−1 is a _____ variable.
a. exogenous
b. binary
c. lagged dependent
d. proxy
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Using Proxy Variables for Unobserved Explanatory Variables
BUSPROG:
Feedback: ‘Consmptn−1’ is a lagged dependent variable in this model.

9.
A measurement error occurs in a regression model when _____.
a. the observed value of a variable used in the model differs from its actual value
b. the dependent variable is binary
c. the partial effect of an independent variable depends on unobserved factors
d. the model includes more than two independent variables
Answer: a
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Properties of OLS under Measurement Error
BUSPROG:
Feedback: A measurement error occurs in a regression model when the observed value of a variable used in the model differs from its actual value.

10. The classical errors-in-variables (CEV) assumption is that _____.
a. the error term in a regression model is correlated with all observed explanatory variables
b. the error term in a regression model is uncorrelated with all observed explanatory variables
c. the measurement error is correlated with the unobserved explanatory variable
d. the measurement error is uncorrelated with the unobserved explanatory variable
Answer: d
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Properties of OLS under Measurement Error
BUSPROG:
Feedback: The classical errors-in-variables (CEV) assumption is that the measurement error is uncorrelated with the unobserved explanatory variable.

11. Which of the following is true of measurement error?
a. If measurement error in a dependent variable has zero mean, the ordinary least squares estimators for the intercept are biased and inconsistent.
b. If measurement error in an independent variable is uncorrelated with the variable, the ordinary least squares estimators are unbiased.
c. If measurement error in an independent variable is uncorrelated with other independent variables, all estimators are biased.
d. If measurement error in a dependent variable is correlated with the independent variables, the ordinary least squares estimators are unbiased.
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Properties of OLS under Measurement Error
BUSPROG:
Feedback: If measurement error in an independent variable is uncorrelated with the variable, the ordinary least squares estimators are unbiased.

12. Sample selection based on the dependent variable is called _____.
a. random sample selection
b. endogenous sample selection
c. exogenous sample selection
d. stratified sample selection
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Missing Data, Nonrandom Samples, and Outlying Observations
BUSPROG:
Feedback: Sample selection based on the dependent variable is called endogenous sample selection.

13. The method of data collection in which the population is divided into nonoverlapping, exhaustive groups is called _____.
a. random sampling
b. stratified sampling
c. endogenous sampling
d. exogenous sampling
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Missing Data, Nonrandom Samples, and Outlying Observations
BUSPROG:
Feedback: The method of data collection in which the population is divided into nonoverlapping, exhaustive groups is called stratified sampling.

14. Which of the following types of sampling always causes bias or inconsistency in the ordinary least squares estimators?
a. Random sampling
b. Exogenous sampling
c. Endogenous sampling
d. Stratified sampling
Answer: c
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Missing Data, Nonrandom Samples, and Outlying Observations
BUSPROG:
Feedback: Endogenous sampling always causes bias in the OLS estimators. If the sample is based on whether the dependent variable is above or below a given value, bias always occurs in OLS in estimating the population model.

15. Which of the following is a difference between least absolute deviations (LAD) and ordinary least squares (OLS) estimation?
a. OLS is more computationally intensive than LAD.
b. OLS is more sensitive to outlying observations than LAD.
c.
OLS is justified for very large sample sizes while LAD is justified for smaller sample sizes.
d. OLS is designed to estimate the conditional median of the dependent variable while LAD is designed to estimate the conditional mean.
Answer: b
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Least Absolute Deviations Estimation
BUSPROG:
Feedback: OLS is more sensitive to outlying observations than LAD.

16. An explanatory variable is called exogenous if it is correlated with the error term.
Answer: False
Difficulty: Easy
Bloom’s: Knowledge
A-Head:
BUSPROG:
Feedback: An explanatory variable is called endogenous if it is correlated with the error term.

17. A multiple regression model suffers from functional form misspecification when it does not properly account for the relationship between the dependent and the observed explanatory variables.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Functional Form Misspecification
BUSPROG:
Feedback: A multiple regression model suffers from functional form misspecification when it does not properly account for the relationship between the dependent and the observed explanatory variables.

18. The measurement error is the difference between the actual value of a variable and its reported value.
Answer: True
Difficulty: Easy
Bloom’s: Knowledge
A-Head: Properties of OLS under Measurement Error
BUSPROG:
Feedback: The measurement error is the difference between the actual value of a variable and its reported value.
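The attenuation bias behind the measurement-error questions above (classical errors-in-variables, questions 9-11) can be illustrated with a short simulation; the following is a minimal sketch in Python/NumPy, not part of the test bank, and all variable names and parameter values are hypothetical. Under CEV, the OLS slope on a mismeasured regressor converges to β1·Var(x*)/(Var(x*) + Var(e)), so with equal variances the true slope of 2 is attenuated toward 1.

```python
import numpy as np

# Hypothetical CEV simulation: y depends on the true regressor x_star,
# but we only observe x_obs = x_star + e, with e uncorrelated with x_star.
rng = np.random.default_rng(0)
n = 100_000
beta1 = 2.0

x_star = rng.normal(0.0, 1.0, n)            # true regressor, variance 1
e = rng.normal(0.0, 1.0, n)                 # measurement error, variance 1
x_obs = x_star + e                          # observed, mismeasured regressor
y = 1.0 + beta1 * x_star + rng.normal(0.0, 1.0, n)

# Simple OLS slope of y on x_obs: cov(x_obs, y) / var(x_obs)
slope = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Attenuation factor: Var(x_star) / (Var(x_star) + Var(e)) = 1/2,
# so the slope estimate should be near beta1 * 1/2 = 1.0, well below 2.0.
print(slope)
```

This also shows why the CEV assumption matters: the bias factor depends only on the ratio of the true regressor's variance to the total observed variance, not on the sample size, so collecting more data does not remove it.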
