Applied Econometrics Forecasting with ARMA & GARCH Models PDF
University of Oulu, Oulu Business School
2023
Elias Oikarinen
Summary
This handout covers forecasting with ARMA and GARCH models in applied econometrics. It includes univariate time series analysis, examples using EViews, and details on various forecasting models.
Full Transcript
Applied Econometrics
Forecasting with ARMA & GARCH Models
Lecture handout 7, Autumn 2023
D.Sc. (Econ.) Elias Oikarinen
Professor (Associate) of Economics
University of Oulu, Oulu Business School

This Handout
• Univariate time series analysis: forecasting with ARMA and GARCH models
• EViews example: OMXH Small Cap continued
• Extra reading marked with *
• Much more information on forecasting: Brooks, pp. 285-298; Enders, pp. 79-88; EViews UG, Chapter 23

Forecasting Models: Some Basics
• For forecasting purposes, the coefficients do not need to be causal, and often are not: the tools of regression can be used to construct forecasting models even if the coefficients have no causal interpretation
• It is often possible to obtain better forecasts for asset returns and other economic variables from historical data than from gut feeling or the historical mean return
• The most important task of ARMA and GARCH models is often to provide forecasts
• In pure ARMA and GARCH models, forecasting is based solely on the series' own historical observations
• Other variables with predictive power can be included in the models
• Multiple-equation models, such as VAR models, can also be used for forecasting purposes
• Typically, parsimonious models provide the most accurate (out-of-sample) forecasts even if R² is relatively low
• Note: models that aim to test theories or investigate dynamics in detail often include more variables (and hence more estimated parameters) than the best forecasting models

Forecast Function
• Assume that we know the true DGP and the historical and current values of the time series y_t
• For simplicity and illustration, assume that an AR(1) process reflects the DGP:
  y_t = a0 + a1*y_{t-1} + ε_t
• Updating one period forward gives:
  y_{t+1} = a0 + a1*y_t + ε_{t+1}
• If we know the parameters, we can forecast the next-period value y_{t+1} based on the information available at time t:
  E_t(y_{t+1}) = a0 + a1*y_t = E(y_{t+1} | y_t)
• More generally, for an ARMA process the j-period-ahead forecast is:
  E_t(y_{t+j}) = E(y_{t+j} | y_t, y_{t-1}, y_{t-2}, …, ε_t, ε_{t-1}, ε_{t-2}, …)

Forecast Function: AR(1) Example cont'd
• In a similar manner, the forecast for y_{t+2} based on the time-t information set is:
  E_t(y_{t+2}) = a0 + a1*E_t(y_{t+1})
• Substituting the expression for E_t(y_{t+1}):
  E_t(y_{t+2}) = a0 + a1*(a0 + a1*y_t)
• That is, the forecast for y_{t+1} can be used to forecast y_{t+2}
• Longer-horizon forecasts are computed by iterating forward: the forecast for y_{t+j-1} is used to predict y_{t+j}:
  E_t(y_{t+j}) = a0 + a1*E_t(y_{t+j-1})
• Iterating forward yields the forecast function, which expresses all j forecasts up to period t+j as a function of period-t information; the forecast for period t+j is:
  E_t(y_{t+j}) = a0*(1 + a1 + a1² + … + a1^{j-1}) + a1^j * y_t

Forecast Function
• Prediction accuracy deteriorates as j increases: the variance of the prediction error grows with the forecast horizon
• As j → ∞, E_t(y_{t+j}) → a0 / (1 - a1), i.e. the forecast converges to the mean of the series y_t
• In all stationary ARMA models, the conditional expectation, i.e. the
forecast, of y_{t+j} converges to the unconditional mean as j → ∞

Prediction Error of AR(1) Model
• Obviously, a prediction based on an ARMA model is not perfectly accurate
• The prediction error of the j-step forecast made in period t is:
  e_t(j) = y_{t+j} - E_t(y_{t+j})
• One-step prediction error: e_t(1) = y_{t+1} - E_t(y_{t+1}) = ε_{t+1}
• Two-step prediction error: e_t(2) = y_{t+2} - E_t(y_{t+2})
  Since y_{t+2} = a0 + a1*y_{t+1} + ε_{t+2} and E_t(y_{t+2}) = a0 + a1*E_t(y_{t+1}):
  e_t(2) = a1*(y_{t+1} - E_t(y_{t+1})) + ε_{t+2} = ε_{t+2} + a1*ε_{t+1}
• The j-step prediction error:
  e_t(j) = ε_{t+j} + a1*ε_{t+j-1} + a1²*ε_{t+j-2} + … + a1^{j-1}*ε_{t+1}
• Since E_t(ε_{t+1}) = E_t(ε_{t+2}) = … = E_t(ε_{t+j}) = 0, the conditional expectation of e_t(j) is 0, so the forecasts are unbiased predictions of y_{t+j}
• Confidence bands can be computed for the predictions

*Predictions for Higher-Order Models
• Forecasts can be computed for any ARMA(p,q) model by the iteration technique
• For example, ARMA(2,1):
  y_t = a0 + a1*y_{t-1} + a2*y_{t-2} + ε_t + β1*ε_{t-1}
• Updating one period forward:
  y_{t+1} = a0 + a1*y_t + a2*y_{t-1} + ε_{t+1} + β1*ε_t
• Since E_t(ε_{t+j}) = 0 for j > 0, the conditional expected value of y_{t+1} is:
  E_t(y_{t+1}) = a0 + a1*y_t + a2*y_{t-1} + β1*ε_t
• One-step prediction error: e_t(1) = ε_{t+1}

*Predictions for Higher-Order Models
• Two-step forecast: since y_{t+2} = a0 + a1*y_{t+1} + a2*y_t + ε_{t+2} + β1*ε_{t+1}, the conditional expectation of y_{t+2} is:
  E_t(y_{t+2}) = a0 + a1*E_t(y_{t+1}) + a2*y_t
• Two-step prediction error, given the one-step prediction error y_{t+1} - E_t(y_{t+1}) = ε_{t+1}:
  e_t(2) = a1*(y_{t+1} - E_t(y_{t+1})) + ε_{t+2} + β1*ε_{t+1} = ε_{t+2} + (a1 + β1)*ε_{t+1}
• The j-step prediction error is obtained by iterating in the same way, building on the shorter-horizon errors

Predictions for Higher-Order Models
• In reality, we do not have exact knowledge of the DGP: we estimate the model from observed data, so the parameter estimates are not perfectly accurate
• Forecasts made with the estimated model extrapolate this coefficient uncertainty into the future
• Coefficient uncertainty increases as the model becomes more complex
• An estimated AR(1) model may provide better out-of-sample forecasts for a process whose actual DGP is ARMA(2,1) than an estimated ARMA(2,1) model does
• The general point is that an overly parsimonious model with little parameter uncertainty can provide better forecasts than a model consistent with the actual data-generating process

Comparing Out-of-Sample Forecasts
• Here, one-step-ahead forecasts; assume a time series with 150 observations (T = 150)
• Estimate the models to be compared as follows:
  1) Use e.g. the first 125 observations to estimate the models, i.e. leave out observations 126-150
  2) Compute the one-step-ahead forecast for period 126 from each model
  3) Compute the prediction errors: actual observation - predicted value
  4) Re-estimate using the first 126 observations, compute the predictions for period 127, and compute the prediction errors
  5) Repeat the same procedure for each remaining period to obtain one-step predictions for all of the last 25 periods ("recursive forecasting")
  6) Compare the models' prediction accuracy by one or several criteria

Comparing Out-of-Sample Forecasts
• A similar comparison can be conducted for multi-step-ahead, i.e. longer-horizon, forecasts
• "Dynamic forecasts": shorter-horizon predictions are used to compute longer-horizon forecasts; e.g. in an AR(1) model the 2-step forecast is based on the forecasted value for t+1, the 3-step forecast on the 2-step forecast, etc.
• Comparison requires a sufficient number of (forecast) observations
• Properties of a good prediction model: the mean of the prediction errors is 0, and the prediction error variance is as small as possible
• NOTE: In-sample forecasts are those generated for the same set of data that was used to estimate the model's parameters. The model's R² works as a measure of in-sample prediction power, but a higher R² does not mean a better actual (i.e. out-of-sample) prediction model!

Comparing Out-of-Sample Forecasts
• MSPE (mean squared prediction error): a smaller MSPE indicates better forecast accuracy
• With H one-step predictions: MSPE = (1/H) * Σ e_t(1)²
• MSPE can be decomposed into:
  - "bias proportion": the extent to which the mean of the forecasts differs from the mean of the actual data (i.e. whether the forecasts are biased)
  - "variance proportion": the difference between the variation of the forecasts and the variation of the actual data (in a good prediction model, the variation of the actual values is somewhat greater than that of the predicted values)
  - "covariance proportion": the remaining, "unsystematic" prediction error
• The bias and variance proportions are (as) small (as possible) in a good prediction model
• Other criteria are available as well; see e.g. Brooks and Enders, and EViews UG I, pp. 425-426
• Testing the statistical significance of MSPE differences: Granger-Newbold and Diebold-Mariano tests

Forecasting with Rolling Window
• A recursive forecasting model is one where the initial estimation date is fixed, but additional observations are added one at a time to the estimation period (see the recursive procedure above)
• A rolling window is one where the length of the in-sample period used to estimate the model is fixed, so that the start date and end date successively increase by one observation
• For instance, the forecasts are always based on a model estimated with the last 100 observations (even if there are more historical observations)
• The model coefficients are updated for each new prediction
• This better accommodates possible structural changes in the coefficients
• It is feasible with relatively high-frequency data that includes a large number of observations even within a relatively short sample period

Forecasting with GARCH Model
• In an ARMA-GARCH model, forecasting the value of y_t works as discussed above for ARMA models
• However, adding a GARCH model to an ARMA model typically changes the ARMA parameter estimates and their confidence bands somewhat
• If the ARMA model residual series is heteroskedastic, one should generally base the forecasts on an ARMA-GARCH model
• A GARCH(p,q) model also allows the volatility to be forecast

Forecasting with GARCH Model
• The GARCH(1,1) process
  h_t = α0 + α1*ε²_{t-1} + β1*h_{t-1}
  updated by one period:
  E_t(h_{t+1}) = α0 + α1*ε²_t + β1*h_t
  and iterated further ahead (using E_t(ε²_{t+j-1}) = E_t(h_{t+j-1}) for j > 1):
  E_t(h_{t+j}) = α0 + (α1 + β1)*E_t(h_{t+j-1})
• As j → ∞, the forecast converges to the unconditional (long-term) variance:
  α0 / (1 - α1 - β1)

*Forecast Evaluation in EViews II
• After estimating the model, choose Forecast (see next slide):
  - Set the name for the forecast series and the time period for the forecast sample
  - Forecasting method: Dynamic calculates dynamic, multi-step forecasts starting from the first period in the forecast sample; in dynamic forecasting, previously forecasted values of the lagged dependent variables are used in forming forecasts of the current value. Static calculates a sequence of one-step-ahead forecasts, using the actual rather than forecasted values of the lagged dependent variables (if available).
  - Remove the tick from "Insert actuals for out-of-sample observations" if you want the new series to contain only the forecasted values
  - For Output Graph, choose "Forecast & Actuals" if you want to compare the forecasted values visually with the actual ones
© Pearson Education Limited 2015

*Forecast Evaluation in EViews II, ctd.
• After estimating the model, choose Forecast [screenshot of the Forecast dialog]

*Forecast Evaluation in EViews II, ctd.
• RMSE = 2.62
• Forecast evaluation output for GDPGRF vs. actual GDPGR (forecast sample 2003Q1-2012Q4, 40 observations):
  - Root Mean Squared Error: 2.619123
  - Mean Absolute Error: 1.719345
  - Mean Abs. Percent Error: 149.0413
  - Theil Inequality Coef.: 0.422126
  - Bias Proportion: 0.113017
  - Variance Proportion: 0.466257
  - Covariance Proportion: 0.420727
  - Theil U2 Coefficient: 0.537992
  - Symmetric MAPE: 65.64850

*Forecast Evaluation in EViews II, ctd.
• Compute the pseudo out-of-sample forecast errors e.g. with the command:
  series forecast_error = gdpgr - gdpgrf
• Check the mean and standard deviation of forecast_error (View → Descriptive Statistics & Tests).
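The bias check described here can also be reproduced outside EViews. A minimal Python sketch, using the FORECAST_ERROR summary statistics reported by EViews (mean -0.645431, std. dev. 2.475709, n = 40):

```python
import math

# FORECAST_ERROR summary statistics as reported by EViews
mean, sd, n = -0.645431, 2.475709, 40

se = sd / math.sqrt(n)                     # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI for the mean forecast error: ({lo:.3f}; {hi:.3f})")
# -> (-1.413; 0.122): zero lies inside the interval, so no significant bias
```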
• Summary statistics of FORECAST_ERROR (sample 2003Q1-2012Q4, 40 observations):
  - Mean: -0.645431, Median: 0.063187
  - Maximum: 3.655569, Minimum: -10.12299
  - Std. Dev.: 2.475709
  - Skewness: -1.552139, Kurtosis: 6.715185
  - Jarque-Bera: 39.06524 (p = 0.000000)
• Calculate the 95% confidence interval to see if the forecast errors have a nonzero mean:
  95% CI: mean ± 1.96 * SE (= standard error), where SE = standard deviation divided by the square root of the sample size
• Here: 95% CI = (-1.413; 0.122), so there does not seem to be any significant (negative or positive) bias in the forecasts

An Empirical Example: Recap
• OMXH Small Cap weekly returns
• Non-normal residuals → QML estimation: the coefficient estimates do not change, but their standard errors (and hence p-values) change slightly
• All coefficients are still statistically highly significant
• All stationarity and non-negativity conditions are fulfilled
• The index returns appear to exhibit surprisingly strong predictability

Empirical Example: "Static"
• OMXH Small Cap return example continued from an earlier lecture
• Based on the model preferred by SBC, AR(1)-GARCH(1,1)
• Selected option = "Static", to calculate a sequence of one-step-ahead forecasts, rolling the sample forward one observation after each forecast
• Small bias proportion: unbiased forecasts
• Rather large variance proportion: actual returns are much more volatile than the forecasted values

Empirical Example: "Dynamic"
• Select option = "Dynamic", to calculate multi-step forecasts starting from the first period in the forecast sample
• For the dynamic forecasts, it is clearly evident that the return forecasts quickly converge to the long-term unconditional mean

Empirical Example: Comparing Models
• One-step-ahead forecasts: which model has been the best prediction model? (POLL)
  - AR(1)-GARCH(1,1)
  - AR(1)
  - ARMA(2,|3|)-GARCH(1,1)
  - ARMA(2,|3|)
• Theil U2 coefficient: the closer the value of U2 is to zero, the better the forecast method (based on this criterion); a value of 1 means the forecast is no better than a naive guess
• That said, RMSE is more typically the criterion used to select the best model
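The recursive one-step comparison used throughout this handout can be sketched in Python. This is a minimal illustration, not the handout's OMXH exercise: the simulated AR(1) series, the OLS estimator, and the two competing forecasters (an estimated AR(1) versus a naive historical-mean forecast) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an illustrative AR(1) series with T = 150 observations
T, a0, a1 = 150, 0.5, 0.6
y = np.empty(T)
y[0] = a0 / (1 - a1)                       # start at the unconditional mean
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + rng.normal()

def ar1_one_step(train):
    """Estimate y_t = a0 + a1*y_{t-1} by OLS; return the one-step-ahead forecast."""
    X = np.column_stack([np.ones(len(train) - 1), train[:-1]])
    b, *_ = np.linalg.lstsq(X, train[1:], rcond=None)
    return b[0] + b[1] * train[-1]

# Recursive forecasting: re-estimate on an expanding window, forecast one step ahead
err_ar1, err_mean = [], []
for end in range(125, T):                  # one-step forecasts for the last 25 periods
    err_ar1.append(y[end] - ar1_one_step(y[:end]))   # actual - predicted
    err_mean.append(y[end] - y[:end].mean())         # naive historical-mean forecast

mspe_ar1 = np.mean(np.square(err_ar1))     # mean squared prediction error
mspe_mean = np.mean(np.square(err_mean))
print(f"MSPE AR(1): {mspe_ar1:.3f}, MSPE historical mean: {mspe_mean:.3f}")
```

A rolling-window variant would simply replace `y[:end]` with `y[end - 100:end]`, holding the estimation-window length fixed.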