# Random Processes and Markov Chains
This document provides a comprehensive overview of random processes and Markov chains, including concepts like autocorrelation functions, weakly stationary processes, and AR(p) and MA(q) models. It also discusses how to use R for time series analysis.
## Autocorrelation Function (ACF) and Plot

* **Autocorrelation Function (ACF):** The ACF measures the correlation between a time series and its lagged values. For a time series {Yt}, the autocorrelation at lag k is defined as:
  * ρk = Cor(Yt, Yt−k)
* The ACF plot visualizes these autocorrelation coefficients for different lags k. The first value ρ0 is always 1, as it represents the correlation of the series with itself.
* **Interpretation of the ACF Plot:**
  * If ρk is close to 1 or −1, there is strong positive or negative correlation at lag k.
  * For a stationary process, the ACF typically decays quickly to zero as the lag increases.
  * For non-stationary processes (like a random walk), the ACF declines very slowly (compare the simulated ACF plots further below).

## Weakly Stationary Process

* A stochastic process {Yt} is weakly stationary if:
  * the mean μ(t) is constant over time,
  * the variance σ²(t) is constant over time, and
  * the autocovariance γ(t, s) depends only on the lag t − s, not on the specific time points t and s.
* **Implications:**
  * Weak stationarity means the statistical properties of the process (mean, variance, and autocovariance) do not change over time.
  * Many time series models (like ARMA) assume weak stationarity.

## Random Walk

* A random walk is a stochastic process where the value at each time step is the sum of the previous value and a random noise term. It is defined as:
  * Yt = Yt−1 + εt, where εt is white noise.
* Because its variance grows with t, a random walk is not weakly stationary.

## AR(p) Process (Autoregressive Process of Order p)

* An AR(p) process is a time series model where the current value Yt depends linearly on its past p values and a random noise term:
  * Yt = φ1Yt−1 + φ2Yt−2 + ··· + φpYt−p + εt, where εt is white noise.
* **Stationarity Condition:** For an AR(p) process to be stationary, the roots of the characteristic equation must lie outside the unit circle. For AR(1), this simplifies to |φ1| < 1.
* **ACF Behavior:** The ACF of an AR(p) process decays geometrically, with the rate of decay depending on the coefficients φ1, ..., φp.

## MA(q) Process (Moving Average Process of Order q)

* An MA(q) process is a time series model where the current value Yt depends linearly on the current and the past q random noise terms:
  * Yt = εt + θ1εt−1 + θ2εt−2 + ··· + θqεt−q, where εt is white noise.
* **Stationarity Condition:** MA(q) processes are always stationary, because they are finite linear combinations of white noise terms.
* **ACF Behavior:** The ACF of an MA(q) process cuts off after lag q, meaning ρk ≈ 0 for k > q.

## Interpretation of Time Series Output in R

* **Model Fitting:** In R, you can fit ARMA models with functions such as arma() (from the tseries package) or auto.arima() (from the forecast package). These functions estimate the model parameters and report summary statistics, including coefficients, standard errors, and p-values.
* **Diagnostics:** After fitting a model, check that the residuals resemble white noise. This involves:
  * plotting the residuals to look for remaining patterns,
  * examining the ACF of the residuals to make sure no significant autocorrelation remains, and
  * using information criteria such as AIC and BIC to compare candidate models.
* **Forecasting:** Once a model is fitted, you can use it to forecast future values. The forecast() function in R provides point forecasts and prediction intervals. The sketches below illustrate this workflow.
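The contrasting ACF behaviors described above (geometric decay for AR(p), a cut-off after lag q for MA(q), and a very slow decline for a random walk) are easy to verify by simulation. The following is a minimal sketch in base R using `arima.sim()`, `cumsum()`, and `acf()`; the series length and the coefficients 0.7 and 0.5 are arbitrary illustrative choices, not values from the notes.

```r
set.seed(42)
n <- 300

# Simulate three processes with known structure
ar1 <- arima.sim(model = list(ar = 0.7), n = n)  # AR(1): Yt = 0.7*Y(t-1) + et
ma1 <- arima.sim(model = list(ma = 0.5), n = n)  # MA(1): Yt = et + 0.5*e(t-1)
rw  <- cumsum(rnorm(n))                          # random walk: Yt = Y(t-1) + et

# ACF plots: AR(1) decays geometrically, MA(1) cuts off after lag 1,
# and the random walk's ACF declines very slowly (a sign of non-stationarity)
par(mfrow = c(1, 3))
acf(ar1, main = "AR(1), phi = 0.7")
acf(ma1, main = "MA(1), theta = 0.5")
acf(rw,  main = "Random walk")
```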
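The fit-diagnose-forecast workflow described in the R section might look roughly as follows. This sketch assumes the `forecast` package is installed (it provides `auto.arima()` and `forecast()`); `arma()` from the `tseries` package is an alternative fitting function. A simulated AR(1) series stands in for real data.

```r
library(forecast)

set.seed(1)
y <- arima.sim(model = list(ar = 0.7), n = 300)  # placeholder for a real series

# Model fitting: auto.arima() searches over candidate orders using
# information criteria and returns estimated coefficients with standard errors
fit <- auto.arima(y)
summary(fit)

# Diagnostics: residuals of an adequate model should resemble white noise
plot(residuals(fit))   # no visible pattern expected
acf(residuals(fit))    # no significant autocorrelation expected
AIC(fit); BIC(fit)     # for comparing competing models

# Forecasting: point forecasts with prediction intervals
fc <- forecast(fit, h = 10)
plot(fc)
```

If you prefer to fix the model order by hand, base R's stats::arima(y, order = c(1, 0, 0)) fits a specific ARMA model without the automatic search.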
## Summary

* **ACF** helps identify the correlation structure of a time series.
* **Weak stationarity** is a key assumption for many time series models.
* **Random Walk** is a non-stationary process with a slowly declining ACF.
* **AR(p) and MA(q)** are fundamental models for stationary time series, with AR(p) built on past values and MA(q) on past noise terms.
* **R Output Interpretation** involves model fitting, diagnostics, and forecasting, using tools like arma(), auto.arima(), and forecast().

## Transition Matrix

* **Definition:** The transition matrix P of a Markov chain is a square matrix describing the probabilities of moving from one state to another in a single step. Each element pij is the probability of transitioning from state i to state j.
* **Properties:**
  * Each row of the transition matrix sums to 1, i.e., ∑j pij = 1 for all i.
  * The n-step transition probabilities are obtained by raising the transition matrix to the n-th power, i.e., Pⁿ (see the R sketch at the end of these notes).

## Stationary Distribution

* **Definition:** A stationary distribution is a probability distribution over the states that remains unchanged as the chain evolves. In other words, if π is the current distribution, then after one transition the distribution is still π.
* **Mathematical Formulation:**
  * π = πP
  * This equation states that multiplying the stationary distribution (as a row vector) by the transition matrix P gives back π.

## Ergodic Markov Chain

* **Definition:** A Markov chain is ergodic if it is both irreducible and aperiodic.
  * **Irreducible:** It is possible to get from any state to any other state in a finite number of steps.
  * **Aperiodic:** A state is aperiodic if the greatest common divisor of its possible return times is 1, so returns are not locked to a fixed cycle; the chain is aperiodic if all of its states are aperiodic.
* **Properties:**
  * An ergodic Markov chain has a unique stationary distribution.
  * The chain converges to this stationary distribution regardless of the starting state (illustrated by simulation at the end of these notes).

## Summary

* **Transition Matrix:** Describes the probabilities of moving between states in a Markov chain.
* **Stationary Distribution:** A probability distribution that remains unchanged as the chain evolves.
* **Ergodic Markov Chain:** A Markov chain that is both irreducible and aperiodic, which ensures it has a unique stationary distribution to which it converges over time.

These concepts are fundamental to understanding the behavior of Markov chains, especially in applications such as the Metropolis-Hastings algorithm in Markov Chain Monte Carlo (MCMC) methods.
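To make the matrix computations above concrete, here is a short R sketch built around a made-up 3-state transition matrix (the probabilities are purely illustrative). It computes n-step transition probabilities by repeated matrix multiplication and recovers the stationary distribution π = πP as the normalized left eigenvector of P for eigenvalue 1.

```r
# Hypothetical 3-state transition matrix; each row sums to 1
P <- matrix(c(0.5, 0.3, 0.2,
              0.1, 0.6, 0.3,
              0.2, 0.4, 0.4),
            nrow = 3, byrow = TRUE)

# n-step transition probabilities: P^n via repeated matrix multiplication
n_step <- function(P, n) Reduce(`%*%`, replicate(n, P, simplify = FALSE))
n_step(P, 2)        # two-step transition probabilities

# Stationary distribution: solve pi = pi %*% P by taking the eigenvector of
# t(P) associated with eigenvalue 1 and rescaling it to sum to 1
ev      <- eigen(t(P))
pi_stat <- Re(ev$vectors[, which.min(abs(ev$values - 1))])
pi_stat <- pi_stat / sum(pi_stat)
pi_stat
pi_stat %*% P       # equals pi_stat up to rounding, confirming stationarity
```

Raising P to a large power, e.g. n_step(P, 50), gives a matrix whose rows are all numerically equal to the stationary distribution, which is another way to read it off for an ergodic chain.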
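The convergence property of an ergodic chain (that long-run behavior does not depend on the starting state) can also be checked by simulating a trajectory and comparing the empirical state frequencies with the stationary distribution. Below is a self-contained sketch using the same illustrative matrix; the chain length and starting state are arbitrary choices.

```r
set.seed(7)

# Same illustrative 3-state transition matrix as above
P <- matrix(c(0.5, 0.3, 0.2,
              0.1, 0.6, 0.3,
              0.2, 0.4, 0.4),
            nrow = 3, byrow = TRUE)

n_steps   <- 20000
states    <- integer(n_steps)
states[1] <- 1                    # arbitrary starting state
for (t in 2:n_steps) {
  # Draw the next state from the row of P indexed by the current state
  states[t] <- sample(1:3, size = 1, prob = P[states[t - 1], ])
}

# Long-run state frequencies approximate the stationary distribution,
# regardless of which state the chain started in
table(states) / n_steps
```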