Extended Kalman Filter

Summary

This document provides an introduction to the Extended Kalman Filter (EKF), a method for state estimation in non-linear systems, presented in the context of robotics and control. It covers the Bayes filter and linear Kalman filter background, the EKF algorithm, the required linearization via first-order Taylor expansion and Jacobians, and a landmark-based localization example.

Full Transcript

Bayes Filter Reminder

$bel(x_t) = \eta\, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$

▪ Prediction: $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
▪ Correction: $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$

Discrete Kalman Filter

Estimates the state $x$ of a discrete-time controlled process
$x_t = A_t x_{t-1} + B_t u_t + \epsilon_t$
with a measurement
$z_t = C_t x_t + \delta_t$

Components of a Kalman Filter

▪ $A_t$: $(n \times n)$ matrix that describes how the state evolves from $t-1$ to $t$ without controls or noise.
▪ $B_t$: $(n \times l)$ matrix that describes how the control $u_t$ changes the state from $t-1$ to $t$.
▪ $C_t$: $(k \times n)$ matrix that describes how to map the state $x_t$ to an observation $z_t$.
▪ $\epsilon_t$, $\delta_t$: random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariances $Q_t$ and $R_t$, respectively.

Kalman Filter Update Example

[Figure: prediction, measurement, correction]

Kalman Filter Algorithm

1. Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t$
5. Correction:
6. $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$
7. $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$
8. $\Sigma_t = (I - K_t C_t) \bar\Sigma_t$
9. Return $\mu_t$, $\Sigma_t$

Nonlinear Dynamic Systems

▪ Most realistic robotic problems involve nonlinear functions: instead of the linear models
$x_t = A_t x_{t-1} + B_t u_t + \epsilon_t$, $z_t = C_t x_t + \delta_t$
we have
$x_t = g(u_t, x_{t-1})$, $z_t = h(x_t)$

Linearity Assumption Revisited

[Figures: a Gaussian passed through a linear function stays Gaussian; passed through a non-linear function it becomes non-Gaussian]

Non-Gaussian Distributions

▪ The non-linear functions lead to non-Gaussian distributions
▪ The Kalman filter is not applicable anymore!
▪ What can be done to resolve this? Local linearization!
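The Kalman filter algorithm above can be sketched directly in NumPy. This is a minimal illustration, not part of the slides: the function mirrors the slide's notation ($Q_t$ process noise, $R_t$ measurement noise), and the scalar example at the end is an assumed 1-D setup where the control shifts the state and the position is measured directly.

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, Q, R):
    """One prediction + correction step of the linear Kalman filter.

    Q is the process-noise covariance and R the measurement-noise
    covariance, matching the slide's notation."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + R)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Illustrative 1-D example (assumed, not from the slides): the control
# pushes the state by u, and we measure the state directly.
A = np.array([[1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
mu, Sigma = kalman_filter(np.array([0.0]), np.array([[1.0]]),
                          np.array([1.0]), np.array([1.05]),
                          A, B, C, Q, R)
```

The corrected mean lands between the predicted mean (1.0) and the measurement (1.05), weighted by the Kalman gain, and the covariance shrinks after the update.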
EKF Linearization: First Order Taylor Expansion

▪ Prediction (linearize $g$ around $\mu_{t-1}$):
$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\,(x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$
▪ Correction (linearize $h$ around $\bar\mu_t$):
$h(x_t) \approx h(\bar\mu_t) + \frac{\partial h(\bar\mu_t)}{\partial x_t}\,(x_t - \bar\mu_t) = h(\bar\mu_t) + H_t\,(x_t - \bar\mu_t)$
▪ $G_t$ and $H_t$ are Jacobian matrices; the resulting approximations are linear functions of the state.

Reminder: Jacobian Matrix

▪ It is a non-square matrix in general
▪ Given a vector-valued function $g(x) = (g_1(x), \ldots, g_m(x))^T$, the Jacobian matrix is the matrix of partial derivatives $G_{ij} = \partial g_i / \partial x_j$
▪ It gives the orientation of the tangent plane to the vector-valued function at a given point
▪ It generalizes the gradient of a scalar-valued function

EKF Linearization (1)–(3)

[Figures: EKF linearization examples]

EKF Algorithm

1. Extended_Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar\mu_t = g(u_t, \mu_{t-1})$   (KF: $\bar\mu_t = A_t \mu_{t-1} + B_t u_t$)
4. $\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + Q_t$   (KF: $\bar\Sigma_t = A_t \Sigma_{t-1} A_t^T + Q_t$)
5. Correction:
6. $K_t = \bar\Sigma_t H_t^T (H_t \bar\Sigma_t H_t^T + R_t)^{-1}$   (KF: $K_t = \bar\Sigma_t C_t^T (C_t \bar\Sigma_t C_t^T + R_t)^{-1}$)
7. $\mu_t = \bar\mu_t + K_t (z_t - h(\bar\mu_t))$   (KF: $\mu_t = \bar\mu_t + K_t (z_t - C_t \bar\mu_t)$)
8. $\Sigma_t = (I - K_t H_t) \bar\Sigma_t$   (KF: $\Sigma_t = (I - K_t C_t) \bar\Sigma_t$)
9. Return $\mu_t$, $\Sigma_t$

with the Jacobians $G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ and $H_t = \frac{\partial h(\bar\mu_t)}{\partial x_t}$

Example: EKF Localization

▪ EKF localization with landmarks (point features)
▪ Velocity-based motion model, including a third parameter with a term to account for the final rotation
▪ Landmark observations
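The EKF algorithm above can be sketched as a single step that accepts the models $g$, $h$ and their Jacobians as callables. This is an illustrative sketch, not the slides' code; the function and parameter names are assumptions. As the slides note, with linear models the EKF reduces to the ordinary KF, which the quick check at the end exploits.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, h, G_fn, H_fn, Q, R):
    """One EKF prediction + correction step.

    g, h are the (possibly nonlinear) motion and measurement models;
    G_fn, H_fn return their Jacobians evaluated at the given mean."""
    # Prediction: propagate the mean through g, the covariance through G_t
    G = G_fn(u, mu)
    mu_bar = g(u, mu)
    Sigma_bar = G @ Sigma @ G.T + Q
    # Correction: linearize h around the predicted mean mu_bar
    H = H_fn(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + R)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Sanity check with scalar *linear* models (an assumed toy setup):
# the EKF step should then behave exactly like a KF step.
g = lambda u, x: x + u
h = lambda x: x
G_fn = lambda u, x: np.array([[1.0]])
H_fn = lambda x: np.array([[1.0]])
mu, Sigma = ekf_step(np.array([0.0]), np.array([[1.0]]),
                     np.array([1.0]), np.array([1.05]),
                     g, h, G_fn, H_fn,
                     np.array([[0.01]]), np.array([[0.1]]))
```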
EKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$): Prediction

1. $G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} = \begin{pmatrix} \partial x'/\partial \mu_{t-1,x} & \partial x'/\partial \mu_{t-1,y} & \partial x'/\partial \mu_{t-1,\theta} \\ \partial y'/\partial \mu_{t-1,x} & \partial y'/\partial \mu_{t-1,y} & \partial y'/\partial \mu_{t-1,\theta} \\ \partial \theta'/\partial \mu_{t-1,x} & \partial \theta'/\partial \mu_{t-1,y} & \partial \theta'/\partial \mu_{t-1,\theta} \end{pmatrix}$   (Jacobian of $g$ w.r.t. location)

$V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{pmatrix} \partial x'/\partial v_t & \partial x'/\partial \omega_t \\ \partial y'/\partial v_t & \partial y'/\partial \omega_t \\ \partial \theta'/\partial v_t & \partial \theta'/\partial \omega_t \end{pmatrix}$   (Jacobian of $g$ w.r.t. control)

2. $Q_t = \begin{pmatrix} (\alpha_1 |v_t| + \alpha_2 |\omega_t|)^2 & 0 \\ 0 & (\alpha_3 |v_t| + \alpha_4 |\omega_t|)^2 \end{pmatrix}$   (motion noise)

3. $\bar\mu_t = g(u_t, \mu_{t-1})$   (predicted mean)
4. $\bar\Sigma_t = G_t \Sigma_{t-1} G_t^T + V_t Q_t V_t^T$   (predicted covariance; $V_t$ maps $Q_t$ into state space)

EKF_localization($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$, $m$): Correction

1. $\hat z_t = \begin{pmatrix} \sqrt{(m_x - \bar\mu_{t,x})^2 + (m_y - \bar\mu_{t,y})^2} \\ \operatorname{atan2}(m_y - \bar\mu_{t,y},\; m_x - \bar\mu_{t,x}) - \bar\mu_{t,\theta} \end{pmatrix}$   (predicted measurement mean; depends on observation type)

$H_t = \frac{\partial h(\bar\mu_t, m)}{\partial x_t} = \begin{pmatrix} \partial r_t/\partial \bar\mu_{t,x} & \partial r_t/\partial \bar\mu_{t,y} & \partial r_t/\partial \bar\mu_{t,\theta} \\ \partial \phi_t/\partial \bar\mu_{t,x} & \partial \phi_t/\partial \bar\mu_{t,y} & \partial \phi_t/\partial \bar\mu_{t,\theta} \end{pmatrix}$   (Jacobian of $h$ w.r.t. location)

2. $R_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\phi^2 \end{pmatrix}$
3. $S_t = H_t \bar\Sigma_t H_t^T + R_t$   (innovation covariance)
4. $K_t = \bar\Sigma_t H_t^T S_t^{-1}$   (Kalman gain)
5. $\mu_t = \bar\mu_t + K_t (z_t - \hat z_t)$   (updated mean)
6. $\Sigma_t = (I - K_t H_t) \bar\Sigma_t$   (updated covariance)

[Figures: EKF prediction step examples ($V_t Q_t V_t^T$), observation prediction step ($R_t$), correction step, estimation sequences (1) and (2), EKF localization with known and with unknown correspondence]

Extended Kalman Filter Summary

▪ The EKF is an ad-hoc solution to deal with non-linearities
▪ It performs local linearization in each step
▪ It works well in practice for moderate non-linearities (example: landmark localization)
▪ There exist better ways of dealing with non-linearities, such as the unscented Kalman filter (UKF)
▪ Unlike the KF, the EKF is in general not an optimal estimator
▪ It is optimal if the measurement and the motion model are both linear, in which case the EKF reduces to the KF
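The range-bearing measurement model from the correction step can be sketched as follows. The landmark and pose values in the example are illustrative assumptions; the Jacobian entries follow from differentiating the range $r = \sqrt{q}$ and bearing $\operatorname{atan2}(\delta_y, \delta_x) - \theta$ with respect to the pose $(x, y, \theta)$.

```python
import numpy as np

def landmark_measurement(mu_bar, m):
    """Predicted range-bearing measurement z_hat and Jacobian H_t
    for landmark m = (m_x, m_y) and predicted pose
    mu_bar = (x, y, theta)."""
    dx, dy = m[0] - mu_bar[0], m[1] - mu_bar[1]
    q = dx**2 + dy**2          # squared range
    r = np.sqrt(q)             # predicted range
    z_hat = np.array([r, np.arctan2(dy, dx) - mu_bar[2]])
    # Jacobian of (range, bearing) w.r.t. the pose (x, y, theta)
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    return z_hat, H

# Assumed example: robot at the origin facing +x, landmark at (3, 4),
# so the predicted range is 5 and the bearing is atan2(4, 3).
z_hat, H = landmark_measurement(np.array([0.0, 0.0, 0.0]),
                                np.array([3.0, 4.0]))
```

Note that the bearing column of $H$ w.r.t. $\theta$ is $-1$: rotating the robot shifts the perceived bearing directly, while the range is unaffected, which matches the zero in the first row.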
