COM2009-3009 Robotics: Lecture 12 - Simultaneous Localization and Mapping (SLAM) - 2022-2023

Document Details


The University of Sheffield

2023

Dr Tom Howard

Tags

robotics, simultaneous localization and mapping, SLAM, computer science

Summary

This is a lecture on Simultaneous Localization and Mapping (SLAM) using the Kalman filter. It discusses the importance of state tracking in robotics, explores how the Kalman filter can be applied to iteratively build maps and estimate robot position (with a worked example), and closes with real-world SLAM examples that use depth cameras.

Full Transcript


Slide 1: Title

COM2009-3009 Robotics, Lecture 12: Simultaneous Localisation & Mapping (SLAM)
Dr Tom Howard, Multidisciplinary Engineering Education (MEE)
© 2023 The University of Sheffield

Slide 2: So far

• The principle of localisation (i.e. tracking a robot's "pose"):
  – Via external cues: e.g. GPS
  – Via on-board sensors: odometry
• Maps for robotics:
  – Types of maps
  – How to use them (path planning)
• Sensors:
  – Capabilities and limitations
  – Accommodating variance and improving confidence through sensor fusion

Slide 3: Introducing "SLAM"

• A combination of all of these things!
• "Simultaneous Localisation and Mapping": "a family of robot algorithms that seek to simultaneously track the pose of the robot (localisation), while at the same time building and refining a map of the environment (mapping)."
• Relies upon predictions from a robot model and updates from the robot's sensors.
• Aim: to allow robots to learn autonomously about the environments through which they navigate.

Slide 4: This lecture

1. Introduction to SLAM.
2. Introduction to the classic Kalman Filter state estimation algorithm, which uses a recursive measure-update-predict process and is the basis for how SLAM works (covered at a conceptual level).
3. Examples of state-of-the-art SLAM methods.

Slide 5: The importance of state tracking

A fundamental problem shared across robot systems is robust tracking of the current pose relative to the environment, e.g.:
• Track end-effector pose: $[x\ y\ z\ \theta_x\ \theta_y\ \theta_z]^T$
• Track robot pose: $[x\ y\ z\ \theta_x\ \theta_y\ \theta_z]^T$
[Figures: a robot arm (diagram from www.Thespod.com) and a mobile robot that has moved ~10 m ~east (image from www.Needpix.com)]

Slide 6: Two sources of information

1. A model generating predictions of the outcomes of actions: a control signal $u_t$ is fed into the model, which predicts the outcome $y_t$ (e.g. odometry).
2. Sensors that update the estimate using external cues (e.g. GPS).

Slide 7: The "L" in SLAM

A core problem for all autonomous navigation methods is to accurately track the state of the robot, i.e. its pose $[x\ y\ \theta]^T$.
[Figure: the robot has moved ~10 m ~east]

Slide 8: 1D Distance Example

High-level goal: move 10 m east.
Controller task: track the robot pose $[x\ y\ \theta]^T$ and stop when it reaches the desired value $[(x+10)\ y\ \theta]^T$.
Data sources:
1. A model of the robot from which one can predict the consequences of actions.
2. Sensory feedback from which one can update the state estimate.

Slide 9: Predicting State from Model

The robot model takes the input to the robot, $u_t$ (the velocity of the wheels), together with the previous state estimate, and produces the predicted outcome (pose):

$\hat{x}_t = A\hat{x}_{t-1} + Bu_t + \Delta_t$

where $\Delta_t$ is system noise modelled as a normal distribution: $\Delta_t \sim \mathcal{N}(0, v_m)$.

The sum of two independent normal distributions $A \sim \mathcal{N}(\mu_A, \sigma_A^2)$ and $B \sim \mathcal{N}(\mu_B, \sigma_B^2)$ is given by:

$A + B \sim \mathcal{N}(\mu_A + \mu_B,\ \sigma_A^2 + \sigma_B^2)$

Summing normal distributions therefore causes the variance to increase.
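To make the compounding of uncertainty concrete, here is a minimal Python sketch of this prediction step (not from the lecture). It assumes the scalar case with $A = B = 1$ and an illustrative noise variance $v_m = 0.01$; the function name and values are assumptions for illustration only.

```python
# Minimal sketch of the scalar prediction step x_t = A*x_{t-1} + B*u_t,
# assuming A = B = 1 and Gaussian system noise of variance v_m (illustrative).
def predict(x_est, p_est, u, v_m=0.01):
    x_est = x_est + u    # propagate the mean through the model
    p_est = p_est + v_m  # variances of independent Gaussians add
    return x_est, p_est

x_est, p_est = 0.0, 1.0       # initial pose estimate and its variance
for t in range(10):           # command the robot to move 1 m per step
    x_est, p_est = predict(x_est, p_est, u=1.0)
    print(f"step {t+1}: pose estimate = {x_est:.1f} m, variance = {p_est:.2f}")
# The variance grows at every step: a model-only (odometry) estimate drifts.
```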
Slide 10: Predicting State from Model (continued)

Uncertainty is compounded with each estimate (recall the odometry example from Lecture 9).

Slide 11: Question

How could we improve the state estimation?

Slide 12: Update State from Sensory Data

Add sensing (e.g. GPS, IMU). The real robot dynamics take the input to the robot, $u_t$ (the velocity of the wheels), and produce the true position $x_t$, which we can't observe, along with a sensor reading $y_t$ (the measured pose), which we can:

$x_t = Ax_{t-1} + Bu_t + \Delta_t$
$y_t = Cx_t + \nabla_t$

where $\nabla_t$ is sensor noise modelled as a normal distribution: $\nabla_t \sim \mathcal{N}(0, v_s)$.

Slide 13: Update State from Sensory Data (continued)

[Figure: predicted pose vs. measured pose (e.g. GPS) plotted over time]

Slide 14: Why not just use sensory data?

1. What happens if a sensor fails, starts to drift, or exhibits random noise?
2. Sensory data can be just as noisy as model predictions, especially over short timescales.
3. Sensors require power (electrical energy) and their data consumes storage space. Battery life and storage are both limited resources for a robot. Think about the RealSense camera: on-board processing uses computational resources and thus power, and images are large data files!
4. You may be able to run a state-estimation model at a higher frequency than you can acquire sensor data: consider a racing drone using GPS data alone ($f_s \approx 1$ Hz).

Slide 15: The Kalman Filter

"The Kalman filter is an efficient, recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements" (www.wikipedia.org).

First developed in the late 1950s to early 1960s and named after the Hungarian engineer, mathematician and inventor Rudolf Kalman (1930-2016). Used extensively in engineering applications such as missile, spacecraft and naval tracking, where measurements are noisy and less frequent than desired (e.g. the Apollo 11 navigation computer).

Slide 16: The Kalman Filter

The Kalman filter combines two estimates of the pose, given the input to the robot $u_t$ (the velocity of the wheels):
• the measured pose $y_t = Cx_t + \nabla_t$ from the real robot dynamics, with mean and variance $(\mu, \sigma^2)$;
• the predicted pose $\hat{x}_t = A\hat{x}_{t-1} + Bu_t + \Delta_t$ from the robot model, with its own $(\mu, \sigma^2)$;

and outputs an optimal pose estimate. The Kalman filter recursively allows sensory updates to the predicted state. The influence that the sensory input has upon the overall output is determined by its reliability, encoded in the Kalman Gain.
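As a preview of the worked example that follows, the scalar Kalman Gain is simply the ratio of prediction variance to total variance, $K = p/(p+r)$. A small sketch of its limiting behaviour; the first pair of values is taken from the worked example below, the other two are illustrative assumptions:

```python
# The scalar Kalman Gain: p = prediction variance, r = measurement variance.
def kalman_gain(p, r):
    return p / (p + r)

print(kalman_gain(1.01, 0.6))    # ~0.63: trust the sensor slightly more
print(kalman_gain(1.01, 100.0))  # -> 0: very noisy sensor, lean on the model
print(kalman_gain(100.0, 0.6))   # -> 1: very uncertain model, lean on the sensor
```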
Slides 17-20: Kalman Filter Worked Example

[Figures: the robot moves ~10 m east; its pose is tracked by the robot model (odometry), which accumulates error relative to the true position; GPS data is then plotted alongside both]

Slide 21: Iteration 0: initialisations and predictions

The Kalman filter takes an iterative approach to approximating the true position.

$\hat{x}_{0,0} = 0\,\text{m}$: the estimated position at Iteration 0 (from odometry).
$\sigma = 1\,\text{m}$: an estimate of the likely error in our prediction, the standard deviation (we'll assume it's high).
$p_{0,0} = \sigma^2 = 1^2 = 1$: the resulting uncertainty in our prediction, the variance.

Then we make a prediction about where the robot will be in the next iteration (based on the information we have at Iteration 0):

$\hat{x}_{1,0} = \hat{x}_{0,0} + 1 = 1\,\text{m}$

And we define the "extrapolated uncertainty" in our prediction for $\hat{x}_{1,0}$:

$p_{1,0} = p_{0,0} + q$

Slide 22: Iteration 0: initialisations and predictions (continued)

$q$ is process noise which, in this case, would represent natural fluctuations in the robot's true position. This should be minimal, so we will set it at 0.01:

$p_{1,0} = 1 + 0.01 = 1.01$

Slide 23: Iteration 1: measure, update, predict

Take a GPS reading: $z_1 = -0.28\,\text{m}$.

From previous lab testing, we have already established the measurement uncertainty (variance) for our GPS system: $r = 0.6$.

Our Kalman Gain for Iteration 1 can then be calculated, based on the uncertainty in our estimate and the uncertainty in our sensor readings:

$K_1 = \dfrac{p_{1,0}}{p_{1,0} + r} = \dfrac{1.01}{1.01 + 0.6} = 0.63$

Slide 24: Iteration 1: measure, update, predict (continued)

• The Kalman Gain will range between 0 and 1.
• When measurement uncertainty is large: $K \to 0$.
• When model uncertainty is large: $K \to 1$.

With $K_1 = 0.63$, we currently have (slightly) more confidence in our GPS data than we do in our model (the odometry).

Slide 25: Iteration 1: measure, update, predict (continued)

Then we make a new position estimate, using the Kalman Gain we have just established and our previous model prediction $\hat{x}_{1,0}$:

$\hat{x}_{1,1} = \hat{x}_{1,0} + K_1(z_1 - \hat{x}_{1,0}) = 1 + 0.63(-0.28 - 1) = 0.19\,\text{m}$

Then we establish the uncertainty in our new estimate:

$p_{1,1} = (1 - K_1)\,p_{1,0} = (1 - 0.63)(1.01) = 0.37$

Slide 26: Iteration 1: measure, update, predict (continued)

Then we make a prediction for the next iteration:

$\hat{x}_{2,1} = \hat{x}_{1,1} + 1 = 0.19 + 1 = 1.19\,\text{m}$

...and update our uncertainty for this prediction:

$p_{2,1} = p_{1,1} + q = 0.37 + 0.01 = 0.38$

Slide 27: Iterations 2, 3, ..., n

Iteration 2: measure, update, predict. Iteration 3: measure, update, predict. Iteration n: and so on...

Slide 28: Kalman Filter Worked Example

[Figure: the Kalman position estimate tracks the true position more closely than either the odometry or the raw GPS data]
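Pulling the whole recursion together, here is a minimal sketch of the 1D filter that reproduces the worked example's numbers. Only $z_1 = -0.28$ m is given in the slides; the later GPS readings and the 1 m-per-iteration motion command are illustrative assumptions.

```python
# A minimal 1D Kalman filter sketch following the lecture's worked example.
# Assumptions: the robot is commanded to move 1 m per iteration (u = 1),
# A = B = C = 1, process noise q = 0.01, GPS measurement variance r = 0.6.
q, r = 0.01, 0.6

x_hat, p = 0.0, 1.0            # Iteration 0: x_hat_{0,0} = 0 m, p_{0,0} = 1
x_hat, p = x_hat + 1.0, p + q  # predict: x_hat_{1,0} = 1 m, p_{1,0} = 1.01

# GPS readings: z_1 = -0.28 m is from the slides; the rest are made up.
for z in [-0.28, 1.45, 2.61]:
    # Measure & update: blend prediction and measurement via the Kalman Gain.
    K = p / (p + r)                  # iteration 1: K = 1.01/1.61 ~ 0.63
    x_hat = x_hat + K * (z - x_hat)  # ~0.20 m (0.19 m in the slides, which round K)
    p = (1 - K) * p                  # ~0.38 (0.37 in the slides)
    print(f"updated: x_hat = {x_hat:.2f} m, p = {p:.2f}, K = {K:.2f}")
    # Predict the next iteration.
    x_hat, p = x_hat + 1.0, p + q    # iteration 1: x_hat_{2,1} ~ 1.19 m
```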
Slide 29: The Kalman Filter Algorithm

Measure, update, predict.

• This was a very simple example.
• This is a really good resource if you want to find out more about Kalman filters: https://www.kalmanfilter.net/default.aspx
• In reality we need to use Extended Kalman Filters...

Slide 30: The "M" in SLAM

Task: build a map of the local environment.
Solution: we use the same approach to estimate the states of the objects in the world that make up a map. The change of robot pose $u_t$ drives both the real world dynamics, which yield measured landmark poses $y_t$ with $(\mu_L, \sigma_L^2)$, and a world model, which yields predicted landmark poses $\hat{x}_t$ with $(\mu_L, \sigma_L^2)$; the Kalman filter combines the two into optimal landmark pose estimates.

Slides 31-34: The "M" in SLAM

[Figures: a sequence of illustrations of landmark mapping]

Slides 35-40: Recursively Building a Map (measure, update, predict)

1. Robot powers up: where am I?
2. Measures distance to objects using LiDAR and updates the map model.
3. Now the robot moves, and we make a prediction of what the world model might look like from that new pose.
4. More LiDAR measurements are obtained.
5. Sensory data is used to update the map.
6. And so on...
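As a conceptual illustration only (this is not the lecture's code, and real systems use an EKF or a particle filter over many landmarks), a joint state vector holding both the robot pose and a landmark position can be pushed through the same measure-update-predict loop, so a single gain update refines the localisation and the map together. All numbers and names below are illustrative assumptions:

```python
import numpy as np

# Conceptual sketch of the recursive SLAM loop in a 1D world: the state holds
# the robot pose AND a landmark position, so each update refines both at once.
state = np.array([0.0, 10.0])   # [robot_x, landmark_x]: rough initial guesses
P = np.diag([1.0, 4.0])         # covariance: uncertainty in both estimates
Q = np.diag([0.01, 0.0])        # process noise: only the robot actually moves
R = 0.6                         # range-sensor (e.g. LiDAR) measurement variance
H = np.array([[-1.0, 1.0]])     # measurement model: range = landmark_x - robot_x

for z in [9.8, 8.6, 7.7]:       # noisy range readings as the robot advances
    # Predict: the robot moves 1 m east; its uncertainty grows.
    state[0] += 1.0
    P = P + Q
    # Measure & update: the gain spreads the correction over BOTH states.
    S = (H @ P @ H.T).item() + R         # innovation variance (a scalar)
    K = (P @ H.T) / S                    # 2x1 Kalman gain vector
    innovation = z - (H @ state).item()  # measured minus predicted range
    state = state + K.ravel() * innovation
    P = (np.eye(2) - K @ H) @ P
    print(f"robot = {state[0]:.2f} m, landmark = {state[1]:.2f} m")
```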
Slide 41: The "S" in SLAM

SLAM is posed as exactly this type of recursive problem. It is possible to simultaneously refine estimates of the robot pose and of its environment (e.g. the poses of landmarks). Improving the accuracy of one measure will improve the other, allowing convergence over several iterations!

[Figure: robot and landmark pose estimates shown in green and purple respectively]

Slide 42: Improving localisation certainty

[Figure: the green cloud is the real-time pose estimate; notice how it improves as the robot explores the map]

Slide 43: SLAM Summary

SLAM algorithms attempt to localise the robot while mapping the environment. We have demonstrated how the Kalman filter can be applied to recursively build maps, and provided insights into how SLAM algorithms could use similar approaches.

Slide 44: SLAM Summary

Real-world SLAM: FABMAP 2.0.

Slide 45: SLAM Summary

Real-world SLAM: large-scale dense RGB-D SLAM.

Slide 46: Summary

1. Robots must act appropriately in the real world, where there is lots of uncertainty.
2. The Kalman filter is an optimal state estimator that can be used to solve both the localisation and mapping problems concurrently: SLAM.
3. Note: we have only shown the KF in its most basic form, which relies on Gaussian noise and linear systems. In reality, this rarely applies to robots! For that, we turn to Extended Kalman Filters (EKFs) and particle-filter methods: e.g. Gmapping (which we used in the Week 3 lab).
4. Real-world SLAM algorithms using depth cameras are highly effective!

Slide 47: Next Week

Lecture 13: Robot Learning (Tom). Lecture 14: Cognitive Robotics (Roger).

Slide 48: Where to find out more

1. Chapter 5, "Mobile Robot Localization", of Introduction to Autonomous Mobile Robots.
2. Probabilistic robotics textbook: Thrun, S., W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
3. SLAM tutorials: Durrant-Whyte, H., and T. Bailey. "Simultaneous localization and mapping: part I." IEEE Robotics & Automation Magazine 13.2 (2006): 99-110. Bailey, T., and H. Durrant-Whyte. "Simultaneous localization and mapping (SLAM): part II." IEEE Robotics & Automation Magazine 13.3 (2006): 108-117.
