Introduction to Mobile Robotics Course - Probabilistic Robotics PDF

Document Details


Uploaded by SatisfactoryRhenium2021

Al-Balqa Applied University

Tags

robotics, probabilistic robotics, mobile robotics, computer science

Summary

This document is a lecture or presentation on probabilistic robotics, an approach to robotics that considers uncertainty in robot perception and action. It covers concepts like state estimation, probability theory axioms, random variables, and Bayes filters. The document likely comprises lecture notes or similar material for a robotics or computer science course.

Full Transcript


Introduction to Mobile Robotics Course
Chapter 6: Probabilistic Robotics

Probabilistic Robotics

 Probabilistic robotics is a new approach to robotics that considers the uncertainty in robot perception and action.
 The key idea of probabilistic robotics is to represent uncertainty explicitly, using the calculus of probability theory.
 Uncertainty arises if the robot lacks critical information for carrying out its task.
 It arises from four different factors:
1. Environments: physical worlds are unpredictable.
2. Sensors: sensors are limited in what they can perceive.
3. Robots: robot actuation involves motors that are, at least to some extent, unpredictable, due to effects like control noise and wear-and-tear.
4. Computation: robots are real-time systems, which limits the amount of computation that can be carried out. Many state-of-the-art algorithms are approximate, achieving a timely response by sacrificing accuracy.

Probabilistic Approaches

▶ There is uncertainty in robot motion and observation, so we use probability theory to represent that uncertainty. We can say that the robot is likely to be somewhere in a region, with different probabilities for different locations. The advantage of this shows over time: for example, if the robot is believed to be in a certain region and a sensor then reports that it is 10 km away, this reading conflicts with the previous readings as well as with the probabilities of the robot's location.

State Estimation

▶ Estimate the robot state, typically the combination of position, velocity, orientation, and angular velocity.
▶ Probability theory is the key tool for robust state estimation.
▶ 90% of the techniques presented in this course rely on it.

Axioms of Probability Theory

P(A) denotes the probability that proposition (statement) A is true.
 P(True) = 1, P(False) = 0
 0 ≤ P(A) ≤ 1
 P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

A Closer Look at Axiom 3

P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
If the events A and B are not mutually exclusive, the overlap P(A ∧ B) is counted twice in P(A) + P(B), so it must be subtracted once.

Example 1

Using the axioms, evaluate P(A ∨ ¬A):
P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A)
Here A ∨ ¬A = True and A ∧ ¬A = False, so P(A ∨ ¬A) = P(True) = 1 and P(A ∧ ¬A) = P(False) = 0.

Example 2

Using the axioms, solve for P(¬A):
P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A)
P(True) = P(A) + P(¬A) − P(False)
1 = P(A) + P(¬A) − 0
P(¬A) = 1 − P(A)

Example A

There are ten students (R1 … R10) in a group; in the original table an x marks a student taking the course. Five students take Algebra, four take Biology, and three take both.
Let A = "a student is taking Algebra" and B = "a student is taking Biology".
Given: P(A) = 0.5, P(B) = 0.4, and P(A ∧ B) = 0.3, find:
a. P(¬B) = 1 − P(B) = 1 − 0.4 = 0.6
b. P(A ∨ B) = P(A) + P(B) − P(A ∧ B) = 0.5 + 0.4 − 0.3 = 0.6
c. P(¬(A ∨ B)) = 1 − P(A ∨ B) = 1 − 0.6 = 0.4
d. P(¬A ∧ ¬B) = 1 − P(A) − P(B) + P(A ∧ B) = 1 − 0.5 − 0.4 + 0.3 = 0.4
e. P(A ∧ ¬B) = P(A) − P(A ∧ B) = 0.5 − 0.3 = 0.2
f. P(¬(A ∧ B)) = 1 − P(A ∧ B) = 1 − 0.3 = 0.7
g. P(¬A ∨ ¬B) = P(¬A) + P(¬B) − P(¬A ∧ ¬B) = 0.5 + 0.6 − 0.4 = 0.7

Random Variables

▶ A random variable conceptually does not have a single, fixed value; it can take on a set of possible values, each with an associated probability. It can be a discrete random variable or a continuous random variable.
▶ A discrete random variable has a countable number of possible values. The probability of each value of a discrete random variable is between 0 and 1, and the sum of all the probabilities is equal to 1.
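The answers in Example A follow directly from the axioms; a minimal Python check, using the P(A), P(B), and P(A ∧ B) values given above:

```python
# Checking the answers of Example A with the axioms of probability.
# Inputs are the values given in the lecture: P(A)=0.5, P(B)=0.4, P(A and B)=0.3.
p_a, p_b, p_a_and_b = 0.5, 0.4, 0.3

p_not_b = 1 - p_b                                   # a. complement rule
p_a_or_b = p_a + p_b - p_a_and_b                    # b. axiom 3
p_not_a_or_b = 1 - p_a_or_b                         # c. complement of the union
p_not_a_and_not_b = 1 - p_a - p_b + p_a_and_b       # d. same as c (De Morgan)
p_a_and_not_b = p_a - p_a_and_b                     # e. A minus the overlap
p_not_both = 1 - p_a_and_b                          # f. complement of the intersection
p_not_a_or_not_b = (1 - p_a) + p_not_b - p_not_a_and_not_b  # g. axiom 3 again

print(round(p_a_or_b, 2), round(p_not_a_and_not_b, 2), round(p_not_a_or_not_b, 2))
# -> 0.6 0.4 0.7
```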
For example, the probability of the robot being at x = 1 might be 0.2.

Discrete Random Variables

 X denotes a random variable.
 X can take on a countable number of values in {x1, x2, …, xn}.
 P(X = xi), or P(xi), is the probability that the random variable X takes the particular value xi.
 P(·) is called the probability mass function.
 A probability mass function is a function that gives the probability that a discrete random variable is exactly equal to some value. It is sometimes also known as the discrete density function.
 Example: P(Room) = <0.7, 0.2, 0.08, 0.02>

Continuous Random Variables

▶ A continuous random variable is a random variable whose data can take infinitely many values. For example, a random variable measuring the time taken for something to be done is continuous, since there is an infinite number of possible times that can be taken (e.g., the robot being at x = 1 during the period of time from s to t).
 X takes on values in the continuum.
 p(X = x), or p(x), is a probability density function (PDF):
   P(x ∈ [a, b]) = ∫_a^b p(x) dx
 When the PDF is graphically portrayed, the area under the curve indicates the probability that the variable falls in an interval. The PDF is used to specify the probability of the random variable falling within a particular range of values; this probability is given by the integral of the variable's PDF over that range, that is, by the area under the density function, above the horizontal axis, between the lowest and greatest values of the range.

Probability Sums Up to One

Discrete case: Σ_x P(x) = 1
Continuous case: ∫ p(x) dx = 1

Joint and Conditional Probability

Joint probability is a statistical measure that calculates the likelihood of two events occurring together and at the same point in time. Conditional probability is defined as the likelihood of an event or outcome occurring, based on the occurrence of a previous event or outcome.
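Both normalization conditions can be verified numerically. The sketch below checks the P(Room) distribution from the slide (the room names are our own illustrative labels; the slide gives only the four numbers) and approximates the integral of an assumed uniform density on [0, 5] with a Riemann sum:

```python
# The PMF from the slide, P(Room) = <0.7, 0.2, 0.08, 0.02>. The room labels
# are illustrative names of our own; the slide gives only the probabilities.
p_room = {"office": 0.7, "corridor": 0.2, "kitchen": 0.08, "lab": 0.02}
print(round(sum(p_room.values()), 6))   # -> 1.0: a valid PMF sums to one

# P(x in [a, b]) = integral of p(x) dx over [a, b], approximated with a
# midpoint Riemann sum.
def prob_interval(pdf, a, b, n=10000):
    dx = (b - a) / n
    return sum(pdf(a + (i + 0.5) * dx) * dx for i in range(n))

def uniform(x):
    return 0.2 if 0.0 <= x <= 5.0 else 0.0   # assumed density: p(x) = 1/5 on [0, 5]

print(round(prob_interval(uniform, 0.0, 5.0), 3))   # -> 1.0: the density integrates to one
print(round(prob_interval(uniform, 1.0, 2.0), 3))   # -> 0.2: P(x in [1, 2])
```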
 P(X = x and Y = y) = P(x, y), where X and Y are random variables and x, y are their values.
 If X and Y are independent (e.g., a coin X with x ∈ {head, tail} and a die Y with y ∈ {1, 2, 3, 4, 5, 6}), then:
   P(x, y) = P(x) · P(y)  (joint probability of independent variables)

Cont.

 P(x | y) is the probability of x given y: what is the probability of x given that y has occurred?
 P(x, y) = P(x | y) P(y)
 P(x | y) = P(x, y) / P(y)  (conditional probability)
 If X and Y are independent, then P(x | y) = P(x).
▶ The joint probability can be calculated using the conditional probability; for example: P(A, B) = P(A | B) · P(B). This is called the product rule.
▶ Importantly, the joint probability is symmetrical, meaning that P(A, B) = P(B, A).
▶ The conditional probability can be calculated using the joint probability; for example: P(A | B) = P(A, B) / P(B).
▶ The conditional probability is not symmetrical; for example, P(A | B) ≠ P(B | A).

Example: suppose M means a student is a masters student and S means the student can sing.

        M    ¬M    Σ
 S     20    20    40
 ¬S    35    45    80
 Σ     55    65   120

P(M, S) = 20/120
P(M, ¬S) = 35/120
P(M) = 55/120
P(¬M) = 65/120
P(S) = 40/120
P(M | S) = P(M ∧ S) / P(S) = (20/120) / (40/120) = 20/40

Law of Total Probability

Discrete case: P(x) = Σ_y P(x | y) P(y)
Continuous case: p(x) = ∫ p(x | y) p(y) dy

Suppose B is an event partitioned into B1, B2, B3, B4, and A is a small event within B:
P(B) = P(B1) + P(B2) + P(B3) + P(B4)
A ∧ B1 = A1, A ∧ B2 = A2, A ∧ B3 = A3, A ∧ B4 = A4
P(A) = P(A1) + P(A2) + P(A3) + P(A4)
     = P(A ∧ B1) + P(A ∧ B2) + P(A ∧ B3) + P(A ∧ B4)
But P(A | B) = P(A ∧ B) / P(B), so P(A ∧ B) = P(A | B) · P(B).
Therefore:
P(A) = P(A | B1) P(B1) + P(A | B2) P(B2) + P(A | B3) P(B3) + P(A | B4) P(B4)
P(A) = Σ_{i=1}^{n} P(A | Bi) P(Bi)
The law of total probability can be used when we want to calculate the probability of an event within another event.

Bayes Formula

Bayes' theorem provides a principled way of calculating a conditional probability without the joint probability.
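The contingency-table probabilities above can be reproduced exactly with Python's fractions module; the counts are the ones in the table:

```python
# Reproducing the masters/singing table with exact rational arithmetic.
from fractions import Fraction

# rows: S (can sing) / notS; columns: M (masters student) / notM
counts = {("M", "S"): 20, ("notM", "S"): 20,
          ("M", "notS"): 35, ("notM", "notS"): 45}
total = sum(counts.values())                                        # 120 students

p_m_and_s = Fraction(counts[("M", "S")], total)                     # P(M, S) = 20/120
p_s = Fraction(counts[("M", "S")] + counts[("notM", "S")], total)   # P(S)   = 40/120
p_m = Fraction(counts[("M", "S")] + counts[("M", "notS")], total)   # P(M)   = 55/120

p_m_given_s = p_m_and_s / p_s        # conditional: P(M|S) = P(M,S) / P(S)
print(p_m_given_s)                   # -> 1/2  (= 20/40)

# Law of total probability: P(M) = P(M|S) P(S) + P(M|notS) P(notS)
p_m_given_not_s = Fraction(counts[("M", "notS")], total) / (1 - p_s)
assert p_m_given_s * p_s + p_m_given_not_s * (1 - p_s) == p_m
```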
In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule) describes the probability of an event based on prior knowledge of conditions that might be related to the event.

P(x | y) = P(y | x) · P(x) / P(y) = likelihood · prior / evidence

Derivation:
P(A | B) = P(A ∧ B) / P(B)  ……….. Eq. 1
P(B | A) = P(A ∧ B) / P(A)  ……….. Eq. 2
From Eq. 2: P(A ∧ B) = P(A) · P(B | A)  ……….. Eq. 3
Substitute Eq. 3 into Eq. 1:
P(A | B) = [P(A) · P(B | A)] / P(B)  (Bayes' formula)

Note that you can calculate P(B) from the law of total probability:
P(B) = P(B | A) P(A) + P(B | ¬A) P(¬A)

The formula tells us how often A happens given that B happens, written P(A | B), when we know: how often B happens given that A happens, written P(B | A); how likely A is on its own, written P(A); and how likely B is on its own, written P(B).

Let us say P(Fire) means how often there is fire, and P(Smoke) means how often we see smoke. Then:
P(Fire | Smoke) means how often there is fire when we can see smoke.
P(Smoke | Fire) means how often we can see smoke when there is fire.
So the formula kind of tells us "forwards" P(Fire | Smoke) when we know "backwards" P(Smoke | Fire).

Bayes Rule with Background Knowledge

A version of Bayes' theorem results from the addition of a third event Z:
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
(Proof: see page 9 of the Probabilistic Robotics book.)

State Estimation

At the core of probabilistic robotics is the idea of estimating state from sensor data. State estimation addresses the problem of estimating quantities from sensor data that are not directly observable, but that can be inferred. In most robotic applications, determining what to do is relatively easy if one only knew certain quantities. For example, moving a mobile robot is relatively easy if the exact location of the robot and all nearby obstacles are known. Unfortunately, these variables are not directly measurable. Instead, a robot has to rely on its sensors to gather this information.
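A small sketch of the fire/smoke example. The formula is the one above; the numeric values (how common fire and smoke are) are assumed purely for illustration, since the lecture gives only the formula:

```python
# Bayes' formula P(A|B) = P(B|A) P(A) / P(B), with the evidence P(B) expanded
# via the law of total probability. The fire/smoke numbers are assumed for
# illustration only; the lecture provides the formula, not these values.
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)   # evidence P(B)
    return p_b_given_a * p_a / p_b                          # posterior P(A|B)

# Assumed: fire is rare (P(Fire)=0.01), smoke usually accompanies fire
# (P(Smoke|Fire)=0.9), smoke without fire is uncommon (P(Smoke|noFire)=0.1).
p_fire_given_smoke = bayes(0.9, 0.01, 0.1)
print(round(p_fire_given_smoke, 3))   # -> 0.083
```

Even though smoke almost always accompanies fire, seeing smoke only raises the probability of fire to about 8%, because fire itself is rare: the prior matters as much as the likelihood.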
Sensors carry only partial information about those quantities, and their measurements are corrupted by noise. State estimation seeks to recover state variables from the data.

Simple Example of State Estimation

 The robot wants to know whether the door is open or closed.
 Suppose the robot obtains a measurement z.
 What is the probability that the door is open given the measurement (the robot's perception) z?
 What is P(open | z)?

Cont.

P(open | z) = P(z | open) P(open) / P(z)
P(z) = P(z | open) P(open) + P(z | ¬open) P(¬open)

Example

Given that:
 P(z | open) = 0.6, P(z | ¬open) = 0.3
 P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5)
            = 0.3 / 0.45 ≈ 0.67

 z raises the probability that the door is open.

Combining Evidence

 Suppose our robot obtains another observation z2 (a second measurement).
 How can we integrate this new information?
 More generally, how can we estimate P(x | z1, …, zn)?

Example: Second Measurement

The normalizer again follows from the law of total probability:
P(y | z) = P(y | x) P(x | z) + P(y | ¬x) P(¬x | z)

 P(z2 | open) = 0.25, P(z2 | ¬open) = 0.3
 P(open | z1) = 2/3

P(open | z2, z1) = (0.25 · 2/3) / (0.25 · 2/3 + 0.3 · 1/3) = (1/6) / (1/6 + 1/10) = 0.625

 z2 lowers the probability that the door is open (from 2/3 to 5/8).

Actions

 Often the world is dynamic, since:
  actions carried out by the robot,
  actions carried out by other agents,
  or just the time passing by change the world.
 How can we incorporate such actions?

Typical Actions

 The robot turns its wheels to move.
 The robot uses its manipulator to grasp an object.
 Plants grow over time …
 Actions are never carried out with absolute certainty.
 In contrast to measurements, actions generally increase the uncertainty.

Modeling Actions

 To incorporate the outcome of an action u into the current "belief", we use the conditional pdf P(x | u, x').
 This term specifies the pdf that executing u changes the state from x' to x.
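Both door updates use the same measurement-update computation; a minimal sketch with all numbers taken from the slides:

```python
# One Bayesian measurement update, applied twice. Numbers from the slides:
# P(z|open)=0.6, P(z|closed)=0.3 with prior P(open)=0.5, then
# P(z2|open)=0.25, P(z2|closed)=0.3 with prior P(open|z1)=2/3.
def measurement_update(p_z_given_open, p_z_given_closed, prior_open):
    num = p_z_given_open * prior_open
    den = num + p_z_given_closed * (1 - prior_open)   # normalizer P(z)
    return num / den

bel = measurement_update(0.6, 0.3, 0.5)
print(round(bel, 2))    # -> 0.67: z raises the probability the door is open

bel = measurement_update(0.25, 0.3, bel)
print(round(bel, 3))    # -> 0.625: z2 lowers it again
```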
Example: Closing the Door

State Transitions

P(x | u, x') for u = "close door":
 P(closed | u, open) = 0.9
 P(open | u, open) = 0.1
 P(closed | u, closed) = 1
 P(open | u, closed) = 0
If the door is open, the action "close door" succeeds in 90% of all cases.

Integrating the Outcome of Actions

Continuous case: P(x | u) = ∫ P(x | u, x') P(x' | u) dx'
Discrete case: P(x | u) = Σ_x' P(x | u, x') P(x' | u)
We will make an independence assumption to get rid of the u in the second factor of the sum, i.e., P(x' | u) = P(x').

Bayes Filter (in Robot Localization)

A Bayes filter is an algorithm used in computer science for calculating the probabilities of multiple beliefs to allow a robot to infer its position and orientation. Essentially, Bayes filters allow robots to continuously update their most likely position within a coordinate system, based on the most recently acquired sensor data. This is a recursive algorithm.
▶ In a simple example, a robot moving throughout a grid may have several different sensors that provide it with information about its surroundings. The robot may start out with certainty that it is at position (0, 0). However, as it moves farther and farther from its original position, the robot has continuously less certainty about its position; using a Bayes filter, a probability can be assigned to the robot's belief about its current position, and that probability can be continuously updated from additional sensor information.

Bayes Filters: Framework

 Given:
   A stream of observations z and action data u: d_t = {u1, z1, …, ut, zt}
   Sensor model P(z | x)
   Action model P(x | u, x')
   Prior probability of the system state P(x)
 Wanted:
   An estimate of the state x of a dynamical system.
   The posterior of the state is also called the belief: Bel(xt) = P(xt | u1, z1, …, ut, zt)

Cont.

▶ Estimate the state x of a system given observations z and controls u.
▶ Use all the previous measurements up to time t.
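The prediction step can be sketched directly from the state-transition table for "close door". The input belief fed in below (P(open) = 0.625) is just an assumed example prior:

```python
# Prediction step: P(x) = sum over x' of P(x | u, x') P(x'), using the
# "close door" transition table from the slide. The input belief
# {open: 0.625, closed: 0.375} is an assumed example prior.
transition = {("closed", "open"): 0.9, ("open", "open"): 0.1,
              ("closed", "closed"): 1.0, ("open", "closed"): 0.0}

def predict(belief):
    return {x: sum(transition[(x, x_prev)] * belief[x_prev] for x_prev in belief)
            for x in belief}

belief = {"open": 0.625, "closed": 0.375}
print(predict(belief))   # mass shifts toward "closed" (closed ~ 0.94)
```

Note how the action moves probability mass: even state already believed "closed" stays closed (probability 1), so the action can only make "closed" more likely.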
▶ We need to find the probability of the location of the robot x, given all the measurements z over all of that time and all the commands (controls u) given to the robot in order to move during it.
▶ Given how we asked the robot to move and what the robot saw through its sensors, can we say where the robot is? This process is the process of state estimation.

Hidden Markov Model

A Hidden Markov Model (HMM) is a model that requires that there be an observable process Y whose outcomes are "influenced" by the outcomes of X in a known way.
 xt: the state at time t
 zt: the observation at time t
 ut: the input at time t
The Markov assumptions:
 P(zt | x0:t, z1:t−1, u1:t) = P(zt | xt)
 P(xt | x1:t−1, z1:t−1, u1:t) = P(xt | xt−1, ut)

Bayes Filters

Recall Bayes' theorem, and think of x as the state of the robot and z as the data we know:
Bel(xt) = P(xt | z1:t, u1:t)
        = P(zt | xt, z1:t−1, u1:t) · P(xt | z1:t−1, u1:t) / P(zt | z1:t−1, u1:t)
        = η · P(zt | xt, z1:t−1, u1:t) · P(xt | z1:t−1, u1:t)
where η = 1 / P(zt | z1:t−1, u1:t) is a normalizing constant.

https://www.youtube.com/watch?v=qDvd5lu80bA
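Putting the two steps together gives a minimal discrete Bayes filter sketch for the door example: a prediction step with the action model P(x | u, x'), then a correction step with the sensor model P(z | x). The transition and sensor numbers are the ones used earlier in the chapter; the observation label "sense_open" is our own name for the measurement z:

```python
# A minimal discrete Bayes filter for the door example: predict with the
# action model, then correct with the sensor model and renormalize.
STATES = ("open", "closed")

def bayes_filter(belief, u, z, transition, sensor):
    # Prediction: bel_bar(x) = sum over x' of P(x | u, x') bel(x')
    bel_bar = {x: sum(transition[u][(x, x_prev)] * belief[x_prev]
                      for x_prev in STATES) for x in STATES}
    # Correction: bel(x) = eta * P(z | x) * bel_bar(x)
    unnorm = {x: sensor[z][x] * bel_bar[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

# Action model for u = "close door" and sensor model from the chapter.
transition = {"close": {("closed", "open"): 0.9, ("open", "open"): 0.1,
                        ("closed", "closed"): 1.0, ("open", "closed"): 0.0}}
sensor = {"sense_open": {"open": 0.6, "closed": 0.3}}

belief = {"open": 0.5, "closed": 0.5}   # uniform prior
belief = bayes_filter(belief, "close", "sense_open", transition, sensor)
print(belief)   # P(open) ~ 0.095: the action dominates the weak sensor cue
```

Because the filter is recursive, the returned belief can be fed straight back in as the prior for the next (u, z) pair, which is exactly the Bel(xt) recursion above.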
