
Markov_Chain 1-30 (2).pdf


Transcript

Markov Chain Lecture 1

Review: Events and random variables (R.V.)

Q: What is a random variable?
A: A random variable is useful for quantifying events.

Sample space S: the set of all possible outcomes of an experiment.
S = {H, T} for flipping a coin.
S = {1, 2, 3, 4, 5, 6} for rolling a die.

A random variable (R.V.) takes numerical values depending on the outcome of the experiment. A random variable is either discrete or continuous.

Examples of random variables:
X: number shown on a die, X ∈ {1, 2, 3, 4, 5, 6}; X is discrete.
X: number of customers buying an item, X ∈ {0, 1, 2, 3, ...}; X is discrete.
X: inches of rain, e.g. X = 1.22, 3.21, 0.92, ...; X is continuous.
X: time between two buses, e.g. X = 3.22, 4.32, ...; X is continuous.

Stochastic Processes

Suppose we now take a series of observations of a random variable: X_0, X_1, X_2, ....

A stochastic process is an indexed collection of random variables {X_t}, where t is an index from a given set T. The index t often denotes time.

Ex. 1: Roll a die 10 times. X_t = the number on the die on the t-th roll, t = 1, 2, ..., 10.
Ex. 2: X_t = items sold on day t, t = 1, 2, 3, ....

The value of X_t is the characteristic of interest, and X_t may be continuous or discrete. Notation: P(X = x), where X is a random variable and x is a number.

Ex. 3: {X_t} = number of defective items on day t, t = 0, 1, 2, 3, ....
Observed values: X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 2, X_5 = 3.
{X_t} = {X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 2, X_5 = 3} is a stochastic process.

Let {X_t, t = 0, 1, 2, 3, ...} be a stochastic process that takes on a finite or countable number of possible values (states). This set of possible values (states) is denoted by the non-negative integers {0, 1, 2, ..., m}. The states are mutually exclusive and exhaustive.
Mutually exclusive: states have no intersection; the process cannot be in two different states at the same time.
Exhaustive: all states (all possible outcomes) are included.

Def. If X_t = i, the process is said to be in state i at time t.

Ex. Forecasting the weather
Suppose the chance of rain tomorrow depends only on whether it is rainy today (and not on past weather conditions).
If it rains today, it will rain tomorrow with probability α: P(rain tomorrow | rain today) = α.
If it is sunny today, it will be sunny tomorrow with probability β: P(sunny tomorrow | sunny today) = β.
Q1: What are the random variables of interest X_t? The weather condition (rainy or sunny) on day t.
Q2: What are the possible values (states) of these random variables? Two states: state 0 when it is rainy, and state 1 when it is sunny.
Q3: What is the index t? The day.

Ex. Gambling game
Consider a gambling game in which, on each turn, you win $1 with probability p and lose $1 with probability 1 - p. The game ends when you either accumulate $3 or go broke (you have $0). You start with $1.
Q1: What are the random variables of interest X_t? The dollars you hold on turn t.
Q2: What are the possible values (states) of these random variables? State 0: $0 on turn t; state 1: $1 on turn t; state 2: $2 on turn t; state 3: $3 on turn t.
Q3: What is the index t? The turn of the game.

Markov Chains

A stochastic process {X_t} satisfies the Markovian property if

P(X_{t+1} = j | X_t = i, X_{t-1} = i_{t-1}, ..., X_1 = i_1, X_0 = i_0) = P(X_{t+1} = j | X_t = i),

which depends only on the present state, for all t = 0, 1, 2, ... and all possible states j, i, i_{t-1}, ..., i_1, i_0. Such a stochastic process is called a Markov chain. The future depends on the present, not on the past.

What is a stochastic process?
A stochastic process is an indexed collection of random variables {X_t}, where t is time = 0, 1, 2, ....
Key: X_t is not known with certainty before time t; X_t is a random variable.
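None of the code below appears in the lecture; it is a minimal Python sketch of the gambling game above, written to make the idea of one realization of {X_t} concrete. The function name play_gamble, the seed, and the default p = 0.4 (the value used later in these notes) are illustrative choices.

```python
import random

def play_gamble(p=0.4, start=1, target=3, seed=None):
    """Simulate one run of the gambling game as a stochastic process {X_t}.

    X_t = dollars held on turn t; the states are {0, 1, ..., target}.
    The game stops once X_t hits 0 (broke) or target ($3).
    """
    rng = random.Random(seed)
    x = start
    path = [x]                                 # records X_0, X_1, X_2, ...
    while 0 < x < target:
        x += 1 if rng.random() < p else -1     # win $1 w.p. p, lose $1 w.p. 1 - p
        path.append(x)
    return path

print(play_gamble(seed=1))   # one realization of {X_t}, ending at 0 or 3
```

Each run produces one realization of {X_t}; the values are not known before the turns are played, which is exactly the "Key" point above.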
Types of stochastic process:
- {X_t}, t = 0, 1, ...: observations are taken at discrete points in time (a discrete-time stochastic process).
- X_t: the state of an operation (a r.v.) can be viewed at any time, not just at discrete instants (a continuous-time stochastic process).

Lecture 2

What is a Markov chain?
A discrete-time stochastic process is called a Markov chain if, for t = 0, 1, 2, ... and all states,

P(X_{t+1} = j | X_t = i, X_{t-1} = i_{t-1}, ..., X_1 = i_1, X_0 = i_0) = P(X_{t+1} = j | X_t = i)   (the Markov property).

The probability of the state at time t+1 depends only on the state at time t (state i).

The conditional probability P(X_{t+1} = j | X_t = i) is called the one-step transition probability. The one-step transition probabilities are called stationary if, for all t,

P(X_{t+1} = j | X_t = i) = P_ij.

P_ij is the probability that the process, when in state i, makes a transition into state j; it does not depend on the time t.

Ex. Weather: P(rain tomorrow | rain today) = 0.4 and P(sunny tomorrow | sunny today) = 0.7, with the index running over days, e.g. t = 0 (Wed), t = 1 (Thu), t = 2 (Fri). P_ij is referred to as the transition probability for the Markov chain. [Timeline sketch: the chain is in state i at time t and moves to state j at time t+1.]

Def. Any Markov chain that satisfies P(X_{t+1} = j | X_t = i) = P_ij is called a stationary Markov chain.

Define q_i to be the probability that the chain is in state i at time 0: q_i = P(X_0 = i).
P(X_0 = 0) = q_0, P(X_0 = 1) = q_1, P(X_0 = 2) = q_2, P(X_0 = 3) = q_3.
We call the vector q = [q_0, q_1, q_2, ..., q_m] the initial probability distribution of the Markov chain.

The transition probabilities are commonly displayed as a transition probability matrix P, with one row and one column per state 0, 1, ..., m:

          j=0   j=1   j=2  ...  j=m
    i=0 [ P_00  P_01  P_02 ...  P_0m ]
P = i=1 [ P_10  P_11  P_12 ...  P_1m ]
    ...
    i=m [ P_m0  P_m1  P_m2 ...  P_mm ]

where each P_ij is a one-step transition probability. Given that the state at time t is i, the process must be somewhere at time t+1. This means that in the transition probability matrix:
- Σ_{j=0}^{m} P(X_{t+1} = j | X_t = i) = 1, i.e. Σ_{j=0}^{m} P_ij = 1 (each row sums to 1).
- All entries in P must be ≥ 0, i.e. P_ij ≥ 0.

Ex. Weather forecast:
P(rain tomorrow | rain today) = α, so P(sunny tomorrow | rain today) = 1 - α.
P(sunny tomorrow | sunny today) = β, so P(rain tomorrow | sunny today) = 1 - β.
With state 0 = R (rainy) and state 1 = S (sunny):

          0     1
P = 0 [   α   1-α ]
    1 [ 1-β    β  ]

a 2×2 matrix.

State transition diagram: two states, 0 and 1, with self-loops of weight α and β and arrows between them of weight 1-α and 1-β.

Lecture 3

Ex. Weather forecast

          0     1
P = 0 [   α   1-α ]   → the transition probability matrix (TPM)
    1 [ 1-β    β  ]

of a two-state (stationary) Markov chain.

Estimating the transition probabilities from data:
Weather data for one month (0: R, rainy; 1: S, sunny). We need to estimate

P = [ P_00  P_01 ]
    [ P_10  P_11 ]

The number of RR transitions is 11 and the number of RS transitions is 5, so P_00 = 11/16 and P_01 = 5/16.
The number of SR transitions is 5 and the number of SS transitions is 6, so P_10 = 5/11 and P_11 = 6/11.
So the TPM for this example is:

P = [ 11/16  5/16 ]
    [  5/11  6/11 ]

and the state transition diagram is the two-state diagram with these weights (e.g. the self-loop on state 0 has weight 11/16).
In P: all entries are ≥ 0 and each row sums to 1. (A sketch of this counting step follows below.)
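The counting step is mechanical, so here is a minimal Python sketch of it. The lecture only gives the transition counts 11, 5, 5, and 6, so the data sequence below is hypothetical, and estimate_tpm is an illustrative name.

```python
from collections import Counter

def estimate_tpm(states, n_states=2):
    """Estimate a transition probability matrix from an observed state sequence.

    P_ij is estimated as (# of i -> j transitions) / (# of transitions out of i).
    """
    counts = Counter(zip(states, states[1:]))        # count each observed (i, j) pair
    P = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        out_of_i = sum(counts[(i, j)] for j in range(n_states))
        for j in range(n_states):
            P[i][j] = counts[(i, j)] / out_of_i if out_of_i else 0.0
    return P

# Hypothetical daily weather data (0 = rainy, 1 = sunny), not the lecture's data:
data = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]
print(estimate_tpm(data))
```

Each row of the result sums to 1 by construction, matching the two properties of P noted above.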
Ex. Gambling game:
Probability p of winning $1 on any turn and 1 - p of losing $1 on any turn. You start with $1 and stop when you have $3 or when you go broke ($0).
The possible states for {$0, $1, $2, $3} are {0, 1, 2, 3}, and

P(X_{t+1} = j | X_t = i, ..., X_0 = i_0) = P(X_{t+1} = j | X_t = i).

The TPM for this example is:

          0    1    2    3
    0 [   1    0    0    0 ]
P = 1 [ 1-p    0    p    0 ]
    2 [   0  1-p    0    p ]
    3 [   0    0    0    1 ]

Notes (translated from the Arabic margin annotations):
- Since the game ends as soon as you go broke, from state 0 all probabilities of winning on the next turn are 0 (row 0 stays in state 0).
- P_10 = 1 - p is the probability that the player loses $1 on the next turn given that he has $1 from the current turn, and P_12 = p is the probability that the player wins $1 on the next turn given that he has $1 from the current turn.
- Since the game ends as soon as you win $3, from state 3 all probabilities of losing on the next turn are 0 (row 3 stays in state 3).

In P: all entries are ≥ 0 and each row sums to 1.

And the state transition diagram is: states 0, 1, 2, 3 in a line, with arrows of weight p to the right, 1-p to the left, and self-loops of weight 1 on states 0 and 3.

For p = 0.4 the TPM will be:

          0    1    2    3
    0 [ 1    0    0    0   ]
P = 1 [ 0.6  0    0.4  0   ]
    2 [ 0    0.6  0    0.4 ]
    3 [ 0    0    0    1   ]

Ex. Inventory example
- A camera store stocks a particular model of camera.
- Orders must be placed on Saturday night, and the cameras are delivered first thing Monday morning.
- The store uses an (s, S) policy:
  If the number of cameras in inventory ≥ s, then do not order any cameras.
  If the number of cameras in inventory < s, then order enough to bring the supply up to S.

Under (s, S) = (1, 3): if inventory = 0, order 3; if inventory = 1, 2, or 3, don't order.
Under (s, S) = (2, 3): if inventory = 0, order 3; if inventory = 1, order 2; if inventory = 2 or 3, don't order.

X_t = number of cameras on hand (available in stock) at the end of week t, so t = week. The states are {0, 1, 2, ..., S}.

Let's describe X_{t+1} as a function of X_t under the (s = 1, S = 3) policy.
Let X_0 = the initial number of cameras on hand, and let D_t = the demand for cameras during week t. Assume the D_t are independent and identically distributed (i.i.d.) random variables. The possible values of X_{t+1} are:

X_{t+1} = max(3 - D_t, 0),    if X_t = 0 (so we order);
X_{t+1} = max(X_t - D_t, 0),  if X_t > 0 (we don't order).    (*)

Explanation of (*) (translated from the Arabic annotations):
- If the number of cameras available in stock is X_t = 0, we order 3 to refill the stock under the (s = 1, S = 3) policy. But customer demand D_t leaves the stock during the week; D_t is unknown and may be greater than 3, while the number of cameras can never be negative. Hence X_{t+1} = max(3 - D_t, 0); if D_t ≥ 3, the number of cameras next week is 0, i.e. X_{t+1} = 0.
- If X_t = 1, we do not order under the (s = 1, S = 3) policy. Demand D_t is unknown and may be greater than 1, while the stock cannot be negative, so X_{t+1} = max(1 - D_t, 0); if D_t ≥ 1, then X_{t+1} = 0.
- Similarly, when X_t > 1 we do not order, and X_{t+1} = max(X_t - D_t, 0).

We have four states (state 0: 0 cameras; state 1: 1 camera; state 2: 2 cameras; state 3: 3 cameras), so P is the 4×4 matrix:

          0     1     2     3
    0 [ P_00  P_01  P_02  P_03 ]
P = 1 [ P_10  P_11  P_12  P_13 ]
    2 [ P_20  P_21  P_22  P_23 ]
    3 [ P_30  P_31  P_32  P_33 ]

To find each P_ij we use the definition in (*):
P_00 = P(X_{t+1} = 0 | X_t = 0) = P(D_t ≥ 3)
P_23 = P(X_{t+1} = 3 | X_t = 2) = 0
P_31 = P(X_{t+1} = 1 | X_t = 3) = P(D_t = 2)
Similarly for the other P_ij.
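The recursion (*) translates directly into code, which is a useful sanity check before deriving the TPM. A minimal sketch follows; it assumes i.i.d. Poisson(λ = 1) weekly demand, matching the numerical example below. The name simulate_inventory is illustrative, and numpy is used only for its Poisson sampler.

```python
import numpy as np

def simulate_inventory(weeks=8, lam=1.0, S=3, x0=3, seed=0):
    """Simulate X_t, the cameras on hand at the end of each week, under the
    (s = 1, S = 3) policy with i.i.d. Poisson(lam) weekly demand D_t.

    Implements the recursion (*):
        X_{t+1} = max(S - D_t, 0)    if X_t = 0  (order up to S)
        X_{t+1} = max(X_t - D_t, 0)  if X_t > 0  (don't order)
    """
    rng = np.random.default_rng(seed)
    x = x0
    path = [x]
    for _ in range(weeks):
        d = rng.poisson(lam)                    # this week's demand D_t
        x = max((S if x == 0 else x) - d, 0)    # apply (*)
        path.append(x)
    return path

print(simulate_inventory())   # one simulated path of weekly stock levels
```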
The transition probability matrix is:

            0          1         2         3
    0 [ P(D ≥ 3)   P(D = 2)  P(D = 1)  P(D = 0) ]
P = 1 [ P(D ≥ 1)   P(D = 0)      0         0    ]
    2 [ P(D ≥ 2)   P(D = 1)  P(D = 0)      0    ]
    3 [ P(D ≥ 3)   P(D = 2)  P(D = 1)  P(D = 0) ]

The state transition diagram: four states, 0 through 3, with arrows weighted by the entries above.

Numerical example (inventory example):
Assume D_t ~ Poisson(λ = 1) for all t. The probability mass function of a Poisson random variable is

P(X = n) = λ^n e^{-λ} / n!,    n = 0, 1, 2, 3, ...

so P(D_t = n) = 1^n e^{-1} / n!, n = 0, 1, 2, 3, ....
P(D_t = 0) = 1^0 e^{-1} / 0! = e^{-1} = 0.368
P(D_t = 1) = 1^1 e^{-1} / 1! = e^{-1} = 0.368
P(D_t = 2) = 1^2 e^{-1} / 2! = e^{-1} / 2 = 0.184
P(D_t ≥ 3) = 1 - P(D_t ≤ 2) = 1 - (0.368 + 0.368 + 0.184) = 0.080
P(D_t ≥ 2) = 1 - P(D_t ≤ 1) = 1 - (0.368 + 0.368) = 0.264
P(D_t ≥ 1) = 1 - P(D_t = 0) = 1 - 0.368 = 0.632

The transition probability matrix will be:

           0      1      2      3
    0 [ 0.080  0.184  0.368  0.368 ]
P = 1 [ 0.632  0.368  0      0     ]
    2 [ 0.264  0.368  0.368  0     ]
    3 [ 0.080  0.184  0.368  0.368 ]

The state transition diagram will be the four-state diagram with these numerical weights.

Lecture 4

Multi-step transition probabilities

So far we have only focused on one-step transition probabilities P_ij, but these do not directly answer some interesting questions.
Ex. If it is raining today, what is the probability that it will be raining the day after tomorrow?
Ex. What is the probability that you will go broke within three turns?

P(X_{t+1} = j | X_t = i) is the one-step probability; P(X_{t+n} = j | X_t = i) is the n-step probability. These are called n-step transition probabilities. In particular we want to find P(X_{t+n} = j | X_t = i), which is denoted P_ij^(n). Since we are dealing with a stationary Markov chain, these probabilities are independent of the time t:

P(X_{t+n} = j | X_t = i) = P(X_n = j | X_0 = i) = P_ij^(n).

P_ij^(n) is called the n-step probability of a transition from state i to state j.

n-step transition probabilities

P(X_{t+n} = j | X_t = i) = P(X_n = j | X_0 = i) = P_ij^(n). Note that P_ij^(1) = P_ij.

To find P_ij^(2): if the system is now in state i, then for the system to end up in state j two periods from now, it must go from state i to some state k, and then from state k to state j. [Diagram: state i at time 0; intermediate states 0, 1, ..., k, ..., m at time 1; state j at time 2.] Summing over all intermediate states k:

P_ij^(2) = P_i0 P_0j + P_i1 P_1j + ... + P_ik P_kj + ... + P_im P_mj
         = Σ (probability of a transition from i to k) × (probability of a transition from k to j)
         = Σ_{k=0}^{m} P_ik P_kj    (the Chapman-Kolmogorov equation)
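The Chapman-Kolmogorov sum is exactly the formula for a matrix product, so the n-step matrix is the n-th power of the one-step matrix: P^(n) = P^n. A short sketch using the weather TPM estimated in Lecture 3 answers the "rain the day after tomorrow" question; the 0.615 figure is computed here, not taken from the lecture.

```python
import numpy as np

# One-step TPM estimated in Lecture 3 (state 0 = rainy, state 1 = sunny).
P = np.array([[11/16, 5/16],
              [5/11,  6/11]])

# Chapman-Kolmogorov: the n-step matrix P^(n) is the n-th matrix power of P.
P2 = np.linalg.matrix_power(P, 2)

# P(rain the day after tomorrow | rain today) = entry (0, 0) of P^2:
# P_00^(2) = P_00*P_00 + P_01*P_10 = (11/16)^2 + (5/16)(5/11) ≈ 0.615
print(P2[0, 0])
```

The same matrix-power trick answers the gambling question: since state 0 is absorbing, entry (1, 0) of P^3 for the p = 0.4 matrix above is the probability of going broke within three turns starting from $1.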
