Module 10 - Dynamic Programming (SAUDI ELECTRONIC UNIVERSITY)
Saudi Electronic University
2021
Summary
This document contains lecture notes for a module on dynamic programming from Saudi Electronic University (2021). It covers concepts such as the knapsack problem, Warshall's algorithm, Floyd's algorithm, and optimal binary search trees, and demonstrates dynamic programming with worked examples.
Full Transcript
Saudi Electronic University
College of Computing and Informatics
Design and Analysis of Algorithms
Module 10: Dynamic Programming
26/12/2021

Week Learning Outcomes
- Describe dynamic programming and its usage
- Express the knapsack problem and memory functions
- Explain optimal binary search trees

Topic Outline
- The knapsack problem using dynamic programming
- Warshall's algorithm
- Floyd's algorithm

What is Dynamic Programming?
Dynamic programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems. It was invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems and was later assimilated by computer science. "Programming" here means "planning".
Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract the solution to the initial instance from that table
(See https://www.scaler.com/topics/data-structures/dynamic-programming/)

Dynamic programming general steps:
Step 1: The given problem is divided into a number of subproblems, as in the "divide and conquer" strategy. But whereas in divide and conquer the subproblems are independent of each other, in dynamic programming the subproblems overlap. A recursive formulation of the given problem is formed.
Step 2: The problem is usually solved in a bottom-up manner. To avoid repeated computation of overlapping subproblems, a table is created: whenever a subproblem is solved, its solution is stored in the table so that it can be reused later. The solutions are then combined to solve the overall problem.

Example: Fibonacci numbers
Recall the definition of the Fibonacci numbers:
F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1
Computing the n-th Fibonacci number recursively (top-down) expands a tree of calls: F(n) calls F(n-1) and F(n-2), F(n-1) in turn calls F(n-2) and F(n-3), and so on, so the same subproblems are solved over and over.

Computing the n-th Fibonacci number by bottom-up iteration, recording the results in a table:
F(0) = 0, F(1) = 1, F(2) = 1 + 0 = 1, ..., F(n) = F(n-1) + F(n-2)

0 | 1 | 1 | ... | F(n-2) | F(n-1) | F(n)

Efficiency: Θ(n) time, Θ(n) space. (A sketch follows below.)
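The slides describe the bottom-up table but give no code, so here is a minimal Python sketch of the idea; the function name fib_bottom_up is our own, not from the original.

```python
def fib_bottom_up(n: int) -> int:
    """Compute the n-th Fibonacci number bottom-up, recording results in a table."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] holds F(i)
    table[1] = 1
    for i in range(2, n + 1):
        # each subproblem is solved exactly once
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))  # 55
```

This matches the Θ(n) time and space stated on the slide; keeping only the last two table entries would reduce the space to Θ(1).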
Dynamic Programming Rules
Rule 1: Bellman's principle of optimality states that an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.
Rule 2: Dynamic programming problems have overlapping subproblems.
Rule 3: Dynamic programming computes in a bottom-up fashion; the solutions of the subproblems are extracted and combined to give the solution to the original problem.

Examples of DP algorithms
- Computing a binomial coefficient
- Warshall's algorithm for transitive closure
- Floyd's algorithm for all-pairs shortest paths
- Constructing an optimal binary search tree
- Some instances of difficult discrete optimization problems: traveling salesman, knapsack

Knapsack Problem by DP
Example: given n items of integer weights w1, ..., wn and values v1, ..., vn, and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack.
Consider the instance defined by the first i items and capacity j (j ≤ W). Let F(i, j) be the optimal value of such an instance. Then

F(i, j) = max{F(i-1, j), vi + F(i-1, j - wi)}  if j - wi ≥ 0
F(i, j) = F(i-1, j)                            if j - wi < 0

with initial conditions F(0, j) = 0 and F(i, 0) = 0.

For i, j > 0, to compute F(i, j) we take the maximum of the entry in the previous row and the same column, and the sum of vi and the entry in the previous row and wi columns to the left. The table can be filled either row by row or column by column.

Knapsack Problem by DP (Activity example)
Example: knapsack of capacity W = 5

item  weight  value
1     2       $12
2     1       $10
3     3       $20
4     2       $15

w1 = 2, v1 = 12; w2 = 1, v2 = 10; w3 = 3, v3 = 20; w4 = 2, v4 = 15
Fill in F(i, j) for rows i = 1, ..., 4 and capacities j = 0, ..., 5, starting from F(0, j) = 0. What is F(4, 5)? (A sketch that computes the table follows below.)
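A minimal Python sketch of the tabulation just described (our own illustration, not from the slides), with the activity instance as a usage example:

```python
def knapsack(weights, values, capacity):
    """Bottom-up DP for the knapsack problem.

    F[i][j] is the best value achievable with the first i items
    and capacity j, following the recurrence on the slide.
    """
    n = len(weights)
    # initial conditions: F(0, j) = 0 and F(i, 0) = 0
    F = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(1, capacity + 1):
            if j - w >= 0:
                # either leave item i out, or take it and fill j - w with the rest
                F[i][j] = max(F[i - 1][j], v + F[i - 1][j - w])
            else:
                F[i][j] = F[i - 1][j]
    return F[n][capacity]

# Activity instance: W = 5, items (weight, value) as in the table above
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37, from items 1, 2 and 4
```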
Transitive Closure
A binary relation R between two vertices u and v, written u R v, can represent a path relation: u R v indicates that there is a path from u to v. A transitive relation states that if there is a relation from u to v and from v to w, then there is a relation from u to w:
u → v and v → w implies u → w
If R is a set of transitive relations, then the adjacency matrix of R is called the reachability matrix, connectivity matrix, or path matrix.

Warshall's Algorithm: Transitive Closure
Warshall's algorithm computes the transitive closure of a relation; equivalently, it determines the existence of all nontrivial paths in a digraph.
Example of transitive closure for a digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2:

Adjacency matrix A    Transitive closure T
0 0 1 0               0 0 1 0
1 0 0 1               1 1 1 1
0 0 0 0               0 0 0 0
0 1 0 0               1 1 1 1

Warshall's algorithm constructs the transitive closure T as the last matrix in a sequence of matrices R(0), ..., R(n), where
R(k)[i, j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (the adjacency matrix) and R(n) = T (the transitive closure).

Warshall's Algorithm: Recurrence
On the k-th iteration, the algorithm determines, for every pair of vertices i, j, whether a path exists from i to j with just the vertices 1, ..., k allowed as intermediate:
R(k)[i, j] = R(k-1)[i, j]  (a path using just 1, ..., k-1)
             or (R(k-1)[i, k] and R(k-1)[k, j])  (a path from i to k and from k to j using just 1, ..., k-1)

Warshall's Algorithm: Matrix Generation
The recurrence relating the elements of R(k) to the elements of R(k-1) is:
R(k)[i, j] = R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])
The rules for constructing R(k) from R(k-1) are:
Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
Rule 2: If an element in row i and column j is 0 in R(k-1), it is changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).

Warshall's Algorithm: Example
For the digraph above:

R(0) =  0 0 1 0    R(1) =  0 0 1 0    R(2) =  0 0 1 0
        1 0 0 1            1 0 1 1            1 0 1 1
        0 0 0 0            0 0 0 0            0 0 0 0
        0 1 0 0            0 1 0 0            1 1 1 1

R(3) =  0 0 1 0    R(4) =  0 0 1 0
        1 0 1 1            1 1 1 1
        0 0 0 0            0 0 0 0
        1 1 1 1            1 1 1 1

Time efficiency: Θ(n³). Space efficiency: matrices can be written over their predecessors.

Activity!!!
In groups: think of a real application of Warshall's algorithm.

Floyd's Algorithm: All-Pairs Shortest Paths
Problem: in a weighted (di)graph, find the shortest paths between every pair of vertices.
Same idea: construct the solution through a series of matrices D(0), ..., D(n) using increasing subsets of the vertices allowed as intermediate.
Example: a weighted digraph on vertices 1, 2, 3, 4 with edges 1→3 (weight 3), 2→1 (weight 2), 3→2 (weight 7), 3→4 (weight 1), and 4→1 (weight 6).

Floyd's Algorithm: Matrix Generation
On the k-th iteration, the algorithm determines the shortest paths between every pair of vertices i, j that use only vertices among 1, ..., k as intermediate:
D(k)[i, j] = min {D(k-1)[i, j], D(k-1)[i, k] + D(k-1)[k, j]}
(Both algorithms are sketched below.)
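Minimal Python sketches of both algorithms (our own illustration; the function names warshall and floyd are not from the slides), following the recurrences above. The matrices are updated in place, which is the space saving the efficiency notes mention.

```python
from math import inf

def warshall(adj):
    """Transitive closure: R[i][j] = R[i][j] or (R[i][k] and R[k][j])."""
    n = len(adj)
    R = [row[:] for row in adj]          # R(0) = A
    for k in range(n):                   # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R                             # R(n) = T

def floyd(weights):
    """All-pairs shortest paths: D[i][j] = min(D[i][j], D[i][k] + D[k][j])."""
    n = len(weights)
    D = [row[:] for row in weights]      # D(0) = weight matrix
    for k in range(n):                   # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# The digraph from the Warshall example (edges 1->3, 2->1, 2->4, 4->2)
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
print(warshall(A))   # rows 2 and 4 become [1, 1, 1, 1]

# The weighted digraph from the Floyd example
W = [[0,   inf, 3,   inf],
     [2,   0,   inf, inf],
     [inf, 7,   0,   1],
     [6,   inf, inf, 0]]
print(floyd(W))      # [[0, 10, 3, 4], [2, 0, 5, 6], [7, 7, 0, 1], [6, 16, 9, 0]]
```

Both run in Θ(n³) time, as the slides state.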
Floyd's Algorithm: Example
For the weighted digraph above:

D(0) =  0  ∞  3  ∞    D(1) =  0  ∞  3  ∞    D(2) =  0  ∞  3  ∞
        2  0  ∞  ∞            2  0  5  ∞            2  0  5  ∞
        ∞  7  0  1            ∞  7  0  1            9  7  0  1
        6  ∞  ∞  0            6  ∞  9  0            6  ∞  9  0

D(3) =  0  10  3  4    D(4) =  0  10  3  4
        2  0   5  6            2  0   5  6
        9  7   0  1            7  7   0  1
        6  16  9  0            6  16  9  0

Time efficiency: Θ(n³). Space efficiency: matrices can be written over their predecessors.
Note: the shortest paths themselves can be found, too.

Activity!!!
In groups: demonstrate a network routing algorithm using Floyd's algorithm. Draw an example graph and implement the algorithm.

Optimal Binary Search Trees
One of the principal applications of binary search trees is to implement a dictionary, a set of elements with the operations of searching, insertion, and deletion.
Problem: given n keys a1 < ... < an and probabilities p1, ..., pn of searching for them, find the BST with a minimum average number of comparisons in a successful search.
Since the total number of BSTs with n nodes is given by C(2n, n)/(n+1), which grows exponentially, brute force is not efficient.

Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?

DP for Optimal BST Problem
Let C[i, j] be the minimum average number of comparisons made in an optimal BST T[i, j] for keys ai, ..., aj. Consider an optimal BST among all BSTs with some ak (i ≤ k ≤ j) as their root; T[i, j] is the best among them:

C[i, j] = min over i ≤ k ≤ j of { pk · 1
          + Σ_{s=i}^{k-1} ps · (level of as in T[i, k-1] + 1)
          + Σ_{s=k+1}^{j} ps · (level of as in T[k+1, j] + 1) }

After simplifications, we obtain the recurrence for C[i, j]:

C[i, j] = min over i ≤ k ≤ j of { C[i, k-1] + C[k+1, j] } + Σ_{s=i}^{j} ps,  for 1 ≤ i ≤ j ≤ n
C[i, i] = pi,  for 1 ≤ i ≤ n

The table C[i, j] is filled diagonal by diagonal, starting from the main diagonal entries C[i, i] = pi and moving toward the goal entry C[1, n].

Optimal Binary Search Trees: Example
For keys A, B, C, and D with probabilities 0.1, 0.2, 0.4, and 0.3, the tables below are filled diagonal by diagonal: the left one holds the costs C; the right one, for the trees' roots, records the values of k giving the minima.

Main table C                       Root table
j:    0  1   2   3    4            j:    1  2  3  4
i=1   0  .1  .4  1.1  1.7          i=1   1  2  3  3
i=2      0   .2  .8   1.4          i=2      2  3  3
i=3          0   .4   1.0          i=3         3  3
i=4              0    .3           i=4            4
i=5                   0

The optimal BST has C as its root, with B and D as its children and A as the left child of B; the average number of comparisons in a successful search is C[1, 4] = 1.7.

Analysis of DP for Optimal Binary Search Trees
Time efficiency: Θ(n³), but it can be reduced to Θ(n²) by taking advantage of the monotonicity of the entries in the root table: R[i, j] is always in the range between R[i, j-1] and R[i+1, j].
Space efficiency: Θ(n²).
The method can be extended to include unsuccessful searches. (A sketch of the algorithm follows below.)
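A minimal Python sketch of the simplified recurrence (our own illustration, not from the slides): it fills the cost and root tables diagonal by diagonal, with keys indexed from 1 to match the notation above.

```python
def optimal_bst(p):
    """Cost and root tables for an optimal BST.

    p[0] is unused so that p[1..n] matches the slides' indexing.
    Returns (C, R) where C[1][n] is the minimum average number of
    comparisons and R records the roots of optimal subtrees.
    """
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0 (empty tree)
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                            # single-key tree
        R[i][i] = i
    for d in range(1, n):                         # diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = float("inf"), i
            for k in range(i, j + 1):             # try each root a_k
                cost = C[i][k - 1] + C[k + 1][j]
                if cost < best:
                    best, best_k = cost, k
            C[i][j] = best + sum(p[i:j + 1])      # add sum of p_i..p_j
            R[i][j] = best_k
    return C, R

# Keys A, B, C, D with probabilities 0.1, 0.2, 0.4, 0.3
C, R = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])
print(round(C[1][4], 1), R[1][4])  # 1.7 3  (the root is the 3rd key, C)
```

Restricting k to the range R[i][j-1], ..., R[i+1][j] in the inner loop gives the Θ(n²) improvement mentioned in the analysis.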
Summary
- Dynamic programming is a technique for solving problems with overlapping subproblems. Typically, these subproblems arise from a recurrence relating a solution to a given problem with solutions to its smaller subproblems of the same type. Dynamic programming suggests solving each smaller subproblem once and recording the results in a table from which a solution to the original problem can then be obtained.
- Applicability of dynamic programming to an optimization problem requires the problem to satisfy the principle of optimality: an optimal solution to any of its instances must be made up of optimal solutions to its subinstances.
- Among many other problems, the change-making problem with arbitrary coin denominations can be solved by dynamic programming.
- Solving a knapsack problem by a dynamic programming algorithm exemplifies an application of this technique to difficult problems of combinatorial optimization.
- Dynamic programming can be used for constructing an optimal binary search tree for a given set of keys and known probabilities of searching for them.
- Warshall's algorithm for finding the transitive closure and Floyd's algorithm for the all-pairs shortest-paths problem can be interpreted as applications of the dynamic programming technique.

List of Readings
Required Reading: Chapter 8 from Anany Levitin (2012). Introduction to the Design and Analysis of Algorithms. 3rd edition. Pearson.
Recommended Reading: Chapter 12 from Michael T. Goodrich, Roberto Tamassia (2014). Algorithm Design and Applications. First edition. Wiley.

Thank You