St. Peter’s Engineering College Question Bank 2023-24 PDF
Document Details
St. Peter's Engineering College
2024
Summary
This document is a question bank from St. Peter’s Engineering College for the 2023-24 academic year. It includes questions on topics like time complexity, asymptotic notation, merge sort, quick sort, binary search, and Strassen's matrix multiplication. The document targets third-year computer science students.
Full Transcript
SR 22
St. Peter's Engineering College (Autonomous)
Dept.: CSE(AI&ML), AI&DS
Dullapally (P), Medchal, Hyderabad – 500100.

QUESTION BANK WITH ANSWERS
Academic Year: 2023-24
Subject: DESIGN AND ANALYSIS OF ALGORITHMS (Subject Code: AS22-66PC03)
Class/Section: B. Tech.   Year: III   Semester: I

BLOOMS LEVEL: Remember L1, Understand L2, Apply L3, Analyze L4, Evaluate L5, Create L6
*****

MODULE - I
(Marks, Blooms Level and Course Outcome are shown in parentheses after each question.)

Q1:
a) Define Time Complexity. (1M, L1, C311.1)
b) Define Asymptotic Notation. (1M, L1, C311.1)
c) Define an Algorithm. (1M, L1, C311.1)
d) Define Merge Sort. (1M, L1, C311.1)
e) Define Big-O notation in algorithm analysis. (1M, L1, C311.1)

Q2:
a) Define an Algorithm. Describe the characteristics of an Algorithm. (3M, L1, C311.1)
b) Write about i) Big-O notation, ii) Big-Omega notation and iii) Big-Theta notation for algorithm analysis. (3M, L2, C311.1)
c) Derive the time complexity of the Merge Sort algorithm. (3M, L4, C311.1)
d) Write the algorithm for Quick Sort. (3M, L2, C311.1)
e) Write the algorithm for Binary Search. (3M, L2, C311.1)

Q3:
a) Sort the records with the following index values in ascending order using the Quick Sort algorithm: 30, 20, 10, 50, 60, 40. (5M, L3, C311.1)
b) Write the procedure for Strassen's Matrix Multiplication. (5M, L2, C311.1)
c) Using the Merge Sort algorithm, sort the given list: 7, 5, 2, 4, 1, 6, 3, 0. (5M, L3, C311.1)
d) Describe Asymptotic Notations. (5M, L2, C311.1)
e) Explain the Divide and Conquer technique in algorithm design and write its applications. (5M, L5, C311.1)

Q4:
a) Sort the records with the following index values in ascending order using the Quick Sort algorithm: 2, 3, 8, 5, 4, 7, 6, 9, 1. (10M, L3, C311.1)
b) Write an algorithm for Merge Sort and the complexity of the algorithm with an example. (10M, L2, C311.1)
c) Explain Strassen's Matrix Multiplication with one example. What is the time complexity of this algorithm? (10M, L5, C311.1)

1. a) Define Time Complexity
Time Complexity measures the amount of time an algorithm takes to complete as a function of the size of the input.
It is a way to evaluate the efficiency of an algorithm in terms of the number of basic operations it performs.
Purpose: To estimate how the execution time of an algorithm increases with the size of the input.
Expression: Usually expressed as a function, e.g., O(n) or O(n^2), where n is the size of the input.
Examples:
o Linear time complexity: O(n)
o Quadratic time complexity: O(n^2)

b) Define Asymptotic Notation
Asymptotic Notation provides a way to describe the behavior of algorithms as the input size grows towards infinity. It helps classify algorithms based on their growth rates.
1. Big-O Notation (O)
o Definition: Describes the upper bound of an algorithm's time complexity, representing the worst-case scenario.
o Example: If f(n) = O(g(n)), then f(n) does not grow faster than g(n) for sufficiently large n.
2. Big-Omega Notation (Ω)
o Definition: Describes the lower bound of an algorithm's time complexity, representing the best-case scenario.
o Example: If f(n) = Ω(g(n)), then f(n) grows at least as fast as g(n) for sufficiently large n.
3. Big-Theta Notation (Θ)
o Definition: Describes a tight bound on an algorithm's time complexity, representing both the upper and lower bounds.
o Example: If f(n) = Θ(g(n)), then f(n) grows at the same rate as g(n) for sufficiently large n.

c) Define an Algorithm
An Algorithm is a finite set of well-defined instructions for solving a problem or performing a computation. It takes some input and produces an output.
Characteristics of an Algorithm:
1. Finiteness: The algorithm must terminate after a finite number of steps.
2. Definiteness: Each step must be precisely defined and unambiguous.
3. Input: The algorithm should have zero or more inputs.
4. Output: The algorithm should produce at least one output.
5.
Effectiveness: The steps of the algorithm must be basic enough to be executed in a finite amount of time.

d) Define Merge Sort
Merge Sort is a comparison-based sorting algorithm that uses the divide-and-conquer technique to sort elements.
Steps:
1. Divide: Split the array into two halves.
2. Conquer: Recursively sort each half.
3. Combine: Merge the two sorted halves into a single sorted array.
Example:
o Given array: [38, 27, 43, 3, 9, 82, 10]
o After sorting: [3, 9, 10, 27, 38, 43, 82]

e) Define Big-O Notation in Algorithm Analysis
Big-O Notation is used to describe the upper bound of the time complexity of an algorithm. It represents the worst-case scenario and provides a way to compare the efficiency of algorithms.
Definition: f(n) is O(g(n)) if there exist constants c > 0 and n0 ≥ 0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Examples:
o Linear Time: O(n)
o Quadratic Time: O(n^2)
o Logarithmic Time: O(log n)

2. a) Define an Algorithm and Describe its Characteristics
Definition: An algorithm is a step-by-step procedure or formula for solving a problem.
Characteristics:
1. Finiteness: The algorithm must end after a finite number of steps.
2. Definiteness: Each instruction must be clear and unambiguous.
3. Input: The algorithm can take zero or more inputs.
4. Output: The algorithm must produce at least one output.
5. Effectiveness: Each operation must be feasible and executable in a finite time.

b) Notations for Algorithm Analysis
1. Big-O Notation (O)
o Definition: Represents the upper bound of the time complexity. Describes the worst-case scenario.
o Example: O(n^2)
2. Big-Omega Notation (Ω)
o Definition: Represents the lower bound of the time complexity. Describes the best-case scenario.
o Example: Ω(n)
3. Big-Theta Notation (Θ)
o Definition: Represents both the upper and lower bounds.
Describes the average-case or tight bound of the time complexity.
o Example: Θ(n log n)

c) Derive the Time Complexity of the Merge Sort Algorithm
Merge Sort Complexity Analysis:
1. Divide Step: Dividing the array into two halves takes constant time, O(1).
2. Conquer Step: Each half is sorted recursively. The number of levels in the recursion tree is log2 n, where n is the size of the array.
3. Combine Step: Merging two sorted halves requires O(n) time.
Overall Time Complexity: The steps above give the recurrence T(n) = 2T(n/2) + O(n). Since the recursion tree has log2 n levels and each level performs O(n) merging work in total, Merge Sort runs in O(n log n).

d) Write the Algorithm for Quick Sort
Quick Sort Algorithm:
1. Partition: Choose a pivot element and partition the array into elements less than the pivot and elements greater than the pivot.
2. Recursively Apply: Apply Quick Sort recursively to the subarrays on either side of the pivot.
Algorithm:

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```

e) Write the Algorithm for Binary Search
Binary Search Algorithm:
1. Initialize: Set the initial low and high indices.
2. Iterate: Check the middle element of the array. If it is the target, return the index. If the target is less, repeat on the left half. If the target is greater, repeat on the right half.
Algorithm:

```python
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # Target not found
```

4. b) Write an Algorithm for Merge Sort and the Complexity of the Algorithm with an Example
Merge Sort Algorithm:

```python
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left_half = arr[:mid]
        right_half = arr[mid:]
        merge_sort(left_half)
        merge_sort(right_half)
        # Merge the two sorted halves back into arr
        i = j = k = 0
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1
        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1
    return arr
```

Complexity:
Time Complexity: O(n log n)
Space Complexity: O(n)
Example: Sorting [38, 27, 43, 3, 9, 82, 10]

c) Explain Strassen's Matrix Multiplication with Example and Time Complexity
Strassen's Matrix Multiplication:
Procedure:
1. Divide: Split each matrix into four submatrices.
2. Compute: Use seven products of the submatrices.
3. Combine: Calculate the final product matrix from these products.
Example: Multiplying two 2x2 matrices

  A = | a  b |      B = | e  f |
      | c  d |          | g  h |

o Compute seven products (P1 to P7) using Strassen's formulas.
o Combine these products to get the final matrix.
Time Complexity: O(n^(log2 7)) ≈ O(n^2.81)

MODULE - II

Q1:
a) Define Disjoint Sets. (1M, L1, C311.2)
b) Define Union Operation. (1M, L1, C311.2)
c) Define MIN Operation. (1M, L1, C311.2)
d) Define Backtracking. (1M, L1, C311.2)
e) Define Graph Coloring. (1M, L1, C311.2)

Q2:
a) Describe Disjoint Set Operations. (3M, L4, C311.2)
b) Explain the 4-Queens Problem with Backtracking. (3M, L5, C311.2)
c) Describe the Backtracking technique for the m-coloring graph problem. (3M, L4, C311.2)
d) Write the Algorithm for Collapsing Find. (3M, L2, C311.2)
e) Write the Algorithm for Weighted Union. (3M, L2, C311.2)

Q3:
a) Describe Backtracking. (5M, L4, C311.2)
b) Apply the backtracking algorithm to solve the following instance of the sum of subsets problem: S = {1, 3, 4, 5} and m = 8. (5M, L2, C311.2)
c) Write an Algorithm for the N-Queens Problem. (5M, L2, C311.2)
d) Explain in detail the different applications of Backtracking. (5M, L3, C311.2)
e) Describe Graph Coloring. (5M, L4, C311.2)

Q4:
a) Explain in detail Disjoint Set Operations. (10M, L4, C311.2)
b) Explain in detail N-Queens with an Example. (10M, L4, C311.2)
c) Explain in detail the Weighted Union and Collapsing Find Algorithms with an Example. (10M, L4, C311.2)

1. a) Define Disjoint Sets
Disjoint Sets (or Union-Find) is a data structure that manages a partition of a set into disjoint subsets. The primary purpose is to keep track of a set of elements partitioned into non-overlapping subsets.
Operations:
1. Find: Determine which subset a particular element is in.
2. Union: Merge two subsets into a single subset.
Applications: Used in network connectivity, image processing, and more.
Example:
Initial Sets: {1}, {2}, {3}
Union Operation: Union of {1} and {2} results in {1, 2}, {3}
Find Operation: Finding the subset containing 1 returns {1, 2}.

b) Define Union Operation
The Union Operation merges two disjoint subsets into a single subset.
Purpose: To combine two subsets into one, so their elements belong to the same set.
Method: Typically performed by linking the root of one set to the root of the other.
Example:
Before Union: {A, B}, {C, D}
After Union of {A, B} and {C, D}: {A, B, C, D}

c) Define MIN Operation
The MIN Operation finds the minimum element within a subset of elements.
Purpose: To determine the smallest element in a subset.
Usage: Often used in problems involving priority queues and minimum spanning trees.
Example:
Subset: {3, 7, 2, 9}
MIN Operation Result: 2

d) Define Backtracking
Backtracking is a problem-solving technique that incrementally builds candidates for solutions and abandons a candidate as soon as it is determined that it cannot be extended to a valid solution.
Purpose: To find all (or some) solutions to computational problems, particularly constraint satisfaction problems.
Approach: Use a recursive approach to explore all possibilities and "backtrack" upon encountering a conflict.
Example: Solving a maze by exploring all possible paths and retreating upon hitting dead ends.

e) Define Graph Coloring
Graph Coloring is an assignment of colors to the vertices of a graph such that no two adjacent vertices share the same color.
Purpose: To ensure that no two adjacent nodes (connected by an edge) have the same color.
Applications: Scheduling problems, map coloring, and register allocation in compilers.
Example:
Graph: A triangle graph (three vertices connected in a cycle)
Coloring: Requires 3 colors, as each vertex is adjacent to the other two.

2. a) Describe Disjoint Set Operations
Disjoint Set Operations are fundamental to the Union-Find data structure. They manage partitioned sets with the following primary operations:
1. Find: Determines which set a particular element belongs to. It uses path compression to keep the tree flat, improving efficiency.
2. Union: Merges two sets. It uses union by rank or size to keep the trees balanced and improve performance.
Detailed Operations:
1. Find with Path Compression:
o Purpose: To flatten the structure of the tree whenever find is called, making future queries faster.
o Algorithm:

```python
def find(parent, i):
    if parent[i] == i:
        return i
    else:
        parent[i] = find(parent, parent[i])  # Path compression
        return parent[i]
```

2. Union by Rank:
o Purpose: To keep the tree as flat as possible by attaching the smaller tree under the root of the larger tree.
o Algorithm:

```python
def union(parent, rank, x, y):
    rootX = find(parent, x)
    rootY = find(parent, y)
    if rootX != rootY:
        if rank[rootX] > rank[rootY]:
            parent[rootY] = rootX
        elif rank[rootX] < rank[rootY]:
            parent[rootX] = rootY
        else:
            parent[rootY] = rootX
            rank[rootX] += 1
```

b) Explain the 4-Queens Problem with Backtracking
4-Queens Problem: Place 4 queens on a 4x4 chessboard such that no two queens can attack each other. This means no two queens can share the same row, column, or diagonal.
Backtracking Approach:
1. Start in the first row and try placing a queen in each column.
2. Move to the next row and try placing a queen in each column of this row.
3. Check for conflicts with previously placed queens.
4. If a conflict is found, backtrack and try the next column.
5. Repeat until all queens are placed correctly or all possibilities are exhausted.
Algorithm:

```python
def solve_queens(n):
    def is_safe(board, row, col):
        for i in range(row):
            if board[i] == col or \
               board[i] - i == col - row or \
               board[i] + i == col + row:
                return False
        return True

    def solve(board, row):
        if row == n:
            solutions.append(board.copy())
            return
        for col in range(n):
            if is_safe(board, row, col):
                board[row] = col
                solve(board, row + 1)
                board[row] = -1  # Backtrack

    solutions = []
    board = [-1] * n
    solve(board, 0)
    return solutions
```

Example Output (for n = 4):
[1, 3, 0, 2]
[2, 0, 3, 1]
Each list represents the column positions of the queens in each row.

c) Describe the Backtracking Technique for m-Coloring a Graph
m-Coloring Graph Problem: Assign one of m colors to each vertex of a graph such that no two adjacent vertices share the same color.
Backtracking Approach:
1. Assign a color to a vertex and check if it is valid (i.e., no adjacent vertices have the same color).
2. Move to the next vertex and assign a color.
3. If all vertices are colored and the graph is valid, the solution is found.
4. If the coloring is invalid, backtrack and try the next color for the previous vertex.
Algorithm:

```python
def graph_coloring(graph, m):
    def is_valid(v, c):
        # Color c is valid for vertex v if no adjacent vertex already has it
        for i in range(len(graph)):
            if graph[v][i] == 1 and color[i] == c:
                return False
        return True

    def solve_coloring(v):
        if v == len(graph):
            return True
        for c in range(1, m + 1):
            if is_valid(v, c):
                color[v] = c
                if solve_coloring(v + 1):
                    return True
                color[v] = 0  # Backtrack
        return False

    color = [0] * len(graph)
    if solve_coloring(0):
        return color
    return None
```

Example:
Graph: A triangle graph (3 vertices, all connected)
Colors: m = 3
Output: [1, 2, 3] (each vertex assigned a different color)

d) Write the Algorithm for Collapsing Find
Collapsing Find is a technique used to speed up the find operation in a disjoint set data structure by making nodes point directly to the root of the set.
Algorithm:

```python
def find(parent, i):
    if parent[i] != i:
        parent[i] = find(parent, parent[i])  # Path compression
    return parent[i]
```

Explanation: When finding the root of a set, nodes on the path are directly linked to the root, reducing future query times.

e) Write the Algorithm for Weighted Union
Weighted Union is a technique used to merge two sets, ensuring the smaller set is always added under the root of the larger set to keep the structure balanced.
Algorithm:

```python
def union(parent, rank, x, y):
    rootX = find(parent, x)
    rootY = find(parent, y)
    if rootX != rootY:
        if rank[rootX] > rank[rootY]:
            parent[rootY] = rootX
        elif rank[rootX] < rank[rootY]:
            parent[rootX] = rootY
        else:
            parent[rootY] = rootX
            rank[rootX] += 1
```

Explanation: Use the rank array to track the depth of the trees and attach the smaller tree to the root of the larger tree.

3. a) Describe Backtracking
Backtracking is a general algorithmic technique used to find solutions to problems by incrementally building candidates for solutions and abandoning those that fail to meet the criteria.
Approach:
1. Build Solution: Start with an empty solution and try to extend it.
2.
Check Constraints: Check whether the current partial solution is valid.
3. Recursive Calls: If valid, recursively build further.
4. Backtrack: If a conflict or invalid state is found, revert the last step and try a different path.
Applications: Sudoku solvers, the N-Queens problem, combinatorial problems.

b) Apply the Backtracking Algorithm to Solve the Sum of Subsets Problem
Problem: Given the set S = {1, 3, 4, 5} and m = 8, find the subsets that sum to m.
Algorithm:

```python
def sum_of_subsets(S, m):
    def solve(subset, index, current_sum):
        if current_sum == m:
            results.append(subset.copy())
            return
        if index >= len(S) or current_sum > m:
            return
        # Include S[index]
        solve(subset + [S[index]], index + 1, current_sum + S[index])
        # Exclude S[index]
        solve(subset, index + 1, current_sum)

    results = []
    solve([], 0, 0)
    return results
```

Example Output: [[1, 3, 4], [3, 5]] (since 1 + 3 + 4 = 8 and 3 + 5 = 8)

c) Write an Algorithm for the N-Queens Problem
N-Queens Problem: Place N queens on an N x N chessboard such that no two queens threaten each other.
Algorithm:

```python
def solve_n_queens(n):
    def is_safe(board, row, col):
        for i in range(row):
            if board[i] == col or \
               board[i] - i == col - row or \
               board[i] + i == col + row:
                return False
        return True

    def solve(board, row):
        if row == n:
            solutions.append(board.copy())
            return
        for col in range(n):
            if is_safe(board, row, col):
                board[row] = col
                solve(board, row + 1)
                board[row] = -1  # Backtrack

    solutions = []
    board = [-1] * n
    solve(board, 0)
    return solutions
```

Example: For N = 4, the output could be:
[1, 3, 0, 2]
[2, 0, 3, 1]

d) Explain Different Applications of Backtracking
Applications of Backtracking:
1. N-Queens Problem: Placing N queens on an N x N chessboard such that no two queens attack each other.
2. Sudoku Solver: Filling a Sudoku board such that all constraints are met.
3. Subset Sum Problem: Finding subsets of a set that sum up to a specific value.
4.
Traveling Salesman Problem: Finding the shortest path visiting all cities.
Backtracking is useful in problems involving permutations, combinations, and constraint satisfaction.

e) Describe Graph Coloring
Graph Coloring is the assignment of colors to vertices in a graph such that no two adjacent vertices share the same color.
Purpose: To ensure that no two connected nodes share the same color.
Applications: Scheduling problems, register allocation in compilers, map coloring.
Example:
Graph: A simple cycle graph with 4 vertices.
Colors: 2 colors are sufficient.
Algorithm (Greedy Coloring):
1. Sort Vertices: Based on a heuristic (e.g., degree).
2. Assign Colors: Start from the first vertex and assign the smallest color that has not been used by adjacent vertices.
3. Repeat: Move to the next vertex and repeat.
Complexity: The chromatic number is the smallest number of colors needed. Finding it is NP-complete in general.

4. a) Disjoint Set Operations
Disjoint Sets (the Union-Find data structure) is used to manage a collection of disjoint (non-overlapping) subsets. This structure supports two primary operations efficiently:
1. Find: Determine which subset a particular element belongs to.
2. Union: Merge two subsets into a single subset.

1. Find Operation:
The Find operation is used to determine the root or representative of the set to which an element belongs. Path compression is used to optimize future queries by making nodes point directly to the root.
Algorithm:

```python
def find(parent, i):
    if parent[i] != i:
        parent[i] = find(parent, parent[i])  # Path compression
    return parent[i]
```

Explanation:
Initial State: Each element is its own parent (i.e., parent[i] = i).
Path Compression: While finding the root of an element, make each node on the path point directly to the root. This reduces the time complexity of future find operations.
Example:
Initial Parent Array: [0, 1, 2, 3, 4] (each element is its own parent)
Find(3): Directly returns 3.
Once union operations have created chains of parent links, a subsequent find compresses the traversed path so that every visited node points directly to the root.

2. Union Operation:
The Union operation merges two subsets into one. The union by rank or size technique ensures that the smaller set is attached under the root of the larger set to keep the tree shallow.
Algorithm (Union by Rank):

```python
def union(parent, rank, x, y):
    rootX = find(parent, x)
    rootY = find(parent, y)
    if rootX != rootY:
        if rank[rootX] > rank[rootY]:
            parent[rootY] = rootX
        elif rank[rootX] < rank[rootY]:
            parent[rootX] = rootY
        else:
            parent[rootY] = rootX
            rank[rootX] += 1
```

Explanation:
Rank: Keeps track of the tree height.
Union by Rank: Attach the smaller tree under the root of the larger tree. If the ranks are equal, pick either root and increase its rank.
Example:
Initial Parent Array: [0, 1, 2, 3, 4]
Union(1, 2): Merges the subsets containing 1 and 2 and updates the parent array to reflect the merge.

b) N-Queens Problem
N-Queens Problem: The goal is to place N queens on an N x N chessboard such that no two queens threaten each other. A queen can attack another queen if they are on the same row, column, or diagonal.
Backtracking Approach:
1. Place Queens Row by Row: Try to place a queen in each row and move to the next row if the placement is valid.
2. Check Validity: Ensure no two queens share the same column, row, or diagonal.
3. Backtrack: If a placement is invalid or leads to no solution, backtrack and try a different placement.
Algorithm:

```python
def solve_n_queens(n):
    def is_safe(board, row, col):
        for i in range(row):
            if board[i] == col or \
               board[i] - i == col - row or \
               board[i] + i == col + row:
                return False
        return True

    def solve(board, row):
        if row == n:
            solutions.append(board.copy())
            return
        for col in range(n):
            if is_safe(board, row, col):
                board[row] = col
                solve(board, row + 1)
                board[row] = -1  # Backtrack

    solutions = []
    board = [-1] * n
    solve(board, 0)
    return solutions
```

Explanation:
is_safe: Checks whether placing a queen at board[row] = col is valid.
solve: Recursively places queens and backtracks if needed.
solutions: Stores all valid solutions.
Example: For N = 4, the possible solutions are:
o [1, 3, 0, 2]
o [2, 0, 3, 1]
The indices represent the column positions of the queens in each row.

c) Weighted Union and Collapsing Find Algorithms
1. Weighted Union Algorithm:
Purpose: To keep the tree structure flat by always attaching the smaller tree under the root of the larger tree. This minimizes the tree height and speeds up subsequent operations.
Algorithm:

```python
def union(parent, rank, x, y):
    rootX = find(parent, x)
    rootY = find(parent, y)
    if rootX != rootY:
        if rank[rootX] > rank[rootY]:
            parent[rootY] = rootX
        elif rank[rootX] < rank[rootY]:
            parent[rootX] = rootY
        else:
            parent[rootY] = rootX
            rank[rootX] += 1
```

Example:
Initial Parent Array: [0, 1, 2, 3, 4]
Initial Rank Array: [0, 0, 0, 0, 0]
Union(1, 2): Merges the subsets containing 1 and 2.
o After Union: parent = [0, 1, 1, 3, 4], rank = [0, 1, 0, 0, 0]

2. Collapsing Find Algorithm:
Purpose: To speed up the find operation by making nodes on the path point directly to the root. This reduces the path length and speeds up future operations.
Algorithm:

```python
def find(parent, i):
    if parent[i] != i:
        parent[i] = find(parent, parent[i])  # Path compression
    return parent[i]
```

Example:
Initial Parent Array: [0, 1, 1, 2, 3] (a chain 4 → 3 → 2 → 1)
Find(4): Traverses the chain of parents and compresses the path.
o After Find(4): parent = [0, 1, 1, 1, 1]

MODULE - III

Q1:
a) Define the Traveling Salesperson Problem. (1M, L2, C311.3)
b) Define Dynamic Programming. (1M, L1, C311.3)
c) State the Principle of Optimality. (1M, L1, C311.3)
d) Define the 0/1 Knapsack Problem. (1M, L1, C311.3)
e) Define All-Pairs Shortest Path. (1M, L1, C311.3)

Q2:
a) Write the applications of Dynamic Programming. (3M, L2, C311.3)
b) What are the applications of the Traveling Salesperson Problem? (3M, L2, C311.3)
c) Differentiate between Dynamic Programming and Divide and Conquer. (3M, L4, C311.3)
d) Differentiate between Dynamic Programming and the Greedy Approach. (3M, L4, C311.3)
e) Define Optimal Binary Search Tree. Write the formulae to calculate W, C and r in the construction of an OBST. (3M, L2, C311.3)

Q3:
a) Discuss the All-Pairs Shortest Path problem using dynamic programming. (5M, L2, C311.3)
b) Describe Reliability Design. (5M, L4, C311.3)
c) Describe All-Pairs Shortest Path. (5M, L4, C311.3)
d) Describe the 0/1 Knapsack Problem. (5M, L4, C311.3)
e) Describe the Traveling Salesperson Problem. (5M, L2, C311.3)

Q4:
a) Explain in detail the Optimal Binary Search Tree. (10M, L3, C311.3)
b) Explain the 0/1 Knapsack problem in Dynamic Programming and hence solve the following 0/1 knapsack instance for n = 4: (W1, W2, W3, W4) = (2, 4, 6, 9), (P1, P2, P3, P4) = (10, 10, 12, 18) and m = 15. (10M, L4, C311.3)
c) Explain in detail the Traveling Salesperson Problem. (10M, L4, C311.3)

1. a) Traveling Salesperson Problem (TSP):
The Traveling Salesperson Problem (TSP) is an optimization problem that seeks to find the shortest possible route that visits each city exactly once and returns to the starting city.

b) Dynamic Programming:
Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler subproblems, solving each subproblem just once, and storing their solutions to avoid redundant computations.
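As a minimal sketch of this idea, a memoized Fibonacci function stores each subproblem's answer so it is computed only once (the function name and the use of functools.lru_cache are illustrative choices, not prescribed by the question bank):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping subproblems: fib(n-1) and fib(n-2) share most of their work.
    # The cache stores each fib(k) the first time it is computed,
    # cutting the cost from exponential to O(n).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, the same subproblems would be recomputed exponentially many times; with it, each value is looked up after its first computation.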
c) Principle of Optimality:
The Principle of Optimality states that an optimal solution to a problem contains optimal solutions to its subproblems. This principle underpins dynamic programming solutions.

d) 0/1 Knapsack Problem:
The 0/1 Knapsack Problem is a combinatorial optimization problem where the goal is to select a subset of items with given weights and values to maximize the total value without exceeding a specified weight capacity; each item is either included or excluded (a 0/1 choice).

e) All-Pairs Shortest Path:
The All-Pairs Shortest Path problem asks for the shortest path distances between every pair of vertices in a weighted graph.

2. a) Applications of Dynamic Programming
Dynamic Programming (DP) is used in a variety of fields and problems where optimal solutions can be built from optimal solutions to subproblems. Here are some key applications:
1. Optimization Problems:
o Shortest Path Problems: Algorithms like Floyd-Warshall for all-pairs shortest paths and Bellman-Ford for single-source shortest paths.
o Knapsack Problems: The 0/1 Knapsack problem and the Fractional Knapsack problem (though the latter is usually solved with a greedy approach).
2. Sequence Alignment:
o Bioinformatics: Algorithms such as Needleman-Wunsch and Smith-Waterman for aligning DNA, RNA, or protein sequences.
3. Resource Allocation:
o Scheduling: Job scheduling problems such as Weighted Job Scheduling.
o Investment: Optimal portfolio selection and resource distribution.
4. String Processing:
o Edit Distance: Computing the minimum number of operations required to convert one string into another.
o Longest Common Subsequence (LCS): Finding the longest subsequence common to two sequences.
5. Game Theory:
o Optimal Strategy Games: Solving games like Nim, where dynamic programming can determine winning strategies.
6. Finance:
o Option Pricing: Calculating the price of financial options using models like the Binomial Option Pricing Model.

b) Applications of the Traveling Salesperson Problem (TSP)
The Traveling Salesperson Problem (TSP) has various practical applications:
1.
Logistics and Transportation:
o Delivery Routes: Optimizing delivery routes for logistics companies to minimize travel distance and costs.
o Service Calls: Planning the route for service technicians to visit multiple clients via the shortest path.
2. Manufacturing:
o Tool Path Optimization: Designing the path for tools in CNC machines to minimize wear and operational time.
3. Telecommunications:
o Network Design: Optimizing the layout of network cables or wireless communication links to minimize infrastructure costs.
4. Robotics:
o Path Planning: Designing efficient paths for robots to navigate through a grid or environment.
5. Travel and Tourism:
o Tour Planning: Creating efficient travel itineraries for tourists visiting multiple destinations.

c) Differentiate between Dynamic Programming and Divide and Conquer
Dynamic Programming (DP):
Approach: Solves problems by breaking them down into overlapping subproblems and storing the results to avoid redundant computations (memoization or tabulation).
Problems: Suitable for problems with overlapping subproblems and optimal substructure, like the Fibonacci sequence, the knapsack problem, and shortest path problems.
Subproblems: The same subproblems arise multiple times, but each is solved only once and its result is reused.
Divide and Conquer:
Approach: Breaks a problem into non-overlapping subproblems, solves each subproblem independently, and combines their solutions.
Problems: Suitable for problems that can be divided into distinct subproblems, such as merge sort and quicksort.
Subproblems: Different subproblems are solved independently and do not overlap.

d) Differentiate between Dynamic Programming and the Greedy Approach
Dynamic Programming (DP):
Approach: Builds up solutions from optimal solutions to subproblems; generally used when problems have overlapping subproblems and optimal substructure.
Solution: Guarantees an optimal solution by exploring all possible solutions and combining them.
Examples: 0/1 Knapsack, Longest Common Subsequence.
Greedy Approach:
Approach: Makes a series of choices, each of which looks best at the moment, without considering the global consequences.
Solution: May not always produce an optimal solution, but is generally simpler and faster.
Examples: Fractional Knapsack, Prim's and Kruskal's algorithms for Minimum Spanning Trees.

e) Optimal Binary Search Tree (OBST)
An Optimal Binary Search Tree (OBST) is a binary search tree constructed to minimize the expected cost of searching for keys. In the context of OBSTs, "cost" typically refers to the expected number of comparisons needed to find a key. The goal is to arrange the keys in a binary search tree such that this cost is minimized, given the probabilities of accessing each key.

Definitions:
1. Weighted Path Length (W): The weighted sum of the depths of the nodes, where the weights are the probabilities of accessing those keys. For a tree with nodes k_1, k_2, ..., k_n, where each node k_i has probability p_i of being accessed, the weighted path length is

   W = Σ (i = 1 to n) p_i · d_i

   where d_i is the depth of node k_i in the tree.
2. Cost (C): The expected cost of searching for keys in the OBST, which is effectively the same as the weighted path length W. In many formulations, C and W are used interchangeably. If there are additional costs associated with searches, such as access or processing costs, those would be incorporated into C.
3. Optimal Cost (r): The minimum possible value of the weighted path length W for a given set of keys and their access probabilities. It is determined using dynamic programming and depends on the choice of the root and the structure of the tree.

Formulas in OBST Construction:
To construct an OBST, dynamic programming is used to determine the optimal cost and the structure of the tree. Here is a summary of the key formulas and concepts:
1.
Cost Calculation Formulas:
o e[i][j]: the cost of an optimal binary search tree containing keys k_i, k_{i+1}, ..., k_j:
e[i][j] = min_{i <= r <= j} ( e[i][r-1] + e[r+1][j] + sum(p, i, j) )
where sum(p, i, j) is the sum of the access probabilities of keys k_i through k_j:
sum(p, i, j) = sum_{k=i..j} p_k
o w[i][j]: the sum of the access probabilities from k_i to k_j:
w[i][j] = sum(p, i, j)
o r[i][j]: the root of the optimal binary search tree containing keys k_i to k_j:
r[i][j] = argmin_{i <= r <= j} ( e[i][r-1] + e[r+1][j] + w[i][j] )
2. Initialization:
o For the base case of a single key (i = j), the cost is simply the probability of accessing that key:
e[i][i] = p_i
o For empty ranges (i > j), the cost is zero:
e[i][j] = 0 for i > j
3. Dynamic Programming Approach:
o Compute the cost for ranges of increasing length, using the previously computed values for smaller ranges.
o Use the computed e[i][j] and r[i][j] tables to build the final optimal tree and report the minimum cost.
The dynamic programming approach efficiently finds the optimal binary search tree by breaking the problem into smaller subproblems and reusing their solutions to build up the solution for the entire key set.
3. a) All Pairs Shortest Path Problem Using Dynamic Programming
The All Pairs Shortest Path (APSP) problem involves finding the shortest paths between all pairs of nodes in a weighted graph.
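Before turning to shortest paths, the OBST recurrence above can be sketched in Python. This is a minimal illustration (function name and the 3-key integer frequencies are illustrative, not from the question bank; dummy-key probabilities q_i are omitted for brevity):

```python
def obst_cost(p):
    """Minimum weighted search cost of an optimal BST.
    p[i] is the access frequency (or probability) of key i."""
    n = len(p)
    # e[i][j]: optimal cost for keys i..j
    e = [[0] * n for _ in range(n)]
    for i in range(n):
        e[i][i] = p[i]            # base case: a single key
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            w = sum(p[i:j + 1])   # sum(p, i, j)
            best = float("inf")
            for r in range(i, j + 1):   # try each key as the root
                left = e[i][r - 1] if r > i else 0
                right = e[r + 1][j] if r < j else 0
                best = min(best, left + right + w)
            e[i][j] = best
    return e[0][n - 1]

print(obst_cost([34, 8, 50]))  # -> 142 (key 3 as root, key 1 as its left child)
```

With frequencies (34, 8, 50), placing the heaviest key at the root yields the minimum cost 142, matching a hand evaluation of the recurrence.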
Dynamic programming is an efficient way to solve this problem; the classic algorithm is the Floyd-Warshall algorithm.
Floyd-Warshall Algorithm:
1. Initialization:
o Create a distance matrix D where D[i][j] represents the shortest path distance from node i to node j.
o Initialize D[i][j] to the weight of the edge between i and j if the edge exists; otherwise initialize it to infinity. For i = j, set D[i][j] = 0.
2. Dynamic Programming Iteration:
o For each node k (which serves as an intermediate node), update the distance matrix by checking whether a path through k improves the shortest path between every pair of nodes i and j:
D[i][j] = min( D[i][j], D[i][k] + D[k][j] )
3. Complexity:
o Time Complexity: O(V^3), where V is the number of vertices.
o Space Complexity: O(V^2) for storing the distance matrix.
Applications:
Network routing.
Urban traffic planning.
b) Reliability Design
Reliability Design involves creating systems or networks so that they function correctly under expected conditions. It focuses on improving system reliability by minimizing the probability of failure. Key aspects include:
1. Reliability Analysis:
o Evaluate the probability that a system will perform its required functions without failure over a specified period.
o Use methods such as fault tree analysis, failure mode and effects analysis (FMEA), and Markov models.
2. Design for Reliability:
o Incorporate redundancy (e.g., backup components).
o Use robust design principles to minimize the impact of component failures.
o Apply reliability engineering techniques such as reliability block diagrams and Monte Carlo simulation.
3.
Optimization:
o Optimize the reliability of a system by determining the best configuration of components and redundancy to meet reliability requirements while considering cost constraints.
Applications:
Engineering systems (e.g., aerospace, automotive).
Software reliability and testing.
c) All Pairs Shortest Path
The All Pairs Shortest Path problem is about finding the shortest paths between all pairs of nodes in a graph. It can be solved using:
1. Floyd-Warshall Algorithm (as discussed above): suitable for dense graphs; it handles negative edge weights but not negative cycles.
2. Johnson's Algorithm: for sparse graphs; it uses Bellman-Ford to reweight the edges so that all weights become non-negative, then applies Dijkstra's algorithm from each vertex.
Applications:
Network routing.
Geographic information systems (GIS).
d) 0/1 Knapsack Problem
The 0/1 Knapsack Problem is a combinatorial optimization problem where the goal is to maximize the total value of items placed in a knapsack without exceeding its capacity. Each item is either included (1) or excluded (0).
Dynamic Programming Approach:
1. Define State:
o Let dp[i][w] be the maximum value obtainable with the first i items and weight capacity w.
2. Initialization:
o dp[0][w] = 0 for all w (with no items, the maximum value is 0).
3. Transition:
o For each item i and each weight capacity w:
dp[i][w] = max( dp[i-1][w], dp[i-1][w - w_i] + v_i ) if w >= w_i, else dp[i-1][w]
where w_i is the weight and v_i is the value of item i.
4. Complexity:
o Time Complexity: O(n * W), where n is the number of items and W is the knapsack capacity.
o Space Complexity: O(n * W).
Applications:
Resource allocation.
Budget management.
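The transition above can be sketched in Python. This minimal sketch uses the space-optimized 1-D form of the table (iterating capacities downward so each item is used at most once); the function name is illustrative, and the data matches the worked example used later in this document:

```python
def knapsack_01(weights, values, capacity):
    """Bottom-up 0/1 knapsack: dp[w] = best value achievable at capacity w."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # scan capacities from high to low so each item is counted once
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack_01([2, 4, 6, 9], [10, 10, 12, 18], 15))  # -> 38
```

For the data W = (2, 4, 6, 9), P = (10, 10, 12, 18), m = 15, the DP selects items 1, 2, and 4 (weight 15) for a value of 38.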
e) Traveling Salesman Problem (TSP)
The Traveling Salesman Problem (TSP) involves finding the shortest possible route that visits each city in a set exactly once and returns to the origin city. It is a classic problem in optimization and computational complexity.
Dynamic Programming Approach (Held-Karp Algorithm):
1. Define State:
o Let dp[S][i] represent the minimum cost of visiting all cities in subset S and ending at city i.
2. Initialization:
o dp[{0}][0] = 0 (starting at the origin city with only that city visited).
3. Transition:
o For each subset S and each city i in S, update:
dp[S][i] = min_{j in S, j != i} ( dp[S - {i}][j] + cost(j, i) )
4. Final Solution:
o Take the minimum cost of returning to the starting city:
Result = min_{i != 0} ( dp[All cities][i] + cost(i, 0) )
5. Complexity:
o Time Complexity: O(n^2 * 2^n), where n is the number of cities.
o Space Complexity: O(n * 2^n).
Applications:
Logistics and route planning.
Network design.
Each of these problems represents a significant challenge in computer science and operations research, with various algorithms and methods used to tackle them depending on the specific requirements and constraints of the problem.
4. a) Optimal Binary Search Tree (OBST)
Definition
An Optimal Binary Search Tree (OBST) is a binary search tree whose keys are arranged to minimize the expected search cost, given the probability distribution of access for each key. The objective is to minimize the average search time for accessing the keys.
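As an aside before the OBST construction, the Held-Karp recurrence from (e) above can be sketched with a bitmask DP. This is a minimal sketch (function name and the 4-city distance matrix are illustrative):

```python
def held_karp(cost):
    """Held-Karp DP for TSP; cost is an n x n symmetric distance matrix.
    dp[S][i] = min cost of starting at city 0, visiting set S, ending at i."""
    n = len(cost)
    FULL = 1 << n
    INF = float("inf")
    dp = [[INF] * n for _ in range(FULL)]
    dp[1][0] = 0                          # only city 0 visited, standing at 0
    for S in range(FULL):
        if not (S & 1):
            continue                      # every tour includes the start city
        for i in range(n):
            if dp[S][i] == INF or not (S >> i) & 1:
                continue
            for j in range(n):            # extend the path to an unvisited j
                if (S >> j) & 1:
                    continue
                nS = S | (1 << j)
                nd = dp[S][i] + cost[i][j]
                if nd < dp[nS][j]:
                    dp[nS][j] = nd
    # close the tour by returning to city 0
    return min(dp[FULL - 1][i] + cost[i][0] for i in range(1, n))

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # -> 80 (tour 0 -> 1 -> 3 -> 2 -> 0)
```

The states are visited over increasing subset masks, so every dp[S][i] is final before any superset of S is processed.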
Problem Statement
Given a set of keys k_1, k_2, ..., k_n with corresponding access probabilities p_1, p_2, ..., p_n (where p_i is the probability of accessing key k_i), and probabilities q_i of searching for a value that falls between k_i and k_{i+1} (the "dummy" keys), the goal is to construct a binary search tree such that the total cost of searches is minimized.
Definitions
1. Weighted Path Length (W): the sum of the products of each key's access probability and its depth in the tree. A node k_i at depth d_i contributes p_i * d_i to the weighted path length.
2. Cost (C): the expected search cost, which is the same as the weighted path length W. It represents the average number of comparisons needed to find a key.
3. Optimal Cost: the minimum possible value of W for a given set of keys and their access probabilities, found using dynamic programming.
Dynamic Programming Approach
1. Define States:
o Let e[i][j] be the cost of the optimal binary search tree containing keys k_i through k_j.
o Let w[i][j] be the sum of the probabilities from k_i to k_j, including the dummy keys q.
2. Initialization:
o For a single key, the cost is simply the probability of that key: e[i][i] = p_i
o For an empty subproblem (when i > j), the cost is zero: e[i][j] = 0 for i > j
3.
Transition:
o For each subproblem from i to j, try each key r in the range as the root and take the cheapest:
e[i][j] = min_{i <= r <= j} ( e[i][r-1] + e[r+1][j] + w[i][j] )
where:
w[i][j] = sum_{k=i..j} p_k + (probabilities of the dummy keys)
4. Compute the Optimal Cost:
o Compute e[i][j] for all subproblems iteratively, starting from the smallest ranges and building up to the full range.
5. Result:
o The cost for the complete range, e[1][n], gives the optimal search cost for the entire set of keys.
b) 0/1 Knapsack Problem in Dynamic Programming
Definition
The 0/1 Knapsack Problem is a combinatorial optimization problem where you need to maximize the total value of items placed in a knapsack of fixed capacity. Each item is either included in the knapsack or left out.
Dynamic Programming Approach
1. Define State:
o Let dp[i][w] be the maximum value obtainable using the first i items and a knapsack capacity of w.
2. Initialization:
o With zero items, the maximum value is zero for any capacity: dp[0][w] = 0 for all w
3. Transition:
o For each item i and each capacity w:
dp[i][w] = max( dp[i-1][w], dp[i-1][w - w_i] + v_i ) if w >= w_i, else dp[i-1][w]
where w_i is the weight and v_i is the value of item i.
4. Result:
o The answer is dp[n][m], where n is the number of items and m is the knapsack capacity.
Example Problem
Given:
Weights: W = (2, 4, 6, 9)
Values: P = (10, 10, 12, 18)
Knapsack Capacity: m = 15
Step-by-Step Solution:
1.
Initialization:
o Create a 2-D array dp of size (n+1) x (m+1), with row 0 (zero items) set to zero.
2. Fill the DP Table:
o For i = 1 (w_1 = 2, v_1 = 10): for w = 0 to 15: dp[1][w] = 10 if w >= 2, else 0.
o For i = 2 (w_2 = 4, v_2 = 10): for w = 0 to 15: dp[2][w] = max( dp[1][w], dp[1][w-4] + 10 ) if w >= 4, else dp[1][w].
o For i = 3 (w_3 = 6, v_3 = 12): for w = 0 to 15: dp[3][w] = max( dp[2][w], dp[2][w-6] + 12 ) if w >= 6, else dp[2][w].
o For i = 4 (w_4 = 9, v_4 = 18): for w = 0 to 15: dp[4][w] = max( dp[3][w], dp[3][w-9] + 18 ) if w >= 9, else dp[3][w].
3. Result:
o dp[4][15] holds the maximum achievable value: items 1, 2, and 4 (total weight 2 + 4 + 9 = 15) give the maximum value 10 + 10 + 18 = 38.
c) Traveling Salesperson Problem (TSP)
Definition
The Traveling Salesperson Problem (TSP) involves finding the shortest possible route that visits each city in a set exactly once and returns to the origin city. It is a well-known combinatorial optimization problem.
Dynamic Programming Approach (Held-Karp Algorithm)
1. Define State:
o Let dp[S][i] be the minimum cost of visiting all cities in subset S and ending at city i.
2. Initialization:
o Start with only the origin city visited, at cost zero: dp[{0}][0] = 0
o For all other states: dp[S][i] = infinity for i != 0
3. Transition:
o For each subset S and each city i in S:
dp[S][i] = min_{j in S, j != i} ( dp[S - {i}][j] + cost(j, i) )
4.
Result:
o The minimum cost of returning to the starting city after visiting all cities:
Result = min_{i != 0} ( dp[All cities][i] + cost(i, 0) )
Complexity
Time Complexity: O(n^2 * 2^n), where n is the number of cities.
Space Complexity: O(n * 2^n).
Applications
Logistics and route optimization.
Network design.
QUESTION BANK, MODULE - IV
1. a) Define Spanning Tree? (1M, L1, C311.4)
b) Define feasible solution? (1M, L1, C311.4)
c) Define optimal solution. (1M, L1, C311.4)
d) Define deadline of a problem. (1M, L1, C311.4)
e) Define Greedy Method. (1M, L1, C311.4)
2. a) Describe Prim's Algorithm. (3M, L4, C311.4)
b) Explain the general procedure of the Greedy Method. (3M, L4, C311.4)
c) Explain the Knapsack Problem in the Greedy method. (3M, L4, C311.4)
d) Explain the problem of Job Sequencing with Deadlines. (3M, L4, C311.4)
e) Describe the Single Source Shortest Path problem. (3M, L4, C311.4)
3. a) Describe Job Sequencing with Deadlines. (5M, L4, C311.4)
b) Explain the single source shortest path problem in the Greedy method with a simple example. (5M, L4, C311.4)
c) Compute a minimum cost spanning tree for the graph shown below using Prim's algorithm. (5M, L2, C311.4)
d) Write and explain Kruskal's algorithm with an illustrative example. (5M, L2, C311.4)
e) Compute a minimum cost spanning tree for the graph shown below using Kruskal's algorithm. (5M, L2, C311.4)
4. a) Solve the following Job Sequencing with Deadlines problem using the Greedy method: n = 4, profits (p1, p2, p3, p4) = (100, 10, 15, 27) and deadlines (d1, d2, d3, d4) = (2, 1, 2, 1). (10M, L3, C311.4)
b) Find the optimal solution of the Knapsack problem using the Greedy method where n = 4, m = 15, (p1-p4) = (10, 10, 12, 18) and (w1-w4) = (2, 4, 6, 9). (10M, L2, C311.4)
c) Explain in detail Minimum Cost Spanning Trees and Kruskal's and Prim's algorithms with examples. (10M, L4, C311.4)
1.
a) Spanning Tree
A spanning tree of a connected graph is a subgraph that includes all the vertices of the graph and is a tree (i.e., it has no cycles), using the minimum number of edges necessary to connect all vertices.
b) Feasible Solution
A feasible solution is a solution to an optimization problem that satisfies all the constraints imposed by the problem's conditions or requirements.
c) Optimal Solution
An optimal solution is a feasible solution that provides the best possible value of the objective function, whether maximizing or minimizing it.
d) Deadline of a Problem
The deadline of a problem refers to a specific time by which a solution to the problem must be obtained or a task must be completed.
e) Greedy Method
The greedy method is an algorithmic approach that builds up a solution piece by piece, always choosing the next piece that offers the most immediate benefit or highest value according to a specific criterion.
2. a) Prim's Algorithm
Prim's Algorithm is a classic algorithm for finding the Minimum Spanning Tree (MST) of a connected, undirected graph. The MST is a subset of the edges that connects all vertices, without any cycles and with the minimum possible total edge weight.
Steps of Prim's Algorithm:
1. Initialization:
o Start with an arbitrary vertex as the initial vertex of the MST.
o Create a set to keep track of the vertices included in the MST (initially containing only the starting vertex).
o Initialize the total weight of the MST to zero.
2. Iterative Process:
o While there are vertices not yet included in the MST:
For each vertex u already in the MST, examine all edges connecting u to vertices not yet included.
Select the minimum-weight edge that connects a vertex in the MST to a vertex outside it.
Add this edge to the MST and include the new vertex in the MST set.
Update the total weight of the MST.
3.
Termination:
o The process continues until all vertices are included in the MST.
o The resulting set of edges forms the Minimum Spanning Tree.
Complexity:
Time Complexity: O(E log V) using a priority queue (min-heap), where V is the number of vertices and E is the number of edges.
Space Complexity: O(V^2) for the adjacency matrix representation, or O(V + E) for the adjacency list representation.
b) General Procedure of Greedy Method
The Greedy Method is an algorithmic paradigm for optimization problems. It builds up a solution piece by piece, always choosing the next piece that offers the most immediate benefit or best choice based on a specific criterion.
General Procedure:
1. Define a Greedy Choice:
o Identify a local choice or decision that seems best at each step of the algorithm.
2. Check Feasibility:
o Ensure that the chosen option does not violate any problem constraints and is feasible.
3. Make the Choice:
o Include the selected choice in the solution and update the problem state accordingly.
4. Repeat:
o Repeat making the choice, checking feasibility, and updating the solution until no more choices can be made or a complete solution is formed.
5. Construct the Solution:
o The final set of choices forms the solution to the problem.
6. Verify Optimality:
o Confirm that no better solution can be obtained by making different choices at each step.
c) Knapsack Problem in Greedy Method
The Knapsack Problem in the context of the greedy method is typically solved using the Fractional Knapsack approach, where items can be broken into fractions. This differs from the 0/1 Knapsack problem, where items must be taken or left whole.
Fractional Knapsack Problem:
1.
Define the Problem:
o Given a set of items, each with weight w_i and value v_i, and a knapsack with maximum capacity W, determine the maximum value that can be carried if fractional items are allowed.
2. Calculate Value-to-Weight Ratios:
o For each item, compute its value-to-weight ratio v_i / w_i.
3. Sort Items:
o Sort the items in descending order of their value-to-weight ratio.
4. Select Items:
o Add items to the knapsack starting with the highest value-to-weight ratio.
o If an item fits completely into the remaining capacity, add its full value.
o If it does not fit completely, add as large a fraction of it as possible, filling the knapsack to capacity.
5. Terminate:
o Stop when the knapsack is full or all items have been considered.
Complexity:
Time Complexity: O(n log n) due to sorting, where n is the number of items.
d) Job Sequencing with Deadlines
The Job Sequencing with Deadlines problem involves scheduling jobs to maximize the total profit, given that each job must be completed before its deadline.
Problem Statement: You are given a set of jobs, each with a deadline and a profit. Each job takes one unit of time. The goal is to schedule jobs so that the total profit is maximized and each scheduled job finishes by its deadline.
Solution Approach:
1. Sort Jobs:
o Sort jobs in descending order of profit.
2. Initialize:
o Create a schedule array of size equal to the maximum deadline, initialized to empty, representing the time slots.
3. Schedule Jobs:
o For each job, try to find a free slot at or before its deadline, preferring the latest free slot.
o If a slot is found, place the job in that slot and mark the slot occupied.
4. Terminate:
o Continue scheduling until all jobs have been considered or all slots are filled.
5.
Result:
o The resulting schedule contains the jobs that maximize the total profit while respecting deadlines.
Complexity:
Time Complexity: O(n log n) due to sorting, where n is the number of jobs.
e) Single Source Shortest Path Problem
The Single Source Shortest Path (SSSP) problem involves finding the shortest path from a single source vertex to all other vertices in a weighted graph. The graph can be directed or undirected.
Algorithms for SSSP:
1. Dijkstra's Algorithm:
o Used when edge weights are non-negative.
o Works by iteratively selecting the vertex with the minimum tentative distance from the source and updating the distances of its neighbors.
2. Bellman-Ford Algorithm:
o Handles graphs with negative edge weights.
o Iteratively relaxes all edges and can detect negative-weight cycles.
3. Algorithm Steps (Dijkstra's Algorithm):
o Initialization: Set the distance of the source vertex to 0 and all other distances to infinity. Use a priority queue to manage the vertices.
o Relaxation: Extract the vertex with the minimum distance from the priority queue and update the distances of its neighbors.
o Termination: Continue until all vertices have been processed.
Complexity:
Dijkstra's Algorithm:
o Time Complexity: O(V^2) with an adjacency matrix, or O((V + E) log V) with a priority queue.
o Space Complexity: O(V) for the distances and the priority queue.
Bellman-Ford Algorithm:
o Time Complexity: O(V * E), where V is the number of vertices and E is the number of edges.
o Space Complexity: O(V) for the distances.
3. a) Job Sequencing with Deadlines
The Job Sequencing with Deadlines problem is a scheduling problem where the goal is to maximize the total profit by scheduling jobs within their deadlines. Each job must be completed by its deadline, and each job takes exactly one unit of time.
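The Dijkstra procedure outlined in (e) above can be sketched before turning to job sequencing. This is a minimal sketch (function name and adjacency-dict layout are illustrative); the graph is the small example used in 3(b) below, with edges A-B: 1, A-C: 4, B-D: 2, C-D: 3:

```python
import heapq

def dijkstra(adj, src):
    """adj maps vertex -> list of (neighbor, weight). Returns shortest distances."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                  # stale queue entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:       # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("D", 2)],
    "C": [("A", 4), ("D", 3)],
    "D": [("B", 2), ("C", 3)],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 4, 'D': 3}
```

Stale entries are skipped rather than removed, which keeps the heap logic simple while preserving the greedy extract-minimum order.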
Problem Statement:
You are given n jobs, each with:
o A deadline d_i: the latest time by which the job should be completed.
o A profit p_i: the profit earned if the job is completed by its deadline.
Each job takes one unit of time. The objective is to schedule the jobs so that the total profit is maximized, while ensuring no job is scheduled beyond its deadline.
Solution Approach:
1. Sort Jobs:
o Sort the jobs in descending order of profit, so the most profitable jobs are considered first.
2. Initialize:
o Create a schedule array of size equal to the maximum deadline, initialized to empty, representing the available time slots.
3. Schedule Jobs:
o For each job in the sorted list:
Try to find a free time slot at or before the job's deadline, starting from the latest.
If a free slot is found, schedule the job in it; otherwise skip the job.
4. Result:
o The scheduled jobs in their time slots maximize the total profit.
Example:
Consider the following jobs with deadlines and profits:
Job 1: Deadline = 4, Profit = 20
Job 2: Deadline = 1, Profit = 10
Job 3: Deadline = 2, Profit = 40
Job 4: Deadline = 3, Profit = 30
Steps:
1. Sort Jobs by Profit:
o Sorted order: Job 3 (40), Job 4 (30), Job 1 (20), Job 2 (10)
2. Initialize Schedule:
o Schedule array: [-, -, -, -] (size 4, the maximum deadline)
3. Schedule Jobs:
o Job 3 (Profit 40): Deadline 2. Place in slot 2 (Schedule: [-, 3, -, -]).
o Job 4 (Profit 30): Deadline 3. Place in slot 3 (Schedule: [-, 3, 4, -]).
o Job 1 (Profit 20): Deadline 4. Place in slot 4 (Schedule: [-, 3, 4, 1]).
o Job 2 (Profit 10): Deadline 1. Place in slot 1 (Schedule: [2, 3, 4, 1]).
4. Result:
o All four jobs (Job 2, Job 3, Job 4, and Job 1) are scheduled; total profit = 40 + 30 + 20 + 10 = 100.
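The steps above can be sketched in Python. This is a minimal sketch (function name is illustrative); it places each job in the latest free slot at or before its deadline:

```python
def job_sequencing(jobs):
    """jobs: list of (profit, deadline) pairs. Returns the maximum total profit.
    Greedy: sort by profit, place each job in the latest free slot <= deadline."""
    max_d = max(d for _, d in jobs)
    slots = [None] * (max_d + 1)          # slots[1..max_d]; slots[0] unused
    total = 0
    for profit, deadline in sorted(jobs, reverse=True):
        for t in range(deadline, 0, -1):  # try the latest slot first
            if slots[t] is None:
                slots[t] = profit
                total += profit
                break
    return total

# the worked example above: deadlines (4, 1, 2, 3), profits (20, 10, 40, 30)
print(job_sequencing([(20, 4), (10, 1), (40, 2), (30, 3)]))  # -> 100
```

The same function reproduces the Module-IV question 4(a) answer: profits (100, 10, 15, 27) with deadlines (2, 1, 2, 1) yield a total profit of 127.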
b) Single Source Shortest Path Problem Using Greedy Method
The Single Source Shortest Path (SSSP) problem involves finding the shortest paths from a single source vertex to all other vertices in a graph. The greedy method for solving this problem is primarily implemented using Dijkstra's Algorithm.
Dijkstra's Algorithm
Dijkstra's algorithm finds the shortest path from a source vertex to all other vertices in a graph with non-negative edge weights. Here is a step-by-step run on a simple example.
Example:
Consider a graph with vertices A, B, C, D and the following edge weights:
A-B: 1, A-C: 4, B-D: 2, C-D: 3
1. Initialization:
o Set the distance of the source vertex (A) to 0 and all others to infinity.
o Initialize a priority queue (min-heap) with the source vertex.
Initial distances: dist[A] = 0, dist[B] = inf, dist[C] = inf, dist[D] = inf
Priority Queue: [(0, A)]
2. Iterative Process:
o Step 1: Extract vertex A with the minimum distance (0). Update the distances of its neighbors:
For B: dist[B] = min(inf, 0 + 1) = 1
For C: dist[C] = min(inf, 0 + 4) = 4
Updated distances: dist[A] = 0, dist[B] = 1, dist[C] = 4, dist[D] = inf
Priority Queue: [(1, B), (4, C)]
o Step 2: Extract vertex B with the minimum distance (1). Update the distances of its neighbors:
For D: dist[D] = min(inf, 1 + 2) = 3
Updated distances: dist[A] = 0, dist[B] = 1, dist[C] = 4, dist[D] = 3
Priority Queue: [(3, D), (4, C)]
o Step 3: Extract vertex D with the minimum distance (3). Update the distances of its neighbors:
For C: dist[C] = min(4, 3 + 3) = 4 (no change)
Updated distances: dist[A] = 0, dist[B] = 1, dist[C] = 4, dist[D] = 3
Priority Queue: [(4, C)]
o Step 4: Extract vertex C with the minimum distance (4). No further updates, since C has no remaining unprocessed neighbors with a shorter path.
3.
Result:
o Shortest distances from vertex A to all other vertices:
A to B: 1
A to C: 4
A to D: 3
Complexity:
Time Complexity: O(V^2) with an adjacency matrix, or O((V + E) log V) with a priority queue.
Space Complexity: O(V) for the distances and the priority queue.
c) To compute the Minimum Spanning Tree (MST) using Prim's Algorithm, let's apply the algorithm step by step to a sample graph with weighted edges.
Example Graph
Consider the undirected graph on vertices A, B, C, D with the following edges and weights:
A-B with weight 2
A-C with weight 4
A-D with weight 6
B-C with weight 3
B-D with weight 5
C-D with weight 3
Steps to Compute the MST using Prim's Algorithm
1. Initialization:
o Choose an arbitrary starting vertex; start with vertex A.
o Initialize a priority queue (min-heap) to track the minimum-weight edges connecting the MST to the rest of the graph.
o Initially, the MST contains only the starting vertex.
o Initial State:
MST: {A}
Priority Queue: {(2, A, B), (4, A, C), (6, A, D)} (edges from A to the other vertices)
2. Iterative Process:
o Step 1: Extract the minimum-weight edge from the priority queue.
The minimum edge is (2, A, B). Add vertex B to the MST.
Update the priority queue with edges connecting B to vertices not in the MST: (3, B, C) and (5, B, D).
Updated MST: {A, B}
Updated Priority Queue: {(3, B, C), (4, A, C), (5, B, D), (6, A, D)}
o Step 2: Extract the minimum-weight edge from the priority queue.
The minimum edge is (3, B, C). Add vertex C to the MST.
Update the priority queue with the edge connecting C to a vertex not in the MST: (3, C, D).
Updated MST: {A, B, C}
Updated Priority Queue: {(3, C, D), (4, A, C), (5, B, D), (6, A, D)}
o Step 3: Extract the minimum-weight edge from the priority queue.
The minimum edge is (3, C, D). Add vertex D to the MST.
All vertices are now included in the MST; the remaining queued edges are no longer needed.
Updated MST: {A, B, C, D}
3. Result:
o The Minimum Spanning Tree (MST) includes the edges:
(A, B) with weight 2
(B, C) with weight 3
(C, D) with weight 3
o Total Cost of MST: 2 + 3 + 3 = 8
Summary
The Minimum Spanning Tree for the given graph using Prim's Algorithm consists of the edges (A, B), (B, C), and (C, D), with a total cost of 8.
d) Kruskal's Algorithm
Kruskal's Algorithm is a classic algorithm for finding the Minimum Spanning Tree (MST) of a connected, undirected graph. It sorts all edges in non-decreasing order of weight and adds them one by one to the growing spanning tree, ensuring no cycles are formed.
Steps of Kruskal's Algorithm:
1. Sort Edges:
o List all edges of the graph and sort them by weight in ascending order.
2. Initialize:
o Create an empty set or list to store the MST edges.
o Initialize a union-find (disjoint-set) data structure to keep track of connected components.
3. Process Edges:
o Iterate over the sorted edge list and for each edge:
Check whether the edge would form a cycle with the edges already in the MST, using the union-find structure.
If it does not form a cycle, add it to the MST and unite the components of the two vertices it connects.
4.
Terminate:
o Continue the process until the MST contains V - 1 edges, where V is the number of vertices.
5. Output:
o The set of edges in the MST.
Example:
Consider the graph on vertices A, B, C, D, E with the following edges and weights:
A-B with weight 10
A-C with weight 6
A-D with weight 15
B-C with weight 4
C-E with weight 2
D-E with weight 2
Steps to Find the MST using Kruskal's Algorithm:
1. Sort Edges by Weight:
o Sorted edges: (C, E) = 2, (D, E) = 2, (B, C) = 4, (A, C) = 6, (A, B) = 10, (A, D) = 15
2. Initialize:
o MST: {}
o Union-Find structure: each vertex is initially its own parent.
3. Process Edges:
o Edge (C, E), weight 2: no cycle formed. Add to MST; Union(C, E).
MST: {(C, E)}
o Edge (D, E), weight 2: no cycle formed. Add to MST; Union(D, E).
MST: {(C, E), (D, E)}
o Edge (B, C), weight 4: no cycle formed. Add to MST; Union(B, C).
MST: {(C, E), (D, E), (B, C)}
o Edge (A, C), weight 6: no cycle formed. Add to MST; Union(A, C).
MST: {(C, E), (D, E), (B, C), (A, C)}
o Edge (A, B), weight 10: adding this edge would form a cycle. Skip it.
o Edge (A, D), weight 15: adding this edge would form a cycle. Skip it.
4. Terminate:
o The MST has V - 1 = 5 - 1 = 4 edges.
5.
Result:
o The Minimum Spanning Tree (MST) includes the edges:
(C, E) with weight 2
(D, E) with weight 2
(B, C) with weight 4
(A, C) with weight 6
o Total Cost of MST: 2 + 2 + 4 + 6 = 14
e) Compute a Minimum Cost Spanning Tree Using Kruskal's Algorithm
Let's apply Kruskal's Algorithm to the same graph as in the example above, on vertices A, B, C, D, E with edges and weights:
A-B with weight 10
A-C with weight 6
A-D with weight 15
B-C with weight 4
C-E with weight 2
D-E with weight 2
Steps to Find the MST using Kruskal's Algorithm:
1. Sort Edges by Weight:
o Sorted edges: (C, E) = 2, (D, E) = 2, (B, C) = 4, (A, C) = 6, (A, B) = 10, (A, D) = 15
2. Initialize:
o MST: {}
o Union-Find structure: each vertex is initially its own parent.
3. Process Edges:
o Edge (C, E), weight 2: no cycle formed. Add to MST; Union(C, E). MST: {(C, E)}
o Edge (D, E), weight 2: no cycle formed. Add to MST; Union(D, E). MST: {(C, E), (D, E)}
o Edge (B, C), weight 4: no cycle formed. Add to MST; Union(B, C). MST: {(C, E), (D, E), (B, C)}
o Edge (A, C), weight 6: no cycle formed. Add to MST; Union(A, C). MST: {(C, E), (D, E), (B, C), (A, C)}
o Edge (A, B), weight 10: adding this edge would form a cycle. Skip it.
o Edge (A, D), weight 15: adding this edge would form a cycle. Skip it.
4. Terminate:
o The MST has V - 1 = 5 - 1 = 4 edges.
5.
Result:
   o The Minimum Spanning Tree (MST) includes the edges:
     (C, E) with weight 2
     (D, E) with weight 2
     (B, C) with weight 4
     (A, C) with weight 6
   o Total Cost of MST: 2 + 2 + 4 + 6 = 14

In both cases, the MST for the graph has a total cost of 14, demonstrating that Kruskal’s Algorithm finds the minimum spanning tree by adding the smallest-weight edges while avoiding cycles.

4. a) Job Sequencing with Deadlines Problem
In the Job Sequencing with Deadlines problem, the goal is to maximize total profit by scheduling jobs within their deadlines. The approach sorts jobs by profit in descending order and then assigns each job to the latest free time slot on or before its deadline.
Given:
 Number of jobs, n = 4
 Profits p = [100, 10, 15, 27]
 Deadlines d = [2, 1, 2, 1]
Steps to Solve:
1. Create a list of jobs with profit and deadline:
   o Job 1: (Profit: 100, Deadline: 2)
   o Job 2: (Profit: 10, Deadline: 1)
   o Job 3: (Profit: 15, Deadline: 2)
   o Job 4: (Profit: 27, Deadline: 1)
2. Sort jobs based on profit in descending order:
   o Sorted Jobs: Job 1 (Profit: 100, Deadline: 2), Job 4 (Profit: 27, Deadline: 1), Job 3 (Profit: 15, Deadline: 2), Job 2 (Profit: 10, Deadline: 1)
3. Initialize an array to keep track of free time slots (sized by the maximum deadline):
   o Free time slots: [free, free] (since the maximum deadline is 2)
4. Schedule each job:
   o Job 1 (Profit: 100, Deadline: 2): Slot 2 is available, so schedule Job 1 in slot 2.
   o Job 4 (Profit: 27, Deadline: 1): Slot 1 is available, so schedule Job 4 in slot 1.
   o Job 3 (Profit: 15, Deadline: 2): Slots 2 and 1 are both occupied. Cannot schedule Job 3.
   o Job 2 (Profit: 10, Deadline: 1): Slot 1 is occupied. Cannot schedule Job 2.
5.
Result:
   o Scheduled Jobs: Job 1 in slot 2, Job 4 in slot 1.
   o Total Profit = 100 + 27 = 127.

b) Knapsack Problem using Greedy Method
In the Knapsack Problem, the goal is to maximize the total profit without exceeding the weight capacity of the knapsack. The Greedy method uses the profit-to-weight ratio to decide the order in which items are considered.
Given:
 Number of items, n = 4
 Capacity of knapsack, m = 15
 Profits p = [10, 10, 12, 18]
 Weights w = [2, 4, 6, 9]
Steps to Solve:
1. Calculate the profit-to-weight ratio for each item:
   o Item 1: Ratio = 10 / 2 = 5.0
   o Item 2: Ratio = 10 / 4 = 2.5
   o Item 3: Ratio = 12 / 6 = 2.0
   o Item 4: Ratio = 18 / 9 = 2.0
2. Sort items by ratio in descending order:
   o Item 1 (Ratio: 5.0, Weight: 2, Profit: 10)
   o Item 2 (Ratio: 2.5, Weight: 4, Profit: 10)
   o Item 3 (Ratio: 2.0, Weight: 6, Profit: 12)
   o Item 4 (Ratio: 2.0, Weight: 9, Profit: 18)
3. Include items in the knapsack in sorted order:
   o Item 1 (Weight: 2, Profit: 10): Capacity left: 15 − 2 = 13. Include Item 1. Total Profit = 10.
   o Item 2 (Weight: 4, Profit: 10): Capacity left: 13 − 4 = 9. Include Item 2. Total Profit = 10 + 10 = 20.
   o Item 3 (Weight: 6, Profit: 12): Capacity left: 9 − 6 = 3. Include Item 3. Total Profit = 20 + 12 = 32.
   o Item 4 (Weight: 9, Profit: 18): Weight 9 exceeds the remaining capacity of 3, so Item 4 cannot be included whole.
4. Result:
   o Items included: Item 1, Item 2, Item 3
   o Total Profit = 32.
   (Note: if fractional items are allowed, as in the classical fractional knapsack, a 3/9 fraction of Item 4 could also be taken for an additional profit of 18 × 3/9 = 6, giving a total of 38.)

c) Minimum Cost Spanning Tree (MST) and Algorithms
A Minimum Cost Spanning Tree (MST) of a graph is a subgraph that connects all vertices with the minimum total edge weight and without any cycles.
Kruskal’s Algorithm
1. Sort all edges in non-decreasing order of their weight.
2. Initialize an empty MST.
3. Process each edge in sorted order:
   o Add the edge to the MST if it doesn't form a cycle (checked using a Union-Find data structure).
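The Kruskal steps just described can be sketched in Python with a minimal union-find structure. This is an illustrative sketch, not part of the question bank: the function name `kruskal` and the representation of edges as (u, v, weight) tuples are assumptions made here.

```python
def kruskal(vertices, edges):
    """Kruskal's MST: sort edges by weight, then add each edge that
    joins two different components (the cycle check uses union-find)."""
    parent = {v: v for v in vertices}  # each vertex starts as its own parent

    def find(v):
        # Follow parent links to the component root, compressing the path.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:          # different components, so no cycle is formed
            parent[ru] = rv   # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

# Graph from the worked Kruskal example in this question bank:
edges = [("A", "B", 10), ("A", "C", 6), ("A", "D", 15),
         ("B", "C", 4), ("C", "E", 2), ("D", "E", 2)]
mst, cost = kruskal("ABCDE", edges)
print(mst, cost)  # 4 edges, total cost 14
```

The union-find `find`/union pair plays the role of the cycle check in step 3: an edge is added only when its endpoints lie in different components.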
Example: Graph with edges and weights:
 o (A-B, 4), (A-C, 3), (B-C, 2), (B-D, 6), (C-D, 5)
Steps:
1. Sort edges: (B-C, 2), (A-C, 3), (A-B, 4), (C-D, 5), (B-D, 6)
2. Initialize MST: {}
3. Add (B-C, 2) → MST: {(B-C, 2)}
4. Add (A-C, 3) → MST: {(B-C, 2), (A-C, 3)}
5. Skip (A-B, 4): A and B are already connected through C, so this edge would form a cycle.
6. Add (C-D, 5) → MST: {(B-C, 2), (A-C, 3), (C-D, 5)}
7. Skip (B-D, 6) to avoid a cycle.
MST Edges: {(B-C, 2), (A-C, 3), (C-D, 5)}. Total weight = 2 + 3 + 5 = 10.

Prim’s Algorithm
1. Initialize the MST with a single vertex.
2. Grow the MST by adding the smallest edge connecting the MST to a vertex not yet in the MST.
3. Repeat until all vertices are included in the MST.
Example: Graph with edges and weights:
 o (A-B, 4), (A-C, 3), (B-C, 2), (B-D, 6), (C-D, 5)
Steps:
1. Start with vertex A.
2. Choose the smallest edge leaving {A}: (A-C, 3). MST vertices: {A, C}.
3. Choose the smallest edge connecting the MST to a new vertex: (B-C, 2). MST vertices: {A, B, C}.
4. Choose the smallest edge to the remaining vertex D: (C-D, 5), cheaper than (B-D, 6). MST vertices: {A, B, C, D}.
MST Edges: {(A-C, 3), (B-C, 2), (C-D, 5)}. Total weight = 2 + 3 + 5 = 10.

In both Kruskal’s and Prim’s algorithms, the resulting MST has the same edges and total weight (10) for the given example. The choice of algorithm often depends on the graph’s structure and edge density.

Q. No Question (s) Marks BL CO
MODULE – V
1 a) Define LC Branch and Bound. 1M L1 C311.5
 b) Define FIFO BB. 1M L1 C311.5
 c) Define NP-Complete. 1M L2 C311.6
 d) Define NP-Hard. 1M L2 C311.5
 e) Define Knapsack Problem. 1M L2 C311.5
2 a) Describe different search techniques followed in Branch and Bound method. 3M L4 C311.5
 b) Describe about LC Branch and Bound. 3M L4 C311.5
 c) Write the applications of Branch and Bound technique? 3M L3 C311.5
 d) Write the problem statement for Knapsack problem using branch & bound? 3M L1 C311.6
 e) Write the problem statement for Travelling sales person problem using branch & bound? 3M L1 C311.5
3 a) Explain in detail about Non-Deterministic Algorithms.
5M L4 C311.5
 b) Consider the following cost adjacency matrix and solve the travelling sales person problem using LCBB. 5M L3 C311.5
 c) Define Non-Deterministic Algorithm? Write a non-deterministic algorithm for searching an element in an array? 5M L2 C311.5
 d) Differentiate between NP-Complete and NP-Hard. 5M L5 C311.5
 e) Explain in detail about NP-Hard and NP-Complete. 5M L4 C311.6
4 a) Draw the portion of the state space tree generated by LCBB for the 0/1 Knapsack instance: n=4, (p1,p2,p3,p4)=(10,10,12,18), (w1,w2,w3,w4)=(2,4,6,9) and m=15, and also find an optimal solution of the same. 10M L2 C311.5
 b) Explain in detail about different Applications of Branch and Bound. 10M L4 C311.5
 c) Explain in detail about Cook’s Theorem. 10M L4 C311.6

1. a) LC Branch and Bound
LC (Least Cost) Branch and Bound: a variant of the Branch and Bound algorithm in which the live node with the least estimated cost, given by a ranking (cost) function, is selected for expansion next. The cost estimates guide the search toward promising branches and allow branches whose bound cannot improve on the best solution found so far to be pruned.
b) FIFO BB
FIFO (First-In, First-Out) Branch and Bound: a strategy where nodes or subproblems are explored in the order they are generated, using a queue to ensure the earliest generated nodes are processed first.
c) NP-Complete
NP-Complete: the class of problems that are both in NP (verifiable in polynomial time) and NP-Hard (every problem in NP can be reduced to them in polynomial time).
d) NP-Hard
NP-Hard: the class of problems that are at least as hard as the hardest problems in NP. They need not be in NP themselves, meaning their solutions may not be verifiable in polynomial time.
e) Knapsack Problem
Knapsack Problem: an optimization problem in which, given a set of items with weights and values and a knapsack with a weight capacity, the objective is to select items that maximize the total value without exceeding the weight capacity.
2.
a) Different Search Techniques in Branch and Bound
Branch and Bound uses various search techniques to explore the solution space of optimization problems. The main techniques include:
1. Breadth-First Search (BFS):
   o Explores all nodes at the present depth level before moving on to nodes at the next depth level, using a queue of live nodes (FIFO Branch and Bound).
   o Suitable for exploring solutions in a level-by-level manner.
2. Depth-First Search (DFS):
   o Explores as far as possible along a branch before backtracking to explore other branches, using a stack of live nodes (LIFO Branch and Bound).
   o Often used to reach a feasible solution quickly, exploring one path completely before considering other options.
3. Best-First Search:
   o Uses a priority queue to expand the live node with the best value of a heuristic or evaluation function, such as the cost or potential bound of the solution.
   o A common instance is the A* algorithm, which combines the actual cost so far with an estimated cost to go to guide the search.
4. Dynamic Search:
   o Combines dynamic programming techniques with the search, applying memoization or tabulation to avoid re-solving repeated subproblems.
b) LC Branch and Bound
LC (Least Cost) Branch and Bound is a specialized approach within the Branch and Bound framework in which the live node with the smallest estimated cost is always selected for expansion next. This technique is used i