DAA Endsem 2023 FlyHigh Services PDF

Summary

This document covers greedy strategies, the knapsack problem, job scheduling and activity selection, dynamic programming, backtracking (N-Queens, graph coloring, sum of subsets), and branch and bound, with examples and pseudocode.

Full Transcript


Greedy Strategy

Principle:
- Local Optimization: Greedy algorithms make decisions by selecting the most favorable option at each step without considering the overall problem.
  - Example: In the "coin change" problem, choosing the largest denomination coin available first.
- No Reconsideration: Once a decision is made, it remains final; there is no reconsideration or backtracking.
  - Example: In the "activity selection" problem, after picking an activity, it is scheduled and the algorithm moves on to the next available activity.

Control Abstraction:
- High-Level Perspective: Greedy algorithms abstract the problem-solving approach by focusing on the decision-making flow rather than intricate details.
  - Example: In Huffman coding, the least frequent characters are merged first without considering future implications.
- Identify Problem & Criteria: Understand the problem requirements and establish the rule for making locally optimal choices.
  - Example: In Dijkstra's algorithm for shortest paths, the next node is selected based on the shortest known path length.
- Systematic Procedure: Implement a step-by-step decision-making procedure without backtracking, ensuring each step contributes to the solution.
  - Example: Prim's algorithm for minimum spanning trees repeatedly selects the minimum-weight edge that connects to an already selected vertex.

Time Analysis of Control Abstraction:
- Efficient Time Complexity: Greedy algorithms often exhibit efficient complexities, such as linear or polynomial time, making them suitable for large datasets.
  - Example: Kruskal's algorithm for minimum spanning trees generally runs in O(E log V) time.
- Simple Structure: Greedy algorithms typically involve straightforward operations or a single pass through the elements, leading to efficient time complexities.
  - Example: The Fractional Knapsack problem selects items by maximum value-to-weight ratio in a single pass (after sorting).
- Sum of Individual Steps' Time Complexities: The overall time complexity is the sum of the complexities of the individual steps, giving an aggregate computational cost.
  - Example: In Huffman coding, building the tree sums the costs of the individual character merges.

Knapsack Problem

Explanation: The Knapsack problem is a classic optimization problem where the goal is to maximize the value of items placed into a knapsack (container) with a limited weight capacity. There are two main variations:
1. 0/1 Knapsack Problem: Items cannot be broken into fractions; an item is either taken entirely or not at all.
2. Fractional Knapsack Problem: Items can be divided, allowing fractions of items to be taken.

Fractional Knapsack Problem: Example: Suppose you have a knapsack with a weight capacity of 15 units and the following items:

| Item | Weight | Value |
|------|--------|-------|
| A    | 10     | 60    |
| B    | 20     | 100   |
| C    | 30     | 120   |

Pseudocode for Fractional Knapsack Problem:

Function fractionalKnapsack(items[], capacity)
    Sort items by value per weight in descending order
    totalValue = 0
    for each item in items
        if capacity >= item.weight
            take item entirely
            totalValue += item.value
            capacity -= item.weight
        else
            take the fraction of the item that fills the knapsack
            fraction = capacity / item.weight
            totalValue += fraction * item.value
            capacity = 0
            break
    return totalValue

Step-by-Step Explanation:
1. Sorting: Sort items in descending order of their value-to-weight ratio.
2. Iteration: Traverse each item in the sorted order.
3. Taking Items: If the knapsack's remaining capacity allows, take the entire item; otherwise take the fraction that fits, maximizing the total value.
4. Updating Capacity: Reduce the knapsack's capacity after taking each item.
5. Calculating Total Value: Keep a running total of the value of the items selected.
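The fractional knapsack pseudocode above can be sketched in Python; the `(value, weight)` tuple representation is an assumption, and the example uses the table's items A, B, C with capacity 15:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    # Greedy step: sort by value-to-weight ratio, best ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity >= weight:
            total += value              # take the whole item
            capacity -= weight
        else:
            total += value * (capacity / weight)  # take the fraction that fits
            break
    return total

# Example from the table: A(60, 10), B(100, 20), C(120, 30), capacity 15
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 15))  # → 85.0
```

With capacity 15, item A (ratio 6) is taken whole and a quarter of item B (ratio 5) fills the rest, giving 60 + 25 = 85.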
Job Scheduling Problem

Explanation: The Job Scheduling problem involves scheduling a set of jobs with given start and finish times while maximizing the number of jobs completed.

Example: Consider the following jobs with their start and finish times:

| Job | Start Time | Finish Time |
|-----|------------|-------------|
| A   | 1          | 3           |
| B   | 2          | 5           |
| C   | 4          | 6           |
| D   | 6          | 7           |

Pseudocode for Job Scheduling:

Function jobScheduling(jobs[])
    Sort jobs by finish times in ascending order
    selectedJobs = []
    lastFinishTime = -infinity
    for each job in jobs
        if job.start >= lastFinishTime
            select job
            lastFinishTime = job.finish
            add job to selectedJobs
    return selectedJobs

Step-by-Step Explanation:
1. Sorting: Sort jobs by finish time in ascending order.
2. Iteration: Traverse each job in the sorted order.
3. Selecting Jobs: If the job's start time is greater than or equal to the last selected job's finish time, select the job and update the last finish time.
4. Storing Selected Jobs: Store the selected jobs in a list.

Activity Selection Problem

Explanation: The Activity Selection problem involves selecting the maximum number of non-overlapping activities, given their start and finish times.

Example: Consider the following activities with their start and finish times:

| Activity | Start Time | Finish Time |
|----------|------------|-------------|
| A        | 1          | 4           |
| B        | 3          | 5           |
| C        | 0          | 6           |
| D        | 5          | 7           |
| E        | 3          | 8           |
| F        | 5          | 9           |
| G        | 6          | 10          |
| H        | 8          | 11          |

Pseudocode for Activity Selection:

Function activitySelection(activities[])
    Sort activities by finish times in ascending order
    selectedActivities = []
    lastFinishTime = -infinity
    for each activity in activities
        if activity.start >= lastFinishTime
            select activity
            lastFinishTime = activity.finish
            add activity to selectedActivities
    return selectedActivities

Step-by-Step Explanation:
1. Sorting: Sort activities by finish time in ascending order.
2. Iteration: Traverse each activity in the sorted order.
3. Selecting Activities: If the activity's start time is greater than or equal to the last selected activity's finish time, select the activity and update the last finish time.
4. Storing Selected Activities: Store the selected activities in a list.

Dynamic Programming

Principle:
- Optimal Substructure: Break a complex problem into smaller overlapping subproblems, solve each subproblem just once, and store the solutions to avoid redundant computation.
- Memoization: Store the solutions of subproblems in a table or cache so they can be reused when needed.

Control Abstraction:
- Top-Down (Memoization): A recursive approach that breaks the problem into smaller subproblems and stores their solutions in a memoization table to avoid recalculating them.
- Bottom-Up (Tabulation): An iterative approach that solves the smallest subproblems first and uses their solutions to build up to the final solution.

Time Analysis of Control Abstraction:
- Memoization (Top-Down):
  - Time Complexity: Depends on the number of unique subproblems solved.
  - Space Complexity: Uses space for the memoization table, proportional to the number of subproblems.
- Tabulation (Bottom-Up):
  - Time Complexity: Depends on the number of iterations or steps required to solve the problem.
  - Space Complexity: Uses space for tables or arrays storing intermediate results, often proportional to the input size.
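As a minimal sketch of the bottom-up (tabulation) style described above, here is Fibonacci built iteratively from the base cases upward (a standard illustration, not taken from the original notes):

```python
def fibonacci_tab(n):
    """Bottom-up Fibonacci: fill a table from the base cases upward."""
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry reuses the two already-stored subproblem answers.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fibonacci_tab(10))  # → 55
```

Time is O(n) because each of the n subproblems is computed exactly once; space is O(n) for the table (reducible to O(1) by keeping only the last two entries).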
Python Program: Fibonacci Sequence using Memoization (Top-Down):

def fibonacci(n, memo={}):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

N-Queens Problem: The N-Queens problem involves placing queens on a chessboard row by row so that no two queens attack each other, backtracking whenever a placement leads to a conflict.

Pseudo code:

function solveNQueens(board, row):
    if row == board.size:
        // All queens are successfully placed
        return true
    for each column in board:
        if isSafe(board, row, column):
            // Place queen on the board
            board[row][column] = 1
            // Recursively check for the next row
            if solveNQueens(board, row + 1):
                return true
            // If placing the queen leads to a conflict, backtrack
            board[row][column] = 0
    // If no solution is found for this row
    return false

function isSafe(board, row, column):
    // Check that there is no queen earlier in the current row
    for each col before column:
        if board[row][col] == 1:
            return false
    // Check the upper diagonal on the left side
    for i, j from (row, column) down to (0, 0) step (-1, -1):
        if board[i][j] == 1:
            return false
    // Check the lower diagonal on the left side
    for i, j from (row, column) to (board.size - 1, 0) step (+1, -1):
        if board[i][j] == 1:
            return false
    return true

Explanation:
- `solveNQueens()` is a recursive function that tries to place queens on the board row by row, checking for conflicts and backtracking when necessary.
- `isSafe()` checks whether it is safe to place a queen at a given position by examining the current row and the upper and lower diagonals on the left side.

Graph Coloring Problem: The graph coloring problem involves coloring the vertices of a graph so that no two adjacent vertices share the same color, while using the fewest number of colors.
Pseudo code:

function graphColoring(graph, numColors, vertex):
    if vertex == graph.size:
        // All vertices are colored
        return true
    for each color from 1 to numColors:
        if isSafeColor(graph, vertex, color):
            // Assign the color to the vertex
            graph[vertex].color = color
            // Recursively color the next vertex
            if graphColoring(graph, numColors, vertex + 1):
                return true
            // If the coloring doesn't lead to a solution, backtrack
            graph[vertex].color = 0
    // If no solution is found for this vertex
    return false

function isSafeColor(graph, vertex, color):
    for each adjacentVertex in graph[vertex]:
        if adjacentVertex.color == color:
            return false
    return true

Explanation:
- `graphColoring()` is a recursive function that colors the vertices one by one while checking for conflicts.
- `isSafeColor()` checks whether assigning a given color to a vertex is safe by verifying that no adjacent vertex already has that color.

Sum of Subsets Problem: The sum of subsets problem involves finding the subsets of a set whose elements sum to a given target.

Pseudo code:

function findSubsets(nums, targetSum, subset, index):
    if targetSum == 0:
        // Subset with the target sum is found
        print(subset)
        return
    if index >= nums.size or targetSum < 0:
        // No subset possible along this path
        return
    // Include the current element in the subset
    subset.add(nums[index])
    findSubsets(nums, targetSum - nums[index], subset, index + 1)
    subset.removeLast()  // Backtrack
    // Exclude the current element from the subset
    findSubsets(nums, targetSum, subset, index + 1)

Explanation:
- `findSubsets()` is a recursive function that searches for subsets by including or excluding each element and backtracking when necessary.
- At each step it explores two possibilities: include the current element or exclude it, while tracking the remaining target sum.
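The include/exclude recursion for the sum of subsets problem can be sketched in Python; as an assumed variation, this version collects the matching subsets into a list instead of printing them:

```python
def find_subsets(nums, target):
    """Return all subsets of nums whose elements sum to target."""
    results = []

    def backtrack(index, remaining, subset):
        if remaining == 0:
            results.append(subset.copy())   # found a subset with the target sum
            return
        if index >= len(nums) or remaining < 0:
            return                          # dead end: prune this path
        subset.append(nums[index])          # include nums[index]
        backtrack(index + 1, remaining - nums[index], subset)
        subset.pop()                        # backtrack
        backtrack(index + 1, remaining, subset)  # exclude nums[index]

    backtrack(0, target, [])
    return results

print(find_subsets([3, 34, 4, 12, 5, 2], 9))  # → [[3, 4, 2], [4, 5]]
```

The input list here is an illustrative assumption; note how each recursive call branches into exactly the two possibilities the explanation describes.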
Branch and Bound

Branch and Bound is a systematic algorithmic technique for solving optimization problems, particularly combinatorial optimization problems where the goal is to find the best solution among a large set of feasible solutions. It works by systematically searching the solution space while pruning the search tree using bounds or criteria, thereby reducing the search space and improving efficiency.

Principle:
1. Branching: The algorithm begins with an initial feasible solution and progressively explores the solution space by branching into smaller subproblems.
2. Bounding: At each step, the algorithm establishes bounds or criteria to discard (prune) parts of the search space that cannot possibly contain the optimal solution, avoiding unnecessary exploration.

Control Abstraction:
1. Divide and Conquer: The problem space is divided into smaller subproblems or branches, usually represented as a tree structure; each node in the tree represents a subproblem.
2. Bounding: Bounds or criteria are applied to evaluate the nodes. Nodes that cannot lead to an optimal solution (because they are worse than the best solution found so far, or fail other criteria) are pruned and not explored further.
3. Optimal Solution Tracking: Throughout the search, the algorithm keeps track of the best solution found so far, updating it whenever a better solution is encountered.

Time Analysis of Control Abstraction:
- Worst-case Time Complexity: For some problems, especially those with exponential solution spaces, the worst-case time complexity of Branch and Bound remains exponential.
- Bounding Efficiency: The tightness of the bounding function significantly impacts the time complexity; a tight bound that prunes a large portion of the search space yields substantial reductions in computation time.
- Branching Factor: The number of branches created at each level of the search tree also plays a crucial role; a lower branching factor shrinks the tree and decreases the overall time complexity.

FIFO (First-In-First-Out): FIFO follows a queue-based approach in which nodes are explored in the order they were generated. In Branch and Bound, nodes are added to a queue, and the nodes at the front (the oldest) are explored first.
- Usage in Branch and Bound: New nodes are added to the end of the queue and the oldest nodes are explored first. This can be useful for ensuring the search covers broader areas of the solution space before going deeper.
- Control Abstraction: FIFO maintains fairness among nodes generated at the same level of the search tree. It leads to a more systematic exploration but does not necessarily prioritize the most promising nodes first.

LIFO (Last-In-First-Out): LIFO follows a stack-based approach in which nodes are explored in the reverse order of their generation. In Branch and Bound, nodes are pushed onto a stack, and the most recently generated node is explored first.
- Usage in Branch and Bound: LIFO explores the deepest unexplored node first, going as deep as possible along a branch before backtracking. This can be efficient when the most promising solutions lie deep in the search tree.
- Control Abstraction: LIFO prioritizes deeper nodes, which can lead to quick convergence if a good solution lies deep in the tree, but it may overlook better solutions at shallower levels.

LC (Least Cost): LC selects the next node to explore based on a cost or evaluation function associated with the nodes.
In the context of Branch and Bound, this means selecting the node expected to lead to the most promising solution according to some criterion.
- Usage in Branch and Bound: LC selects the node with the least cost or best evaluation-function value, aiming to prioritize nodes likely to lead to the best solution and to prune unproductive nodes efficiently.
- Control Abstraction: LC uses a heuristic or evaluation function to guide the exploration, allowing a more informed choice of which node to explore next. It tends to focus on nodes more likely to lead to an optimal or better solution.

Traveling Salesman Problem (TSP):

function tspBranchAndBound(graph):
    N = number of cities
    infinity = a very large value
    # Initialize variables
    bestTourLength = infinity
    bestTourPath = []

    # Recursive function to explore the search space
    function exploreTSP(currentPath, currentCost):
        if length of currentPath == N:
            # Complete the tour by returning to the starting city
            currentCost += graph[currentPath[N - 1]][currentPath[0]]
            if currentCost < bestTourLength:
                bestTourLength = currentCost
                bestTourPath = currentPath
        else:
            for each city from 0 to N - 1:
                if city not in currentPath:
                    newCost = currentCost + graph[currentPath[-1]][city]
                    # Bound: branch only if this partial tour can still beat the best
                    if newCost < bestTourLength:
                        exploreTSP(currentPath + [city], newCost)

    # Start the exploration from each city as a starting point
    for startCity from 0 to N - 1:
        exploreTSP([startCity], 0)
    return bestTourLength, bestTourPath

Knapsack Problem:

function knapsack(weights, values, capacity):
    N = length of weights
    maxValues = 2D array with dimensions (N + 1) x (capacity + 1)
    for i from 0 to N:
        for w from 0 to capacity:
            if i == 0 or w == 0:
                maxValues[i][w] = 0
            else if weights[i - 1] <= w:
                # Take the better of including or excluding item i
                maxValues[i][w] = max(values[i - 1] + maxValues[i - 1][w - weights[i - 1]],
                                      maxValues[i - 1][w])
            else:
                # Item i does not fit in capacity w
                maxValues[i][w] = maxValues[i - 1][w]
    return maxValues[N][capacity]
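The 0/1 knapsack tabulation can be sketched as runnable Python; the example weights, values, and capacity below are illustrative assumptions:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via bottom-up DP; returns the maximum achievable value."""
    n = len(weights)
    # max_values[i][w] = best value using the first i items with capacity w
    max_values = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i - 1] <= w:
                # Either include item i-1 or skip it, whichever is better.
                max_values[i][w] = max(
                    values[i - 1] + max_values[i - 1][w - weights[i - 1]],
                    max_values[i - 1][w],
                )
            else:
                max_values[i][w] = max_values[i - 1][w]  # item doesn't fit
    return max_values[n][capacity]

# Illustrative example: three items, capacity 50
print(knapsack([10, 20, 30], [60, 100, 120], 50))  # → 220
```

The table has (N + 1) x (capacity + 1) entries and each is filled in constant time, giving O(N * capacity) time and space.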
