Analysis of Algorithm Chapter Two PDF

Summary

This document provides an outline and analysis of divide-and-conquer algorithms, covering binary search, merge sort, quick sort, and selection sort. It also introduces the concept and control abstraction of divide-and-conquer algorithms.

Full Transcript


Analysis of Algorithm, Chapter Two: Divide-and-Conquer. By Tesfahun N.

Outline
• Divide-and-conquer
• Binary search
• Finding the maximum and minimum
• Merge sort
• Quick sort
• Selection sort

What is divide-and-conquer?
• Divide and conquer is a design strategy well known for breaking down efficiency barriers.
• When the method applies, it often leads to a large improvement in time complexity; for example, from O(n^2) to O(n log n) to sort the elements.
• The divide-and-conquer strategy divides the problem instance into two or more smaller instances of the same problem.
• The recursion stops when an instance is reached which is too small to divide.

A divide-and-conquer algorithm consists of two parts:
• Divide: divide the problem into a number of subproblems.
• Conquer: the solution to the original problem is then formed from the solutions to the subproblems (patching together the answers).

Control Abstraction of Divide and Conquer
• A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified by other procedures whose precise meanings are left undefined.
• The control abstraction for the divide-and-conquer technique is DANDC(P), where P is the problem to be solved.

DANDC(P)
{
    if SMALL(P) then
        return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DANDC to each of these subproblems;
        return COMBINE(DANDC(P1), DANDC(P2), ..., DANDC(Pk));
    }
}

• SMALL(P) is a Boolean-valued function which determines whether the input size is small enough that the answer can be computed without splitting.
• If so, the function S is invoked; otherwise, the problem P is divided into smaller subproblems.
• These subproblems P1, P2, ..., Pk are solved by recursive application of DANDC.
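The DANDC abstraction above can be sketched in Python, with the undefined procedures passed in as ordinary functions. The helper names (small, s, divide, combine) are illustrative stand-ins for SMALL, S, the divide step, and COMBINE; they are not from the source.

```python
def dandc(p, small, s, divide, combine):
    """Generic divide-and-conquer driver mirroring DANDC(P)."""
    if small(p):
        return s(p)                      # answer computed without splitting
    subproblems = divide(p)              # p1, p2, ..., pk
    solutions = [dandc(q, small, s, divide, combine) for q in subproblems]
    return combine(solutions)            # patch the answers together

# Example instantiation (an assumption for illustration): summing a list
# by repeatedly splitting it in half.
total = dandc(
    [3, 1, 4, 1, 5, 9, 2, 6],
    small=lambda p: len(p) <= 1,
    s=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
```

Any concrete divide-and-conquer algorithm in this chapter (binary search, merge sort, quick sort) is an instance of this scheme with particular choices of the four procedures.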
cont.…  If the sizes of the two sub problems are approximately equal then the computing time of DANDC is: Where,  T (n) is the time for DANDC on ‘n’ inputs  g (n) is the time to complete the answer directly for small inputs and  f (n) is the time for Divide and Combine Binary Search  If we have ‘n’ records which have been ordered by keys so that x1 < x2 < … < xn. When we are given a element ‘x’, binary search is used to find the corresponding element from the list.  In case ‘x’ is present, we have to determine a value ‘j’ such that a[j] = x (successful search).  If ‘x’ is not in the list then j is to set to zero (un successful search). cont.…  In Binary search we jump into the middle of the file, where we find key a[mid], and compare ‘x’ with a[mid].  If x = a[mid] then the desired record has been found.  If x < a[mid] then ‘x’ must be in that portion of the file that precedes a[mid], if there at all.  Similarly, if a[mid] > x, then further search is only necessary in that past of the file which follows a[mid]. cont.…  If we use recursive procedure of finding the middle key a[mid] of the un- searched portion of a file, then every un-successful comparison of ‘x’ with a[mid] will eliminate roughly half the un-searched portion from consideration. 
• Since the array size is approximately halved after each comparison between x and a[mid], and since an array of length n can be halved only about log2 n times before reaching a trivial length, the worst-case complexity of binary search is about log2 n.

Binary Search Algorithm

BINSRCH(a, n, x)
// array a(1 : n) of elements in increasing order, n >= 0;
// determine whether x is present, and if so, set j such that x = a(j);
// else return 0
{
    low := 1; high := n;
    while (low <= high) do
    {
        mid := floor((low + high)/2);
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}

• low and high are integer variables such that each time through the loop either x is found, or low is increased by at least one, or high is decreased by at least one.
• Thus we have two sequences of integers approaching each other, and eventually low will become greater than high, causing termination in a finite number of steps if x is not present.

Example for Binary Search
• Let us illustrate binary search on the following 9 elements: -15, -6, 0, 7, 9, 23, 54, 82, 101 (the array used in the tree construction below).
• The per-element comparison counts are shown in figures omitted from this transcript.
• No element requires more than 4 comparisons to be found. Summing the comparisons needed to find all nine items and dividing by 9 yields 25/9, or approximately 2.77 comparisons per successful search on average.
• There are ten possible ways that an unsuccessful search may terminate, depending upon the value of x.

How do we calculate the average number of comparisons for successful and unsuccessful searches?
• First, convert the array into a binary tree, using the index formulas:
• Left child: 2i, where i is the index of the parent
• Right child: 2i + 1, where i is the index of the parent
• Parent: p = floor(i/2)
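The BINSRCH pseudocode can be written as runnable Python. This version uses 0-based indexing instead of the 1-based a(1:n) of the pseudocode, and returns -1 (standing in for j = 0) on an unsuccessful search.

```python
def binsrch(a, x):
    """Iterative binary search over the sorted list a."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2          # middle of the un-searched portion
        if x < a[mid]:
            high = mid - 1               # x precedes a[mid], if present
        elif x > a[mid]:
            low = mid + 1                # x follows a[mid], if present
        else:
            return mid                   # successful search
    return -1                            # unsuccessful search

# The 9-element example from the text:
a = [-15, -6, 0, 7, 9, 23, 54, 82, 101]
```

Each iteration either finds x or shrinks the low..high range, which is exactly the termination argument given above.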
• Take the node at index i = 1, which is -15, as the root.
• The left child of -15 is at index 2*1 = 2, which is -6.
• The right child of -15 is at index 2*1 + 1 = 3, which is 0.
• The left child of -6 is at index 2*2 = 4, which is 7.
• The right child of -6 is at index 2*2 + 1 = 5, which is 9.
• The left child of 0 is at index 2*3 = 6, which is 23.
• The right child of 0 is at index 2*3 + 1 = 7, which is 54.
• The left child of 7 is at index 2*4 = 8, which is 82.
• The right child of 7 is at index 2*4 + 1 = 9, which is 101.

To calculate the average for a successful search: (sum of depths + n) / n, since a node at depth d costs d + 1 comparisons.

Step 1: Calculate the Depth of Each Node
• Depth is defined as the number of edges from the root node to the node in question.
• Depth of node -15 (root): 0
• Depth of node -6: 1
• Depth of node 0: 1
• Depth of node 7: 2
• Depth of node 9: 2
• Depth of node 23: 2
• Depth of node 54: 2
• Depth of node 82: 3
• Depth of node 101: 3

Step 3: Calculate the Average Number of Comparisons for Successful Searches
1. Sum the depths of all nodes: 0 + 1 + 1 + 2 + 2 + 2 + 2 + 3 + 3 = 16
2. Count the number of nodes: n = 9
3. Average number of comparisons for successful searches: ACFSS = (16 + 9)/9 = 25/9, approximately 2.77

To calculate the average comparisons for an unsuccessful search:

Step 4: Identify Null Pointers for Unsuccessful Searches
Null pointers are potential child positions where a search would end unsuccessfully, meaning the algorithm reached a point where the target value is not found in the tree.
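Steps 1-3 above can be checked with a short sketch: in the heap-indexed tree, the node stored at 1-based index i sits at depth floor(log2 i), and a successful search for it costs depth + 1 comparisons.

```python
# Heap-ordered array a[1..9] from the text (index 1 is the root).
values = [-15, -6, 0, 7, 9, 23, 54, 82, 101]

# Depth of the node at 1-based index i is floor(log2 i) = i.bit_length() - 1.
depths = [i.bit_length() - 1 for i in range(1, len(values) + 1)]

n = len(values)
acfss = (sum(depths) + n) / n            # (16 + 9) / 9 = 25/9
```

The +n term accounts for the final equality comparison at each node, which is why the average is (sum of depths + n)/n rather than just (sum of depths)/n.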
For the example tree:
• Node 9: 2 null pointers (no left or right child)
• Node 23: 2 null pointers (no left or right child)
• Node 54: 2 null pointers (no left or right child)
• Node 82: 2 null pointers (no left or right child)
• Node 101: 2 null pointers (no left or right child)

Step 5: Calculate the Average Number of Comparisons for Unsuccessful Searches
1. Calculate the total depth of all null pointers: the null pointers of nodes 82 and 101 are at depth 4, and those of nodes 9, 23 and 54 are at depth 3.
2. Total depth of null pointers = 4*4 + 3*6 = 34, where 4 and 6 are the numbers of null pointers at depths 4 and 3, respectively.
3. Count the total number of null pointers: m = 10.
4. Average comparisons for unsuccessful searches: ACFUS = 34/10 = 3.4.

• The time complexity for a successful search is O(log n) and for an unsuccessful search is Θ(log n).
• Binary search: when the key is not found in a sorted array, the search still takes O(log n) time, since we still go through the same process of halving the search space.

Analysis for the worst case
• Let T(n) be the time complexity of binary search.
• The algorithm sets mid to floor((n + 1)/2).
• Therefore,
    T(0) = 0
    T(n) = 1                                if x = a[mid]
         = 1 + T(floor((n + 1)/2) - 1)      if x < a[mid]
         = 1 + T(n - floor((n + 1)/2))      if x > a[mid]

Part Two: External and Internal Path Length
• The lines connecting nodes to their non-empty subtrees are called edges.
• A non-empty binary tree with n nodes has n - 1 edges.
• The size of the tree is the number of nodes it contains.
• If the empty subtrees are drawn as square nodes, the square nodes are called external nodes, E(T).
• The square-node version is sometimes called an extended binary tree.
• The round nodes are called internal nodes, I(T).
• A binary tree with n internal nodes has n + 1 external nodes.
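The null-pointer counting in Steps 4-5 can be checked in Python. In the heap-indexed tree of 9 nodes (indices 1..9), every child index greater than 9 is a null pointer, and its depth is floor(log2 of its index); note the 10 null pointers are exactly the n + 1 = 10 external nodes of a tree with 9 internal nodes.

```python
NODES = 9

def depth(i):
    """Depth of 1-based heap index i: floor(log2 i)."""
    return i.bit_length() - 1

# Collect the depths of all null (absent-child) positions.
null_depths = [depth(child)
               for i in range(1, NODES + 1)
               for child in (2 * i, 2 * i + 1)
               if child > NODES]

m = len(null_depths)                     # 10 null pointers
acfus = sum(null_depths) / m             # 34 / 10
```

The comprehension yields 6 null pointers at depth 3 (under nodes 9, 23, 54) and 4 at depth 4 (under nodes 82, 101), matching the 4*4 + 3*6 = 34 tally in the text.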
• The height h(x) of node x is the number of edges on the longest path leading down from x in the extended tree.
• The depth of a node i, denoted depth(i), is the number of edges from the root to that node.
• The root node has a depth of 0, and as you move down the tree, the depth increases by 1 for each level.
• For example, consider a tree with the depths written inside its nodes (figure omitted): one node at depth 0, two nodes at depth 1, and one node at depth 2.
• The internal path length is IPL = the sum of depth(i) over all nodes, where n is the number of nodes in the tree and depth(i) is the depth of the i-th node.
• For the above binary tree: IPL = 0 + 1 + 1 + 2 = 4, or equivalently, summing (number of nodes at a level) * level: IPL = 1*0 + 2*1 + 1*2 = 4.
• The external path length (EPL) is calculated the same way over the external (square) nodes.
• For the above binary tree there are 3 external nodes at depth 2 and 2 external nodes at depth 3, so EPL = 3*2 + 2*3 = 12, again the sum of (number of dummy/external nodes at a level) * level.
• Internal node: a node with at least one child. These nodes are not the endpoints of the tree; they branch out to other nodes.
• External node (leaf node): a node that does not have any children. These nodes are the endpoints of the tree.

Example: a binary tree corresponding to binary search when n = 16 (figure omitted).
Example two: finding 12 in that tree (figure omitted).

Merge Sort
• It is a classic example of divide and conquer.
• To sort an array, recursively sort its left and right halves separately and then merge them.
• The time complexity of merge sort in the best case, worst case and average case is O(n log n).
• This strategy is simple and efficient, but the problem is that there seems to be no easy way to merge two adjacent sorted arrays together in place.
• The fundamental operation in this algorithm is merging two sorted lists.
• Because the lists are sorted, this can be done in one pass through the input, if the output is put in a third list.
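The IPL/EPL computation above can be sketched for the 4-node example. The tree shape and the node labels A, B, C, D are an assumption reconstructed from the stated depths (one node at depth 0, two at depth 1, one at depth 2); None marks an empty slot, i.e. an external (square) node one level below its parent.

```python
# Internal nodes mapped to their (left, right) children; None = empty subtree.
tree = {"A": ("B", "C"),   # root, depth 0
        "B": ("D", None),  # depth 1
        "C": (None, None), # depth 1
        "D": (None, None)} # depth 2

def path_lengths(tree, node="A", depth=0):
    """Return (internal path length, external path length) of the subtree."""
    ipl, epl = depth, 0
    for child in tree[node]:
        if child is None:
            epl += depth + 1             # external node sits one level below
        else:
            ci, ce = path_lengths(tree, child, depth + 1)
            ipl, epl = ipl + ci, epl + ce
    return ipl, epl

ipl, epl = path_lengths(tree)
```

This reproduces IPL = 4 and EPL = 12 from the text; it also illustrates the known identity EPL = IPL + 2n for a tree with n internal nodes (here 4 + 2*4 = 12).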
• The basic merging algorithm takes two input arrays a and b, an output array c, and three counters, aptr, bptr and cptr, which are initially set to the beginning of their respective arrays.
• The smaller of a[aptr] and b[bptr] is copied to the next entry in c, and the appropriate counters are advanced.
• When either input list is exhausted, the remainder of the other list is copied to c.

• To illustrate how the merge process works, let us consider the array a containing 1, 13, 24, 26 and b containing 2, 15, 27, 38.
• First, a comparison is done between 1 and 2. 1 is copied to c, and aptr and cptr are incremented. The process repeats in the same fashion until both lists are exhausted, yielding c = 1, 2, 13, 15, 24, 26, 27, 38.

Algorithm MERGESORT(low, high)
// a(low : high) is a global array to be sorted
{
    if (low < high)
    {
        mid := floor((low + high)/2);   // finds where to split the set
        MERGESORT(low, mid);            // sort one subset
        MERGESORT(mid + 1, high);       // sort the other subset
        MERGE(low, mid, high);          // combine the results
    }
}

• Example: let us select the following 8 entries, 7, 2, 9, 4, 3, 8, 6, 1, to illustrate the merge sort algorithm.

Tree of Calls of MERGESORT(1, 8)
• The sequence of recursive calls produced by MERGESORT when it is applied to 8 elements forms a binary tree (figures omitted from this transcript).
• The values in each node are the values of the parameters low and high.

Analysis of Merge Sort
• We will assume that n is a power of 2, so that we always split into even halves; that is, we solve for the case n = 2^k.
• For n = 1, the time to merge sort is constant, which we will denote by 1.
• Otherwise, the time to merge sort n numbers is equal to the time to do two recursive merge sorts of size n/2, plus the time to merge, which is linear.
• The recurrence says this exactly:
    T(1) = 1
    T(n) = 2 T(n/2) + n
• This is a standard recurrence relation, which can be solved several ways.
• We will solve it by continually substituting the recurrence relation on the right-hand side.
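The merge and merge sort described above can be sketched in runnable Python. Unlike the pseudocode, which sorts a global array in place via index ranges, this version returns a new sorted list (the "third list" used for merging).

```python
def merge(a, b):
    """Merge two sorted lists in one pass, as with aptr/bptr/cptr."""
    c, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            c.append(a[i]); i += 1       # copy the smaller, advance aptr
        else:
            c.append(b[j]); j += 1       # copy the smaller, advance bptr
    c.extend(a[i:])                      # one list is exhausted:
    c.extend(b[j:])                      # copy the remainder of the other
    return c

def merge_sort(x):
    if len(x) <= 1:                      # too small to divide
        return x
    mid = len(x) // 2
    return merge(merge_sort(x[:mid]), merge_sort(x[mid:]))

# The merge example and the 8-entry example from the text:
merged = merge([1, 13, 24, 26], [2, 15, 27, 38])
sorted8 = merge_sort([7, 2, 9, 4, 3, 8, 6, 1])
```

Allocating a fresh output list at every merge is exactly the linear extra memory that the analysis below complains about for main-memory sorts.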
We have:
    T(n) = 2 T(n/2) + n
         = 2 [2 T(n/4) + n/2] + n = 4 T(n/4) + 2n
         = 4 [2 T(n/8) + n/4] + 2n = 8 T(n/8) + 3n
         ...
         = 2^k T(n/2^k) + k n
Using n = 2^k, so that k = log2 n and T(n/2^k) = T(1) = 1:
    T(n) = n + n log2 n = O(n log n)

• We have assumed that n = 2^k. The analysis can be refined to handle cases when n is not a power of 2; the answer turns out to be almost identical.
• Although merge sort's running time is O(n log n), it is hardly ever used for main-memory sorts.
• The main problem is that merging two sorted lists requires linear extra memory, and the additional work spent copying to the temporary array and back, throughout the algorithm, slows the sort down considerably.
• The best- and worst-case time complexity of merge sort is O(n log n).

Quick Sort
• The main reason for the slowness of algorithms like SIS is that all comparisons and exchanges between keys in a sequence w1, w2, ..., wn take place between adjacent pairs.
• In this way it takes a relatively long time for a key that is badly out of place to work its way into its proper position in the sorted sequence.

• In essence, the quick sort algorithm partitions the original array by rearranging it into two groups.
• The first group contains those elements less than some arbitrarily chosen value taken from the set, and the second group contains those elements greater than or equal to the chosen value.
• The chosen value is known as the pivot element. Once the array has been rearranged in this way with respect to the pivot, the very same partitioning is recursively applied to each of the two subsets.
• When all the subsets have been partitioned and rearranged, the original array is sorted.

• The function partition() makes use of two pointers i and j, which are moved toward each other in the following fashion:
1. Repeatedly increase the pointer i until a[i] >= pivot.
2. Repeatedly decrease the pointer j until a[j] <= pivot.
3. If j > i, interchange a[j] with a[i].
4. Repeat steps 1, 2 and 3 until the i pointer crosses the j pointer. When the i pointer crosses the j pointer, the position for the pivot has been found: place the pivot element in the j pointer's position.
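A minimal sketch of the partition() scheme above, assuming the first element is taken as the pivot (as the quicksort discussion that follows does). The exact pointer bookkeeping is one common variant of this two-pointer partition, not necessarily the source's verbatim code.

```python
def partition(a, low, high):
    """Partition a[low..high] around pivot a[low]; return the pivot's spot."""
    pivot = a[low]
    i, j = low + 1, high
    while True:
        while i <= high and a[i] < pivot:
            i += 1                       # step 1: find an element >= pivot
        while a[j] > pivot:
            j -= 1                       # step 2: find an element <= pivot
        if i >= j:
            break                        # pointers have crossed
        a[i], a[j] = a[j], a[i]          # step 3: interchange a[i] and a[j]
        i += 1; j -= 1
    a[low], a[j] = a[j], a[low]          # place the pivot at position j
    return j

b = [5, 3, 8, 1, 9, 2]
p = partition(b, 0, len(b) - 1)
```

After the call, everything left of position p is <= the pivot and everything right of it is >= the pivot, which is the invariant quicksort relies on.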
cont…  The program uses a recursive function quicksort(). The algorithm of quick sort function sorts all elements in an array ‘a’ between positions ‘low’ and ‘high’.  It terminates when the condition low >= high is satisfied. This condition will be satisfied only when the array is completely sorted.  Here we choose the first element as the ‘pivot’. So, pivot = x[low]. Now it calls the partition function to find the proper position j of the element x[low] i.e. pivot. Then we will have two sub-arrays x[low], x[low+1],....... x[j-1] and x[j+1], x[j+2], x[high]. cont.…  It calls itself recursively to sort the left sub-array x[low], x[low+1],....... x[j-1] between positions low and j-1 (where j is returned by the partition function).  It calls itself recursively to sort the right sub-array x[j+1], x[j+2],......... x[high] between positions j+1 and high. Algorithm QUICKSORT(low, high) { if low < high then { j := PARTITION(a, low, high+1); // J is the position of the partitioning element QUICKSORT(low, j – 1); QUICKSORT(j + 1 , high); }}  Example  Select first element as the pivot element. Move ‘i’ pointer from left to right in search of an element larger than pivot. Move the ‘j’ pointer from right to left in search of an element smaller than pivot.  If such elements are found, the elements are swapped.  This process continues till the ‘i’ pointer crosses the ‘j’ pointer.  If ‘i’ pointer crosses ‘j’ pointer, the position for pivot is found and interchange pivot and element at ‘j’ position. cont…..  cont.…  cont…..  cont…..  cont…  cont…..  cont…..  cont…..  cont…..  cont…..  cont…..  cont…..  cont…..  cont…..  cont….. cont…..  cont….. cont.. Cont… cont…..  selection sort What is Selection Sort? Definition: A simple comparison-based sorting algorithm. Concept: Repeatedly select the smallest (or largest) element from the unsorted portion and move it to the sorted portion. cont.…. Selection Sort Algorithm Steps 1. 
Find the Minimum Element in the unsorted portion of the array.
2. Swap it with the first element of the unsorted portion.
3. Move the boundary of the sorted portion one position to the right.
4. Repeat until the entire array is sorted.

Detailed Example
• Example array: [64, 25, 12, 22, 11]
• Step-by-step execution:
• Find minimum (11): swap with 64 → [11, 25, 12, 22, 64]
• Find minimum (12): swap with 25 → [11, 12, 25, 22, 64]
• Find minimum (22): swap with 25 → [11, 12, 22, 25, 64]
• Final state: the array is sorted → [11, 12, 22, 25, 64]
• (The second and third iterations are shown in figures omitted from this transcript; the process continues until the last iteration.)

Complexity Summary
• Binary search: used for searching in a sorted array, not for sorting; its time complexities apply to searching operations.
• Merge sort: a stable sort with consistent O(n log n) performance.
• Quick sort: average-case performance is O(n log n), but it can degrade to O(n^2) if the pivot choices are poor. It is often faster in practice due to better cache performance and lower constant factors.
• Selection sort: simple but inefficient, with quadratic time complexity in all cases.

The End. Thanks!
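The four selection sort steps above can be sketched as runnable Python: repeatedly find the minimum of the unsorted portion and swap it to the boundary.

```python
def selection_sort(a):
    """In-place selection sort; returns a for convenience."""
    n = len(a)
    for boundary in range(n - 1):
        m = boundary
        for i in range(boundary + 1, n):
            if a[i] < a[m]:
                m = i                    # index of the minimum so far
        # swap the minimum into the sorted portion, then the boundary moves
        a[boundary], a[m] = a[m], a[boundary]
    return a

# The example array from the text:
result = selection_sort([64, 25, 12, 22, 11])
```

The doubly nested scan is what gives the quadratic time complexity in all cases noted in the summary: the comparisons performed are (n-1) + (n-2) + ... + 1, regardless of the input order.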
