Lecture 9: Introduction to Sorting
Summary
This lecture provides an introduction to sorting algorithms, discussing techniques like Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, and Quick Sort. It details the fundamental concepts of each method, offering illustrative examples and analyses.
Full Transcript
Introduction to Sorting

What is Sorting?
Given a set (container) of n elements, e.g. an array, a set of words, etc., the goal is to arrange the elements in ascending order.

Start: 1 23 2 56 9 8 10 100
End:   1 2 8 9 10 23 56 100 (ascending)

What is descending?

Why Sort? Examples
Consider:
- Sorting books in a library
- Sorting students by student ID
- Sorting courses by course name (alphabetical)
- Sorting numbers (sequential)

Types of Sorting Algorithms
There are many, many different types of sorting algorithms, but the primary ones are:
- Bubble Sort
- Selection Sort
- Insertion Sort
- Merge Sort
- Shell Sort
- Heap Sort
- Quick Sort
- Radix Sort
- Swap Sort

Bubble Sort
The simplest sorting algorithm. Idea:
1. Repeat n-1 times (n is the size of the array).
2. Traverse the array and compare pairs of adjacent elements E1 and E2:
   - If E1 <= E2, no change.
   - If E1 > E2, swap E1 and E2.
What happens?

Example of bubble sort (the array after each swap, pass by pass):
1st iteration: 7 2 8 5 4 → 2 7 8 5 4 → 2 7 5 8 4 → 2 7 5 4 8
2nd iteration: 2 7 5 4 8 → 2 5 7 4 8 → 2 5 4 7 8
3rd iteration: 2 5 4 7 8 → 2 4 5 7 8
4th iteration: 2 4 5 7 8 (done)

Code for bubble sort:

public static void bubbleSort(int[] a) {
    for (int i = a.length - 1; i > 0; i--) {   // counting down
        for (int j = 0; j < i; j++) {          // bubbling up
            if (a[j] > a[j + 1]) {             // if out of order...
                int temp = a[j];               // ...then swap
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
}

Analysis of bubble sort:

for (int i = a.length - 1; i > 0; i--) {
    for (int j = 0; j < i; j++) {
        if (a[j] > a[j + 1]) {
            // code for swap omitted
        }
    }
}

Let n = a.length = size of the array.
- The outer loop is executed n-1 times (call it n; that's close enough).
- Each time the outer loop is executed, the inner loop is executed. The inner loop executes n-1 times at first, dropping linearly to just once; on average, it executes about n/2 times for each execution of the outer loop.
- In the inner loop, the comparison is always done (constant time); the swap might be done (also constant time).
The result is n · (n/2) · k for some constant k per step, that is, O(n²).

Selection sort
Given an array of length n:
- Search elements 0 through n-1 and select the smallest; swap it with the element in location 0.
- Search elements 1 through n-1 and select the smallest; swap it with the element in location 1.
- Search elements 2 through n-1 and select the smallest; swap it with the element in location 2.
- Search elements 3 through n-1 and select the smallest; swap it with the element in location 3.
- Continue in this fashion until there's nothing left to search.

Example of selection sort:
7 2 8 5 4 → 2 7 8 5 4 → 2 4 8 5 7 → 2 4 5 8 7 → 2 4 5 7 8

Selection sort might swap an array element with itself; this is harmless, and not worth checking for.

Analysis:
- The outer loop executes n-1 times.
- The inner loop executes about n/2 times on average (from n down to 2 times).
- Work done in the inner loop is constant (swap two array elements).
- Time required is roughly (n-1) · (n/2); you should recognize this as O(n²).

Code for selection sort:

public static void selectionSort(int[] a) {
    for (int outer = 0; outer < a.length - 1; outer++) {  // outer counts up
        int min = outer;
        for (int inner = outer + 1; inner < a.length; inner++) {
            if (a[inner] < a[min]) {
                min = inner;
            }
            // Invariant: for all i, if outer <= i <= inner, then a[min] <= a[i]
        }
        // a[min] is least among a[outer]..a[a.length - 1]
        int temp = a[outer];
        a[outer] = a[min];
        a[min] = temp;
        // Invariant: for all i <= outer, if i < j then a[i] <= a[j]
    }
}
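To see both routines in action, here is a minimal driver, assuming the two methods above are placed in one class (the class name SortDemo is illustrative, not from the lecture), sorting the lecture's example array 7 2 8 5 4:

import java.util.Arrays;

public class SortDemo {
    // bubbleSort and selectionSort as defined above

    public static void main(String[] args) {
        int[] a = {7, 2, 8, 5, 4};
        int[] b = a.clone();                     // sort two copies of the same data
        bubbleSort(a);
        selectionSort(b);
        System.out.println(Arrays.toString(a));  // prints [2, 4, 5, 7, 8]
        System.out.println(Arrays.toString(b));  // prints [2, 4, 5, 7, 8]
    }
}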
Insertion Sort
While some elements remain unsorted:
- Using linear search, find the location in the sorted portion where the first element of the unsorted portion should be inserted.
- Move all the elements after the insertion location up one position to make space for the new element.

(Figure: the fourth iteration of this loop, shown on the array 45 38 60 66 79 47 13 74 36 21 94 22 57 16 29 81.)

Another example of insertion sort:
sorted portion: 3 4 7 12 14 14 20 21 33 38; next to be inserted: 10 (remaining: 55 9 23 28 16)
10 is inserted just before the first sorted element greater than 10:
sorted portion: 3 4 7 10 12 14 14 20 21 33 38; remaining: 55 9 23 28 16

Insert actions on the array 20 8 5 10 7 (temp holds the element being inserted):
i = 1, temp = 8:  20 8 5 10 7  →  8 20 5 10 7
i = 2, temp = 5:  8 20 5 10 7  →  5 8 20 10 7
i = 3, temp = 10: 5 8 20 10 7  →  5 8 10 20 7
i = 4, temp = 7:  5 8 10 20 7  →  5 7 8 10 20

Analysis of insertion sort
- We run once through the outer loop, inserting each of n elements; this is a factor of n.
- On average, there are n/2 elements already sorted, and the inner loop looks at (and moves) half of these; this gives a second factor of n/4.
- Hence, the time required for insertion sort is about n · (n/4), which is O(n²).
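The lecture gives traces but no code for insertion sort; the following is a minimal sketch consistent with the insert actions above (instead of a separate linear search, it shifts larger sorted elements right while scanning for the insertion point):

public static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {    // a[0..i-1] is already sorted
        int temp = a[i];                    // next element to be inserted
        int j = i;
        while (j > 0 && a[j - 1] > temp) {  // shift larger elements up one position
            a[j] = a[j - 1];
            j--;
        }
        a[j] = temp;                        // insert into the gap
    }
}

On the array 20 8 5 10 7 this reproduces the i = 1 through i = 4 insert actions shown above.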
Merge Sort
(Figure: a merge-sort tree sorting 7 2 9 4 into 2 4 7 9.)

Divide-and-Conquer
Divide-and-conquer is a general algorithm design paradigm:
- Divide: divide the input data S into two disjoint subsets S1 and S2
- Recur: solve the subproblems associated with S1 and S2
- Conquer: combine the solutions for S1 and S2 into a solution for S
The base cases for the recursion are subproblems of size 0 or 1.

Merge-Sort
Merge-sort on an input sequence S with n elements consists of three steps:
- Divide: partition S into two sequences S1 and S2 of about n/2 elements each
- Recur: recursively sort S1 and S2
- Conquer: merge S1 and S2 into a unique sorted sequence

Algorithm mergeSort(S, C)
  Input: sequence S with n elements, comparator C
  Output: sequence S sorted according to C
  if S.size() > 1
    (S1, S2) ← partition(S, n/2)
    mergeSort(S1, C)
    mergeSort(S2, C)
    S ← merge(S1, S2)

Merging Two Sorted Sequences
The conquer step of merge-sort consists of merging two sorted sequences A and B into a sorted sequence S containing the union of the elements of A and B.

Merge example: merging the temporary arrays L = 2 3 7 8 and R = 1 4 5 6 into A, where i indexes L, j indexes R, and k indexes A:
k=0 (i=0, j=0): 2 vs 1 → A[0] = 1, advance j
k=1 (i=0, j=1): 2 vs 4 → A[1] = 2, advance i
k=2 (i=1, j=1): 3 vs 4 → A[2] = 3, advance i
k=3 (i=2, j=1): 7 vs 4 → A[3] = 4, advance j
k=4 (i=2, j=2): 7 vs 5 → A[4] = 5, advance j
k=5 (i=2, j=3): 7 vs 6 → A[5] = 6, advance j
k=6 (i=2, j=4): R exhausted → A[6] = 7, advance i
k=7 (i=3, j=4): R exhausted → A[7] = 8, advance i
Result: A = 1 2 3 4 5 6 7 8

Merge-Sort Tree
An execution of merge-sort is depicted by a binary tree:
- each node represents a recursive call of merge-sort and stores the unsorted sequence before the execution and its partition, and the sorted sequence at the end of the execution
- the root is the initial call
- the leaves are calls on subsequences of size 0 or 1

Execution Example
Sorting 7 2 9 4 3 8 6 1 (final result: 1 2 3 4 6 7 8 9); the slides step through the merge-sort tree one call at a time:
- Partition: 7 2 9 4 | 3 8 6 1
- Recursive call, partition: 7 2 | 9 4
- Recursive call, partition: 7 | 2
- Recursive call, base case: 7 → 7
- Recursive call, base case: 2 → 2
- Merge: 7 and 2 → 2 7
- Recursive call, …, base case, merge: 9 4 → 4 9
- Merge: 2 7 and 4 9 → 2 4 7 9
- Recursive call, …, merge, merge: 3 8 6 1 → 1 3 6 8
- Merge: 2 4 7 9 and 1 3 6 8 → 1 2 3 4 6 7 8 9

Analysis of Merge-Sort
The height h of the merge-sort tree is O(log n): at each recursive call we divide the sequence in half. The overall amount of work done at the nodes of depth i is O(n): we partition and merge 2^i sequences of size n/2^i, and we make 2^(i+1) recursive calls.

depth  #seqs  size
0      1      n
1      2      n/2
i      2^i    n/2^i
…      …      …

Thus, the total running time of merge-sort is O(n log n).
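The pseudocode above works on abstract sequences; as a concrete counterpart, here is one possible array-based version in Java (a sketch, not the lecture's own code), with a merge step that follows the i/j/k trace above:

public static void mergeSort(int[] a) {
    if (a.length <= 1) return;                            // base case: size 0 or 1
    int mid = a.length / 2;                               // divide into two halves
    int[] left = java.util.Arrays.copyOfRange(a, 0, mid);
    int[] right = java.util.Arrays.copyOfRange(a, mid, a.length);
    mergeSort(left);                                      // recur on each half
    mergeSort(right);
    merge(a, left, right);                                // conquer: merge back into a
}

private static void merge(int[] a, int[] l, int[] r) {
    int i = 0, j = 0, k = 0;
    while (i < l.length && j < r.length) {                // take the smaller front element
        a[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
    }
    while (i < l.length) a[k++] = l[i++];                 // copy leftovers from L
    while (j < r.length) a[k++] = r[j++];                 // copy leftovers from R
}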
Quick-Sort
Quick-sort is another divide-and-conquer sorting algorithm. A high-level description:
1) Divide: if the sequence S has 2 or more elements, select an element x from S to be the pivot. Any arbitrary element, like the last, will do. Remove all the elements of S and divide them into 3 sequences:
   - L, holding S's elements less than x
   - E, holding S's elements equal to x
   - G, holding S's elements greater than x
2) Recurse: recursively sort L and G.
3) Conquer: to put the elements back into S in order, first insert the elements of L, then those of E, and finally those of G.

Idea of Quick Sort
1) Select: pick an element x (the pivot)
2) Divide: rearrange the elements so that x goes to its final position, with L to its left, E in the middle, and G to its right
3) Recurse and Conquer: recursively sort L and G

Quick-Sort Tree: two key steps
- How to pick a pivot?
- How to partition?

In-place Partition
We want to partition an array A[left .. right]. First, get the pivot element out of the way by swapping it with the last element (swap the pivot and A[right]). Then let i start at the first element and j at the next-to-last element (i = left, j = right - 1).

Example: 5 6 4 [6] 3 12 19, pivot = 6 → after the swap: 5 6 4 19 3 12 [6], with i at 5 and j at 12.

We want to end up with A[x] <= pivot for x < i, and A[x] >= pivot for x > j. While i < j:
- Move i right, skipping over elements smaller than the pivot.
- Move j left, skipping over elements greater than the pivot.
- When both i and j have stopped, A[i] >= pivot and A[j] <= pivot:
    5 [6] 4 19 [3] 12 6   (i at 6, j at 3)
- When i and j have stopped and i is to the left of j, swap A[i] and A[j]:
    5 [6] 4 19 [3] 12 6  →  5 [3] 4 19 [6] 12 6
  After the swap, A[i] <= pivot and A[j] >= pivot: the large element has been pushed to the right and the small element to the left. Repeat the process until i and j cross:
    5 3 4 [19] 6 12 6   (i stops at 19; j moves left past 19 and stops at 4, so i and j have crossed)
- When i and j have crossed, swap A[i] and the pivot:
    5 3 4 [19] 6 12 [6]  →  5 3 4 [6] 6 12 [19]
  Result: A[x] <= pivot for x < i, and A[x] >= pivot for x > i.

Partition: In-Place Quick-Sort (another way)
Divide step: l scans the sequence from the left, and r from the right. A swap is performed when l is at an element larger than the pivot and r is at one smaller than the pivot. A final swap with the pivot completes the divide step.
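A runnable sketch of the in-place scheme just described, choosing the last element as the pivot for simplicity (method and variable names are illustrative, not from the lecture):

public static void quickSort(int[] a, int left, int right) {
    if (left >= right) return;                        // 0 or 1 elements: already sorted
    int pivot = a[right];                             // pivot is already at A[right]
    int i = left, j = right - 1;
    while (i <= j) {
        while (i <= j && a[i] < pivot) i++;           // skip elements smaller than pivot
        while (i <= j && a[j] > pivot) j--;           // skip elements greater than pivot
        if (i <= j) {                                 // both stopped: swap and advance
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
            j--;
        }
    }
    int tmp = a[i]; a[i] = a[right]; a[right] = tmp;  // final swap puts pivot in place
    quickSort(a, left, i - 1);                        // recurse on elements <= pivot
    quickSort(a, i + 1, right);                       // recurse on elements >= pivot
}

A call like quickSort(a, 0, a.length - 1) sorts the whole array; since the last element itself serves as the pivot here, the initial "swap pivot and A[right]" step is a no-op.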
Summary of Sorting Algorithms

Algorithm       Time        Notes
selection-sort  O(n²)       slow; in-place; for small data sets (< 1K)
insertion-sort  O(n²)       slow; in-place; for small data sets (< 1K)
heap-sort       O(n log n)  fast; in-place; for large data sets (1K to 1M)
merge-sort      O(n log n)  fast; sequential data access; for huge data sets (> 1M)
bubble-sort     O(n²)       slow; sequential data access

O Notation

Introduction
Exact counting of operations is often difficult (and tedious), even for simple algorithms. Often, exact counts are not useful anyway due to other factors, e.g. the language/machine used, or the implementation of the algorithm (different types of operations do not take the same time in any case). O-notation is a mathematical language for evaluating the running time (and memory usage) of algorithms.

Growth Rate of an Algorithm
We often want to compare the performance of algorithms. When doing so, we generally want to know how they perform when the problem size n is large. Since cost functions are complex, and may be difficult to compute, we approximate them using O notation.

Example of a Cost Function
Cost function: t_A(n) = n² + 20n + 100. Which term dominates? It depends on the size of n:
- n = 2:    t_A(n) = 4 + 40 + 100; the constant, 100, is the dominating term
- n = 10:   t_A(n) = 100 + 200 + 100; 20n is the dominating term
- n = 100:  t_A(n) = 10,000 + 2,000 + 100; n² is the dominating term
- n = 1000: t_A(n) = 1,000,000 + 20,000 + 100; n² is the dominating term

Big O Notation
O notation approximates the cost function of an algorithm. The approximation is usually good enough, especially when considering the efficiency of an algorithm as n gets very large. It allows us to estimate the rate of function growth: instead of computing the entire cost function, we only need to count the number of times that an algorithm executes its barometer instruction(s), i.e. the instruction that is executed the most times in the algorithm (the highest-order term).

Given functions t_A(n) and g(n), we can say that the efficiency of an algorithm is of order g(n) if there are positive constants c and m such that
  t_A(n) <= c · g(n) for all n >= m.
We write "t_A(n) is O(g(n))" and we say that t_A(n) is of order g(n). E.g. if an algorithm's running time is 3n + 12, then the algorithm is O(n): with c = 4 and m = 12,
  3n + 12 <= 4n for all n >= 12.

In English…
The cost function of an algorithm A, t_A(n), can be approximated by another, simpler function g(n), which is also a function of only one variable, the data size n. The function g(n) is selected such that it represents an upper bound on the efficiency of the algorithm A (i.e. an upper bound on the value of t_A(n)). This is expressed using the big-O notation: O(g(n)). For example, if we consider the time efficiency of algorithm A, then "t_A(n) is O(g(n))" means that A cannot take more "time" than O(g(n)) to execute (no more than c · g(n) for some constant c), or that the cost function t_A(n) grows at most as fast as g(n).

The general idea is…
When using Big-O notation, rather than giving a precise figure for the cost function at a specific data size n, we express the behaviour of the algorithm as its data size n grows very large, and so ignore lower-order terms and constants.

O Notation Examples
All these expressions are O(n): n, 3n, 61n + 5, 22n - 5, …
All these expressions are O(n²): n², 9n², 18n² + 4n - 53, …
All these expressions are O(n log n): n log n, 5n log 99n, 18 + (4n - 2) log(5n + 3), …

(The transcript ends with stray slide fragments: two further insertion-sort figures on the sorted run 14 20 28 38 …, and the header of a quick-sort execution example partitioning the sequence 11 6 13 8 7 12 10 5.)
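As a worked application of the definition (not from the slides), take the earlier cost function t_A(n) = n² + 20n + 100. For n >= 20 we have 20n <= n², and for n >= 10 we have 100 <= n², so t_A(n) <= 3n² for all n >= 20. Taking g(n) = n², c = 3 and m = 20 satisfies the definition, so t_A(n) is O(n²), matching the dominating-term analysis above.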