Introduction to Algorithms (4th Edition) PDF
2022
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein
Summary
This textbook, "Introduction to Algorithms" (4th edition) by Cormen, Leiserson, Rivest, and Stein, is a comprehensive guide to algorithms and data structures. It covers topics ranging from fundamental concepts to advanced techniques, making it a valuable resource for students and professionals in computer science alike.
Full Transcript
Introduction to Algorithms
Fourth Edition

Thomas H. Cormen
Charles E. Leiserson
Ronald L. Rivest
Clifford Stein

The MIT Press
Cambridge, Massachusetts   London, England

© 2022 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

The MIT Press would like to thank the anonymous peer reviewers who provided comments on drafts of this book. The generous work of academic experts is essential for establishing the authority and quality of our publications. We acknowledge with gratitude the contributions of these otherwise uncredited readers.

This book was set in Times Roman and MathTime Professional II by the authors.

Names: Cormen, Thomas H., author. | Leiserson, Charles Eric, author. | Rivest, Ronald L., author. | Stein, Clifford, author.
Title: Introduction to algorithms / Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein.
Description: Fourth edition. | Cambridge, Massachusetts : The MIT Press | Includes bibliographical references and index.
Identifiers: LCCN 2021037260 | ISBN 9780262046305
Subjects: LCSH: Computer programming. | Computer algorithms.
Classification: LCC QA76.6.C662 2022 | DDC 005.13--dc23
LC record available at http://lccn.loc.gov/2021037260

Contents

Preface

I Foundations
Introduction
1 The Role of Algorithms in Computing
1.1 Algorithms
1.2 Algorithms as a technology
2 Getting Started
2.1 Insertion sort
2.2 Analyzing algorithms
2.3 Designing algorithms
3 Characterizing Running Times
3.1 O-notation, Ω-notation, and Θ-notation
3.2 Asymptotic notation: formal definitions
3.3 Standard notations and common functions
4 Divide-and-Conquer
4.1 Multiplying square matrices
4.2 Strassen's algorithm for matrix multiplication
4.3 The substitution method for solving recurrences
4.4 The recursion-tree method for solving recurrences
4.5 The master method for solving recurrences
★ 4.6 Proof of the continuous master theorem
★ 4.7 Akra-Bazzi recurrences
5 Probabilistic Analysis and Randomized Algorithms
5.1 The hiring problem
5.2 Indicator random variables
5.3 Randomized algorithms
★ 5.4 Probabilistic analysis and further uses of indicator random variables

II Sorting and Order Statistics
Introduction
6 Heapsort
6.1 Heaps
6.2 Maintaining the heap property
6.3 Building a heap
6.4 The heapsort algorithm
6.5 Priority queues
7 Quicksort
7.1 Description of quicksort
7.2 Performance of quicksort
7.3 A randomized version of quicksort
7.4 Analysis of quicksort
8 Sorting in Linear Time
8.1 Lower bounds for sorting
8.2 Counting sort
8.3 Radix sort
8.4 Bucket sort
9 Medians and Order Statistics
9.1 Minimum and maximum
9.2 Selection in expected linear time
9.3 Selection in worst-case linear time

III Data Structures
Introduction
10 Elementary Data Structures
10.1 Simple array-based data structures: arrays, matrices, stacks, queues
10.2 Linked lists
10.3 Representing rooted trees
11 Hash Tables
11.1 Direct-address tables
11.2 Hash tables
11.3 Hash functions
11.4 Open addressing
11.5 Practical considerations
12 Binary Search Trees
12.1 What is a binary search tree?
12.2 Querying a binary search tree
12.3 Insertion and deletion
13 Red-Black Trees
13.1 Properties of red-black trees
13.2 Rotations
13.3 Insertion
13.4 Deletion

IV Advanced Design and Analysis Techniques
Introduction
14 Dynamic Programming
14.1 Rod cutting
14.2 Matrix-chain multiplication
14.3 Elements of dynamic programming
14.4 Longest common subsequence
14.5 Optimal binary search trees
15 Greedy Algorithms
15.1 An activity-selection problem
15.2 Elements of the greedy strategy
15.3 Huffman codes
15.4 Offline caching
16 Amortized Analysis
16.1 Aggregate analysis
16.2 The accounting method
16.3 The potential method
16.4 Dynamic tables

V Advanced Data Structures
Introduction
17 Augmenting Data Structures
17.1 Dynamic order statistics
17.2 How to augment a data structure
17.3 Interval trees
18 B-Trees
18.1 Definition of B-trees
18.2 Basic operations on B-trees
18.3 Deleting a key from a B-tree
19 Data Structures for Disjoint Sets
19.1 Disjoint-set operations
19.2 Linked-list representation of disjoint sets
19.3 Disjoint-set forests
★ 19.4 Analysis of union by rank with path compression

VI Graph Algorithms
Introduction
20 Elementary Graph Algorithms
20.1 Representations of graphs
20.2 Breadth-first search
20.3 Depth-first search
20.4 Topological sort
20.5 Strongly connected components
21 Minimum Spanning Trees
21.1 Growing a minimum spanning tree
21.2 The algorithms of Kruskal and Prim
22 Single-Source Shortest Paths
22.1 The Bellman-Ford algorithm
22.2 Single-source shortest paths in directed acyclic graphs
22.3 Dijkstra's algorithm
22.4 Difference constraints and shortest paths
22.5 Proofs of shortest-paths properties
23 All-Pairs Shortest Paths
23.1 Shortest paths and matrix multiplication
23.2 The Floyd-Warshall algorithm
23.3 Johnson's algorithm for sparse graphs
24 Maximum Flow
24.1 Flow networks
24.2 The Ford-Fulkerson method
24.3 Maximum bipartite matching
25 Matchings in Bipartite Graphs
25.1 Maximum bipartite matching (revisited)
25.2 The stable-marriage problem
25.3 The Hungarian algorithm for the assignment problem

VII Selected Topics
Introduction
26 Parallel Algorithms
26.1 The basics of fork-join parallelism
26.2 Parallel matrix multiplication
26.3 Parallel merge sort
27 Online Algorithms
27.1 Waiting for an elevator
27.2 Maintaining a search list
27.3 Online caching
28 Matrix Operations
28.1 Solving systems of linear equations
28.2 Inverting matrices
28.3 Symmetric positive-definite matrices and least-squares approximation
29 Linear Programming
29.1 Linear programming formulations and algorithms
29.2 Formulating problems as linear programs
29.3 Duality
30 Polynomials and the FFT
30.1 Representing polynomials
30.2 The DFT and FFT
30.3 FFT circuits
31 Number-Theoretic Algorithms
31.1 Elementary number-theoretic notions
31.2 Greatest common divisor
31.3 Modular arithmetic
31.4 Solving modular linear equations
31.5 The Chinese remainder theorem
31.6 Powers of an element
31.7 The RSA public-key cryptosystem
★ 31.8 Primality testing
32 String Matching
32.1 The naive string-matching algorithm
32.2 The Rabin-Karp algorithm
32.3 String matching with finite automata
★ 32.4 The Knuth-Morris-Pratt algorithm
32.5 Suffix arrays
33 Machine-Learning Algorithms
33.1 Clustering
33.2 Multiplicative-weights algorithms
33.3 Gradient descent
34 NP-Completeness
34.1 Polynomial time
34.2 Polynomial-time verification
34.3 NP-completeness and reducibility
34.4 NP-completeness proofs
34.5 NP-complete problems
35 Approximation Algorithms
35.1 The vertex-cover problem
35.2 The traveling-salesperson problem
35.3 The set-covering problem
35.4 Randomization and linear programming
35.5 The subset-sum problem

VIII Appendix: Mathematical Background
Introduction
A Summations
A.1 Summation formulas and properties
A.2 Bounding summations
B Sets, Etc.
B.1 Sets
B.2 Relations
B.3 Functions
B.4 Graphs
B.5 Trees
C Counting and Probability
C.1 Counting
C.2 Probability
C.3 Discrete random variables
C.4 The geometric and binomial distributions
★ C.5 The tails of the binomial distribution
D Matrices
D.1 Matrices and matrix operations
D.2 Basic matrix properties

Bibliography
Index

Preface

Not so long ago, anyone who had heard the word [...]

4.7-1
[...] > 0. Conclude that we can drop the asymptotics on a driving function in any Akra-Bazzi recurrence without affecting its asymptotic solution.

4.7-2
Show that f(n) = n² satisfies the polynomial-growth condition but that f(n) = 2ⁿ does not.

4.7-3
Let f(n) be a function that satisfies the polynomial-growth condition. Prove that f(n) is asymptotically positive, that is, there exists a constant n₀ ≥ 0 such that f(n) > 0 for all n ≥ n₀.

★ 4.7-4
Give an example of a function f(n) that does not satisfy the polynomial-growth condition but for which f(Θ(n)) = Θ(f(n)).

4.7-5
Use the Akra-Bazzi method to solve the following recurrences.
a. T(n) = T(n/2) + T(n/3) + T(n/6) + n lg n.
b. T(n) = 3T(n/3) + 8T(n/4) + n²/lg n.
c. T(n) = (2/3)T(n/3) + (1/3)T(2n/3) + lg n.
d. T(n) = (1/3)T(n/3) + 1/n.
e. T(n) = 3T(n/3) + 3T(2n/3) + n².

★ 4.7-6
Use the Akra-Bazzi method to prove the continuous master theorem.

Problems

4-1 Recurrence examples
Give asymptotically tight upper and lower bounds for T(n) in each of the following algorithmic recurrences. Justify your answers.
a. T(n) = 2T(n/2) + n³.
b. T(n) = T(8n/11) + n.
c. T(n) = 16T(n/4) + n².
d. T(n) = 4T(n/2) + n² lg n.
e. T(n) = 8T(n/3) + n².
f. T(n) = 7T(n/2) + n² lg n.
g. T(n) = 2T(n/4) + √n.
h. T(n) = T(n − 2) + n².
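Several of these recurrences can be sanity-checked numerically. The following short Python sketch (an illustration added here, not part of the book's text) evaluates the recurrence of part (a) with floor division standing in for exact division, and watches T(n)/n³ settle toward a constant, consistent with the master-theorem answer T(n) = Θ(n³):

from functools import lru_cache

# Illustrative sketch: evaluate T(n) = 2T(n/2) + n^3 with T(1) = 1
# (Problem 4-1(a)) and watch T(n)/n^3 approach a constant near 4/3.

@lru_cache(maxsize=None)
def T(n: int) -> int:
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n ** 3   # floor(n/2) stands in for n/2

for k in range(4, 21, 4):
    n = 2 ** k
    print(f"n = 2^{k:2d}   T(n)/n^3 = {T(n) / n**3:.6f}")

The ratio converges because the per-level costs form a geometric series with ratio 1/4, so the root term dominates, exactly the case-3 situation of the master theorem.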
4-2 Parameter-passing costs
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:
1. Arrays are passed by pointer. Time = Θ(1).
2. Arrays are passed by copying. Time = Θ(N), where N is the size of the array.
3. Arrays are passed by copying only the subrange that might be accessed by the called procedure. Time = Θ(n) if the subarray contains n elements.
Consider the following three algorithms:
a. The recursive binary-search algorithm for finding a number in a sorted array (see Exercise 2.3-6).
b. The MERGE-SORT procedure from Section 2.3.1.
c. The MATRIX-MULTIPLY-RECURSIVE procedure from Section 4.1.
Give nine recurrences T_a1(N, n), T_a2(N, n), ..., T_c3(N, n) for the worst-case running times of each of the three algorithms above when arrays and matrices are passed using each of the three parameter-passing strategies above. Solve your recurrences, giving tight asymptotic bounds.

4-3 Solving recurrences with a change of variables
Sometimes, a little algebraic manipulation can make an unknown recurrence similar to one you have seen before. Let's solve the recurrence
T(n) = 2T(√n) + Θ(lg n)   (4.25)
by using the change-of-variables method.
a. Define m = lg n and S(m) = T(2^m). Rewrite recurrence (4.25) in terms of m and S(m).
b. Solve your recurrence for S(m).
c. Use your solution for S(m) to conclude that T(n) = Θ(lg n lg lg n).
d. Sketch the recursion tree for recurrence (4.25), and use it to explain intuitively why the solution is T(n) = Θ(lg n lg lg n).
Solve the following recurrences by changing variables:
e. T(n) = 2T(√n) + Θ(1).
f. T(n) = 3T(∛n) + Θ(n).

4-4 More recurrence examples
Give asymptotically tight upper and lower bounds for T(n) in each of the following recurrences. Justify your answers.
a. T(n) = 5T(n/3) + n lg n.
b. T(n) = 3T(n/3) + n/lg n.
c. T(n) = 8T(n/2) + n³√n.
d. T(n) = 2T(n/2 − 2) + n/2.
e. T(n) = 2T(n/2) + n/lg n.
f. T(n) = T(n/2) + T(n/4) + T(n/8) + n.
g. T(n) = T(n − 1) + 1/n.
h. T(n) = T(n − 1) + lg n.
i. T(n) = T(n − 2) + 1/lg n.
j. T(n) = √n T(√n) + n.

4-5 Fibonacci numbers
This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.31) on page 69. We'll explore the technique of generating functions to solve the Fibonacci recurrence. Define the generating function (or formal power series) F as
F(z) = Σ_{i=0}^{∞} F_i z^i
     = 0 + z + z² + 2z³ + 3z⁴ + 5z⁵ + 8z⁶ + 13z⁷ + 21z⁸ + ⋯,
where F_i is the ith Fibonacci number.
a. Show that F(z) = z + zF(z) + z²F(z).
b. Show that
F(z) = z / (1 − z − z²)
     = z / ((1 − φz)(1 − φ̂z))
     = (1/√5) (1/(1 − φz) − 1/(1 − φ̂z)),
where φ is the golden ratio, and φ̂ is its conjugate (see page 69).
c. Show that
F(z) = Σ_{i=0}^{∞} (1/√5)(φ^i − φ̂^i) z^i.
You may use without proof the generating-function version of equation (A.7) on page 1142, Σ_{k=0}^{∞} x^k = 1/(1 − x). Because this equation involves a generating function, x is a formal variable, not a real-valued variable, so that you don't have to worry about convergence of the summation or about the requirement in equation (A.7) that |x| < 1, which doesn't make sense here.
d. Use part (c) to prove that F_i = φ^i/√5 for i > 0, rounded to the nearest integer. (Hint: Observe that |φ̂| < 1.)
e. Prove that F_{i+2} ≥ φ^i for i ≥ 0.
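Part (d) is easy to check empirically. The Python sketch below (an illustration added here, not part of the book's text) compares the first 30 Fibonacci numbers against φ^i/√5 rounded to the nearest integer:

from math import sqrt

# Illustrative check of Problem 4-5(d): for i > 0, the ith Fibonacci
# number equals phi^i / sqrt(5) rounded to the nearest integer.
# Double-precision floats are exact enough for small i.

phi = (1 + sqrt(5)) / 2

def fib(i: int) -> int:
    a, b = 0, 1                 # F_0, F_1
    for _ in range(i):
        a, b = b, a + b
    return a

for i in range(1, 31):
    assert fib(i) == round(phi ** i / sqrt(5)), i
print("closed form matches F_1 .. F_30")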
4-6 Chip testing
Professor Diogenes has n supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the professor cannot trust the answer of a bad chip. Thus, the four possible outcomes of a test are as follows:

Chip A says    Chip B says    Conclusion
B is good      A is good      both are good, or both are bad
B is good      A is bad       at least one is bad
B is bad       A is good      at least one is bad
B is bad       A is bad       at least one is bad

a. Show that if at least n/2 chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.
Now you will design an algorithm to identify which chips are good and which are bad, assuming that more than n/2 of the chips are good. First, you will determine how to identify one good chip.
b. Show that ⌊n/2⌋ pairwise tests are sufficient to reduce the problem to one of nearly half the size. That is, show how to use ⌊n/2⌋ pairwise tests to obtain a set with at most ⌈n/2⌉ chips that still has the property that more than half of the chips are good.
c. Show how to apply the solution to part (b) recursively to identify one good chip. Give and solve the recurrence that describes the number of tests needed to identify one good chip.
You have now determined how to identify one good chip.
d. Show how to identify all the good chips with an additional Θ(n) pairwise tests.

4-7 Monge arrays
An m × n array A of real numbers is a Monge array if for all i, j, k, and l such that 1 ≤ i < k ≤ m and 1 ≤ j < l ≤ n, we have
A[i, j] + A[k, l] ≤ A[i, l] + A[k, j].
In other words, whenever we pick two rows and two columns of a Monge array and consider the four elements at the intersections of the rows and the columns, the sum of the upper-left and lower-right elements is less than or equal to the sum of the lower-left and upper-right elements. For example, the following array is Monge:

10 17 13 28 23
17 22 16 29 23
24 28 22 34 24
11 13  6 17  7
45 44 32 37 23
36 33 19 21  6
75 66 51 53 34

a. Prove that an array is Monge if and only if for all i = 1, 2, ..., m − 1 and j = 1, 2, ..., n − 1, we have
A[i, j] + A[i + 1, j + 1] ≤ A[i, j + 1] + A[i + 1, j].
(Hint: For the [...])
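Part (a) says that verifying the Monge property only requires looking at adjacent 2 × 2 submatrices. As an illustration (added here, not part of the book's text), the following Python sketch checks both the full definition and the adjacent-submatrix condition against the sample array above; per the equivalence, both tests should agree:

from itertools import combinations

# Illustrative sketch of the equivalence in Problem 4-7(a): an array is
# Monge iff every adjacent 2x2 submatrix satisfies
#   A[i][j] + A[i+1][j+1] <= A[i][j+1] + A[i+1][j].

def is_monge_full(A):
    m, n = len(A), len(A[0])
    return all(A[i][j] + A[k][l] <= A[i][l] + A[k][j]
               for i, k in combinations(range(m), 2)
               for j, l in combinations(range(n), 2))

def is_monge_adjacent(A):
    m, n = len(A), len(A[0])
    return all(A[i][j] + A[i + 1][j + 1] <= A[i][j + 1] + A[i + 1][j]
               for i in range(m - 1) for j in range(n - 1))

A = [[10, 17, 13, 28, 23],
     [17, 22, 16, 29, 23],
     [24, 28, 22, 34, 24],
     [11, 13,  6, 17,  7],
     [45, 44, 32, 37, 23],
     [36, 33, 19, 21,  6],
     [75, 66, 51, 53, 34]]

print(is_monge_full(A), is_monge_adjacent(A))   # both should be True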
5-1 Probabilistic counting
With a b-bit counter, we can ordinarily only count up to 2^b − 1. With R. Morris's probabilistic counting, we can count up to a much larger value at the expense of some loss of precision. We let a counter value of i represent a count of n_i for i = 0, 1, ..., 2^b − 1, where the n_i form an increasing sequence of nonnegative values. Assume that the initial value of the counter is 0, representing a count of n_0 = 0. The INCREMENT operation works on a counter containing the value i in a probabilistic manner. If i = 2^b − 1, then the operation reports an overflow error. Otherwise, the INCREMENT operation increases the counter by 1 with probability 1/(n_{i+1} − n_i), and it leaves the counter unchanged with probability 1 − 1/(n_{i+1} − n_i). If we select n_i = i for all i ≥ 0, then the counter is an ordinary one. More interesting situations arise if we select, say, n_i = 2^{i−1} for i > 0 or n_i = F_i (the ith Fibonacci number; see equation (3.31) on page 69). For this problem, assume that n_{2^b−1} is large enough that the probability of an overflow error is negligible.
a. Show that the expected value represented by the counter after n INCREMENT operations have been performed is exactly n.
b. The analysis of the variance of the count represented by the counter depends on the sequence of the n_i. Let us consider a simple case: n_i = 100i for all i ≥ 0. Estimate the variance in the value represented by the register after n INCREMENT operations have been performed.

5-2 Searching an unsorted array
This problem examines three algorithms for searching for a value x in an unsorted array A consisting of n elements. Consider the following randomized strategy: pick a random index i into A. If A[i] = x, then terminate; otherwise, continue the search by picking a new random index into A. Continue picking random indices into A until you find an index j such that A[j] = x or until every element of A has been checked. This strategy may examine a given element more than once, because it picks from the whole set of indices each time.
a. Write pseudocode for a procedure RANDOM-SEARCH to implement the strategy above. Be sure that your algorithm terminates when all indices into A have been picked.
b. Suppose that there is exactly one index i such that A[i] = x. What is the expected number of indices into A that must be picked before x is found and RANDOM-SEARCH terminates?
c. Generalizing your solution to part (b), suppose that there are k ≥ 1 indices i such that A[i] = x. What is the expected number of indices into A that must be picked before x is found and RANDOM-SEARCH terminates? Your answer should be a function of n and k.
d. Suppose that there are no indices i such that A[i] = x. What is the expected number of indices into A that must be picked before all elements of A have been checked and RANDOM-SEARCH terminates?
Now consider a deterministic linear search algorithm. The algorithm, which we call DETERMINISTIC-SEARCH, searches A for x in order, considering A[1], A[2], A[3], ..., A[n] until either it finds A[i] = x or it reaches the end of the array. Assume that all possible permutations of the input array are equally likely.
e. Suppose that there is exactly one index i such that A[i] = x. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?
f. Generalizing your solution to part (e), suppose that there are k ≥ 1 indices i such that A[i] = x. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH? Your answer should be a function of n and k.
g. Suppose that there are no indices i such that A[i] = x. What is the average-case running time of DETERMINISTIC-SEARCH? What is the worst-case running time of DETERMINISTIC-SEARCH?
Finally, consider a randomized algorithm SCRAMBLE-SEARCH that first randomly permutes the input array and then runs the deterministic linear search given above on the resulting permuted array.
h. Letting k be the number of indices i such that A[i] = x, give the worst-case and expected running times of SCRAMBLE-SEARCH for the cases in which k = 0 and k = 1. Generalize your solution to handle the case in which k ≥ 1.
i. Which of the three searching algorithms would you use? Explain your answer.
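Part (a) asks for pseudocode for RANDOM-SEARCH. As one possible reading of the strategy (a sketch added here, not the book's official solution), here is a Python version that terminates once every index has been picked at least once:

import random

# Illustrative sketch of the RANDOM-SEARCH strategy from Problem 5-2(a):
# keep picking random indices until x is found or every index has been
# picked at least once.

def random_search(A, x):
    n = len(A)
    picked = set()
    while len(picked) < n:
        i = random.randrange(n)   # may repeat earlier picks
        picked.add(i)
        if A[i] == x:
            return i              # found x at index i
    return None                   # every element checked; x is absent

A = [7, 3, 9, 3, 1]
print(random_search(A, 9))   # prints 2
print(random_search(A, 4))   # prints None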
Chapter notes

The books by Bollobás, Hofri, and Spencer contain a wealth of advanced probabilistic techniques. The advantages of randomized algorithms are discussed and surveyed by Karp and Rabin. The textbook by Motwani and Raghavan gives an extensive treatment of randomized algorithms. The RANDOMLY-PERMUTE procedure is by Durstenfeld, based on an earlier procedure by Fisher and Yates [143, p. 34]. Several variants of the hiring problem have been widely studied. These problems are more commonly referred to as [...]

Figure 7.3 The two cases for one iteration of procedure PARTITION. (a) If A[j] > x, the only action is to increment j, which maintains the loop invariant. (b) If A[j] ≤ x, index i is incremented, A[i] and A[j] are swapped, and then j is incremented. Again, the loop invariant is maintained.

Exercise 7.1-3 asks you to show that the running time of PARTITION on a subarray A[p : r] of n = r − p + 1 elements is Θ(n).

Exercises

7.1-1
Using Figure 7.1 as a model, illustrate the operation of PARTITION on the array A = ⟨13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11⟩.

7.1-2
What value of q does PARTITION return when all elements in the subarray A[p : r] have the same value? Modify PARTITION so that q = ⌊(p + r)/2⌋ when all elements in the subarray A[p : r] have the same value.

7.1-3
Give a brief argument that the running time of PARTITION on a subarray of size n is Θ(n).

7.1-4
Modify QUICKSORT to sort into monotonically decreasing order.
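For readers who want to experiment with these exercises in code, here is a Python transcription (a sketch, shifted to 0-based indexing) of the partitioning scheme the chapter describes, where the last element of the subarray serves as the pivot, together with a QUICKSORT driver:

# A Python transcription (sketch) of the chapter's PARTITION procedure,
# using 0-based indices: partition A[p..r] around the pivot x = A[r].

def partition(A, p, r):
    x = A[r]                          # pivot: last element of subarray
    i = p - 1                         # boundary of the "<= pivot" region
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]   # put pivot between the two regions
    return i + 1                      # pivot's final position

def quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)

A = [13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11]   # array from Exercise 7.1-1
quicksort(A)
print(A)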
7.2 Performance of quicksort

The running time of quicksort depends on how balanced each partitioning is, which in turn depends on which elements are used as pivots. If the two sides of a partition are about the same size, so that the partitioning is balanced, then the algorithm runs asymptotically as fast as merge sort. If the partitioning is unbalanced, however, it can run asymptotically as slowly as insertion sort. To allow you to gain some intuition before diving into a formal analysis, this section informally investigates how quicksort performs under the assumptions of balanced versus unbalanced partitioning.
But first, let's briefly look at the maximum amount of memory that quicksort requires. Although quicksort sorts in place according to the definition on page 158, the amount of memory it uses, aside from the array being sorted, is not constant. Since each recursive call requires a constant amount of space on the runtime stack, outside of the array being sorted, quicksort requires space proportional to the maximum depth of the recursion. As we'll see now, that could be as bad as Θ(n) in the worst case.

Worst-case partitioning
The worst-case behavior for quicksort occurs when the partitioning produces one subproblem with n − 1 elements and one with 0 elements. (See Section 7.4.1.) Let us assume that this unbalanced partitioning arises in each recursive call. The partitioning costs Θ(n) time. Since the recursive call on an array of size 0 just returns without doing anything, T(0) = Θ(1), and the recurrence for the running time is
T(n) = T(n − 1) + T(0) + Θ(n)
     = T(n − 1) + Θ(n).
By summing the costs incurred at each level of the recursion, we obtain an arithmetic series (equation (A.3) on page 1141), which evaluates to Θ(n²). Indeed, the substitution method can be used to prove that the recurrence T(n) = T(n − 1) + Θ(n) has the solution T(n) = Θ(n²). (See Exercise 7.2-1.)
Thus, if the partitioning is maximally unbalanced at every recursive level of the algorithm, the running time is Θ(n²). The worst-case running time of quicksort is therefore no better than that of insertion sort. Moreover, the Θ(n²) running time occurs when the input array is already completely sorted, a situation in which insertion sort runs in O(n) time.

Best-case partitioning
In the most even possible split, PARTITION produces two subproblems, each of size no more than n/2, since one is of size ⌊(n − 1)/2⌋ ≤ n/2 and one of size ⌈(n − 1)/2⌉ − 1 ≤ n/2. In this case, quicksort runs much faster. An upper bound on the running time can then be described by the recurrence
T(n) = 2T(n/2) + Θ(n).
By case 2 of the master theorem (Theorem 4.1 on page 102), this recurrence has the solution T(n) = Θ(n lg n). Thus, if the partitioning is equally balanced at every level of the recursion, an asymptotically faster algorithm results.

Balanced partitioning
As the analyses in Section 7.4 will show, the average-case running time of quicksort is much closer to the best case than to the worst case. By appreciating how the balance of the partitioning affects the recurrence describing the running time, we can gain an understanding of why. Suppose, for example, that the partitioning algorithm always produces a 9-to-1 proportional split, which at first blush seems quite unbalanced. We then obtain the recurrence
T(n) = T(9n/10) + T(n/10) + Θ(n)
on the running time of quicksort. Figure 7.4 shows the recursion tree for this recurrence, where for simplicity the Θ(n) driving function has been replaced by n, which won't affect the asymptotic solution of the recurrence (as Exercise 4.7-1 on page 118 justifies).

Figure 7.4 A recursion tree for QUICKSORT in which PARTITION always produces a 9-to-1 split, yielding a running time of O(n lg n). Nodes show subproblem sizes, with per-level costs on the right.

Every level of the tree has cost n, until the recursion bottoms out in a base case at depth log_{10} n = Θ(lg n), and then the levels have cost at most n. The recursion terminates at depth log_{10/9} n = Θ(lg n). Thus, with a 9-to-1 proportional split at every level of recursion, which intuitively seems highly unbalanced, quicksort runs in O(n lg n) time, asymptotically the same as if the split were right down the middle. Indeed, even a 99-to-1 split yields an O(n lg n) running time. In fact, any split of constant proportionality yields a recursion tree of depth Θ(lg n), where the cost at each level is O(n). The running time is therefore O(n lg n) whenever the split has constant proportionality. The ratio of the split affects only the constant hidden in the O-notation.
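The gap between the worst case and a balanced split is easy to observe experimentally. The following sketch (an illustration added here, not part of the book's text) counts pivot comparisons for a last-element-pivot quicksort on an already sorted input, which forces the maximally unbalanced n − 1 / 0 split at every level, versus a randomly shuffled one:

import random

# Illustrative experiment: count element-vs-pivot comparisons made by
# quicksort with a last-element pivot.  Sorted input costs n(n-1)/2
# comparisons; a random input typically stays near 2 n ln n.

def quicksort_comparisons(A):
    A = A[:]
    count = 0
    stack = [(0, len(A) - 1)]     # explicit stack avoids deep recursion
    while stack:
        p, r = stack.pop()
        if p >= r:
            continue
        x, i = A[r], p - 1        # inline last-element-pivot partition
        for j in range(p, r):
            count += 1            # one comparison with the pivot
            if A[j] <= x:
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[r] = A[r], A[i + 1]
        stack += [(p, i), (i + 2, r)]
    return count

n = 2000
print("sorted:  ", quicksort_comparisons(list(range(n))))
print("shuffled:", quicksort_comparisons(random.sample(range(n), n)))

With n = 2000, the sorted input costs n(n − 1)/2 = 1,999,000 comparisons, while the shuffled run typically lands near 2n ln n ≈ 30,000.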
Intuition for the average case
To develop a clear notion of the expected behavior of quicksort, we must assume something about how its inputs are distributed. Because quicksort determines the sorted order using only comparisons between input elements, its behavior depends on the relative ordering of the values in the array elements given as the input, not on the particular values in the array. As in the probabilistic analysis of the hiring problem in Section 5.2, assume that all permutations of the input numbers are equally likely and that the elements are distinct. When quicksort runs on a random input array, the partitioning is highly unlikely to happen in the same way at every level, as our informal analysis has assumed.

Figure 7.5 (a) Two levels of a recursion tree for quicksort. The partitioning at the root costs n and produces a [...]

Figure 8.1 The decision tree for insertion sort operating on three elements. An internal node (shown in blue) annotated by i:j indicates a comparison between a_i and a_j. A leaf annotated by the permutation ⟨π(1), π(2), ..., π(n)⟩ indicates the ordering a_π(1) ≤ a_π(2) ≤ ⋯ ≤ a_π(n). The highlighted path indicates the decisions made when sorting the input sequence ⟨a₁ = 6, a₂ = 8, a₃ = 5⟩. Going left from the root node, labeled 1:2, indicates that a₁ ≤ a₂. Going right from the node labeled 2:3 indicates that a₂ > a₃. Going right from the node labeled 1:3 indicates that a₁ > a₃. Therefore, we have the ordering a₃ ≤ a₁ ≤ a₂, as indicated in the leaf labeled ⟨3, 1, 2⟩. Because the three input elements have 3! = 6 possible permutations, the decision tree must have at least 6 leaves.

[...] comparisons of the form a_i = a_j are useless, which means that we can assume that no comparisons for exact equality occur. Moreover, the comparisons a_i ≤ a_j, a_i ≥ a_j, a_i > a_j, and a_i < a_j are all equivalent in that they yield identical information about the relative order of a_i and a_j. We therefore assume that all comparisons have the form a_i ≤ a_j.

The decision-tree model
We can view comparison sorts abstractly in terms of decision trees. A decision tree is a full binary tree (each node is either a leaf or has both children) that represents the comparisons between elements that are performed by a particular sorting algorithm operating on an input of a given size. Control, data movement, and all other aspects of the algorithm are ignored. Figure 8.1 shows the decision tree corresponding to the insertion sort algorithm from Section 2.1 operating on an input sequence of three elements.
A decision tree has each internal node annotated by i:j for some i and j in the range 1 ≤ i, j ≤ n, where n is the number of elements in the input sequence. We also annotate each leaf by a permutation ⟨π(1), π(2), ..., π(n)⟩. (See Section C.1 for background on permutations.) Indices in the internal nodes and the leaves always refer to the original positions of the array elements at the start of the sorting algorithm. The execution of the comparison sorting algorithm corresponds to tracing a simple path from the root of the decision tree down to a leaf. Each internal node indicates a comparison a_i ≤ a_j. The left subtree then dictates subsequent comparisons once we know that a_i ≤ a_j, and the right subtree dictates subsequent comparisons when a_i > a_j. Arriving at a leaf, the sorting algorithm has established the ordering a_π(1) ≤ a_π(2) ≤ ⋯ ≤ a_π(n). Because any correct sorting algorithm must be able to produce each permutation of its input, each of the n! permutations on n elements must appear as at least one of the leaves of the decision tree for a comparison sort to be correct. Furthermore, each of these leaves must be reachable from the root by a downward path corresponding to an actual execution of the comparison sort. (We call such leaves [...]