CSE 21 Summer 2, 2024 Homework 1

Due date: Sunday, August 18, 2024 at 11:59pm. GRACE PERIOD IS ONLY 2 HOURS.

Instructions

This homework should be done in groups of one to four students, without assistance from anyone besides the instructional staff and your group members. Homework must be submitted through Gradescope by a single representative of your group and received by 11:59pm on the due date. There are no exceptions to this rule. You will be able to look at your scanned work before submitting it.

You must write or type your solutions. Should you type your solutions, you will receive 1 point of extra credit. (Hand-drawn diagrams are okay.) Your group representative can resubmit your assignment as many times as you like before the deadline. Only the most recent submission will be graded.

Students should consult their textbook, class notes, lecture slides, podcasts, group members, instructors, TAs, and tutors when they need help with homework. You may ask questions about the homework on Piazza, but questions on Piazza should be private, visible only to instructors.

This assignment will be graded not only on the correctness of your answers, but also on your ability to present your ideas clearly and logically. When asked to explain or justify, present clearly how you arrived at your conclusions and justify the correctness of your answers with mathematically sound reasoning. Whether you use formal proof techniques or write a more informal argument for why something is true, your answers should always be well-supported. Your goal should be to convince the reader that your results and methods are sound. Unless otherwise stated, justifications are required for each part of each problem.

Key Concepts

Sorting, searching, loop invariants, runtimes, Big-O notation (and related notation), showing Big-O relations hold, algorithm runtime analysis, Master Theorem.

1. Searching the Dark Valley (15 points)

A list of n distinct integers a_1, a_2, ..., a_n is called a valley list if the elements, reading from left to right, first decrease and then increase. The location of the valley is the index i where a_i is the minimum. For example, the list 3, 2, 1, 6, 7 is a valley list with a valley at 3, since its smallest element, 1, occurs in the third position. We also consider a list of n increasing numbers to be a valley list with a valley at 1, and a list of n decreasing numbers to be a valley list with a valley at n.

(a) (2 points) Fill in the blank in the pseudocode for an algorithm based on linear search that takes as input a valley list a_1, a_2, ..., a_n and returns the location of the valley.

    procedure LinearValley(a_1, a_2, ..., a_n : valley list)
    i.    for i := 1 to n − 1
    ii.       if ____________
    iii.          return i
    iv.   return n

(b) (3 points) What is a loop invariant of LinearValley that you could use to prove the correctness of LinearValley? (You do not have to prove it.)

(c) (2 points) Fill in the blank in the algorithm based on binary search that takes as input a valley list a_1, a_2, ..., a_n and returns the location of the valley.

    procedure BinaryValley(a_1, a_2, ..., a_n : valley list)
    i.     i := 1
    ii.    j := n
    iii.   while (i < j)
    iv.        m := ⌊(i + j)/2⌋
    v.         if ____________
    vi.            i := m + 1
    vii.       else
    viii.          j := m
    ix.    return i

(d) (3 points) What is a loop invariant of BinaryValley that you could use to prove the correctness of BinaryValley? (You do not have to prove it.)

(e) (5 points) In the worst case, which algorithm uses fewer comparisons on a list of length n: LinearValley or BinaryValley? (Please justify your answer.)
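To make the valley-list definition concrete, here is a minimal brute-force Python sketch. It is not part of the assignment and is not the LinearValley or BinaryValley procedure you are asked to complete; the names is_valley_list and valley_location are illustrative only, and Python's 0-indexing is shifted by one to match the 1-indexed positions used in the problem.

    def is_valley_list(a):
        """Check the defining property: values strictly decrease up to
        some position and strictly increase afterward (distinct values)."""
        n = len(a)
        k = 0
        while k + 1 < n and a[k] > a[k + 1]:   # walk down the decreasing part
            k += 1
        while k + 1 < n and a[k] < a[k + 1]:   # walk up the increasing part
            k += 1
        return k == n - 1                      # valley list iff we reached the end

    def valley_location(a):
        """Brute-force reference: the valley is the 1-indexed position of
        the minimum element (unique, since the values are distinct)."""
        return a.index(min(a)) + 1

    # Examples from the problem statement:
    print(is_valley_list([3, 2, 1, 6, 7]), valley_location([3, 2, 1, 6, 7]))  # True 3
    print(valley_location([1, 2, 3, 4]))   # increasing list: valley at 1
    print(valley_location([4, 3, 2, 1]))   # decreasing list: valley at 4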
2. (24 points) Out of Bounds or In?

(3 points each) State whether each statement is true or false and give a justification either way. You may use any of the methods used in class, including the limit rules and the table of increasing functions. You may also use the following rules freely:

    \sum_{i=0}^{n} i = n(n + 1)/2,     \sum_{i=0}^{n} \binom{n}{i} = 2^n,     n! = n(n − 1)(n − 2) ··· 2 · 1

(a) n log n ∈ O(n)
(b) (n + 1)^5 ∈ Θ(0.01n^5 − 10n^4)
(c) n log n ∈ o(\sum_{i=0}^{n} i)
(d) log(n log n) ∈ O(log(n^2))
(e) n^6 ∈ Ω(n!/(n − 3)!)
(f) log(log(n^n)) ∈ Ω(log(n))
(g) \sum_{k=0}^{n} \binom{n}{k} ∈ Θ(2^n)
(h) (2n)! ∈ o((n!)^2)

3. (20 points) Sky's the Limit

For the parts below, you will be presented with defined functions. Compute lim_{n→∞} f(n)/g(n) (show your work). Note: If the limit is a non-zero constant, you do not have to compute that constant. Given those limits, select which of the following apply (may be multiple, no explanation required):

    f(n) ∈ Θ(g(n)), f(n) ∈ O(g(n)), f(n) ∈ o(g(n)), f(n) ∈ Ω(g(n)), g(n) ∈ O(f(n)), g(n) ∈ o(f(n)), g(n) ∈ Ω(f(n)), f(n) ∈ o(f(n) + g(n)), g(n) ∈ o(f(n) + g(n))

(a) f(n) = n^2 + n log_2 n + n!/(n − 3)!,   g(n) = \sum_{i=0}^{n} (i + \binom{n}{i})
(b) f(n) = 7^{log_3(n)},   g(n) = 5^{log_2(n)}   (use log properties, particularly a^{log_b c} = c^{log_b a})
(c) f(n) = n!/(n − 4)!,   g(n) = (4n^2)!/(4n^2 − 2)!
(d) f(n) = ceil(log_2 n)!/(ceil(log_2 n) − 3)!,   g(n) = log_2(n!/(n − 3)!)

4. (11 points) It'sa Me! ItsaMaxSort

Consider the following algorithm, ItsaMaxSort, which sorts a list of integers:

    procedure ItsaMaxSort(a[1], a[2], ..., a[n]: list of integers L)
    1. if (n < 2) return L
    2. for (i = n to 2) (decrementing i by 1 each iteration)
    3.     maxIndex = findMaxIndex(a[1], a[2], ..., a[i])
    4.     swap a[maxIndex] and a[i]
    5. return L

Note: You may assume that findMaxIndex correctly finds the index of the maximum item in a list or array. findMaxIndex is O(n), where n is the size of the input list.

(a) (6 points) Prove the following loop invariant: After t iterations of the loop, the last t entries of L are sorted.

(b) (3 points) Prove that ItsaMaxSort correctly sorts a list L using the loop invariant from part (a) and the loop invariant "After t iterations of the loop, the last t elements of L are the largest elements of L" (you do not need to prove either loop invariant correct in this part).

(c) (2 points) Find the asymptotic runtime of ItsaMaxSort in Big-Θ notation, providing a brief time analysis (note that swap is Θ(1)).
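For reference while reasoning about the loop, here is a minimal, runnable Python sketch of ItsaMaxSort as described above. The helper find_max_index is an assumed stand-in for the findMaxIndex routine the problem lets you take for granted (a linear scan, so O(n)); indices are shifted to Python's 0-indexing.

    def find_max_index(a, end):
        """Linear-scan stand-in for findMaxIndex over a[0..end] (inclusive)."""
        max_index = 0
        for j in range(1, end + 1):
            if a[j] > a[max_index]:
                max_index = j
        return max_index

    def itsa_max_sort(a):
        """ItsaMaxSort as given: repeatedly move the maximum of the unsorted
        prefix to the end of that prefix (a selection-sort-style pass)."""
        n = len(a)
        if n < 2:
            return a
        for i in range(n - 1, 0, -1):          # 0-indexed version of i = n down to 2
            max_index = find_max_index(a, i)   # index of the largest entry in a[0..i]
            a[max_index], a[i] = a[i], a[max_index]  # swap it into position i
        return a

    print(itsa_max_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]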
5. (9 points, 3 points each) Master Theorem's Master Sword

Lonk, a child who wears all green, sets out on an adventure. Lonk's grandparent tells them, "It's dangerous to go alone, take this." Expecting a sword to defend themselves, Lonk instead gets an Itemfinder and a Splitter Wand. "The sword is lost, and you must find it," says the grandparent. The Splitter Wand can split Lonk into identical pieces with identical equipment (including wands and Itemfinders), while the Itemfinder can be used to find the actual sword, hidden in a field of size n = k × k. Lonk begins at the center of this field, and realizes that they can divide themselves to split the work of finding the sword.

For each situation below, first give a recurrence for the running time of the algorithm by which Lonk finds a sword in the field; you don't need to explain this part of your answer. Then use the Master Theorem to find this time up to order, clearly explaining how the Master Theorem applies (e.g., giving values of variables and identifying the appropriate case). Note that, despite the Lonks being separate entities, they do not act in parallel, so only one Lonk can split or search at a time. Moreover, once a Lonk is in a square of at most 2 × 2, they no longer split, and instead just search their square in constant time (and they either find a sword or not).

(a) With the Splitter Wand's first setting, Lonk can split into 8 identical Lonks, who could themselves further split. Each split Lonk takes one eighth of the field, divides their section into 8 equal pieces, splits further into 8 Lonks, who then repeat this process on their own subsections of the field, and so on. Once the Lonks come together, figuring out which Lonk knows where the sword is becomes instantaneous; one would say it is O(1).

(b) With the Splitter Wand's second setting, Lonk can split into 16 identical copies, who could themselves split further. Much like in the previous part, they divide the field into 16 equally sized pieces, and then search their subsections by further splitting, dividing the field, and splitting again recursively. Given the chaos of so many Lonks, after the Lonks come together, figuring out which Lonk knows where the sword is located is slower; one would say it is O(n).

(c) With the Splitter Wand's third setting, Lonk can split into 100 identical copies, who would themselves split even further into what is truly too many Lonks. Much like in the previous parts, they divide the field into 100 roughly equally sized pieces, and then search their subsections by further splitting, dividing the field, and splitting again recursively. There are just so many Lonks, and after the Lonks come together, figuring out which Lonk knows where the sword is located becomes even more tedious; one would say it is O(n^5).

6. (Extra Credit, 5 points) Blast From the Future, CSE 160: A Walk in a Parallel Universe

Consider the MergeSort algorithm presented in lecture 15/16. One of the benefits of divide and conquer occurs when the subproblems can be solved independently of each other. If there were a way to solve the subproblems at the same time, then the runtime of the algorithm would be much faster, since you would no longer have to run each recursive call in sequence. In CSE 160, you learn how to implement algorithms that exploit this independence, in what is called parallel computing.

For the following parts, unclear answers and answers with even a minor error may not receive full credit. You may not ask any of the instructional staff to help answer this question.

(a) (3 points) Write a recurrence relation for the runtime of ParallelMergeSort, assuming that recursive calls to ParallelMergeSort within a call of the function run simultaneously (i.e., L1 and L2 are computed at the same time). Using the Master Theorem or unraveling, what is the worst-case big-O runtime of ParallelMergeSort?

(b) (2 points) In practice, while parallel computing does speed up the runtime of algorithms, it does not scale to the theoretical asymptotic runtime for large problem sizes. Why might that be the case? (Give a brief 1-2 sentence explanation; multiple correct answers exist.)

(Note: 2 of the 5 points are for fair-effort completeness, one per part.)
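For quick reference while working problems 5 and 6, this is the standard three-case form of the Master Theorem for recurrences whose non-recursive work is Θ(n^d); your class's statement may use slightly different notation or conditions, so defer to the lecture slides if they differ.

    For T(n) = a · T(n/b) + Θ(n^d), with a ≥ 1, b > 1, d ≥ 0:
        T(n) ∈ Θ(n^d)             if a < b^d,
        T(n) ∈ Θ(n^d log n)       if a = b^d,
        T(n) ∈ Θ(n^{log_b a})     if a > b^d.

Identifying a (number of subproblems), b (factor by which the problem size shrinks), and d (exponent of the combine cost), then comparing a with b^d, is exactly the "giving values of variables and identifying the appropriate case" step the problems ask for.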