Data Structures and Algorithms Lecture 4 PDF

Document Details


Haliç University

Dr. Mohammed Al-Hubaishi

Tags

data structures, algorithms, big O notation, computer science

Summary

This document is a lecture on data structures and algorithms, covering Big O, Big Ω, and Big Θ notations. It explains how to analyze an algorithm's running time, techniques for optimizing it, and the different time complexities of algorithms, with examples to illustrate these concepts.

Full Transcript


Slide 1: Title
Data Structures and Algorithms, Lecture 4: Big O, Big Ω, and Big Θ
Dr. Mohammed Al-Hubaishi

Slide 2: Key Topics
► Understanding algorithm complexity
► Running time of the well-known growth functions
► Big-Oh, Big-Omega, and Big-Theta notations
► Doing a timing analysis
► Analyzing some simple programs

Slide 3: Introduction
Data structures and algorithms are fundamental concepts in computer science. They are used to store data efficiently and to solve computational problems effectively.

Slide 4: Understanding Algorithm Complexity
Algorithm complexity is a measure of the amount of time and/or space required by an algorithm to solve a problem, as a function of the size of the input.

Slide 5: What is Algorithm Analysis?
Estimating the running time of an algorithm depends on:
► Machine speed: the performance of the hardware executing the program.
► Programming language: the efficiency of the language used to write the program.
► Input size and organization: the amount and structure of the data the algorithm processes.
Techniques to optimize running time:
► Mathematical frameworks: use rigorous mathematical models to accurately describe and improve the algorithm's efficiency.

Slide 6: Measurement of Complexity of an Algorithm
► Best case: the function that performs the minimum number of steps on input data of n elements.
► Worst case: the function that performs the maximum number of steps on input data of size n.
► Average case: the function that performs an average number of steps on input data of n elements.

Slide 7: Measuring Speed
► Measured speed does not depend on the values inside an array; it depends on the operations performed on the array.
► We measure algorithm speed in terms of operations relative to input size.
► The input size is n; growth in operations (time) is given in terms of n.
► Here, the input size is the size of the array.

Slide 8: Function of Growth Rate
[Figure: plots of the growth-rate functions; not reproduced in this transcript.]

Slide 9: Running Time for Small Inputs
[Figure: running times for small inputs; not reproduced in this transcript.]

Slide 10: Generalizing Running Time
Comparing running times of the well-known growth functions as the input grows (logarithms are natural logarithms, rounded):

n    1    n    log n   n log n   n²     n³       2ⁿ
10   1    10   2       23        100    1000     1.024000e+03
20   1    20   3       60        400    8000     1.048576e+06
30   1    30   3       102       900    27000    1.073742e+09
40   1    40   4       148       1600   64000    1.099512e+12
50   1    50   4       196       2500   125000   1.125900e+15
60   1    60   4       246       3600   216000   1.152922e+18
70   1    70   4       297       4900   343000   1.180592e+21
80   1    80   4       351       6400   512000   1.208926e+24

Slide 11: Constant, O(1)
► Most instructions of most programs are executed once or at most only a few times.
► If all the instructions of a program have this property, we say that its running time is constant.
► This is the most desirable situation in algorithm design.

Slide 12: Linear, O(N)
► When the running time of a program is linear, it is generally the case that a small amount of processing is done on each input element.
► Example: when N is a million, so is the running time.

Slide 13: Logarithmic, O(log N)
► When the running time of a program is logarithmic, the program gets only slightly slower as N grows.
► This running time commonly occurs in programs that solve a big problem by transforming it into a smaller one, cutting the size by some constant fraction.
► The base of the logarithm changes the constant, but not by much: with base 2, when N grows from a thousand to a million, log N merely doubles.

Slide 14: N log N
► N log N running time arises in algorithms that solve a problem by breaking it into smaller subproblems, solving them independently, and then combining the solutions.
► Example: when N is a million, N log N is perhaps twenty million.
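As a cross-check of the slide-10 table, a short script can regenerate its rows. The helper name `growth_row` is invented here for illustration, and the script assumes (as the rows suggest) that the table's logarithms are natural logs rounded to the nearest integer.

```python
import math

def growth_row(n):
    """One table row: 1, n, log n, n log n, n^2, n^3, 2^n.

    Assumes the slide's logarithms are natural logs, rounded.
    """
    return (1, n,
            round(math.log(n)),
            round(n * math.log(n)),
            n ** 2, n ** 3, 2 ** n)

for n in range(10, 90, 10):
    print(n, growth_row(n))
```

Running this reproduces the table, including the explosive 2ⁿ column (2⁸⁰ ≈ 1.208926e+24).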
Slide 15: Quadratic, O(N²)
► When the running time of an algorithm is quadratic, it is practical for use only on relatively small problems.
► Quadratic running times typically arise in algorithms that process all pairs of data items, perhaps in a doubly nested loop.
► Example: when N is a thousand, the running time is a million.

Slide 16: Cubic, O(N³)
► Similarly, an algorithm that processes triples of data items, perhaps in a triply nested loop, has a cubic running time and is practical for use only on small problems.
► Example: when N is a hundred, the running time is a million.

Slide 17: Common Plots of the Growth Functions
[Figure: curves for O(2ⁿ), O(n³), O(n²), O(n log n), O(n), O(√n), O(log n), and O(1), from fastest-growing to slowest.]

Slide 18: Common Plots of the Growth Functions
Big O, Big Ω, and Big Θ relate the growth of functions.

Slide 19: Notations in Complexity
Big O, Big Ω, and Big Θ relate the growth of functions.
► Big O notation: defines an algorithm's worst-case time; it describes the set of functions that grow slower than or at the same rate as the expression.
► Big Ω notation: defines the best case of an algorithm's time complexity; it describes the set of functions that grow faster than or at the same rate as the expression.
► Big Θ notation: defines the average case of an algorithm's time complexity; when a function lies in both O(expression) and Ω(expression), Theta notation is used.

Slide 20: Three Types of Bounds
► Here we have the graph of two real functions, f(x) and g(x).
► The function f(x) is more complex, and we want to measure its growth in terms of the simpler function g(x).

Slide 21: Big O Notation: Definition
Big O notation describes the upper bound of the time complexity of an algorithm. It provides the worst-case scenario. Example: O(n²) for a nested loop.
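A minimal sketch of the pair-processing pattern behind quadratic running times, the doubly nested loop from slide 15. The name `count_pair_steps` is hypothetical, and the incremented counter stands in for whatever per-pair work a real algorithm would do.

```python
def count_pair_steps(items):
    """Count the work done by a doubly nested loop over all ordered pairs."""
    steps = 0
    for _ in items:       # outer loop: n iterations
        for _ in items:   # inner loop: n iterations each
            steps += 1    # stand-in for the per-pair work
    return steps

# When N is a thousand, the running time is a million (slide 15).
print(count_pair_steps(range(1000)))
```

Adding a third nested loop turns the count into N³, matching the cubic case on slide 16.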
Slide 22: Big Ω Notation: Definition
Big Ω notation describes the lower bound of the time complexity of an algorithm. Example: Ω(n) for a linear search.

Slide 23: Big Θ Notation: Definition
Big Θ notation describes the tight bound of the time complexity of an algorithm. It provides both the upper and lower bounds. Example: Θ(n log n) for merge sort.

Slide 24: Practical Applications in Data Structures
Understanding these notations helps in choosing the right data structure and algorithm for a given problem, optimizing performance and resource usage.

Slide 25: Real vs. Integer Functions
► In mathematics, x is used for real variables, with functions f(x) and g(x).
► In algorithms, n is used for integer variables, with functions f(n) and g(n).

Slide 26: f(n) is Big-Oh of g(n): Bound from Above
We can bound f(n) from above if we find a constant c such that c·g(n) bounds f(n) from above from some point n0 onward. In this case we say: f(n) is Big-Oh of g(n). (1)
In the graph, n0 is the minimum value for which the bound is valid, but any greater value will also work; here n0 is 3.

Slide 27: f(n) is Big-Omega of g(n): Bound from Below
We can bound f(n) from below if we find a constant c such that c·g(n) bounds f(n) from below from some point n0 onward. In this case we say: f(n) is Big-Omega of g(n). (2)
In the graph, n0 is the minimum value for which the bound is valid, but any greater value will also work; here n0 is 7.

Slide 28: f(n) is Big-Theta of g(n): Bound on Both Sides
If there are constants such that g(n) bounds f(n) both from below and from above, we say: f(n) is Big-Theta of g(n). The two constants are of course different, so we label them c1 and c2. (3)
In the graph, n0 is the minimum value for which the bounds are valid, but any greater value will also work; here n0 is 8.

Slide 29: f(n) = O(g(n)) Example
Suppose f(n) = 5n and g(n) = n.
To show that f = O(g), we have to show the existence of a constant C as given in definition (1). Clearly C = 5 is such a constant, since f(n) = 5·g(n). We could also choose a larger C, such as 6, because the definition only requires that f(n) ≤ C·g(n) from some point n0 onward.

Slide 30: Example: Big-Oh
► We use this example to introduce the concept of Big-Oh.
► Example: f(n) = 100n², g(n) = n⁴.
► The following table and figure show that g(n) grows faster than f(n) when n > 10. We say: f is Big-Oh of g.

Slide 31: Big O and the T Function
► You shouldn't mix up complexity (denoted by Big-O) and the T function.
► The T function is the number of steps the algorithm has to go through for a given input. So the value of T(n) is the actual number of steps, whereas O(something) denotes a complexity.
► By the conventional abuse of notation, T(n) = O(f(n)) means that the function T(n) is at most of the same complexity as f(n), which will usually be the simplest possible function of its complexity class.

Slide 32: Analyzing Running Time: T(n)
T(n), the running time of a particular algorithm on input of size n, is taken to be the number of times the instructions in the algorithm are executed.

1. n = read input from user
2. sum = 0
3. i = 0
4. while i < n
5. {   number = read input from user
6.     sum = sum + number
7.     i = i + 1 }
8. mean = sum / n

Statement   Number of times executed
1           1
2           1
3           1
4           n + 1
5           n
6           n
7           n
8           1

Anything inside the loop is executed n times, and the loop test runs n + 1 times, so the computing time for this algorithm in terms of input size n is T(n) = 4n + 5.

Slide 33: T(n) = O(n) Example
In the previous timing analysis we saw that T(n) = 4n + 5, and we concluded intuitively that T(n) = O(n) because the running time grows linearly as n grows. Now we can prove it mathematically: to show that f(n) = 4n + 5 = O(n), we need to produce a constant C such that f(n) ≤ C·n for all n ≥ n0. For example, C = 5 and n0 = 5 work, since 4n + 5 ≤ 5n whenever n ≥ 5.
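The statement counts from the timing analysis and the Big-Oh bound can be checked empirically. This sketch mirrors the eight numbered statements of the mean-computing algorithm, reading from a list rather than from the user (an assumption made for testability); `mean_with_steps` is an illustrative name, not from the lecture.

```python
def mean_with_steps(numbers):
    """Compute the mean while counting executed statements: T(n) = 4n + 5."""
    steps = 0
    n = len(numbers);           steps += 1  # statement 1 (stands in for reading n)
    total = 0;                  steps += 1  # statement 2
    i = 0;                      steps += 1  # statement 3
    while True:
        steps += 1                          # statement 4: test runs n + 1 times
        if not (i < n):
            break
        number = numbers[i];    steps += 1  # statement 5 (stands in for reading input)
        total = total + number; steps += 1  # statement 6
        i = i + 1;              steps += 1  # statement 7
    mean = total / n;           steps += 1  # statement 8
    return mean, steps

# Verify T(n) = 4n + 5 and the Big-Oh witness C = 5, n0 = 5.
for n in (5, 10, 100):
    _, t = mean_with_steps(list(range(1, n + 1)))
    assert t == 4 * n + 5   # the count from the timing analysis
    assert t <= 5 * n       # f(n) <= C*n for all n >= n0 = 5
```

The loop at the end confirms both the exact step count and that the bound f(n) ≤ 5n holds once n ≥ 5.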
