Questions and Answers
Which statement accurately describes the running time of an algorithm?
- The running time remains constant regardless of the input size.
- The running time typically grows with the input size. (correct)
- The running time is solely determined by the hardware and not by the input.
- The running time decreases as the input size grows.
Under what condition does the 'best case' scenario occur for an algorithm, given a fixed input size?
- When the algorithm performs an average number of steps.
- When the algorithm performs the minimum number of steps. (correct)
- When the algorithm performs the maximum number of steps.
- When the algorithm encounters an error and terminates early.
For a search algorithm on an array, what scenario represents the 'worst case' if 'e' is the element being searched for?
- When 'e' is not present in the array, requiring a check of all elements. (correct)
- When 'e' is the first element checked in the array.
- When 'e' is found at the middle of the array.
- When 'e' is found in the array after checking only a few elements.
Why is the worst-case running time often the primary focus when analyzing algorithms?
What are the primary techniques used for analyzing the running time of an algorithm?
What is a key step in conducting experimental studies to analyze an algorithm's performance?
What is a key limitation of using experimental studies alone to determine the efficiency of an algorithm?
What is used in theoretical analysis rather than implementations of the algorithm?
What is the primary advantage of using theoretical analysis over experimental studies to evaluate an algorithm?
What is a 'primitive operation' in the context of algorithm analysis?
What is the purpose of counting primitive operations in algorithm analysis?
If arrayMax executes $7n - 2$ primitive operations in the worst case, what does 'n' represent in this context?
How does changing the hardware and software environment typically affect the running time $T(n)$ of an algorithm?
Which function has the lowest growth rate?
What does Big-Oh notation, $O(g(n))$, describe about a function $f(n)$?
If $f(n)$ grows more slowly than $g(n)$, which of the following is true regarding Big-Oh notation?
According to the Big-Oh rules, which of the following is the correct simplification of $f(n) = 5n^3 + 2n^2 + n$?
What primarily determines the running time in Big-Oh notation during asymptotic algorithm analysis?
Why can constant factors and lower-order terms be disregarded in asymptotic algorithm analysis?
If Algorithm A has a time complexity of $O(n)$ and Algorithm B has a time complexity of $O(n^2)$, which algorithm is generally preferred for large inputs?
Which time complexity represents the fastest algorithm?
For the Contains(A, e) algorithm, under what conditions does the best case occur?
How many times does the 'if' statement run in the worst case for the Contains(A, e) algorithm, where A is an array of size n?
If e exists in the middle of array A, approximately how many times does the 'if' statement run in the Contains(A, e) algorithm?
Which of the following is NOT a characteristic of pseudocode?
What does the algorithm arrayMax(A, n) output?
In the code for arrayMax(A, n), what is the purpose of the variable currentMax?
In Big-Oh notation, which of the following is true?
Which term is used to refer to the time taken by a CPU to execute instructions?
Why is the exact definition of a primitive operation not important?
Which of the following is NOT a time complexity?
Which of the following is NOT an implication of using Theoretical Analysis?
What is true about the Prefix Sums algorithm?
Flashcards
What is an Algorithm?
A step-by-step procedure for solving a problem in a finite amount of time.
What is Running Time?
How well an algorithm performs as the input size grows.
What is the Best Case?
Occurs when an algorithm performs the minimum number of steps.
What is the Worst Case?
Occurs when an algorithm performs the maximum number of steps.
What is the Average Case?
Occurs when an algorithm performs an average number of steps; this can be difficult to determine.
Contains (A, e) Best case?
When 'e' is the first element in array A, so the 'if' runs once.
Contains (A, e) Worst case?
When 'e' does not exist in A, so the 'if' runs n times.
Which case is generally focused on?
The worst case, because it is easier to analyze.
What are Experimental Studies?
Implementing the algorithm, running it on inputs of varying size and composition, measuring the actual running time, and plotting the results.
What is a limitation of Experimental Studies?
It is difficult to find complete or sufficient inputs to produce indicative results, and fair comparison requires the same hardware and software.
What is Theoretical Analysis?
Analysis that uses pseudocode instead of an implementation and characterizes running time as a function of the input size, n.
What is Pseudocode?
A high-level description of an algorithm, less detailed than a programming language.
What are Primitive Operations?
Basic computations performed by an algorithm, identifiable in pseudocode and assumed to take a constant amount of time.
Counting Primitive Operations?
Determining the maximum number of primitive operations executed by an algorithm (worst case) as a function of the input size.
What is Estimating Running Time?
Bounding the actual running time using the worst-case count of primitive operations, e.g. T(n) = 7n - 2 for arrayMax.
Growth Rate Fact?
Changing the hardware/software environment affects T(n) by a constant factor but does not alter its growth rate.
What is Big-Oh Notation?
f(n) is O(g(n)) if there are positive constants c and n₀ such that f(n) ≤ c·g(n) for n ≥ n₀.
Is f(n) a polynomial of degree d?
If so, f(n) is O(n^d): drop the lower-order terms and constant factors.
What is Asymptotic Algorithm Analysis?
Determining the running time in big-Oh notation from the worst-case number of primitive operations as a function of the input size.
Growth rate Importance?
An algorithm with a lower growth rate can process more input in the same amount of time than one with a higher growth rate.
What is a prefix sum?
The i-th prefix sum of an array X is the sum of its first (i + 1) elements: X[0] + X[1] + ... + X[i].
What is quadratic time?
Running time that grows in proportion to n², as in prefixSum1.
What is linear time?
Running time that grows in proportion to n, as in prefixSum2.
When is f(n) O(g(n))?
When f(n) is asymptotically less than or equal to g(n).
What is time complexity?
The use of asymptotic notation (O, Θ, Ω, o, ω) in denoting running time.
What is time complexity comparison?
A fast algorithm has a low time complexity, while a slow algorithm has a high time complexity.
Study Notes
- An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.
Running Time and Input Size
- Running time describes how well an algorithm performs as the input size grows.
- Algorithm running time typically increases with input size.
- Algorithms can perform differently with the same input size but different contents or ordering.
Best, Worst, and Average Cases
- With the same input size of n elements, "best case" occurs with minimum steps, "worst case" with maximum steps, and "average case" with an average number of steps.
- Average case time can be difficult to determine.
- Best case occurs when item "e" is the first element in array A, so the 'if' runs once.
- Worst case occurs when item "e" does not exist in A, so the 'if' runs n times.
- Average case occurs when e exists in the middle of array A, so the 'if' runs about n/2 times.
- Analysis generally focuses on the worst-case running time because it is easier to analyze.
- It also makes it possible to estimate the maximum time for the algorithm to finish, which is crucial in applications such as games, finance, and robotics.
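The Contains(A, e) algorithm discussed above can be sketched in Python (a hypothetical linear-scan implementation; the source gives only its case analysis, not the code):

```python
def contains(A, e):
    """Linear search: return True if e occurs in array A, else False."""
    for x in A:
        if x == e:      # best case: e is the first element, the 'if' runs once
            return True  # average case (e in the middle): about n/2 checks
    return False         # worst case: e is absent, the 'if' runs n times
```

For example, `contains([3, 1, 4], 4)` scans all three elements before succeeding, while `contains([3, 1, 4], 3)` succeeds on the first check.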
Running Time Analysis
- There are two techniques: experimental studies and theoretical analysis.
Experimental Studies
- Process: write a program implementing the algorithm, run the program with inputs of varying size and composition, use the programming language's timing facilities to measure the actual running time, and plot the results.
- Limitations: hardware may be unable to process very large inputs, and it is difficult to find a complete or sufficient set of inputs to produce indicative results.
- It is also difficult to use the same hardware and software to compare algorithms in a fair manner.
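The experimental process described above might be sketched as follows, using Python's `time` module (the input sizes and the function being timed are purely illustrative):

```python
import time

def run_experiment(func, sizes):
    """Run func on inputs of varying size and record wall-clock times."""
    results = []
    for n in sizes:
        A = list(range(n))                      # input composition also matters
        start = time.perf_counter()
        func(A)                                 # the algorithm under study
        elapsed = time.perf_counter() - start
        results.append((n, elapsed))
    return results

# Time a simple linear-time operation (Python's built-in sum, for illustration).
timings = run_experiment(sum, [1_000, 10_000, 100_000])
# The (n, elapsed) pairs would then be plotted to observe the growth trend.
```

Note the limitations above still apply: the measured times depend on the machine, and only the sizes actually tested are covered.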
Theoretical Analysis
- It uses pseudocode and characterizes running time as a function of the input size, n.
- Advantages: it takes into account all possible inputs.
- It allows evaluating the speed of an algorithm independent of the hardware/software environment.
Pseudocode
- High-level description of an algorithm.
- Less detailed than a programming language.
- There is more than one way to write pseudocode for the same algorithm.
- Example: find max element of an array.
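The find-max example can be written in pseudocode-like Python as follows (a sketch; the source does not show the exact pseudocode):

```python
def array_max(A, n):
    """Return the maximum element of array A of size n."""
    current_max = A[0]           # currentMax: largest value seen so far
    for i in range(1, n):        # the loop body runs n - 1 times
        if A[i] > current_max:   # comparison is a primitive operation
            current_max = A[i]
    return current_max
```

Real pseudocode would omit language-specific details such as `def` and `range`; only the algorithmic steps matter for the analysis.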
Primitive Operations
- Basic computations performed by an algorithm.
- Identifiable in pseudocode.
- They are largely independent from the programming language.
- The exact definition is not important; it is assumed to take a constant amount of time.
- Examples: performing an arithmetic operation, assigning a value to a variable, indexing into an array, calling a method, returning from a method and comparing 2 variables.
Counting Primitive Operations
- Determine the maximum number of primitive operations executed by an algorithm (worst case) as a function of the input size.
- Counting primitive operations in detail is for illustration only.
- In the worst case, arrayMax executes 7n - 2 primitive operations, which occurs when the 'if' condition is always true.
- In the best case, arrayMax executes 5n primitive operations, which occurs when the 'if' condition is always false.
- T(n) represents the worst-case time of arrayMax.
- T(n) = 7n - 2
Growth Rate of Running Time
- Changing the hardware/software environment affects T(n) by a constant factor, but does not alter the growth rate of T(n).
- n, 10n and 100n grow at the same rate.
Growth Rates of Functions
- From lowest to highest: constant 1, logarithmic log n, linear n, N-Log-N n log n, quadratic n², cubic n³, and exponential 2ⁿ.
Big-Oh Notation
- Given functions f(n) and g(n), f(n) is O(g(n)) if there are positive constants c and n₀ such that f(n) ≤ cg(n) for n ≥ n₀.
- Example: 2n + 10 is O(n)
Big-Oh Example
- Example: n² is not O(n)
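Both examples can be verified directly from the definition (the constants chosen here are one valid choice, not the only one):

```latex
% 2n + 10 is O(n): choose c = 3 and n_0 = 10. Then
2n + 10 \le 3n \iff 10 \le n,
\quad\text{so } 2n + 10 \le 3n \text{ for all } n \ge 10.

% n^2 is not O(n): for any constants c and n_0, pick n > \max(c, n_0). Then
n^2 = n \cdot n > c \cdot n,
\quad\text{so no } c \text{ satisfies } n^2 \le c\,n \text{ for all } n \ge n_0.
```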
Big-Oh and Growth Rate
- The big-Oh notation gives an upper bound on the growth rate of a function.
- "f(n) is O(g(n))" means that f(n)'s growth rate is no more than g(n)'s growth rate.
- If g(n) grows faster, then f(n) is O(g(n)) but g(n) is not O(f(n)).
- If f(n) grows faster, then f(n) is not O(g(n)) but g(n) is O(f(n)).
- If both grow at the same rate, then f(n) is O(g(n)) and g(n) is O(f(n)).
Big-Oh Rules
- If f(n) is a polynomial of degree d, then f(n) is O(n^d); drop lower-order terms and constant factors.
- Use the smallest possible class of functions.
- e.g. "2n is O(n)" instead of "2n is O(n²)"
- Use the simplest expression of the class.
- e.g. "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"
Asymptotic Algorithm Analysis
- Determines running time in big-Oh notation.
- It is performed by finding the worst-case number of primitive operations executed as a function of the input size and expressing this function with big-Oh notation.
- Example: algorithm arrayMax executes at most 7n - 2 primitive operations.
- To simplify, constant factors and lower-order terms are eventually dropped.
The Importance of Asymptotics
- An algorithm with an asymptotically fast running time (low growth rate) can process more inputs than an algorithm with an asymptotically slow running time (high growth rate).
- For example, the maximum problem size (n) given 1 second running time:
- O(n) can process 2,500
- O(n log n) can process 4,096
- O(n^2) can process 707
- O(n^4) can process 31
- O(2^n) can process 19
Computing Prefix Sums
- Another example: prefix sums.
- The i-th prefix sum of an array X is the sum of the first (i + 1) elements of X: A[i] = X[0] + X[1] + ... + X[i]
- Two algorithms with different running time can be used for computing prefix sums.
- PrefixSum1 runs in O(n²) time.
- PrefixSum2 runs in O(n) time.
PrefixSum1 (Quadratic)
- This algorithm computes prefix sums in quadratic time, O(n²).
Arithmetic Progression
- The sum of the first n integers, 1 + 2 + ... + n, is n(n + 1) / 2.
- The algorithm prefixSum1 runs in O(n²) time.
PrefixSum2 (Linear)
- This algorithm computes prefix sums in linear time, O(n), by reusing the previous sum.
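The two algorithms might be sketched in Python as follows (the function names and details are assumptions; the source describes only their running times):

```python
def prefix_sum1(X):
    """Quadratic: recompute each prefix sum from scratch -> O(n^2)."""
    n = len(X)
    A = [0] * n
    for i in range(n):
        s = 0
        for j in range(i + 1):   # inner loop does 1 + 2 + ... + n = n(n+1)/2 work
            s += X[j]
        A[i] = s
    return A

def prefix_sum2(X):
    """Linear: extend the previous prefix sum by one element -> O(n)."""
    n = len(X)
    A = [0] * n
    s = 0
    for i in range(n):
        s += X[i]                # reuse the previous sum
        A[i] = s
    return A
```

Both return the same result, e.g. `[1, 3, 6]` for the input `[1, 2, 3]`; only their growth rates differ, which is exactly the point of the asymptotics comparison above.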
Intuition for Asymptotic Notation
- Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n).
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n).
- little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n).
- little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n).
Time Complexity
- Time complexity, T(n), refers to the use of asymptotic notation (O, Θ, Ω, o, ω) in denoting running time.
- If two algorithms accomplish the same task with different time complexities:
- One is faster than the other.
- As the input n increases, the performance difference becomes more obvious.
- The faster algorithm is generally preferred.
Time Complexity Comparison
- A fast algorithm has a low time complexity, while a slow algorithm has a high time complexity.
- Ranking of selected time complexities from fastest to slowest:
- Constant 1 (lowest T(n), fastest speed), Logarithmic log n, Linear n, N-Log-N n log n, Quadratic n², Cubic n³, Exponential 2ⁿ (highest T(n), slowest speed).