Algorithms: Running Time and Input Size

Questions and Answers

Which statement accurately describes the running time of an algorithm?

  • The running time remains constant regardless of the input size.
  • The running time typically grows with the input size. (correct)
  • The running time is solely determined by the hardware and not by the input.
  • The running time decreases as the input size grows.

Under what condition does the 'best case' scenario occur for an algorithm, given a fixed input size?

  • When the algorithm performs an average number of steps.
  • When the algorithm performs the minimum number of steps. (correct)
  • When the algorithm performs the maximum number of steps.
  • When the algorithm encounters an error and terminates early.

For a search algorithm on an array, what scenario represents the 'worst case' if 'e' is the element being searched for?

  • When 'e' is not present in the array, requiring a check of all elements. (correct)
  • When 'e' is the first element checked in the array.
  • When 'e' is found at the middle of the array.
  • When 'e' is found in the array after checking only a few elements.

Why is the worst-case running time often the primary focus when analyzing algorithms?

It's easier to analyze than average-case running time.

What are the primary techniques used for analyzing the running time of an algorithm?

Experimental Studies and Theoretical Analysis

What is a key step in conducting experimental studies to analyze an algorithm's performance?

Implementing the algorithm in a programming language.

What is a key limitation of using experimental studies alone to determine the efficiency of an algorithm?

It's hard to ensure fair comparisons because of variations in hardware and software.

What is used in theoretical analysis rather than implementations of the algorithm?

Pseudocode

What is the primary advantage of using theoretical analysis over experimental studies to evaluate an algorithm?

Theoretical analysis takes into account all possible inputs.

What is a 'primitive operation' in the context of algorithm analysis?

A basic computation performed by the algorithm.

What is the purpose of counting primitive operations in algorithm analysis?

To determine the maximum number of operations as a function of input size.

If arrayMax executes $7n - 2$ primitive operations in the worst case, what does 'n' represent in this context?

The input size.

How does changing the hardware and software environment typically affect the running time $T(n)$ of an algorithm?

It affects $T(n)$ by a constant factor.

Which function has the lowest growth rate?

log $n$

What does Big-Oh notation, $O(g(n))$, describe about a function $f(n)$?

An upper bound on the growth rate of $f(n)$

If $f(n)$ grows more slowly than $g(n)$, which of the following is true regarding Big-Oh notation?

$f(n)$ is $O(g(n))$

According to the Big-Oh rules, which of the following is the correct simplification of $f(n) = 5n^3 + 2n^2 + n$?

O(n³)

What primarily determines the running time in Big-Oh notation during asymptotic algorithm analysis?

The worst-case number of primitive operations

Why can constant factors and lower-order terms be disregarded in asymptotic algorithm analysis?

They become insignificant as the input size grows.

If Algorithm A has a time complexity of $O(n)$ and Algorithm B has a time complexity of $O(n^2)$, which algorithm is generally preferred for large inputs?

Algorithm A

Which time complexity represents the fastest algorithm?

O(log n)

For the Contains(A, e) algorithm, under what conditions does the best case occur?

When 'e' exists as the first element in A.

How many times does the 'if' statement run in the worst case for the Contains(A, e) algorithm, where A is an array of size n?

n

If e exists in the middle of array A, approximately how many times does the 'if' statement run in the Contains(A, e) algorithm?

n/2

Which of the following is NOT a characteristic of pseudocode?

Can be written in only one specific way.

What does the algorithm arrayMax(A, n) output?

The maximum element of A.

In the code for arrayMax(A, n), what is the purpose of the variable currentMax?

To store the current maximum element found so far.

In the Big-Oh notation, which of the following is true?

$n$ is $O(n^2)$

Which term is used to refer to the time taken by a CPU to execute instructions?

Running time

Why is the exact definition of a primitive operation not important?

Since we can still estimate the maximum time for the algorithm to complete.

Which of the following is NOT a Time Complexity?

O(logN/N)

Which of the following is NOT an implication of using Theoretical Analysis?

Makes the speed dependent on the OS installed

What is true about the Prefix Sums algorithm?

PrefixSum2 runs in O(N)

Flashcards

What is an Algorithm?

A step-by-step procedure for solving a problem in a finite amount of time.

What is Running Time?

How long an algorithm takes to complete; it typically grows with the input size.

What is the Best Case?

Occurs when an algorithm performs the minimum number of steps.

What is the Worst Case?

Occurs when an algorithm performs the maximum number of steps.

What is the Average Case?

Occurs when an algorithm performs the average number of steps.

Contains (A, e) Best case?

When e exists as the first element in A; 'if' runs once.

Contains (A, e) Worst case?

When e does not exist in A; 'if' runs n times.

Which case is generally focussed on?

Focus on the worst-case running time of an algorithm.

What are Experimental Studies?

Write a program, run it with varying inputs, and measure the running time.

What is a limitation of Experimental Studies?

There may be no hardware able to process huge amounts of input.

What is Theoretical Analysis?

Using pseudocode to characterize running time as a function of input size.

What is Pseudocode?

A high-level description of an algorithm; less detailed than programming language.

What are Primitive Operations?

Basic computations performed by an algorithm.

Counting Primitive Operations?

The maximum number of primitive operations an algorithm executes.

What is Estimating Running Time?

Focus on growth rate and ignore constants/lower-order terms.

Growth Rate Fact?

Changing hardware/software affects T(n) by a constant factor.

What is Big-Oh Notation?

A notation describing the limiting behavior of a function.

Is f(n) a polynomial of degree d?

If f(n) is a polynomial of degree d, then f(n) is O(n^d).

What is Asymptotic Algorithm Analysis?

Determines running time using Big-Oh notation.

Growth rate Importance?

An algorithm with a low growth rate can process more inputs.

What is a prefix sum?

The i-th prefix sum of an array X is the sum of the first (i + 1) elements of X.

What is quadratic time?

An algorithm such as PrefixSum1 computes prefix sums in quadratic time, O(n²).

What is linear time?

An algorithm such as PrefixSum2 computes prefix sums in linear time by reusing the previous sum.

When is f(n) O(g(n))?

When f(n) is asymptotically less than or equal to g(n).

What is time complexity?

Time complexity refers to the use of asymptotic notation in denoting running time

What is time complexity comparison?

A ranking of selected time complexities from fastest to slowest.

Study Notes

  • An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.

Running Time and Input Size

  • Running time measures how long an algorithm takes to complete on a given input.
  • Algorithm running time typically increases with input size.
  • Algorithms can perform differently with the same input size but different contents or order.

Best, Worst, and Average Cases

  • With the same input size of n elements, "best case" occurs with minimum steps, "worst case" with maximum steps, and "average case" with an average number of steps.
  • Average case time can be difficult to determine.
  • Best case occurs when item "e" is the first element in array A, so the 'if' runs once.
  • Worst case occurs when item "e" does not exist in A, so the 'if' runs n times.
  • Average case occurs when item "e" exists in the middle of array A, so the 'if' runs about n/2 times (the Contains(A, e) search behind these cases is sketched after this list).
  • The focus is generally on the worst-case running time because it is easier to analyze.
  • Focusing on the worst case also makes it possible to estimate the maximum time for the algorithm to finish, which is crucial in applications such as games, finance, and robotics.
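
A minimal Python sketch of the Contains(A, e) search these cases refer to, assuming the lesson's pseudocode is a simple left-to-right scan (the function name and structure here are illustrative, not the lesson's exact pseudocode):

```python
def contains(A, e):
    """Return True if e occurs in array A, scanning left to right."""
    for x in A:
        if x == e:        # best case: e is A[0], so this 'if' runs once
            return True
    return False          # worst case: e is absent, so the 'if' runs n times

# Average case: if e sits in the middle of A, the 'if' runs about n/2 times.
print(contains([3, 1, 4, 1, 5], 4))   # True
print(contains([3, 1, 4, 1, 5], 9))   # False
```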

Running Time Analysis

  • There are 2 techniques: experimental studies and theoretical analysis.

Experimental Studies

  • Process: write a program implementing the algorithm, run it with inputs of varying size and composition, use the programming language's timing facilities to measure the actual running time, and plot the results (a minimal timing sketch follows this list).
  • Limitations: the available hardware may be unable to process very large inputs, and it is difficult to find complete or sufficiently representative inputs to produce indicative results.
  • It is also difficult to find the same hardware and software to compare algorithms in a fair manner.
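
A minimal sketch of such an experimental study in Python, timing an assumed example algorithm (a simple element sum) with time.perf_counter; the input sizes are arbitrary:

```python
import random
import time

def total(A):
    """The algorithm under test: sum the elements of A."""
    s = 0
    for x in A:
        s += x
    return s

# Run the program with inputs of varying size and measure the running time.
for n in (1_000, 10_000, 100_000, 1_000_000):
    A = [random.randint(0, 100) for _ in range(n)]
    start = time.perf_counter()
    total(A)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>9,}: {elapsed:.6f} s")   # plot these points to see the trend
```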

Theoretical Analysis

  • It uses pseudocode and characterizes running time as a function of the input size, n.
  • Advantages: it takes into account all possible inputs.
  • It allows evaluating the speed of an algorithm independently of the hardware/software.

Pseudocode

  • High-level description of an algorithm.
  • Less detailed than a programming language.
  • There is more than one way to write pseudocode.
  • Example: find the max element of an array (arrayMax, sketched below).
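
A Python rendering of that example, sketching arrayMax(A, n) under the usual scan-and-update formulation (the lesson's pseudocode itself is not reproduced in these notes):

```python
def array_max(A, n):
    """Return the maximum element of A[0..n-1]."""
    current_max = A[0]             # best candidate seen so far
    for i in range(1, n):
        if A[i] > current_max:     # update when a larger element appears
            current_max = A[i]
    return current_max

print(array_max([7, 2, 9, 4], 4))  # 9
```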

Primitive Operations

  • Basic computations performed by an algorithm.
  • Identifiable in pseudocode.
  • They are largely independent from the programming language.
  • The exact definition is not important; it is assumed to take a constant amount of time.
  • Examples: performing an arithmetic operation, assigning a value to a variable, indexing into an array, calling a method, returning from a method and comparing 2 variables.

Counting Primitive Operations

  • Determine the maximum number of primitive operations executed by an algorithm (worst case) as a function of the input size.
  • Counting primitive operations in detail is for illustration only.
  • In the worst case, arrayMax executes 7n - 2 primitive operations, which happens when the 'if' condition is always true.
  • In the best case, arrayMax executes 5n primitive operations, which happens when the 'if' condition is always false.
  • T(n) represents the worst-case time of arrayMax: T(n) = 7n - 2 (a small instrumented sketch follows this list).
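
A sketch that instruments arrayMax to count its 'if' comparisons empirically; the counting scheme is illustrative only and does not reproduce the exact 7n - 2 tally, but it shows the count growing linearly with n:

```python
def array_max_counted(A):
    """arrayMax with a counter for the 'if' comparisons."""
    comparisons = 0
    current_max = A[0]
    for i in range(1, len(A)):
        comparisons += 1           # one comparison per loop iteration
        if A[i] > current_max:
            current_max = A[i]
    return current_max, comparisons

for n in (10, 100, 1000):
    _, c = array_max_counted(list(range(n)))   # ascending input: 'if' always true
    print(n, c)                                # c = n - 1, i.e. linear in n
```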

Growth Rate of Running Time

  • Changing the hardware/software environment affects T(n) by a constant factor, but does not alter the growth rate of T(n).
  • n, 10n and 100n grow at the same rate.

Growth Rates of Functions

  • From lowest to highest: constant 1, logarithmic log n, linear n, N-Log-N n log n, quadratic n², cubic n³, and exponential 2ⁿ.

Big-Oh Notation

  • Given functions f(n) and g(n), f(n) is O(g(n)) if there are positive constants c and n₀ such that f(n) ≤ cg(n) for n ≥ n₀.
  • Example: 2n + 10 is O(n)
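
A short worked version of that example, using the assumed constants $c = 3$ and $n_0 = 10$:

$$2n + 10 \le 3n \iff 10 \le n$$

So $2n + 10 \le cn$ for all $n \ge n_0 = 10$, and therefore $2n + 10$ is $O(n)$ (other choices of constants work as well).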

Big-Oh Example

  • Example: n² is not O(n)
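
A brief argument for the claim above: a bound of the form $n^2 \le cn$ for all $n \ge n_0$ would require

$$n^2 \le cn \iff n \le c,$$

and no constant $c$ can satisfy $n \le c$ for all sufficiently large $n$, so $n^2$ is not $O(n)$.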

Big-Oh and Growth Rate

  • The big-Oh notation gives an upper bound on the growth rate of a function.
  • "f(n) is O(g(n))" means that f(n)'s growth rate is no more than g(n)'s growth rate.
    • If g(n) grows more, then f(n) is O(g(n)) is "yes", g(n) is O(f(n)) is "no".
    • If f(n) grows more, then f(n) is O(g(n)) is "no", g(n) is O(f(n)) is "yes".
    • If growth is the same, then f(n) is O(g(n)) is "yes", g(n) is O(f(n)) is "yes".

Big-Oh Rules

  • If f(n) is a polynomial of degree d, then f(n) is O(n^d); drop lower-order terms and constant factors (a worked example follows this list).
  • Use the smallest possible class of functions.
    • e.g. "2n is O(n)" instead of "2n is O(n²)"
  • Use the simplest expression of the class.
    • e.g. "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

Asymptotic Algorithm Analysis

  • Determines running time in big-Oh notation.
  • It is performed by finding the worst-case number of primitive operations executed as a function of the input size and expressing this function with big-Oh notation.
  • Example: algorithm arrayMax executes at most 7n - 2 primitive operations.
  • To simplify, constant factors and lower-order terms are eventually dropped.

The Importance of Asymptotics

  • An algorithm with an asymptotically fast running time (low growth rate) can process more inputs than an algorithm with an asymptotically slow running time (high growth rate).
  • For example, the maximum problem size (n) given 1 second running time:
    • O(n) can process 2,500
    • O(n log n) can process 4,096
    • O(n^2) can process 707
    • O(n^4) can process 31
    • O(2^n) can process 19

Computing Prefix Sums

  • Another example: prefix sums.
  • The i-th prefix sum of an array X is the sum of the first (i + 1) elements of X: A[i] = X[0] + X[1] + ... + X[i].
  • Two algorithms with different running time can be used for computing prefix sums.
    • PrefixSum1 runs in O(n²) time.
    • PrefixSum2 runs in O(n) time.

PrefixSums1 (Quadratic)

  • This algorithm computes prefix sums in quadratic time, O(n²), as sketched below.
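
A Python sketch of such a quadratic algorithm, assuming the straightforward formulation that recomputes each prefix sum from scratch (named prefix_sum1 after the lesson's PrefixSum1; the lesson's exact pseudocode is not reproduced here):

```python
def prefix_sum1(X):
    """Quadratic prefix sums: recompute each A[i] from X[0..i]."""
    n = len(X)
    A = [0] * n
    for i in range(n):
        s = 0
        for j in range(i + 1):     # inner loop runs i + 1 times
            s += X[j]
        A[i] = s
    return A

print(prefix_sum1([1, 2, 3, 4]))   # [1, 3, 6, 10]
```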

Arithmetic Progression

  • The sum of the first n integers
    • (1 + 2 + ...+ n) is n(n + 1) / 2.
  • The algorithm prefixSum1 runs in O(n²) time.
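
Applied to the nested loops sketched above, the inner loop of prefixSum1 runs $1 + 2 + \dots + n$ times in total, and

$$1 + 2 + \dots + n = \frac{n(n + 1)}{2},$$

which grows proportionally to $n^2$, giving the $O(n^2)$ bound.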

PrefixSum2 (Linear)

  • This algorithm computes prefix sums in linear time, O(n), by reusing the previous sum, as sketched below.
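
A Python sketch of the linear version, assuming it carries the previous sum forward (named prefix_sum2 after the lesson's PrefixSum2):

```python
def prefix_sum2(X):
    """Linear prefix sums: reuse the previous sum at each step."""
    n = len(X)
    A = [0] * n
    s = 0
    for i in range(n):             # single pass over X: O(n)
        s += X[i]                  # previous sum plus the new element
        A[i] = s
    return A

print(prefix_sum2([1, 2, 3, 4]))   # [1, 3, 6, 10]
```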

Intuition for Asymptotic Notation

  • Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).
  • big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n).
  • big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n).
  • little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n).
  • little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n).
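
For reference, a sketch of the formal definitions behind the big-Omega and big-Theta bullets above, stated in the same $c$, $n_0$ style as the Big-Oh definition given earlier:

$$f(n) \text{ is } \Omega(g(n)) \iff \text{there are constants } c > 0,\ n_0 \ge 1 \text{ with } f(n) \ge c\,g(n) \text{ for all } n \ge n_0$$

$$f(n) \text{ is } \Theta(g(n)) \iff f(n) \text{ is both } O(g(n)) \text{ and } \Omega(g(n))$$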

Time Complexity

  • Time complexity, T(n), refers to the use of asymptotic notation (O,Θ,Ω, o,ω) in denoting running time.
  • If 2 algorithms accomplish the same task with different time complexities:
    • One is faster than another.
    • As input n increases further, the performance difference becomes more obvious.
  • The faster algorithm is generally preferred.

Time Complexity Comparison

  • A fast algorithm has a low time complexity, while a slow algorithm has a high time complexity.
  • Ranking of selected time complexities from fastest to slowest:
    • Constant 1 (lowest T(n), fastest speed), Logarithmic log n, Linear n, N-Log-N n log n, Quadratic n², Cubic n³, Exponential 2ⁿ (highest T(n), slowest speed).
