Granularity of Parallel Systems


Questions and Answers

What does the granularity of a task measure?

  • The time required to perform the computation of a task
  • The amount of work performed by a task (correct)
  • The ratio of communication time to computation time
  • The number of instructions executed in a particular task

How is the granularity G of a task calculated?

  • G = Tcomp + Tcomm
  • G = Tcomp / Tcomm (correct)
  • G = Tcomp - Tcomm
  • G = Tcomm / Tcomp

What is fine-grained parallelism characterized by?

  • A small number of small tasks
  • A small number of large tasks
  • A large number of large tasks
  • A large number of small tasks (correct)

What is the benefit of fine-grained parallelism?

Facilitates load balancing

What is an alternative way to specify granularity?

In terms of the execution time of a program

What is the purpose of considering granularity in parallel systems?

To take into account the communication overhead between processors

What is an example of a fine-grained system from outside the parallel computing domain?

The system of neurons in our brain

What occurs in coarse-grained parallelism if tasks process the bulk of the data unevenly?

Load imbalance

What is the advantage of coarse-grained parallelism?

Low communication and synchronization overhead

What is medium-grained parallelism relative to?

Fine-grained and coarse-grained parallelism

What is the result of using fewer processors in parallel systems?

Improved performance

What is the optimal performance achieved in parallel and distributed computing?

Between fine-grained and coarse-grained parallelism


Study Notes

Granularity of Parallel Systems

  • Granularity is a measure of the amount of work (or computation) performed by a task.
  • It can also be defined as the ratio of computation time to communication time, where:
    • Computation time is the time required to perform the computation of a task.
    • Communication time is the time required to exchange data between processors.

Calculating Granularity

  • Granularity (G) can be calculated as: G = Tcomp / Tcomm
  • Granularity is usually measured in terms of the number of instructions executed in a particular task.
  • Alternatively, it can be specified in terms of the execution time of a program, combining the computation time and communication time.
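The ratio G = Tcomp / Tcomm can be sketched directly in code. This is a minimal illustration; the timing values are made-up assumptions, not measurements:

```python
# Toy granularity calculation: G = Tcomp / Tcomm.
# The timings below are illustrative, not measured values.

def granularity(t_comp: float, t_comm: float) -> float:
    """Ratio of computation time to communication time for a task."""
    return t_comp / t_comm

# A task that computes for 80 ms and communicates for 20 ms:
g = granularity(0.080, 0.020)
print(g)  # 4.0 -> computation dominates communication (relatively coarse)
```

A larger G means each unit of communication buys more computation, which is why G is used to compare decompositions of the same program.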

Types of Parallelism

Fine-grained Parallelism

  • A program is broken down into a large number of small tasks.
  • These tasks are assigned individually to many processors.
  • The amount of work associated with a parallel task is low and the work is evenly distributed among the processors.
  • Example: The system of neurons in our brain.
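The decomposition described above can be sketched as follows. Dealing many small tasks round-robin across workers keeps the load even; the task list and worker count are illustrative assumptions:

```python
# Fine-grained decomposition sketch: many small tasks, dealt
# round-robin across workers so each worker gets an equal share.

tasks = list(range(100))   # 100 small tasks (e.g., one data element each)
n_workers = 4

# Worker i takes tasks i, i + n_workers, i + 2*n_workers, ...
assignment = [tasks[i::n_workers] for i in range(n_workers)]

# Every worker receives 25 tasks -> balanced load, but 100 task
# hand-offs mean more scheduling/communication overhead than
# a decomposition into a few large tasks would incur.
print([len(a) for a in assignment])  # [25, 25, 25, 25]
```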

Coarse-grained Parallelism

  • A program is split into large tasks.
  • A large amount of computation takes place within each processor.
  • This might result in load imbalance, where certain tasks process the bulk of the data while others might be idle.
  • Advantage: Low communication and synchronization overhead.
  • Example: Message-passing architecture.
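The load-imbalance risk mentioned above can be shown with a toy split into one large block per worker; the per-item cost estimates are made-up numbers for illustration:

```python
# Coarse-grained decomposition sketch: split the data into one large
# contiguous block per worker. With uneven per-item cost, one worker
# ends up with the bulk of the work while the other sits idle.

data_costs = [1, 1, 1, 1, 1, 1, 10, 10]   # made-up per-item work estimates
n_workers = 2

half = len(data_costs) // n_workers
blocks = [data_costs[:half], data_costs[half:]]

loads = [sum(b) for b in blocks]
print(loads)  # [4, 22] -> worker 0 finishes early; worker 1 does most of the work
```

Only two hand-offs occur (low communication overhead, the stated advantage), but the runtime is set by the slowest block.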

Medium-grained Parallelism

  • A compromise between fine-grained and coarse-grained parallelism.
  • Task size and communication time are greater than in fine-grained parallelism and smaller than in coarse-grained parallelism.
  • Example: General-purpose parallel computers.
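One way to see the compromise is to treat chunk size as a tuning knob between the two extremes. The `chunk` helper below is an illustrative sketch, not a library API:

```python
# Chunk size as a granularity knob: size 1 is fine-grained,
# one big chunk per worker is coarse-grained, and intermediate
# sizes give medium-grained tasks.

def chunk(items, size):
    """Group items into tasks of 'size' elements each."""
    return [items[i:i + size] for i in range(0, len(items), size)]

items = list(range(12))
print(len(chunk(items, 1)))   # 12 tasks -> fine-grained
print(len(chunk(items, 6)))   # 2 tasks  -> coarse-grained
print(len(chunk(items, 3)))   # 4 tasks  -> a medium-grained compromise
```

The same idea appears in real APIs, e.g. the `chunksize` argument of Python's `multiprocessing.Pool.map`, which batches items to reduce per-task overhead.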

Effects of Granularity in Parallel and Distributed Computing

  • Using fewer processors can improve the performance of parallel systems.
  • Scaling down a parallel system means using fewer than the maximum possible number of processing elements to execute a parallel algorithm.
  • Optimal performance is achieved between the two extremes of fine-grained and coarse-grained parallelism.
