Parallel Computer Architectures Quiz
25 Questions

Questions and Answers

What is a primary characteristic of a loosely coupled system?

  • Nodes are interconnected by a shared bus.
  • The system has high latency in microseconds.
  • All processors share memory and resources.
  • Each node operates independently with its own OS. (correct)

What is meant by 'incremental scalability' in cluster systems?

  • Building clusters exclusively using high-end hardware.
  • Maintaining performance levels when a single node fails.
  • The ability to double the computing power with each upgrade.
  • Adding new systems in small increments without large upgrades. (correct)

Which latency time is characteristic of tightly coupled systems?

  • Kiloseconds
  • Milliseconds
  • Nanoseconds (correct)
  • Microseconds

What is a key benefit of using commodity building blocks in cluster systems?

  • Lower cost and equal or greater computing power (correct)

What does cache coherence in UMA architecture ensure?

  • Updates in shared memory are reflected across all processors. (correct)

What type of memory access architecture is represented by symmetric multiprocessor (SMP) machines?

  • Uniform Memory Access (UMA) (correct)

How does high availability in a cluster benefit from the structure of the nodes?

  • Each node is a standalone component, reducing service loss risks. (correct)

What is one main difference between loosely coupled and tightly coupled systems?

  • Tightly coupled systems provide higher access speeds. (correct)

What is a characteristic of NUMA architecture?

  • Each processor has its own local memory. (correct)

Which system is designed to provide a backup component in case of failures?

  • Fault-tolerant systems (correct)

What describes the purpose of graceful degradation?

  • To maintain reduced functionality when parts of the system fail. (correct)

How are blade servers optimized?

  • By consolidating multiple components in one chassis. (correct)

What distinguishes a Beowulf cluster from other systems?

  • It consists of multiple networked off-the-shelf computers. (correct)

What is a primary feature of massively parallel processing (MPP)?

  • Each CPU has its own memory and operating system. (correct)

What characterizes grid computing?

  • It involves distributed computing over the Internet. (correct)

Which statement about NUMA architecture concerning memory access is true?

  • Memory of other processors is accessible but has varying latency. (correct)

What characterizes a Single Instruction, Single Data stream (SISD) system?

  • A single processor executes a single instruction on data stored in a single memory. (correct)

Which of the following statements is true about SIMD architecture?

  • Each processing element can execute the same instruction on different data sets simultaneously. (correct)

What defines the MISD architecture?

  • One sequence of data is processed by different processors executing different instructions. (correct)

Which of the following describes a MIMD architecture?

  • Multiple instruction sequences are executed on different data sets by multiple processors. (correct)

In a Tightly Coupled Multiprocessor system, which statement is accurate?

  • All processors share a single memory with low communication latency. (correct)

What distinguishes Symmetric Multiprocessors (SMP) from Non-uniform Memory Access (NUMA) systems?

  • SMP allows equal memory access times for all processors. (correct)

What is a characteristic feature of a loosely coupled MIMD architecture?

  • Each processor operates independently with its own memory. (correct)

What role does the Control Unit (CU) play in a SISD structure?

  • It controls the execution of a single instruction stream by the processor. (correct)

Which of the following types of systems is characterized by a single master and several slave processors?

  • Asymmetric Multiprocessor. (correct)

Flashcards

Flynn's Taxonomy

A classification of parallel computer architectures based on the number of concurrent instruction (single or multiple) and data streams (single or multiple) available in the architecture. It categorizes systems into SISD, SIMD, MISD, and MIMD.

SISD (Single Instruction, Single Data Stream)

A type of computer architecture where a single processor executes a single instruction stream on data stored in a single memory. A uni-processor is the classic example.

SIMD (Single Instruction, Multiple Data Stream)

A type of computer architecture where a single instruction stream, broadcast by a central control unit, is applied to multiple sets of data simultaneously by different processing elements. Each processing element has associated local memory or shared memory.

MISD (Multiple Instruction, Single Data Stream)

A type of computer architecture where multiple processors execute different instruction sequences on a single data stream. It's not widely used in practice.

MIMD (Multiple Instruction, Multiple Data Stream)

A type of architecture where multiple processors execute different instructions on different data streams simultaneously. It is a common architecture for high-performance computing.

Shared Memory MIMD

A type of MIMD architecture where processors share a single, common memory. Communication between processors occurs through this shared memory. It's considered a "tightly-coupled" system, with fast communication between processors.

Symmetric Multiprocessor (SMP)

A type of shared memory MIMD where all processors have equal access to memory and can access any location in memory in approximately the same amount of time.

Non-uniform Memory Access (NUMA)

A type of shared memory MIMD where different regions of memory have different access times for different processors. This means some processors might take longer to access certain memory locations than others.

Distributed Memory MIMD (Clusters)

A type of MIMD architecture where processors have their own independent memory and communication between them occurs over a network, such as Ethernet. It's considered a "loosely-coupled" system, with slower communication between processors.

Clusters

The collection of independent processors in a distributed memory MIMD architecture.

Cluster (Loosely Coupled)

A collection of independent computers (nodes) connected together to work as a single, unified system.

Independent OS & Communication

A system where each CPU runs its own operating system and they communicate through a local area network. Different hosts may perceive the system differently.

UMA (Uniform Memory Access)

A method of connecting multiple processors that allows each processor to access all of the memory equally and with the same latency.

NUMA (Non-Uniform Memory Access)

A method of connecting multiple processors where processors closer to a memory location have faster access to the data compared to those farther away.

Distributed Memory

A type of computer architecture where each processor has its own local memory and they communicate through a network.

Latency

The time it takes to access data from memory. It is very short in tightly coupled systems (multiprocessors) and longer in loosely coupled systems (multicomputers).

Incremental Scalability (Cluster)

Provides the ability to scale computing power by adding more nodes as needed, without needing to replace the existing system with a larger one.

High Availability (Cluster)

Ensures that the system continues to operate even if one of the nodes fails. This is achieved through software that automatically manages fault tolerance.

Fault-Tolerant System

A system designed to tolerate failures of components or network routes without impacting user experience. Backup components, procedures, or routes automatically take over in case of failure.

Graceful Degradation

The ability of a system to maintain limited functionality even when a significant portion of it is damaged or unavailable.

Blade Server

A group of independent multiprocessor systems housed in a single chassis, allowing for high density and energy efficiency.

Beowulf Cluster

A cluster composed of identical, off-the-shelf computers connected via a TCP/IP Ethernet network.

Massively Parallel Processor (MPP)

A type of parallel computer with multiple networked processors. These processors communicate over specialized interconnect networks, creating a highly parallel system.

Blue Gene/L (MPP)

A massively parallel processing system featuring numerous independent CPUs, each with its own memory and copy of the operating system, communicating through a high-speed interconnect network for efficient data exchange.

Grid Computing

A highly distributed form of parallel computing where computers connected over the internet collaborate to solve a particular problem.

Study Notes

Parallel Computer Architectures

  • Multiprocessor systems use multiple processors that execute instructions simultaneously, communicating through shared memory.

Flynn's Taxonomy

  • Proposed by Michael J. Flynn in 1966
  • A classification of parallel computer architectures.
  • Categorized by the number of instruction and data streams.

Types of Architectures

  • SISD (Single Instruction, Single Data):
    • A single processor executes a single instruction stream on a single data stream.
    • Data is stored in a single memory.
    • A uni-processor is an example.
  • SIMD (Single Instruction, Multiple Data):
    • A single instruction stream is broadcast to all processing elements and executed on multiple data streams.
    • Processing elements (PEs) each have associated local memory.
    • The same instruction is executed simultaneously on different data sets by different processing elements.
  • MISD (Multiple Instruction, Single Data):
    • Multiple instructions are executed on a single data stream.
    • Few practical applications.
  • MIMD (Multiple Instruction, Multiple Data):
    • Multiple instructions are executed on multiple data streams simultaneously.
    • Processors can execute different instruction sequences on different data sets.
    • Can be shared memory (e.g., SMP, NUMA) or distributed memory (e.g., clusters).
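
The taxonomy describes hardware organization, but the corresponding programming models can be sketched in code. Below is a minimal Python illustration, assuming NumPy is installed (the scale helper and chunk sizes are arbitrary): the vectorized array operation is SIMD-style (one operation applied across many data elements), while the process pool is MIMD-style (independent workers, each running its own instruction stream on its own data).

```python
import numpy as np                   # SIMD-style, vectorized operations
from multiprocessing import Pool     # MIMD-style, independent processes

def scale(chunk):
    """Work done independently by one MIMD-style worker process."""
    return [x * 2 for x in chunk]

if __name__ == "__main__":
    data = np.arange(1_000_000)

    # SIMD-style: one operation is applied element-wise across the whole
    # array; NumPy dispatches to vectorized machine code internally.
    doubled = data * 2

    # MIMD-style: several processes each run their own instruction stream
    # on their own slice of the data.
    chunks = [c.tolist() for c in np.array_split(data, 4)]
    with Pool(processes=4) as pool:
        results = pool.map(scale, chunks)

    print(doubled[:5], results[0][:5])
```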

MIMD - Shared Memory

  • Processors share memory, communicating through shared memory.
  • "Tightly coupled" system:
    • Single copy of the operating system.
    • Single address space.
    • Usually a single bus or backplane connecting processors and memories.
    • Very low communication latency.
  • Different types:
    • SMP (Symmetric Multiprocessor):
      • Memory access time is approximately the same to different regions of memory.
    • NUMA (Non-uniform Memory Access):
      • Access times to different regions of memory may differ.
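
In the shared-memory model all workers see the same data directly. Below is a minimal Python sketch using threads as a stand-in for processors (the counter and iteration count are arbitrary), showing why concurrent updates to shared memory must be coordinated; it does not model buses or caches.

```python
import threading

# All threads in one process share a single address space, much as the
# processors in a shared-memory MIMD machine share one physical memory.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # The lock coordinates concurrent updates so every increment to the
        # shared location is preserved and visible to the other threads.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no updates were lost
```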

MIMD - Distributed Memory (Clusters)

  • Collection of independent uniprocessors that run their own operating systems.
  • Communication via a local area network.
  • Often called nodes.
  • The system may look different when viewed from different hosts.
  • Working together as a unified resource.
  • Illusion of being one machine.
  • Communication via fixed paths or network connections.
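
With no shared memory, nodes cooperate purely by exchanging messages. Below is a minimal Python sketch using two local processes and a pipe as a stand-in for networked cluster nodes (the node function and its data are hypothetical).

```python
from multiprocessing import Process, Pipe

def node(conn, node_id):
    """A 'node' owns only its local data and communicates via messages."""
    local_data = list(range(node_id * 10, node_id * 10 + 5))
    conn.send((node_id, sum(local_data)))   # ship a result, not shared memory
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=node, args=(child_conn, 1))
    p.start()
    print("received:", parent_conn.recv())  # (1, 60)
    p.join()
```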

Cluster Benefits

  • Absolute scalability: A cluster can grow to tens, hundreds, or thousands of machines, exceeding the power of even the largest standalone machine.
  • Incremental scalability: Cluster expansions can occur in small steps without major system upgrades.
  • High availability: Failure of one node doesn't stop service, as each node is a standalone computer.
  • Superior price/performance: Using commodity components creates cost-effective solutions.

Memory Architecture

  • Shared memory: Uniform Memory Access (UMA), Non-Uniform Memory Access (NUMA).
  • Distributed memory: Often part of multiprocessor systems or clusters.

UMA Architecture

  • Uniform memory access; identical processors have the same latency to access any memory location.
  • Most commonly represented by Symmetric Multiprocessor (SMP) machines.
  • Equal access & access times to memory.
  • Cache coherency required.
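
At the programming level the defining idea is a single memory that every processor can reach. Below is a minimal sketch of that shared view using Python's multiprocessing.shared_memory module (Python 3.8+); it illustrates only the shared-address idea, not uniform latency or the cache-coherence protocol itself.

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing segment by name and update it in place.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

if __name__ == "__main__":
    # One region of memory, reachable by every process that attaches to it.
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    print(shm.buf[0])   # 42: the writer's update is visible here too
    shm.close()
    shm.unlink()
```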

NUMA Architecture

  • Not all processors have the same access time to all memories.
  • Can be made by physically linking two or more SMPs.
  • Memory access across the link is slower.
  • Each processor has its local memory.
  • Memory of another processor is accessible but latency varies (remote memory access).
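
A common software-level response to NUMA is to keep a process running close to the memory it allocates. The sketch below only shows CPU pinning with os.sched_setaffinity (Linux-only); mapping CPUs to NUMA nodes is platform-specific and is assumed to come from tools outside this sketch.

```python
import os

# Linux-only: restrict this process to a subset of CPUs so that, on a NUMA
# machine, it stays near the memory it allocated. Which CPUs belong to which
# NUMA node comes from the platform (e.g. /sys/devices/system/node or numactl).
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})                 # pin to CPU 0 as an example
    print("running on CPUs:", os.sched_getaffinity(0))
else:
    print("CPU affinity control is not available on this platform")
```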

Fault Tolerance and Graceful Degradation

  • Fault tolerance: Systems are designed so that a failed component has a backup.
  • Graceful degradation: A system can continue to function with decreased abilities when parts fail, preventing complete breakdown.
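
Below is a minimal Python sketch of both ideas, with hypothetical primary() and backup() service functions: automatic failover to the backup is the fault-tolerance part, and returning a reduced result instead of failing outright is graceful degradation.

```python
import random

def primary():
    """Hypothetical primary service; here it fails half the time."""
    if random.random() < 0.5:
        raise ConnectionError("primary node unavailable")
    return "full result from primary"

def backup():
    """Hypothetical backup component that takes over on failure."""
    return "full result from backup"

def handle_request():
    # Fault tolerance: a backup component automatically takes over.
    for component in (primary, backup):
        try:
            return component()
        except ConnectionError:
            continue
    # Graceful degradation: if everything fails, serve a reduced answer
    # instead of breaking down completely.
    return "partial (cached) result"

print(handle_request())
```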

Blade Servers

  • Multiple processors, I/O, and networking boards in a single chassis.
  • Each blade boots independently. Each runs its own operating system and application.

Beowulf Cluster

  • Cluster implemented on multiple identical commercial off-the-shelf computers connected via a TCP/IP Ethernet local area network.
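
Beowulf clusters are typically programmed with message passing. Below is a minimal sketch, assuming the mpi4py package and an MPI runtime are installed on the nodes and the script is launched with something like `mpiexec -n 4 python hello_mpi.py` (the file name is illustrative).

```python
from mpi4py import MPI                 # requires mpi4py and an MPI runtime

comm = MPI.COMM_WORLD                  # all processes launched by mpiexec
rank = comm.Get_rank()                 # this process's id within the job
size = comm.Get_size()                 # total number of processes

# Each node computes on its own local data...
local_value = rank * rank

# ...and results are combined by explicit communication over the network.
total = comm.reduce(local_value, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes, sum of rank^2 = {total}")
```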

Massive Parallel Processing (MPP)

  • A single computer with many networked processors.
  • Large systems, typically with far more than 100 processors.
  • Each processor has its own memory and copy of the operating system and application.
  • Communication using an interconnect network.

Grid Computing

  • The most distributed form of parallel computing.
  • Uses computers over the internet to solve a problem.
  • Grid computing applications typically use Middleware to manage resources.
  • Middleware like BOINC (Berkeley Open Infrastructure for Network Computing) is used for grid systems.
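
The core pattern is a coordinator handing out independent work units to machines that may join and leave. Below is a minimal local simulation of that pattern in Python; real grid middleware such as BOINC adds scheduling, validation, and redundancy that this sketch omits.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def work_unit(n):
    """An independent work unit, like one task sent to a volunteer machine."""
    return n, sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [10_000, 20_000, 30_000, 40_000]

    # The pool stands in for machines scattered across the internet; each
    # result arrives whenever its "volunteer" finishes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(work_unit, n) for n in tasks]
        for fut in as_completed(futures):
            n, result = fut.result()
            print(f"work unit {n}: {result}")
```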

Description

Test your knowledge of parallel computer architectures and Flynn's taxonomy. This quiz covers multiprocessor systems and the SISD, SIMD, MISD, and MIMD architecture classes, along with clusters, MPP systems, and grid computing. Assess your understanding of how these systems operate and how they are classified.
