Evolution of Computing Platforms

Questions and Answers

What characterizes the shift in computing as it evolves from centralized to parallel and distributed systems?

  • An increased emphasis on data intensity and network centricity. (correct)
  • A move away from using multiple computers.
  • A focus on solving smaller problems with single computers.
  • A decrease in data intensity and network dependence.

Why is the Linpack Benchmark no longer considered optimal for measuring the performance of modern computing systems?

  • It is specifically designed for measuring the performance of single supercomputers.
  • It effectively measures the performance of HTC systems.
  • It does not account for the specific demands of computing clouds. (correct)
  • It accurately reflects the demands of high-throughput computing.

Which computing paradigm is MOST focused on serving millions of users simultaneously with Internet searches and web services?

  • Parallel computing.
  • High-throughput computing (HTC). (correct)
  • High-performance computing (HPC).
  • Centralized computing.

How does distributed computing generally differ from centralized computing?

  • Centralized computing concentrates all resources in one physical system, while distributed computing uses multiple autonomous systems. (correct)

What is a key characteristic distinguishing cloud computing from other computing paradigms?

  • Its potential to be either a centralized or distributed system. (correct)

Which factor is most crucial for deciding system efficiency in future HPC and HTC systems?

  • Throughput per watt of energy consumed. (correct)

What does the design goal of 'Dependability' aim to ensure in the context of distributed systems?

  • High throughput service even under failure conditions. (correct)

What impact has the increased use of commodity technologies had on large-scale computing?

  • It has driven the adoption and use of these technologies in large-scale computing. (correct)

Which type of parallelism involves a processor executing multiple instructions simultaneously rather than just one at a time?

  • Instruction-level parallelism (ILP). (correct)

In the context of GPUs, what does 'throughput architecture' refer to?

  • A design that emphasizes slower execution of many concurrent threads. (correct)

What is a primary advantage of Virtual Machines (VMs) in modern computing environments?

  • They offer a solution to underutilized resources and application inflexibility. (correct)

What is the role of a Virtual Machine Monitor (VMM) in the context of virtual machines?

  • To provide the illusion of dedicated hardware to a guest OS. (correct)

Which trend is driving data center designs to emphasize the performance/price ratio over raw speed performance?

  • A growing focus on storage and energy efficiency. (correct)

What is the central idea behind cloud computing transforming how we interact with information?

  • It provides on-demand services at the infrastructure, platform, or software level. (correct)

How have hardware virtualization and multicore chips influenced cloud computing environments?

  • By supporting dynamic configurations in the cloud. (correct)

What is the primary operational difference between a cluster and a grid computing system?

  • Clusters are typically LAN-based, while grids operate over WANs. (correct)

What key property defines an ideal cluster, as described by Greg Pfister?

  • A single-system image. (correct)

What differentiates a structured P2P overlay network from an unstructured one?

  • Structured networks utilize specified topologies and rules for node management. (correct)
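
To make the idea of a specified topology concrete, here is a minimal, hypothetical sketch of a ring-style distributed hash table in the spirit of Chord; the lesson names no particular protocol, and the node names and hash-space size below are invented for illustration:

    # Hypothetical sketch of a structured P2P overlay: node IDs and keys share
    # one hash space, and each key is managed by the first node clockwise from
    # its hash on the ring.
    import hashlib
    from bisect import bisect_right

    def ring_id(name, bits=16):
        # Hash a name into a small circular ID space.
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

    nodes = sorted(ring_id(f"node-{i}") for i in range(8))   # the fixed topology

    def lookup(key):
        # Deterministic placement rule: the same key always maps to the same node.
        h = ring_id(key)
        return nodes[bisect_right(nodes, h) % len(nodes)]

    print(lookup("some-file.mp3"))

An unstructured overlay has no such placement rule, so queries must instead be flooded or gossiped through the network.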

Why is aligning system scaling with workload said to be necessary to achieve expected performance in P2P networks?

  • To address issues related to heterogeneity in the P2P environment. (correct)

What fundamental design aspect most enables cloud computing to deliver cost-effective benefits for both users and providers?

  • Its use of machine virtualization. (correct)

How does the Software as a Service (SaaS) model primarily benefit customers?

  • By eliminating the need for upfront investments in servers or licensing. (correct)

What is the primary advantage of using commodity switches and network infrastructure within data centers?

  • Decreased costs and complexity. (correct)

What foundational element is essential for utility and grid computing technologies to pave the way for cloud computing?

  • Laying a computing foundation. (correct)

In Service-Oriented Architecture (SOA), what role does the Web Services Description Language (WSDL) fulfill?

  • Defining the interfaces of the services. (correct)

What is the primary benefit of adopting a distributed model with clear software functions in system design?

  • Enhanced software reuse and simpler maintenance. (correct)

What concept in distributed computing does the term 'grid' often represent?

  • A collection of services that have multiple message-based inputs and outputs. (correct)

Which of the following is an important role attributed to filter services ('fs') in the evolution of SOA?

  • Removing unwanted data to respond to queries from different sources. (correct)

In the context of large-scale computations, what architectural consideration is given to memory systems?

  • Data must be optimized so it stays close to the application's memory. (correct)

Which of the following most accurately describes the shift in CPU architecture over the last few decades?

  • From high latency to low latency. (correct)

Which of the following has the largest contribution to data center costs?

  • Power and cooling. (correct)

Which of the following is NOT a characteristic of distributed and cloud computing systems?

  • All of the above are characteristics. (correct)

What is a main challenge or problem in using distributed computing?

  • Security problems. (correct)

What is considered a special cluster middleware?

  • High availability. (correct)

In the world of networks, what are P2P networks mostly used for?

  • Business file sharing and social networking. (correct)

What is the ideal cluster configuration?

  • Multiple system images merged into a single system image. (correct)

What is the largest constraint concerning system scaling?

  • There should be backwards compatibility. (correct)

Concerning software, what does scalability refer to?

  • All of the above. (correct)

Which of the following accurately describes Amdahl's Law?

  • A sequential bottleneck can impede a cluster. (correct)

Which of the following accurately describes Gustafson's Law?

  • Applied when trying to achieve higher efficiency. (correct)
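
Both laws can be restated compactly; a brief worked form using the standard textbook notation (the symbols below do not appear in the quiz itself), where \alpha is the fraction of the workload that must run sequentially and n is the number of processors:

    S_{Amdahl} = \frac{1}{\alpha + (1 - \alpha)/n} \le \frac{1}{\alpha}
    \qquad
    S_{Gustafson} = \alpha + (1 - \alpha)\, n

Amdahl assumes a fixed workload, so the speedup is capped at 1/\alpha by the sequential bottleneck no matter how many processors are added; Gustafson assumes the workload scales with the machine, so the speedup keeps growing with n, which is why it is cited when higher efficiency is sought on larger problems.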

Flashcards

Distributed Computing

Using multiple computers connected by a network to solve large-scale problems.

Scalable Computing

The practice of parallel and distributed computing in modern computer systems, significantly improving quality of life and information services.

Internet Computing

High-performance computing services that supercomputers and large data centers provide to many concurrent Internet users.

High-Throughput Computing

Systems built with parallel and distributed computing technologies to meet the demands of computing clouds.

Centralized Computing

Computing where all computer resources are centralized in one physical system.

Parallel Computing

Processors are tightly or loosely coupled with shared or distributed memory, enabling programs to run simultaneously.

Distributed Computing

Systems with multiple autonomous computers, each with private memory, communicating through a network to exchange information.

Cloud Computing

A centralized or distributed computing system providing resources over the Internet.

Internet of Things (IoT)

A networked connection of everyday objects, allowing for computing with pervasive devices at any place and time.

Computational Grids

Technologies for building P2P networks and networks of clusters, designed to establish wide area computing infrastructures.

Processor Speed and Network Bandwidth

Measured in millions of instructions per second (MIPS) and gigabits per second (Gbps), respectively.

Multicore CPUs

Modern chips with multiple processing cores to exploit parallelism at ILP and TLP levels, addressing power limitations and heat generation.

Multithreading Technology

Micro-architecture in modern CPUs using multiple threads of instructions to exploit ILP and TLP, boosting performance.

Graphics Processing Unit (GPU)

A processor or accelerator mounted on a computer's graphics card that offloads work from the CPU for faster, highly parallel processing.

Utility Computing

A service model where customers receive computing resources from a paid service provider.

Hype Cycle

A cycle describing the expectations for technology at different stages from trigger to productivity plateau.

Web Services

A computing paradigm in which program instructions are carried within communicated messages, typically using the Simple Object Access Protocol (SOAP).

REST

An approach intended to simplify computing by delegating problems to implementation-specific software written in a web service language.

Grid Computing

A distributed system where a local computer can connect to a remote one.

Availability and Support

Hardware or software support that provides sustained high availability for a system.

Hardware Fault Tolerance

The ability of computers to keep running through failures, using automated failure management to eliminate single points of failure.

Single System Image

A cluster that presents SSI at a functional level, with the help of software extensions or special hardware.

Efficient Computing

Designing efficient communication among nodes to reduce system overhead.

Cluster-wide Job Management

Using global system monitoring to achieve better job scheduling across the cluster.

Dynamic Load Balancing

Balancing the workload dynamically across all computing nodes.

Scalability and Programmability

The ability of a system to keep performing well as it scales up through added processing power, while remaining easy to program.

Peer to Peer Systems

A computing paradigm in which each node in a P2P system acts as both a client and a server, providing part of the system's resources.

Overlay Peer to Peer Networks

Networks in which peer IDs form an overlay network at the logical data level.

Distributed File Sharing

A P2P system for content sharing of digital data.

Collaborative Platform

A P2P platform designed primarily for chat and gaming.

Distributed P2P Computing

A P2P system designed for computing-focused purposes, e.g., SETI@home.

Cloud Computing Over the Internet

A virtual computing platform that provides hardware, software, or data as virtualized services.

Infrastructure as a Service

A computing infrastructure of servers, storage, networks, and a data fabric, deployed as VMs running guest OSes.

Platform as a Service

A model that enables users to deploy their own applications onto a virtualized cloud platform.

Software as a Service

A model that lets users access software applications hosted on cloud platforms, serving thousands of customers, without upfront investments in servers or licenses.

Service Oriented Architecture Evolution (SOA)

Service architectures in which raw data from sensor services is refined by filter services to build service grids and clouds.

Performance Metrics

The metrics used to account for the performance of a CPU and/or network, such as MIPS and Gbps.

Size Scalability

Achieving better performance or more functionality by increasing the machine size (e.g., processors, memory, storage).

Software Scalability

Scaling through upgrades to the OS, compilers, and libraries, and through porting of applications, as the system grows.

Application Scalability

Matching problem-size scalability with machine-size scalability so the application achieves the expected performance.

Study Notes

  • Computing technology has changed platforms and environments over the last 60 years.
  • Distributed computing systems use multiple computers on the Internet to solve large problems, becoming data-intensive and network-centric.
  • Internet applications utilizing parallel and distributed computing enhance quality of life and information services.

Internet Computing

  • Billions of people use the Internet daily, demanding high-performance computing from supercomputers and data centers.
  • The Linpack Benchmark is no longer considered optimal for measuring system performance, because growing demand has shifted the emphasis from high-performance computing (HPC) to high-throughput computing (HTC) systems.
  • HTC systems use parallel and distributed computing, fast servers, storage, and high-bandwidth networks to advance network-based computing and web services.

Platform Evolution

  • There have been five generations of computer technology, each lasting 10-20 years with overlaps of about 10 years.
  • From 1950-1970, mainframes like IBM 360 and CDC 6400 met the needs of large businesses and governments.
  • From 1960-1980, minicomputers such as DEC PDP 11 and VAX Series were more popular in small businesses and colleges.
  • From 1970-1990, personal computers with VLSI microprocessors became widespread.
  • From 1980-2000, portable computers and pervasive devices appeared commonly.
  • Since 1990, HPC and HTC systems hidden in clusters, grids, or Internet clouds have proliferated and are used by both consumers and high-end services.
  • The trend is now to leverage shared web resources and large amounts of data over the Internet.
  • Supercomputers (massively parallel processors, MPPs) are gradually shifting toward cooperative computer clusters that share resources.
  • Peer-to-peer (P2P) networks are formed for distributed file sharing and content delivery using many client machines globally.
  • P2P, cloud computing, and web service platforms focus more on HTC than HPC, and clustering and P2P lead to computational grids or data grids.

High-Performance Computing

  • HPC systems prioritize raw speed performance, increasing from Gflops in the early 1990s to Pflops in 2010, driven by scientific, engineering, and manufacturing demands.
  • Supercomputer users are a small fraction of computer users (~10%). Most users now use desktop computers or servers for Internet searches and market-driven computing tasks.

High-Throughput Computing

  • Market-oriented systems are shifting from HPC to HTC, focusing on high-flux computing.
  • HTC's key application is simultaneous Internet searches and web services for millions of users, shifting the performance goal from raw speed to throughput.
  • HTC technology improves batch processing speed and addresses cost, energy, security, and reliability concerns in data and enterprise computing centers.

New Computing Paradigms

  • SOA, Web 2.0, and virtualization enable Internet clouds as a new computing paradigm.
  • RFID, GPS, and sensors have enabled the Internet of Things (IoT).
  • Clusters, MPPs, P2P networks, grids, clouds, web services, social networks, and IoT may blur in the future, with clouds seen as grids or clusters modified by virtualization.

Computing Paradigm Distinctions

  • Distributed computing is the opposite of centralized computing.
  • Parallel computing overlaps with distributed and cloud computing.
  • Centralized computing is a paradigm in which all computer resources are centralized in one physical system with fully shared resources.
  • Parallel computing uses processors that are either tightly or loosely coupled, with centralized shared memory or distributed memory accessed through message passing; such systems are called parallel computers and their programs parallel programs.
  • Distributed computing studies distributed systems consisting of multiple autonomous computers, each with its own private memory, communicating through a network by message passing; their programs are known as distributed programs (a small sketch contrasting the shared-memory and message-passing styles follows this list).
  • Cloud computing applies distributed computing over the Internet; its applications are described as service or utility computing.
  • Some prefer the terms concurrent computing (parallel plus distributed), ubiquitous computing (pervasive devices), and Internet of Things (networked everyday objects supported by Internet clouds).
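
As a minimal sketch of the distinction above (hypothetical code, not taken from the lesson; the four-way split and Python's multiprocessing module are illustrative assumptions), the same summation can be done in the shared-coordination style of parallel computing and in the message-passing style of distributed computing:

    # Hypothetical illustration: parallel computing coordinates workers on one
    # physical system, while distributed computing uses autonomous workers with
    # private memory that communicate only by passing messages.
    from multiprocessing import Pool, Process, Pipe

    def partial_sum(chunk):
        return sum(chunk)

    def worker(chunk, conn):
        conn.send(sum(chunk))   # no shared memory: the result travels as a message
        conn.close()

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]

        # Parallel-computing flavor: a pool of workers on one machine.
        with Pool(4) as pool:
            total_parallel = sum(pool.map(partial_sum, chunks))

        # Distributed-computing flavor: autonomous processes exchanging messages
        # (a pipe here; a network in a real distributed system).
        conns, procs = [], []
        for chunk in chunks:
            parent, child = Pipe()
            p = Process(target=worker, args=(chunk, child))
            p.start()
            conns.append(parent)
            procs.append(p)
        total_distributed = sum(c.recv() for c in conns)
        for p in procs:
            p.join()
        print(total_parallel, total_distributed)

Either way the answer is the same; what differs is whether memory is shared within one physical system or information moves only through messages between autonomous nodes.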

Distributed System Families

  • Technologies for P2P networks and clusters have been consolidated into national projects establishing wide area computing infrastructures.
  • There is a surge of interest in exploring Internet cloud resources for data-intensive applications; clouds are the result of moving desktop computing to service-oriented computing built on server clusters and large databases.
  • Grids and clouds are disparate systems that emphasize resource sharing in hardware and software.
  • Massively distributed systems exploit a high degree of parallelism or concurrency among many machines; for example, the Tianhe-1A system built in China in October 2010 contained 86,016 CPU cores and 3,211,264 GPU cores.
  • A typical P2P network may involve millions of client machines working simultaneously. Experimental cloud computing clusters have been built with thousands of processing nodes.
  • HPC and HTC systems require multicore processors to handle many computing threads.
  • HPC and HTC systems emphasize parallelism and distributed computing along with throughput, efficiency, scalability, and reliability
  • Key design objectives are Efficiency, Dependability, Adaptation, and Flexibility.
  • Efficiency is the utilization rate of resources, achieved by exploiting massive parallelism in HPC and maximizing job throughput in HTC, together with efficient data access, storage, and power use.
  • Dependability measures the reliability and self-management needed to ensure high-throughput service and QoS, even under failure conditions.
  • Adaptation refers to the ability to support billions of requests over massive data sets and virtualized cloud resources.
  • Flexibility refers to the ability of a system to run well for both scientific and engineering (HPC) applications and business (HTC) applications.
  • Technology trends drive computing applications, as seen in Jim Gray's "Rules of Thumb in Data Engineering".
  • Moore's law says processor speed doubles roughly every 18 months, but it is difficult to say whether this will continue.
  • Gilder's law states that network bandwidth has doubled each year, lowering the cost of commodity hardware and driving the adoption of commodity technologies in large-scale computing (both growth rules are written out after this list).
  • Distributed systems emphasize resource distribution and concurrency or high degree of parallelism (DoP).
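
As a back-of-the-envelope restatement of the two rules of thumb above (an illustration, not a quote from the lesson), after t months:

    P(t) = P_0 \cdot 2^{t/18}   (processor speed, Moore's law)
    B(t) = B_0 \cdot 2^{t/12}   (network bandwidth, Gilder's law)

Over six years this works out to roughly a 16x gain in processor speed but a 64x gain in bandwidth, one reason system design keeps shifting toward network-centric, high-throughput architectures.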

Degrees of Parallelism

  • Bit-level parallelism (BLP) converts bit-serial processing to word-level processing.
  • Instruction-level parallelism (ILP) executes multiple instructions at once. ILP requires branch prediction, scheduling, speculation, etc.
  • Data-level parallelism (DLP) uses SIMD and vector machines. DLP requires hardware/compiler support.
  • Task-level parallelism (TLP) has become more relevant with multicore processors, but efficient multicore execution remains hard to achieve through programming and compilation (a small sketch contrasting DLP and TLP follows this list).
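
A minimal sketch of the last two levels (hypothetical code, not from the lesson; NumPy vectorization stands in for DLP's SIMD style, and a thread pool stands in for TLP):

    # Hypothetical contrast of data-level vs. task-level parallelism.
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # DLP: one operation applied across many data elements at once, the
    # SIMD / vector-machine style that needs hardware and compiler support.
    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)
    c = a * b + 1.0                      # element-wise over whole arrays

    # TLP: independent tasks scheduled onto multiple cores; the hard part is
    # partitioning the work so that all cores stay busy.
    def task(chunk):
        return chunk.sum()

    chunks = np.array_split(c, 8)        # eight independent tasks
    with ThreadPoolExecutor(max_workers=8) as pool:
        total = sum(pool.map(task, chunks))
    print(total)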

Innovative Applications

  • HPC and HTC systems require transparency in data access, resource allocation, process location, concurrency, job replication, and failure recovery.

Applications Table

  • Science and engineering domain: includes scientific simulations and genomic analysis.
  • Business, education, services industry, and healthcare domains: include earthquake prediction, global warming, telecommunication, content delivery, banking and insurance, and air traffic control.
  • Internet/web services and government applications: include Internet search, data centers, traffic monitoring, and cybersecurity.
  • Mission-critical applications: include military command and control and crisis management.
  • Computing paradigms are ubiquitous, reliable, scalable, autonomic, composable with QoS/SLAs, and realize the computer utility vision.
  • Utility computing is a business model in which customers receive computing resources from a paid service provider, with grid/cloud platforms regarded as the service providers.
  • Distributed cloud applications run on any server, facing technological challenges in network efficiency, scalable memory/storage, distributed OS, new programming models, etc.

New Technologies Hype Cycle

  • Emerging technologies go through a hype cycle involving inflated expectations, disillusionment, and then gradual enlightenment to a productivity plateau.
  • The hype cycle helps in understanding the maturity and adoption timeline of new technologies like clouds, biometric authentication, and interactive TV
  • In the hype-cycle chart, hollow circles mark technologies expected to reach the mainstream within two years, gray circles within two to five years, solid circles within five to ten years, and triangles after more than ten years.

Internet of Things (IoT)

  • The concept of the IoT was introduced in 1999 at MIT and refers to the networked interconnection of everyday objects, tools, devices, or computers
  • The IoT is a sensor network that interconnects things varying in time and place using RFID or other electronic technology.

IPv6

  • The IPv6 protocol provides enough addresses to assign an IP address to every object on Earth.
  • Objects and devices are instrumented and interconnected to communicate in three patterns: human-to-human (H2H), human-to-thing (H2T), and thing-to-thing (T2T).
  • Connections can be made anywhere and anytime, at a PC or on the move, day or night.
  • The IoT is still in its infancy, with only prototype systems; cloud computing researchers expect the cloud to support such prototypes.
  • A smart Earth must have intelligent cities, abundant resources, efficient telecommunications with green IT, and good infrastructure.

Cyber-Physical

  • Cyber-physical systems (CPS) integrate computational processes with the physical world, merging the cyber technologies of computation, communication, and control with physical objects.
  • The IoT emphasizes connections among physical objects, whereas CPS emphasizes exploration of virtual reality (VR) applications in the physical world, transforming how we interact with it.
