Cloud Computing: M21DES313 MCA Sem III - Academic Year: 2023-2024 ODD Semester School of Computer Science and Applications Shreetha Bhat Assistant Professor, School of CSA, REVA University.
Unit 1 Fundamentals of Cloud Computing Lecture 1 18-12-2023
AGENDA ▪ Introduction to Cloud Computing ▪ Course Description ▪ Prerequisites ▪ Course Objective ▪ Course Outcomes ▪ Quick look at Syllabus ▪ Text Book ▪ Continuous Assessment
INTRODUCTION TO CLOUD COMPUTING 4
INTRODUCTION TO CLOUD COMPUTING The big players in the corporate computing sphere include: Google Cloud, Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and Alibaba Cloud 5
INTRODUCTION TO CLOUD COMPUTING Careers in the Cloud: Cloud Solution Architect, Cloud Developer Engineer / Cloud Software Engineer, Cloud DevOps Engineer, Cloud System Engineer / Administrator, Cloud SysOps Administrator, Cloud Product Manager, Cloud Consultant 6
INTRODUCTION TO CLOUD COMPUTING Cloud Certifications: 1. Amazon Web Services (AWS) Solutions Architect - Associate 2. Microsoft Certified: Azure Fundamentals 3. Google Associate Cloud Engineer 4. IBM Certified Solution Advisor - IBM Cloud Foundations V2 5. Cloud Security Alliance: Certificate of Cloud Security Knowledge (CCSK) 7
COURSE DESCRIPTION This course introduces the fundamental principles of cloud computing and its related paradigms. It discusses the concepts of virtualization technologies along with the architectural models of cloud computing. It presents prominent cloud computing technologies available in the marketplace. It covers concurrent, high-throughput, and data-intensive computing paradigms and their use in programming cloud applications. It also introduces the AWS Cloud with its complete environment, along with its implementation. 8
PREREQUISITES Operating systems, knowledge of virtualization concepts, networking, and coding skills. 9
COURSE OBJECTIVE The objectives of this course are: To introduce the broad perspective of cloud architecture and models.
To distinguish between the various types of virtualization. To become familiar with the lead players in the cloud. To choose the right cloud provider as per need. To learn to design trusted cloud computing systems. 10
COURSE OUTCOMES On successful completion of this course, the student shall be able to: Understand the fundamentals of cloud computing and evaluate ideas for building cloud computing environments. Explain the fundamental concepts of virtualization and analyze the characteristics of virtualized environments. Analyze existing cloud architectures to design and develop new systems, using software tools that can solve real-time problems without harming the environment. Become familiar with the AWS Cloud environment, apply the knowledge gained in developing cloud computing applications in various areas, and analyze their usage. 11
QUICK LOOK ON SYLLABUS Unit 1 - Fundamentals of Cloud Computing. Unit 2 - Fundamental Concepts and Models. Unit 3 - Cloud Infrastructure Mechanisms and Architecture. Unit 4 - AWS Cloud Platform. 12
QUICK LOOK ON SYLLABUS Unit 1 - Fundamentals of Cloud Computing: Cloud computing at a glance, the vision of cloud computing, defining a cloud, historical developments, building cloud computing environments, application development. Characteristics of cloud computing. Scalability and types of scalability; horizontal scalability and cloud computing. Computing platforms and technologies; principles of parallel and distributed computing. 13
QUICK LOOK ON SYLLABUS Unit 2 - Fundamental Concepts and Models: Basics of virtualization, characteristics of virtualized environments, taxonomy of virtualization techniques; types of virtualization - OS virtualization, application-level virtualization, programming-language virtualization, and desktop virtualization. Virtualization and cloud computing; technology examples - Xen: paravirtualization; VMware: full virtualization.
14
QUICK LOOK ON SYLLABUS Unit 3 - Cloud Infrastructure Mechanisms and Architecture: Fundamentals of cloud architecture, the cloud reference model; cloud delivery models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS); comparing cloud delivery models; cloud deployment models: public clouds, community clouds, private clouds, hybrid clouds; introduction to cloud software environments; Aneka framework overview. 15
QUICK LOOK ON SYLLABUS Unit 4 - AWS Cloud Platform: Amazon Web Services overview; working with Amazon Simple Storage Service (S3); Elastic Compute Cloud: security groups, key pairs, launching Linux and Windows instances; working with Amazon Machine Images; deploying applications to Amazon EC2; EC2 applications; Simple Queue Service; SQS applications; Elastic Block Storage; RDS; Elastic Beanstalk. 16
TEXT BOOK 17
CONTINUOUS ASSESSMENT (Sl. No / Assessment Component / Marks / Conduction Date / Results Date)
1 - IA1: Test 30 (15), Assignment 5, Seminar 5
2 - IA2: Test 30 (15), Assignment 5, Seminar 5
3 - SEE: 100 (50) 18
SUMMARY ▪ Introduction to Cloud Computing ▪ Course Description ▪ Prerequisites ▪ Course Objective ▪ Course Outcomes ▪ Quick look at Syllabus ▪ Text Book ▪ Continuous Assessment 19
Unit 1 Fundamentals of Cloud Computing Lecture 2 20-12-2023
RECAP ▪ Introduction to Cloud Computing ▪ Course Description ▪ Prerequisites ▪ Course Objective ▪ Course Outcomes ▪ Quick look at Syllabus ▪ Text Book ▪ Continuous Assessment 21
AGENDA ▪ Introduction ▪ Cloud Computing at a Glance
WHAT IS CLOUD COMPUTING ???
23
INTRODUCTION Computing means: ✓ Solving any goal-oriented activity. Examples: ▪ Designing and building hardware and software systems for a wide range of purposes ▪ Making computing systems intelligent using communications ▪ Finding and gathering information relevant to any particular purpose, and so on 24
INTRODUCTION Cloud Computing: ✓ Cloud computing is the most recent emerging paradigm promising to turn the vision of “computing utilities” into a reality. ✓ The services provided are: ▪ Storage ▪ Networking, and ▪ Information Technology (IT) infrastructure 25
INTRODUCTION Utility Services: ✓ How do we get electricity at home / the workplace? 26
INTRODUCTION Utility Services: ✓ How do we get a telephone connection at home / the workplace? 27
INTRODUCTION Utility Services: ✓ How do we get a water connection at home / the workplace? 28
INTRODUCTION Utility Services: Cloud computing is also a utility service, where services are provided over the Internet on a pay-per-use basis. 29
INTRODUCTION Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to computing resources. There are many services and features of cloud computing. 30
INTRODUCTION 31
CLOUD COMPUTING AT A GLANCE In 1969, Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET), which seeded the Internet, said: “As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of ‘computer utilities’ which, like present electric and telephone utilities, will service individual homes and offices across the country.” 32
CLOUD COMPUTING AT A GLANCE One of the most diffuse views of cloud computing can be summarized as follows: “I don’t care where my servers are, who manages them, where my documents are stored, or where my applications are hosted. I just want them always available and to access them from any device connected to the Internet.
And I am willing to pay for this service for as long as I need it.” 33 CLOUD COMPUTING AT A GLANCE Computing Services available: Software as a Service Platform as a Service Infrastructure as a Service 34 CLOUD COMPUTING AT A GLANCE Technologies Used: Web 2.0 Service Orientation Virtualization 35 SUMMARY ▪ Introduction ▪ Cloud Computing at a Glance 36 Unit 1 Fundamentals of Cloud Computing Lecture 3 26-12-2023 RECAP ▪ Introduction ▪ Cloud Computing at a Glance 38 AGENDA ▪ The Vision Of Cloud Computing Defining a cloud A closer look VISION OF CLOUD COMPUTING The long-term vision of cloud computing is that IT services are traded as utilities in an open market, without technological and legal barriers. 40 VISION OF CLOUD COMPUTING 41 DEFINING A CLOUD 42 DEFINING A CLOUD Definition by Armbrust et al. : “Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the data centers that provide those services”. 43 DEFINING A CLOUD Definition proposed by the U.S. National Institute of Standards and Technology (NIST): “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”. 44 DEFINING A CLOUD The utility-oriented nature of cloud computing is clearly expressed by Buyya et al. : “A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers”. 
45
A CLOSER LOOK Access to, as well as integration of, cloud computing resources and systems is now as easy as performing a credit card transaction over the Internet. Practical examples: Large enterprises can offload some of their activities to cloud-based systems (e.g., Amazon EC2 and S3). Small enterprises and start-ups can afford to translate their ideas into business results more quickly, without excessive up-front costs. 46
A CLOSER LOOK System developers can concentrate on the business logic rather than dealing with the complexity of infrastructure management and scalability. End users can have their documents accessible from everywhere and on any device. Cloud computing not only offers the opportunity of easily accessing IT services on demand; it also introduces a new way of thinking about IT services and resources: as utilities. 47
A CLOSER LOOK 48
A CLOSER LOOK 49
SUMMARY ✓ The Vision of Cloud Computing ✓ Defining a Cloud ✓ A Closer Look 50
Unit 1 Fundamentals of Cloud Computing Lecture 4 20-12-2022
RECAP ✓ The Vision of Cloud Computing ✓ Defining a Cloud ✓ A Closer Look 52
AGENDA A Closer Look; Historical Developments
A CLOSER LOOK 54
THE CLOUD COMPUTING REFERENCE MODEL 55
Unit 1 Fundamentals of Cloud Computing Lecture 5 28-12-2023
RECAP ✓ The Vision of Cloud Computing ✓ Defining a Cloud 57
AGENDA Historical Developments
HISTORICAL DEVELOPMENTS 59
HISTORICAL DEVELOPMENTS Five core technologies played an important role in the realization of cloud computing: 1. Distributed Systems 2. Virtualization 3. Web 2.0 4. Service Orientation 5. Utility Computing 60
HISTORICAL DEVELOPMENTS 1. Distributed Systems: “A distributed system, also known as distributed computing, is a system with multiple components located on different machines that communicate and coordinate actions in order to appear as a single coherent system to the end user.” 61
HISTORICAL DEVELOPMENTS 1.
Distributed Systems: This definition evidences two very important elements characterizing a distributed system: i. It is composed of multiple independent components, and these components are perceived as a single entity by users. ii. The primary purpose of distributed systems is to share resources and utilize them better. 62
HISTORICAL DEVELOPMENTS 1. Distributed Systems: Examples: i. Telephone and cellular networks ii. Peer-to-peer networks 63
HISTORICAL DEVELOPMENTS 1. Distributed Systems: Benefits and challenges: i. Horizontal scalability ii. Reliability iii. Performance 64
HISTORICAL DEVELOPMENTS 1. Distributed Systems: Three major milestones have led to cloud computing: i. Mainframes ii. Clusters iii. Grids 65
HISTORICAL DEVELOPMENTS 1. Distributed Systems: i. Mainframes 66
HISTORICAL DEVELOPMENTS 1. Distributed Systems: i. Mainframes 67
HISTORICAL DEVELOPMENTS 1. Distributed Systems: ii. Clusters 68
HISTORICAL DEVELOPMENTS 1. Distributed Systems: iii. Grids 69
HISTORICAL DEVELOPMENTS 1. Distributed Systems: iii. Grids Grid computing is a group of networked computers that work together as a virtual supercomputer to perform large tasks, such as analysing huge sets of data or weather modeling. 70
HISTORICAL DEVELOPMENTS 1. Distributed Systems: iii. Grids 71
HISTORICAL DEVELOPMENTS 2. Virtualization: Virtualization is technology that lets you create useful IT services using resources that are traditionally bound to hardware. It allows you to use a physical machine’s full capacity by distributing its capabilities among many users or environments. 72
HISTORICAL DEVELOPMENTS 2. Virtualization: 73
HISTORICAL DEVELOPMENTS 2. Virtualization: How does virtualization work? 74
HISTORICAL DEVELOPMENTS 75
Unit 1 Fundamentals of Cloud Computing Lecture 6 29-12-2023
RECAP ✓ Historical Developments o Distributed Systems o Virtualization 77
AGENDA Historical Developments o Web 2.0 o Service Orientation o Utility Computing
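The distributed-system definition from the earlier slides (multiple independent components that users perceive as a single coherent entity) can be illustrated with a minimal sketch using only the Python standard library. The service, handler, and function names here are invented for illustration: several handler threads serve requests independently, while the client talks to one address and sees a single service.

```python
# Toy "distributed" service: independent handler components behind
# one front-end address, perceived by the client as a single entity.
import socket
import socketserver
import threading

class SquareHandler(socketserver.StreamRequestHandler):
    # Each connection is served by an independent handler thread,
    # standing in for an independent component of the system.
    def handle(self):
        n = int(self.rfile.readline().strip())
        self.wfile.write(f"{n * n}\n".encode())

def start_server():
    # Port 0 lets the OS pick a free port for this sketch.
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), SquareHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def ask(port, n):
    # The caller addresses one endpoint, regardless of how many
    # handler threads actually serve the requests behind it.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(f"{n}\n".encode())
        return int(sock.makefile().readline())

server = start_server()
port = server.server_address[1]
print([ask(port, n) for n in (2, 3, 4)])  # -> [4, 9, 16]
server.shutdown()
```

The point of the sketch is the user-side view: the client never learns how many components exist, which mirrors the "single coherent system" element of the definition.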
HISTORICAL DEVELOPMENTS 3. Web 2.0: Describes the second generation of the World Wide Web. Moved static HTML pages to a more interactive and dynamic web experience. Focused on the ability for people to collaborate and share information online via social media, blogging, and web-based communities. New tools made it possible for nearly anyone to contribute, regardless of their technical knowledge. Web 2.0 is pronounced "web two point o". Makes Internet applications seem local. 79
HISTORICAL DEVELOPMENTS 3. Web 2.0: A new paradigm in web interaction; a fundamental change in how developers create websites and, more importantly, in how people interact with those websites. It will be akin to an artificial intelligence which understands context rather than simply comparing keywords, as is currently the case. Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, Blogger, and Wikipedia. 80
HISTORICAL DEVELOPMENTS 4. Service-Oriented Computing: ✓ Service-Oriented Computing (SOC) is the computing paradigm that utilizes services as the fundamental elements for developing applications and solutions. ✓ SOC supports the development of rapid, low-cost, flexible, interoperable, and evolvable applications and systems. ✓ A service is supposed to be loosely coupled, reusable, programming-language independent, and location transparent. 81
HISTORICAL DEVELOPMENTS 4. Service-Oriented Computing: Service-oriented computing introduces and diffuses three important concepts, which are also fundamental to cloud computing: i. Quality of Service (QoS) ii. Software-as-a-Service (SaaS) iii. Web Services 82
HISTORICAL DEVELOPMENTS 4. Service-Oriented Computing: i. Quality of Service (QoS): Identifies a set of functional and non-functional attributes that can be used to evaluate the behavior of a service from different perspectives: performance metrics such as response time, as well as security attributes, transactional integrity, reliability, scalability, and availability. 83
HISTORICAL DEVELOPMENTS 4. Service-Oriented Computing: ii. Software-as-a-Service (SaaS): The term has been inherited from the world of application service providers (ASPs), which deliver software service-based solutions across the wide area network from a central data center and make them available on a subscription or rental basis. 84
HISTORICAL DEVELOPMENTS 4. Service-Oriented Computing: iii. Web Services: Web services provide a common platform that allows multiple applications built in various programming languages to communicate with each other. Definition: a web service is a standardized medium to propagate communication between client and server applications on the World Wide Web. 85
HISTORICAL DEVELOPMENTS Popular web services protocols: ✓ SOAP: SOAP (Simple Object Access Protocol) was developed as an intermediate language so that applications built in various programming languages could talk quickly to each other and avoid extreme development effort. 86
HISTORICAL DEVELOPMENTS Popular web services protocols: ✓ WSDL: WSDL (Web Services Description Language) is an XML-based file that tells the client application what the web service does and gives all the information required to connect to the web service. 87
HISTORICAL DEVELOPMENTS Popular web services protocols: ✓ REST: REST stands for REpresentational State Transfer. REST is used to build web services that are lightweight, maintainable, and scalable. 88
HISTORICAL DEVELOPMENTS Popular web services protocols: iii. Web Services 89
HISTORICAL DEVELOPMENTS Popular web services protocols: 90
HISTORICAL DEVELOPMENTS 5.
Utility-Oriented Computing: “Utility computing is a vision of computing that defines a service-provisioning model for compute services in which resources such as storage, compute power, applications, and infrastructure are packaged and offered on a pay-per-use basis.” 91
BUILDING CLOUD COMPUTING ENVIRONMENTS The creation of cloud computing environments encompasses both the development of applications and systems that leverage cloud computing solutions and the creation of frameworks, platforms, and infrastructures delivering cloud computing services. 92
APPLICATION DEVELOPMENT Applications that leverage cloud computing benefit from its capability to dynamically scale on demand: web applications and resource-intensive applications (data-intensive / compute-intensive). 93
APPLICATION DEVELOPMENT Cloud computing provides a solution for on-demand and dynamic scaling across the entire computing stack. This is achieved by: a) providing methods for renting compute power, storage, and networking; b) offering runtime environments designed for scalability and dynamic sizing; and c) providing application services that mimic the behavior of desktop applications but are completely hosted and managed on the provider side. 94
CHARACTERISTICS OF CLOUD COMPUTING Cloud computing has some interesting characteristics that bring benefits to both cloud service consumers (CSCs) and cloud service providers (CSPs).
These characteristics are: ✓ No up-front commitments ✓ On-demand access ✓ Nice pricing 95 CHARACTERISTICS OF CLOUD COMPUTING ✓ Simplified application acceleration and scalability ✓ Efficient resource allocation ✓ Energy efficiency ✓ Seamless creation and use of third-party services 96 SUMMARY ✓ Historical Developments o Service Orientation o Utility Computing ✓ Building Cloud Computing Environments ✓ Application Development ✓ Characteristics of Cloud Computing 97 Unit 1 Fundamentals of Cloud Computing Lecture 8 02 – 01 - 2024 RECAP ✓ Historical Developments o Service Orientation o Utility Computing ✓ Building Cloud Computing Environments ✓ Application Development ✓ Characteristics of Cloud Computing 99 AGENDA Scalability Computing Platforms and Technologies Principles of Parallel and Distributed Computing o Eras of Computing o Parallel v/s Distributed Computing o What is Parallel Processing? o Hardware architectures for Parallel Processing SCALABILITY Scalability in the context of cloud computing can be defined as the ability to handle growing or diminishing resources to meet business demands in a capable way. The different types of scaling available in the cloud: 1. Horizontal Scaling (Scaling In/Out) 2. Vertical Scaling (Scaling Up/Down) 3. Diagonal Scaling 101 SCALABILITY 1. Horizontal scaling (Scaling In/Out) 102 SCALABILITY 2. Vertical Scaling (Up / Down) 103 SCALABILITY 3. Diagonal Scaling (Horizontal+Vertical) 104 SCALABILITY 105 COMPUTING PLATFORMS AND TECHNOLOGIES Cloud computing services are automated which means there exists an application that is designed to deliver various cloud services without human intervention. Suitable platforms and frameworks are required to develop cloud applications. o Amazon Web services (AWS) o Google AppEngine o Microsoft Azure o Hadoop o Force.com and Salesforce.com o Manjrasoft Aneka 106 COMPUTING PLATFORMS AND TECHNOLOGIES Amazon Web services (AWS) ▪ Provides IaaS services (virtual compute, storage, and networking). 
▪ AWS is mostly known for its compute and storage-on-demand services: ▪ Compute - Elastic Compute Cloud (EC2), and ▪ Storage - Simple Storage Service (S3). 107
COMPUTING PLATFORMS AND TECHNOLOGIES Google AppEngine ▪ A scalable runtime environment mostly devoted to executing web applications (PaaS). ▪ Developers can build and test applications on their own machines using the AppEngine software development kit (SDK). ▪ The services include in-memory caching, a scalable data store, job queues, and messaging. ▪ The languages currently supported are Python, Java, and Go. 108
COMPUTING PLATFORMS AND TECHNOLOGIES Microsoft Azure ▪ Microsoft Azure is a cloud operating system and a platform for developing applications in the cloud. ▪ Applications in Azure are organized around the concept of roles: web role, worker role, and virtual machine role. ▪ Besides roles, Azure provides a set of additional services that complement application execution, such as support for storage (relational data and blobs), networking, caching, content delivery, and others. 109
COMPUTING PLATFORMS AND TECHNOLOGIES Hadoop 1. Apache Hadoop is an open-source framework that is suited for processing large data sets on commodity hardware. 2. Hadoop is an implementation of MapReduce, a programming model developed at Google. 3. MapReduce provides two fundamental operations for data processing: map and reduce. 4. The former transforms and synthesizes the input data provided by the user; the latter aggregates the output obtained by the map operations. 110
COMPUTING PLATFORMS AND TECHNOLOGIES Hadoop 5. Hadoop provides the runtime environment, and developers need only provide the input data and specify the map and reduce functions that need to be executed. 6. Its primary objective was to implement large-scale search and text processing on massively scalable web data stored using BigTable or the GFS distributed file system. 7. Huge amounts of data are processed using parallel computation, utilizing tens of thousands of processors at a time. 111
PROGRAMMING MODEL - MAPREDUCE MapReduce: a programming model developed at Google. Objective: implement large-scale search and text processing on massively scalable web data stored using BigTable and the GFS (Google File System) distributed file system. Designed for processing and generating large volumes of data via massively parallel computations, utilizing tens of thousands of processors at a time. 112
PROGRAMMING MODEL - MAPREDUCE Fault tolerant: ensures progress of the computation even if processors and networks fail. Example: Hadoop, an open-source implementation of MapReduce (developed at Yahoo), available as pre-packaged AMIs on the Amazon EC2 cloud platform. 113
PROGRAMMING MODEL - MAPREDUCE MapReduce programs work in two phases: 1. Map phase 2. Reduce phase. The input to each phase is key-value pairs. In addition, the programmer needs to specify two functions: a map function and a reduce function. 114
PROGRAMMING MODEL - MAPREDUCE Example: consider the following input data for your MapReduce program: Welcome to Hadoop Class / Hadoop is good / Hadoop is bad 115
PROGRAMMING MODEL - MAPREDUCE 116
PROGRAMMING MODEL - MAPREDUCE The final output of the MapReduce task is: bad 1, Class 1, good 1, Hadoop 3, is 2, to 1, Welcome 1 117
COMPUTING PLATFORMS AND TECHNOLOGIES Force.com and Salesforce.com ▪ Force.com is a cloud computing platform for developing social enterprise applications. ▪ Salesforce.com is a Software-as-a-Service solution for customer relationship management. ▪ The platform provides complete support for developing applications, from the design of the data layout to the definition of business rules and workflows and the definition of the user interface.
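The MapReduce word-count walkthrough above can be sketched in plain Python, with ordinary functions standing in for Hadoop's map and reduce phases. This is an illustrative toy, not the Hadoop API: the map phase emits (word, 1) pairs, a shuffle step groups them by key, and the reduce phase sums each group.

```python
# Toy MapReduce word count mirroring the slides' example input.
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all values by key before reducing.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: aggregate the per-word counts into a total.
    return (key, sum(values))

lines = ["Welcome to Hadoop Class", "Hadoop is good", "Hadoop is bad"]
pairs = [p for line in lines for p in map_phase(line)]
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(result)  # e.g. Hadoop -> 3, is -> 2, matching the final output above
```

In real Hadoop the shuffle is performed by the framework between the map and reduce phases; only the two user-supplied functions correspond to code a developer writes.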
118
COMPUTING PLATFORMS AND TECHNOLOGIES 119
COMPUTING PLATFORMS AND TECHNOLOGIES Manjrasoft Aneka ▪ Manjrasoft Aneka is a cloud application platform for the rapid creation of scalable applications and their deployment on various types of clouds in a seamless and elastic manner. ▪ Developers can choose different abstractions to design their applications: tasks, distributed threads, and map-reduce. 120
Unit 1 Fundamentals of Cloud Computing Lecture 9 09-01-2024
RECAP Scalability; Computing Platforms and Technologies 122
AGENDA Principles of Parallel and Distributed Computing o Eras of Computing o Parallel vs. Distributed Computing o What is Parallel Processing? o Hardware Architectures for Parallel Processing 123
PRINCIPLES OF PARALLEL AND DISTRIBUTED COMPUTING Eras of computing: every aspect of this era underwent a three-phase process: 1. Research and Development (R&D) 2. Commercialization 3. Commoditization 124
PARALLEL VS. DISTRIBUTED COMPUTING Parallel Processing; Distributed Processing 125
PARALLEL VS. DISTRIBUTED COMPUTING 126
WHAT IS PARALLEL PROCESSING? ▪ Processing of multiple tasks simultaneously on multiple processors is called parallel processing. ▪ Programming on a multiprocessor system using the divide-and-conquer technique is called parallel programming. ▪ Driven by computational requirements in areas related to life sciences, aerospace, geographical information systems, mechanical design and analysis, and the like. 127
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories: 1. Single-Instruction, Single-Data (SISD) systems 2. Single-Instruction, Multiple-Data (SIMD) systems 3. Multiple-Instruction, Single-Data (MISD) systems 4. Multiple-Instruction, Multiple-Data (MIMD) systems 128
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING 1.
Single-Instruction, Single-Data (SISD) systems: A uniprocessor machine; instructions are processed sequentially; based on the von Neumann architecture. Examples of SISD systems are the IBM PC, Macintosh, and workstations. 129
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING Von Neumann architecture: uses a single processor; uses one memory for both instructions and data; executes programs following the fetch-decode-execute cycle. 130
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING 2. Single-Instruction, Multiple-Data (SIMD) systems: Includes many processing units under the supervision of a common control unit. All processors receive the same instruction from the control unit but operate on different items of data. 131
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING 3. Multiple-Instruction, Single-Data (MISD) systems: An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set. Example: Y = sin(x) + cos(x) + tan(x) 132
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING 4. Multiple-Instruction, Multiple-Data (MIMD) systems: An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets. Used in computer-aided design/computer-aided manufacturing, simulation, modeling, communication switches, etc. 133
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING 4. Multiple-Instruction, Multiple-Data (MIMD) systems: MIMD machines are broadly categorized into two types based on the way PEs are coupled to the main memory: 1. Shared-memory MIMD (tightly coupled multiprocessor system), e.g., Silicon Graphics machines and Sun/IBM SMP (Symmetric Multi-Processing). 2.
Distributed-memory MIMD (loosely coupled multiprocessor system) 134
AGENDA Principles of Parallel and Distributed Computing: Hardware Architectures for Parallel Processing; Architectural Styles for Distributed Computing; Software Architectural Styles
APPROACHES TO PARALLEL PROGRAMMING Parallel programming: a program is divided into smaller, independent chunks so that each processor can work on a separate chunk of the problem. Parallel programming approaches: 1. Data Parallelism (Divide-and-Conquer) 2. Process Parallelism 3. Farmer-and-Worker Model (job distribution approach) 136
LEVELS OF PARALLELISM Levels of parallelism are decided based on the lumps of code (grain size) that can be potential candidates for parallelism. 137
LEVELS OF PARALLELISM Parallelism within an application can be detected at several levels: large grain (or task level); medium grain (or control level); fine grain (data level); very fine grain (multiple-instruction issue) 138
DISTRIBUTED COMPUTING A distributed system is a collection of independent computers that appears to its users as a single coherent system. Distributed computing studies the models, architectures, and algorithms used for building and managing distributed systems. 139
DISTRIBUTED COMPUTING 140
DISTRIBUTED COMPUTING 141
ARCHITECTURAL STYLES FOR DISTRIBUTED COMPUTING There are many different ways to organize the components that, taken together, constitute a distributed environment. Architectural styles help in understanding and classifying the organization of software systems in general and distributed computing in particular. 142
COMPONENTS AND CONNECTORS A component represents a unit of software that encapsulates a function or a feature of the system. Examples of components are programs, objects, and processes. A connector is a communication mechanism that allows cooperation and coordination among components. 143
ARCHITECTURAL STYLES FOR DISTRIBUTED COMPUTING Architectural styles are organized into two major classes: 1.
Software architectural styles (based on the logical arrangement of software components) 2. System architectural styles (based on the physical arrangement of components) 144
SOFTWARE ARCHITECTURAL STYLES According to Garlan and Shaw, architectural styles are classified as shown in the table below. 145
SOFTWARE ARCHITECTURAL STYLES a) Data-Centred Architectures: The goal is to achieve data integrity. 146
SOFTWARE ARCHITECTURAL STYLES a) Data-Centred Architectures: Types of components. There are two types of components: 1. A central data structure (data store or data repository), which is responsible for providing permanent data storage and represents the current state. 2. A data accessor, or a collection of independent components, that operate on the central data store, perform computations, and might put back the results. 147
SOFTWARE ARCHITECTURAL STYLES The flow of control differentiates the architecture into two categories: 1. Repository Architecture Style 2. Blackboard Architecture Style 148
SOFTWARE ARCHITECTURAL STYLES 1. Repository Architecture Style. Examples: information systems, programming environments, graphical editors, AI knowledge bases, reverse engineering systems. 149
SOFTWARE ARCHITECTURAL STYLES Advantages ▪ Provides data integrity, backup, and restore features. ▪ Provides scalability and reusability of agents, as they do not communicate directly with each other. Disadvantages ▪ It is more vulnerable to failure, and data replication or duplication is possible. ▪ High dependency between the data structure of the data store and its agents. ▪ Changes in the data structure highly affect the clients. ▪ Evolution of data is difficult and expensive. ▪ Cost of moving data over the network for distributed data. 150
SOFTWARE ARCHITECTURAL STYLES 2. Blackboard Architecture Style. Examples: AI applications and complex applications such as speech recognition, image recognition, security systems, and business resource management systems. 151
SOFTWARE ARCHITECTURAL STYLES 2.
Blackboard Architecture Style: Parts of the Blackboard Model The blackboard model is usually presented with three major parts − i. Knowledge Sources (KS) Knowledge Sources, also known as Listeners or Subscribers, are distinct and independent units. They solve parts of a problem and aggregate partial results. ii. Blackboard Data Structure This represents the data structure that is shared among the knowledge sources and stores the knowledge base of the application. iii. Control Control manages tasks and checks the work state. 152 SOFTWARE ARCHITECTURAL STYLES 2. Blackboard Architecture Style: Parts of the Blackboard Model Advantages Provides scalability, making it easy to add or update knowledge sources. Provides concurrency that allows all knowledge sources to work in parallel, as they are independent of each other. Supports reusability of knowledge source agents. Disadvantages A structural change of the blackboard may have a significant impact on all of its agents, as a close dependency exists between the blackboard and the knowledge sources. It can be difficult to decide when to terminate the reasoning, as only an approximate solution is expected. 153 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: The main objective of this approach is to achieve the qualities of reuse and modifiability. It is suitable for applications such as compilers and business data processing applications. There are three types of execution sequences between modules: 1. Batch Sequential 2. Pipe and Filter 3. Process Control 154 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 1. Batch Sequential Batch sequential is a classical data processing model, in which a data transformation subsystem can initiate its process only after its previous subsystem is completely through − 155 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 1. Batch Sequential Example: 156 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 1. 
Batch Sequential Example: 157 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 1. Batch Sequential Advantages Provides simpler divisions into subsystems. Each subsystem can be an independent program working on input data and producing output data. Disadvantages High latency and low throughput. Does not provide concurrency or an interactive interface. External control is required for implementation. 158 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 2. Pipe and Filter This approach lays emphasis on the incremental transformation of data by successive components. The whole system is decomposed into components of data sources, filters, pipes, and data sinks. The main feature of this architecture is its concurrent and incremental execution. 159 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 2. Pipe and Filter Filter: A filter is an independent data stream transformer or stream transducer. It transforms the data of the input data stream, processes it, and writes the transformed data stream over a pipe for the next filter to process. There are two types of filters − Active filter and Passive filter. 160 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 2. Pipe and Filter Active filter An active filter lets connected pipes pull data in and push out the transformed data. It operates with a passive pipe, which provides read/write mechanisms for pulling and pushing. This mode is used in the UNIX pipe-and-filter mechanism. Passive filter A passive filter lets connected pipes push data in and pull data out. It operates with an active pipe, which pulls data from a filter and pushes data into the next filter. It must provide a read/write mechanism. 161 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 2. Pipe and Filter 162 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 2. Pipe and Filter Advantages Provides concurrency and high throughput for excessive data processing. Provides reusability and simplifies system maintenance. 
Provides simplicity by offering clear divisions between any two filters connected by a pipe. Provides flexibility by supporting both sequential and parallel execution. Disadvantages Not suitable for dynamic interactions. A lowest common denominator is needed for transmission of data, typically in ASCII format. Overhead of data transformation between filters. Difficult to configure this architecture dynamically. 163 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: 2. Pipe and Filter Pipe Pipes are stateless; they carry the binary or character stream that exists between two filters. A pipe can move a data stream from one filter to another. Pipes use little contextual information and retain no state information between instantiations. 164 SOFTWARE ARCHITECTURAL STYLES b) Data Flow Architectures: Process Control It is a type of data flow architecture where the data is neither a sequential batch nor a pipelined stream. The flow of data comes from a set of variables, which controls the execution of the process. It decomposes the entire system into subsystems or modules and connects them. 165 SOFTWARE ARCHITECTURAL STYLES c) Virtual Machine Architectures: 166 SOFTWARE ARCHITECTURAL STYLES c) Virtual Machine Architectures: Popular examples within this category are 1. Rule-based Style, 2. Interpreter Style 3. Command Language Processors 167 SOFTWARE ARCHITECTURAL STYLES c) Virtual Machine Architectures: 1. Rule-based Style This architecture is characterized by representing the abstract execution environment as an inference engine. Programs are expressed in the form of rules or predicates that hold true. The input data for applications is generally represented by a set of assertions or facts that the inference engine uses to activate rules or to apply predicates, thus transforming data. 168 SOFTWARE ARCHITECTURAL STYLES c) Virtual Machine Architectures: 2. 
Interpreter Style The core feature of the interpreter style is the presence of an engine that is used to interpret a pseudo-program expressed in a format acceptable for the interpreter. The interpretation of the pseudo-program constitutes the execution of the program itself. 169 SOFTWARE ARCHITECTURAL STYLES c) Virtual Machine Architectures: 2. Interpreter Style Systems modeled according to this style exhibit four main components: i. the interpretation engine that executes the core activity of this style, ii. an internal memory that contains the pseudo-code to be interpreted, iii. a representation of the current state of the engine, and iv. a representation of the current state of the program being executed. This model is quite useful in designing virtual machines for high-level programming (Java, C#) and scripting languages (Awk, Perl, and so on) 170 SOFTWARE ARCHITECTURAL STYLES d) Call and Return Architectures: The flow of control is governed by procedure calls and their returns. Three Architectural Styles: 1. Top-Down Style 2. Object-Oriented Style 3. Layered Style 171 SOFTWARE ARCHITECTURAL STYLES d) Call and Return Architectures: 1. Top-Down Style Follows a divide-and-conquer approach to problem resolution. Systems developed according to this style are composed of one large main program that accomplishes its tasks by invoking subprograms or procedures. The components in this style are procedures and subprograms, and the connections are method calls or invocations. Invocations: The calling program passes information through parameters and receives data via return values or parameters. 172 SOFTWARE ARCHITECTURAL STYLES d) Call and Return Architectures: 2. Object-Oriented Style Systems are specified in terms of classes and implemented in terms of objects. Classes define the type of components by specifying the data that represent their state and the operations that can be done over these data. 173 SOFTWARE ARCHITECTURAL STYLES d) Call and Return Architectures: 2. 
Object-Oriented Style Advantages: There is a coupling between data and the operations used to manipulate them. Object instances become responsible for hiding their internal state representation and for protecting their integrity while providing operations to other components. This leads to a better decomposition process and more manageable systems. Disadvantages of this style are mainly two: each object needs to know the identity of an object if it wants to invoke operations on it, and shared objects need to be carefully designed in order to ensure the consistency of their state. 174 SOFTWARE ARCHITECTURAL STYLES d) Call and Return Architectures: 2. Object-Oriented Style Disadvantages: i. Each object needs to know the identity of an object if it wants to invoke operations on it, and ii. shared objects need to be carefully designed in order to ensure the consistency of their state. 175 SOFTWARE ARCHITECTURAL STYLES d) Call and Return Architectures: 3. Layered Style The layered system style allows the design and implementation of software systems in terms of layers, which provide different levels of abstraction of the system. Each layer generally operates with at most two layers: i. the one that provides a lower abstraction level and ii. the one that provides a higher abstraction level. Specific protocols and interfaces define how adjacent layers interact. 176 SOFTWARE ARCHITECTURAL STYLES e) Architectural Styles based on Independent Components: This class of architectural style models systems in terms of independent components that have their own life cycles and interact with each other to perform their activities. There are two major categories, which differ in the way the interaction among components is managed. Two Architectural Styles: 1. Communicating Processes (Components = Processes, Connector = IPC Facilities) 2. 
Event Systems: (“a significant change in state”) 177 SOFTWARE ARCHITECTURAL STYLES e) Architectural Styles based on Independent Components: Communicating Processes (Components = Processes, Connector = IPC Facilities) Components are represented by independent processes that leverage IPC facilities for coordination management. This is an abstraction that is quite suitable to modeling distributed systems that, being distributed over a network of computing nodes, are necessarily composed of several concurrent processes. Each of the processes provides other processes with services and can leverage the services exposed by the other processes. 178 SOFTWARE ARCHITECTURAL STYLES e) Architectural Styles based on Independent Components: Event Systems The components of the system are loosely coupled and connected. In addition to exposing operations for data and state manipulation, each component also publishes (or announces) a collection of events with which other components can register. In general, other components provide a callback that will be executed when the event is activated. 179 SYSTEM ARCHITECTURAL STYLES System architectural styles cover the physical organization of components and processes over a distributed infrastructure. Two fundamental reference styles are: a. Client-Server b. Peer-to-Peer 180 SUMMARY Principles of Parallel and Distributed Computing Hardware architectures for Parallel Processing Architectural Styles for Distributed Computing Software Architectural Styles 181 Unit 1 Fundamentals of Cloud Computing Lecture 10 17-01-2024 RECAP Principles of Parallel and Distributed Computing Hardware architectures for Parallel Processing Architectural Styles for Distributed Computing Software Architectural Styles 183 AGENDA System Architectural Styles Models for Inter-Process Communication SYSTEM ARCHITECTURAL STYLES System architectural styles cover the physical organization of components and processes over a distributed infrastructure. 
They provide a set of reference models for the deployment of such systems and help engineers not only have a common vocabulary in describing the physical layout of systems but also quickly identify the major advantages and drawbacks of a given deployment and whether it is applicable for a specific class of applications. Two fundamental reference styles: 1. Client-Server 2. Peer-to-Peer 185 SYSTEM ARCHITECTURAL STYLES a. Client-Server: Important Operations: Client – request, accept; Server – listen, respond Client Design: Two major models 1. Thin-Client Model 2. Fat-Client Model 186 SYSTEM ARCHITECTURAL STYLES a. Client-Server: The three major components in the client-server model: presentation, application logic, and data storage. The mapping between the conceptual layers and their physical implementation in modules and components allows differentiating among several types of architectures, which garner the name of multitiered architectures. Two major classes exist: 1. Two-Tier Architecture 2. Three-Tier/N-Tier Architecture 187 SYSTEM ARCHITECTURAL STYLES b. Peer-to-Peer: Ex: File sharing applications such as Gnutella, BitTorrent, and Kazaa. 188 MODELS FOR INTER-PROCESS COMMUNICATION IPC is a fundamental aspect of distributed systems design and implementation. IPC is used to either exchange data and information or coordinate the activity of processes. There are several different models in which processes can interact with each other: shared memory, remote procedure call (RPC), and message passing. 189 MODELS FOR INTER-PROCESS COMMUNICATION Message Based Communication: a. Message Passing (Ex: Message Passing Interface (MPI), OpenMP) b. Remote Procedure Call c. Distributed Objects d. Distributed Agents (software entities, designed to execute as independent threads and on distributed processors, capable of acting autonomously in order to achieve a pre-defined task). e. Web Services (SOAP and REST Services). 
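The client-server operations noted above (client: request, accept; server: listen, respond) can be sketched with plain TCP sockets. This is a minimal illustration, not from the slides: the echo service, loopback address, and single-connection-at-a-time server loop are assumptions made for the demo.

```python
import socket
import threading

def run_server(server_sock):
    # Server role: listen for each client's request and respond to it.
    while True:
        conn, _ = server_sock.accept()            # accept a client connection
        with conn:
            request = conn.recv(1024)             # listen: read the request
            conn.sendall(b"echo: " + request)     # respond to the client

def send_request(host, port, message):
    # Client role: send a request and accept the server's response.
    with socket.create_connection((host, port)) as client:
        client.sendall(message)                   # request
        return client.recv(1024)                  # accept the response

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))                # port 0: OS picks a free port
server_sock.listen()
port = server_sock.getsockname()[1]
threading.Thread(target=run_server, args=(server_sock,), daemon=True).start()

print(send_request("127.0.0.1", port, b"hello"))  # b'echo: hello'
```

Running server and client in one process keeps the sketch self-contained; in a real deployment the two roles sit on different nodes, which is exactly the physical separation the client-server style describes.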
190 MODELS FOR INTER-PROCESS COMMUNICATION Message Based Communication: a. Point-to-point message model (Direct Communication, Queue Based Communication) b. Publish-and-subscribe message model (Push and Pull Strategy) c. Request-reply message model 191 TECHNOLOGIES FOR DISTRIBUTED COMPUTING The relevant technologies that provide concrete implementations of the interaction models, which mostly rely on Message Based Communication: 1. Remote Procedure Call (RPC) 2. Distributed Object Frameworks 3. Service-Oriented Computing 192 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 1. Remote Procedure Call (RPC) 193 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 1. Remote Procedure Call (RPC) Developing a system leveraging RPC for IPC consists of the following steps: ✓ Design and implementation of the server procedures that will be exposed for remote invocation. ✓ Registration of remote procedures with the RPC server on the node where they will be made available. ✓ Design and implementation of the client code that invokes the remote procedure(s). 194 TECHNOLOGIES FOR DISTRIBUTED COMPUTING For instance, RPyC is an RPC implementation for Python. There also exist platform-independent solutions such as XML-RPC and JSON-RPC, which provide RPC facilities over XML and JSON, respectively. 195 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 2. Distributed Object Frameworks: Distributed Object Framework refers to a technology that allows many different products, using many different standards, to work together and share information effortlessly across many different networks (e.g., LAN, WAN, Intranet, and Internet). 
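The three RPC development steps listed above can be illustrated with Python's standard-library XML-RPC support (XML-RPC being one of the platform-independent solutions mentioned). The `add` procedure and the loopback address are illustrative choices, not part of the slides.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Step 1: design and implement the server procedure exposed for
# remote invocation.
def add(a, b):
    return a + b

# Step 2: register the remote procedure with the RPC server on the
# node where it will be made available (here, the loopback interface).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Step 3: the client code invokes the remote procedure as if it
# were a local call; marshalling to XML happens behind the proxy.
port = server.server_address[1]
client = ServerProxy(f"http://127.0.0.1:{port}")
print(client.add(2, 3))  # 5
```

The proxy object hides the network entirely, which is the defining property of the RPC model: the caller's code looks identical to a local procedure call.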
Distributed object frameworks extend object-oriented programming systems by allowing objects to be distributed across a heterogeneous network and provide facilities so that they can coherently act as though they were in the same address space. 196 TECHNOLOGIES FOR DISTRIBUTED COMPUTING The common interaction pattern is the following: 197 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 198 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 2. Distributed Object Frameworks: Object activation - the creation of a remote object. Various strategies can be used to manage object activation, from which we can distinguish two major classes: 1. Server-based activation and 2. Client-based activation. 199 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 2. Distributed Object Frameworks: In server-based activation, the active object is created in the server process and registered as an instance that can be exposed beyond process boundaries. In this case, the active object has a life of its own and occasionally executes methods as a consequence of a remote method invocation. In client-based activation, the active object does not originally exist on the server side; it is created when a request for method invocation comes from a client. 200 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 2. Distributed Object Frameworks: Examples Common Object Request Broker Architecture (CORBA), Distributed Component Object Model (DCOM/COM+), Java Remote Method Invocation (RMI), .NET Remoting 201 TECHNOLOGIES FOR DISTRIBUTED COMPUTING 3. Service-Oriented Computing Service-oriented computing organizes distributed systems in terms of services. Service: A service encapsulates a software component that provides a set of coherent and related functionalities that can be reused and integrated into bigger and more complex applications. 202 TECHNOLOGIES FOR DISTRIBUTED COMPUTING Service-Oriented Architecture Service-Oriented Architecture (SOA) is an architectural approach in which applications make use of services available in the network. 
In this architecture, services are provided to form applications, through a communication call over the internet. 203 TECHNOLOGIES FOR DISTRIBUTED COMPUTING Service-Oriented Architecture: There are two major roles within Service-Oriented Architecture: 1. Service provider 2. Service consumer 204 TECHNOLOGIES FOR DISTRIBUTED COMPUTING Web Services: Web services are a standardized way or medium to propagate communication between client and server applications on the World Wide Web. The various components of web services are SOAP, WSDL, and REST. 205 TECHNOLOGIES FOR DISTRIBUTED COMPUTING Web Services: Popular Web Services Protocols are: SOAP: SOAP is known as the Simple Object Access Protocol. SOAP was developed as an intermediate language so that applications built on various programming languages could talk to each other easily and avoid extreme development effort. 206 TECHNOLOGIES FOR DISTRIBUTED COMPUTING Web Services: Popular Web Services Protocols are: WSDL: WSDL is known as the Web Services Description Language (WSDL). WSDL is an XML-based file which tells the client application what the web service does and gives all the information required to connect to the web service. REST: REST stands for REpresentational State Transfer. REST is used to build Web services that are lightweight, maintainable, and scalable. 207 WEB SERVICES? 208 SUMMARY System Architectural Styles Models for Inter-Process Communication 209 Unit 1 Fundamentals of Cloud Computing Lecture 10 03-01-2023 RECAP System Architectural Styles Models for Inter-Process Communication 211 AGENDA Web Services WEB SERVICES? Types of Web Service There are mainly two types of web services. 1. SOAP web services. 2. RESTful web services. 213 WEB SERVICES? SOAP SOAP is known as a transport-independent messaging protocol. SOAP is based on transferring XML data as SOAP Messages. 214 WEB SERVICES? Each SOAP document needs to have a root element known as the Envelope element. 
The root element is the first element in an XML document. The envelope is in turn divided into two parts. The first is the header, and the next is the body. The header contains the routing data - the information that tells which client the XML document needs to be sent to. The body contains the actual message. 215 WEB SERVICES? SOAP Unicode Transformation Format 216 WEB SERVICES? WSDL WSDL is an XML-based file which basically tells the client application what the web service does. It is known as the Web Services Description Language (WSDL). The WSDL file contains the location of the web service and the methods which are exposed by the web service. 217 WEB SERVICES? WSDL The general structure of a WSDL file Definitions TargetNamespace DataTypes Messages PortType Bindings Service 218 WEB SERVICES? RESTful Web Services REST is used to build Web services that are lightweight, maintainable, and scalable in nature. A service which is built on the REST architecture is called a RESTful service. The underlying protocol for REST is HTTP. 219 UNIT SUMMARY Unit 1 - Fundamentals of Cloud Computing Cloud computing at a glance, the vision of cloud computing, Defining a cloud, Historical developments, Building cloud computing environments, Application development. Characteristics of Cloud computing. Scalability, types of scalability. Horizontal Scalability and Cloud Computing. Computing platforms and technologies, Principles of Parallel and Distributed Computing. 220 Thank You
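As a closing illustration of the RESTful style covered in this unit (resources addressed by URLs, HTTP as the underlying protocol, lightweight representations such as JSON), here is a minimal sketch using only Python's standard library. The `/greeting` endpoint and its payload are invented for the demo, not taken from the slides.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET on the resource URL returns a JSON representation of it.
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), GreetingHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A RESTful client is just an HTTP client: request the resource by URL.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/greeting") as resp:
    print(json.load(resp))  # {'message': 'hello'}
```

Because the service speaks plain HTTP and a ubiquitous data format, any HTTP client in any language can consume it, which is precisely why RESTful services are described as lightweight and scalable.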
