Evolution of Cloud Computing
Next.edu.mk
Trajche Krstev

Summary
This document provides an overview of the evolution of cloud computing, from the early days of distributed systems to the modern concepts of virtualization, automation, and CI/CD. It discusses key concepts such as mainframe computing, cluster computing, grid computing, utility computing, virtualization techniques, and the layered model of cloud computing.
Full Transcript
THE Evolution of Cloud Computing - CLOUDUTION
Trajche Krstev, MSc
Solution Architect, Packet Core and Telco Cloud
[email protected]

Agenda
- The Evolution
- Layered model of Cloud Computing
- Virtualization in Cloud Computing
- Virtualization techniques
- Automation in Cloud Computing
- Continuous Integration/Continuous Delivery (or Continuous Deployment) - CI/CD

The Evolution
- The roots of cloud computing reach back to the 1950s, when time-sharing on mainframes first let many users share a single machine; the term "cloud computing" itself came into use much later, with the rise of internet-based services.
- In the 1960s, John McCarthy predicted that "computation may someday be organized as a public utility".
- In the 1990s, with the rise of the internet, companies began to realize the potential of hosting applications and data online.
- The concept gained further momentum in the early 2000s with the launch of platforms like Amazon Web Services (AWS), which offered scalable computing resources on demand.

[Timeline figure: Distributed Computing (1950s) → Mainframe Computing → Cluster Computing → Grid Computing → Virtualization → Web 2.0 → Service Orientation → Utility Computing → Cloud Computing]

Distributed Systems:
- A distributed system consists of multiple independent systems that appear as a single entity to the user.
- Its main purpose is to share resources and utilize them efficiently across systems.
- Key characteristics include scalability, concurrency, continuous availability, heterogeneity, and fault tolerance.
- The limitation of early distributed systems was that all systems had to be located in the same geographical area. To address this, distributed computing evolved into mainframe, cluster, and grid computing.

Mainframe Computing:
- Mainframes, introduced in the 1950s, are powerful and reliable machines designed for large-scale data processing.
- They handle massive input-output operations and are still used for bulk tasks such as online transaction processing.
- Known for minimal downtime and high fault tolerance, mainframes offered substantial computing power but were prohibitively expensive.
- As a cost-reduction measure, cluster computing emerged as an alternative, offering similar computational capabilities at a lower cost.

Cluster Computing:
- Cluster computing emerged in the 1980s as a more cost-effective alternative to mainframe computing.
- It involves multiple machines connected via high-bandwidth networks, functioning as a unified system.
- Clusters were cheaper than mainframes, could handle demanding computational tasks, and allowed for the easy addition of new nodes.
- While it addressed cost concerns, the geographical limitations persisted. To overcome them, grid computing was introduced, connecting systems across vast geographical areas.

Grid Computing:
- Introduced in the 1990s, grid computing connects systems at different geographical locations via the internet.
- Grid nodes often belong to different organizations, forming a heterogeneous network of systems.
- This model addressed geographic limitations but introduced new challenges, such as low availability of high-bandwidth connectivity.
- As network issues grew, cloud computing came to be seen as the natural evolution of and successor to grid computing.
Virtualization:
- Virtualization, first developed in the mainframe era, creates a virtual layer over hardware, enabling multiple instances to run simultaneously.
- It is a foundational technology for cloud computing, underpinning services like Amazon EC2, VMware vCloud, and OpenStack.
- Hardware virtualization remains one of the most common types, enabling efficient resource allocation and isolation of workloads.

Web 2.0:
- Web 2.0 represents the evolution of the internet into a more interactive and dynamic platform.
- It enabled flexible, rich web applications that allow users to interact and share data easily.
- Popular Web 2.0 applications include Google Maps, Facebook, and Twitter; the technology has been crucial to the success of social media platforms.
- Web 2.0 gained widespread adoption around 2004, marking a significant shift in how users interact with the web.

Service Orientation:
- Service orientation refers to a computing model that supports flexible, low-cost, and scalable applications.
- It introduced two important concepts: Quality of Service (QoS), including Service Level Agreements (SLAs), and Software as a Service (SaaS).
- This model emphasizes delivering services that can be tailored to user needs while maintaining high service quality.
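To make the QoS idea concrete, an SLA availability target translates directly into a permitted downtime budget. A minimal sketch in Python (the targets and the 30-day month are illustrative assumptions, not figures from the slides):

```python
# Convert an SLA availability target (in percent) into a monthly downtime budget.
def downtime_minutes_per_month(availability_pct: float, days: int = 30) -> float:
    """Minutes of allowed downtime per month at the given availability percentage."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% availability -> {downtime_minutes_per_month(target):.1f} min/month")
```

At 99.9% ("three nines"), the provider may be down roughly 43 minutes a month; each extra nine shrinks the budget tenfold, which is why SLA targets drive architecture decisions like redundancy and failover.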
Utility Computing:
- Utility computing is a model that provisions services, such as compute power and storage, on a pay-per-use basis.
- It defines methods of providing resources as a utility, much like electricity, where users pay only for what they consume.
- This approach is foundational to the cloud computing model, where resources are dynamically allocated and billed based on usage.

Cloud Computing:
- Cloud computing allows data and programs to be stored and accessed over the internet, rather than on local machines or servers.
- Also known as internet-based computing, it provides users with on-demand access to resources such as computing power, storage, and applications.
- Cloud computing has revolutionized how businesses and individuals interact with technology, offering flexibility, scalability, and cost-efficiency.
- Services can include data storage, file management, software applications, and more, all hosted remotely on secure cloud platforms.

Layered model of Cloud Computing
- Cloud computing builds upon several established and well-researched concepts, including distributed and grid computing, virtualization, and Software-as-a-Service (SaaS).
- While many of these concepts are not entirely new, the true innovation of cloud computing lies in the way it delivers computing services to customers.
- Over time, various business models have emerged, offering services at different levels of abstraction, such as software applications, programming platforms, data storage, and computing infrastructure.

Cloud Application Layer
- The application layer, at the top of the cloud stack, hosts cloud applications and is the most visible layer to end users, who access it via web portals.
- Cloud applications benefit from automatic scaling, offering improved performance, availability, and reduced operational costs compared to traditional applications.
- This layer comprises the various cloud services that users access as needed, and it determines communication availability and the cloud resources required for data transfer.
- The application layer ensures that applications can cooperate and communicate, handling IP traffic with protocols such as Telnet, FTP, HTTP, and HTTPS.
- Key benefits of this layer include reduced software maintenance costs, centralized updates, and subscription-based revenue models for service providers, though challenges around security, availability, and data migration persist. Examples include Salesforce CRM, Google Apps, Facebook, and YouTube.

Cloud Software Environment Layer - Platform
- The cloud software environment layer, also known as the platform layer, combines operating systems, application software, and APIs to provide a programming environment for cloud application developers.
- This layer ensures scalability, dependability, and security protection, giving users the space to create, test, and monitor their applications and operational processes.
- Its primary goal is to simplify application deployment on virtual machines (VMs), reducing complexity for developers by managing the underlying infrastructure.
- It provides essential cloud services like automatic scaling, load balancing, and communication, crucial for building and running cloud-based applications.
- Examples of Platform-as-a-Service (PaaS) include Google's App Engine, which supports API development for web applications, and Salesforce's AppExchange, which lets developers build or extend applications within Salesforce's CRM environment.
- A significant challenge is vendor lock-in: developers may become dependent on the proprietary platform provided by the cloud service provider, limiting flexibility.

Cloud Software Infrastructure Layer
- The infrastructure layer (or virtualization layer) is the foundational layer in cloud computing, using virtualization technologies like Xen, KVM, VMware, and Hyper-V to transform physical resources into virtualized pools of compute, storage, and network resources.
- Virtualization enables automated resource provisioning, allowing flexible and efficient management of infrastructure as well as dynamic resource allocation based on demand.
- This layer supports the higher-level platform layers, providing the underlying infrastructure for cloud services, applications, and platforms.
- Resource flexibility is a key advantage of the infrastructure layer, enabling cloud providers to offer scalable and adaptable resources to meet varying user needs.
- Infrastructure-as-a-Service (IaaS) and Container-as-a-Service (CaaS) are key services offered at this layer: IaaS provides virtualized computing resources, while CaaS lets users deploy, manage, and scale containerized applications on cloud infrastructure.
- Data Storage-as-a-Service (DaaS) offers flexible, remote storage accessible from anywhere, with trade-offs between availability, reliability, performance, and consistency; examples are Amazon's Elastic Block Storage (EBS) and Simple Storage Service (S3).
- Communication-as-a-Service (also abbreviated CaaS, not to be confused with Container-as-a-Service above) focuses on providing secure, high-quality communication capabilities, such as network security, bandwidth, and monitoring. Though still primarily research-driven, Microsoft's Connected Service Framework (CSF) is an example.

Software Kernel Layer & Hardware/Firmware Layer
- The software kernel layer manages the physical servers in data centers using OS kernels, hypervisors, virtual machine monitors, and clustering middleware, serving as a foundation for grid computing applications and leveraging advances from the grid computing research community.
- The hardware/firmware layer consists of the physical hardware forming the backbone of cloud computing, enabling services like Hardware-as-a-Service (HaaS) for enterprises; examples include IBM's Managed Hosting service.
- This layer oversees critical physical resources such as servers, switches, routers, power supplies, and cooling systems in data centers, ensuring their availability and efficient operation to deliver services to end users.
- To reduce service interdependencies and support diverse use cases, microservices rely on a data layer with multiple dedicated databases rather than a single shared database, enabling independent deployment and modification of services.

Virtualization in Cloud Computing
"The abstraction of a physical component into a logical object."
- Virtualization enables the creation of virtual versions of underlying services, allowing multiple operating systems and applications to operate concurrently on the same hardware. This technology enhances hardware utilization and flexibility and was originally developed during the mainframe era.
- Virtualization allows a single physical instance of a resource or application to be shared among multiple customers and organizations at one time. It does this by assigning a logical name to physical storage and providing a pointer to that physical resource on demand.
- The term virtualization is often used synonymously with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing.
- In cloud computing, users store data in the cloud, and virtualization gives them the added benefit of sharing the underlying infrastructure.

Virtualization in Cloud Computing - Characteristics
- Enhanced security: the ability to manage the execution of a guest program in a fully transparent manner creates new opportunities for providing a secure and controlled environment. Guest program operations are typically executed within the virtual machine, which then translates and applies them to the host system.
- Managed execution: key features of virtualization include sharing, aggregation, emulation, and isolation.
- Sharing: virtualization enables the creation of separate computing environments within a single host.
- Aggregation: while virtualization allows physical resources to be shared among multiple guests, it also supports aggregation, the reverse process, where multiple physical resources are combined and presented as one.

Virtualization in Cloud Computing - Types
- Hardware virtualization: the virtual machine monitor (VMM) is installed directly on the hardware. The hypervisor's primary role is to manage and monitor the processor, memory, and other hardware resources. Once the hardware is virtualized, different operating systems can be installed, allowing multiple applications to run on each OS.
- Operating system virtualization: the VMM is installed on the host operating system rather than directly on the hardware.
- Server virtualization: the VMM is installed directly on the server system. This is usually done so that a single physical server can be divided into multiple servers on demand and for load balancing.
- Storage virtualization: the technique of combining physical storage from multiple network storage devices so that it is presented as a single unified storage device. It can also be implemented through software applications.
- Network virtualization: the ability to run multiple virtual networks, each with a separate control and data plane, co-existing on top of one physical network.

Virtualization in Cloud Computing - Benefits and Drawbacks
Benefits:
- More flexible and efficient allocation of resources.
- Enhanced development productivity.
- Lower cost of IT infrastructure.
- Remote access and rapid scalability.
- High availability and disaster recovery.
- Pay-per-use of the IT infrastructure, on demand.
- Enables running multiple operating systems.
Drawbacks:
- Cloud adoption often requires a significant upfront investment, although it can lead to long-term cost savings for companies.
- Transitioning from traditional servers to the cloud demands a skilled workforce.
- Storing data on third-party cloud services introduces potential risks, as the data may be vulnerable to cyberattacks or breaches.

Virtualization techniques in Cloud Computing - Hypervisor
- Hypervisor: the software providing the environment to abstract the physical component into a logical object, also known as the Virtual Machine Monitor (VMM). It is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time.
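The hypervisor's mediating role can be illustrated with a toy model (purely illustrative, unrelated to any real hypervisor's interface): guest instructions run directly until a privileged operation occurs, which traps to the VMM, and the VMM performs it on the guest's behalf.

```python
# Toy trap-and-emulate model: a VMM multiplexes guests on one host.
# Unprivileged instructions run "directly"; privileged ones trap to the VMM.
PRIVILEGED = {"out", "halt"}  # hypothetical privileged instruction names

class ToyHypervisor:
    def __init__(self):
        self.log = []

    def trap(self, guest: str, instr: str) -> None:
        # The VMM validates and emulates the privileged operation itself.
        self.log.append(f"{guest}: emulated '{instr}'")

    def run(self, guests: dict) -> None:
        for name, program in guests.items():
            for instr in program:
                if instr in PRIVILEGED:
                    self.trap(name, instr)  # control transfers to the VMM
                # else: the instruction executes directly on the hardware

vmm = ToyHypervisor()
vmm.run({"guest-A": ["add", "out", "halt"], "guest-B": ["mov", "halt"]})
print(vmm.log)
```

Real hypervisors implement this interception in hardware-assisted or binary-translated form, but the division of labor is the same: guests see a complete machine, while the VMM retains control of privileged state.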
Bare-Metal Virtualization (Type 1 Hypervisor)
- The hypervisor is installed directly on the hardware, without a host operating system, and manages the virtual machines.
- Simple architecture: the hypervisor replaces the kernel entirely.
- Acts as the base layer and provides VMs with direct access to hardware.
- High performance due to direct hardware access.
- This means isolation and higher security by design, traded off against maintainability and re-use of drivers.
- Examples: VMware ESXi, Microsoft Hyper-V, Citrix XenServer.

Hosted Virtualization (Type 2 Hypervisor)
- The hypervisor runs on top of a host operating system, allowing VMs to run as applications.
- The hypervisor is essentially a driver that works with a user-level monitor and the host OS kernel.
- Hardware access is intercepted by the ring-0 VM monitor and passed to the user-level virtual monitor, which passes requests on to the kernel.
- Easy to set up on existing systems.
- Lower performance compared to bare-metal virtualization.
- This means re-use of device drivers and host OS kernel infrastructure, traded off against security and a large trusted computing base.
- Examples: VMware Workstation, Oracle VirtualBox.

Virtualization techniques in Cloud Computing - Other Variations

Paravirtualization
- A virtualization technique where the guest operating system is modified to interact directly with the host's hypervisor via specialized APIs.
- The guest OS is aware it is running in a virtualized environment. It bypasses certain hardware emulation tasks and communicates with the hypervisor for optimized performance.
- Improved performance compared to full virtualization.
- Reduced overhead, because there is no need to emulate hardware.
- Requires modification of the guest OS, which may not always be feasible.

Full Virtualization
- A technique where the hypervisor completely emulates the underlying hardware, enabling unmodified guest operating systems to run as if they were on real hardware.
- The hypervisor intercepts and translates all instructions from the guest OS to the hardware, creating a fully isolated environment for each virtual machine.
- No need to modify the guest OS.
- High compatibility with different operating systems.
- More overhead due to hardware emulation, which can impact performance.

Paravirtualization vs Full Virtualization

Feature                  | Paravirtualization     | Full Virtualization
Guest OS modification    | Required               | Not required
Performance              | Better, lower overhead | Slightly slower, higher overhead
Compatibility            | Limited to modified OS | Supports any OS
Technique                | Hypercalls             | Binary translation
Security                 | More secure            | Less secure
Speed                    | Faster                 | Slower
Examples                 | Xen                    | VMware ESXi, Hyper-V, KVM

OS-Level Virtualization
- A virtualization technique where the kernel of the host operating system allows multiple isolated user-space instances, often referred to as containers.
- Containers share the host OS kernel but have isolated environments for applications.
- Lightweight and fast, with minimal overhead, since no separate guest OS is required.
- Ideal for microservices and cloud-native applications.
- Limited to the same operating system as the host.
- Examples: Docker, LXC (Linux Containers).

Emulation
- Allows applications written for one hardware environment to run on a very different hardware environment, such as a different type of CPU.
- The entire hardware environment is simulated in software, enabling any OS or application to run, regardless of architecture.
- The emulator replicates the instruction set architecture of another system.
- Highly flexible: runs software designed for entirely different hardware.
- Significant performance overhead.
- Example: QEMU.

Automation in Cloud Computing
- Automation is the use of technology and software to streamline repetitive and manual tasks, such as provisioning, configuration, and deployment of IT resources. In cloud computing, automation plays a crucial role in deploying and managing applications, allowing organizations to deliver new features and updates rapidly and consistently to customers.

Benefits:
- Improved efficiency: automation can significantly reduce the time and effort required for repetitive and manual tasks, freeing up valuable time and resources for more strategic initiatives.
- Increased accuracy: automation reduces the risk of human error, helping to ensure that tasks are performed consistently and accurately.
- Faster deployment: automation helps organizations deploy new applications and updates much more quickly and consistently, enabling them to stay ahead of the competition.
- Better scalability: automation makes it easier to scale resources up or down in response to changing business needs, helping organizations be more agile and responsive.
- Cost savings: by automating repetitive and manual tasks, organizations can significantly reduce operational costs, freeing up budget for other initiatives.
- Improved security: automation helps organizations enforce security policies and procedures more consistently, reducing the risk of security breaches and helping protect sensitive data.

Continuous Integration/Continuous Delivery (or Continuous Deployment) [CI/CD]
- CI (Continuous Integration) and CD (Continuous Delivery) are practices that focus on automating the process of integrating code changes from multiple developers into a unified codebase. Developers frequently commit their changes to a central code repository (e.g., GitHub, GitLab, Bitbucket).
- Once code is committed, automated tools trigger processes like building, testing, and code review to ensure that new changes do not break the existing code. These practices streamline development by enabling faster and more reliable code integration and deployment.
- The key goals of CI/CD are to find and address bugs more quickly, make integrating code across a team of developers easier, improve software quality, and reduce the time it takes to release new feature updates.
- Continuous Integration is about developers integrating code into a shared repository multiple times a day with the help of automation.
- Continuous Delivery is about automatically releasing software to the test or production environment.
- The Continuous Integration [AKA Build Pipeline] and Continuous Delivery [AKA Release Pipeline] process:
  - Developers manage code in a shared repository.
  - After compilation, automated unit and UI testing are performed.
  - The operations team maintains the automated scripts that promote the build to the test environment.
  - Testing is performed, and after approval the software is sent to production.
- Continuous Deployment is about releasing software to production automatically, without human intervention.
- Continuous Deployment is an advancement that reduces the manual work of the operations and testing teams: software is deployed to production automatically after automated acceptance testing.
- Automated and quick code integration.
- Fast error detection and response, thanks to short cycles and iterations.
- Less prone to human error.
- Faster than following manual instructions, due to automated scripting.
- Deployment can be done several times a day.

SPEAK UP
Questions, comments?
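As a closing illustration, the build, test, and deploy flow from the CI/CD section can be sketched as a minimal pipeline runner (the stage names and pass/fail steps here are illustrative assumptions, not the configuration of any specific CI tool):

```python
# Minimal CI/CD pipeline sketch: run stages in order, stop at the first failure.
def run_pipeline(stages):
    """stages: list of (name, step) pairs; a step returning False fails the run."""
    log = []
    for name, step in stages:
        if step():
            log.append(f"{name}: ok")
        else:
            log.append(f"{name}: FAILED - pipeline stopped")
            break  # later stages (e.g., deploy) never run after a failure
    return log

# Illustrative stages; a real pipeline would invoke a compiler, a test runner,
# and a deployment script instead of these placeholder lambdas.
result = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),  # a failing test halts the pipeline...
    ("deploy", lambda: True),   # ...so deploy is never reached
])
print(result)
```

Stopping at the first failed stage is the core CI guarantee described above: broken changes are caught by the build pipeline and never reach the release pipeline.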