Huawei Certification Cloud Computing Training Courses
HCIA-Cloud Computing Learning Guide
Version: V4.0
HUAWEI TECHNOLOGIES CO., LTD.

Copyright © Huawei Technologies Co., Ltd. 2019. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base, Bantian, Longgang, Shenzhen 518129, People's Republic of China
Website: https://www.huawei.com/
Email: [email protected]

Contents

1 What's Cloud Computing?
1.1 Cloud Computing Is Around Us
1.2 Characteristics of Cloud Computing
1.2.1 On-Demand Self-Service
1.2.2 Broad Network Access
1.2.3 Resource Pooling
1.2.4 Rapid Elasticity
1.2.5 Measured Service
1.3 Definition of Cloud Computing
1.4 Origin and Development of Cloud Computing
1.4.1 A Brief History of the Internet
1.4.2 The History of Computing
1.4.3 Development of Cloud Computing
1.4.4 Further Reading: Differences Between Cloud Computing 1.0 and 2.0/3.0
1.5 Cloud Computing Models
1.5.1 By Deployment Model
1.5.2 By Service Model
2 Introduction to Compute Virtualization
2.1 Virtualization Overview
2.1.1 What's Virtualization?
2.1.2 A Brief History of Compute Virtualization
2.1.3 Compute Virtualization Types
2.2 Compute Virtualization
2.2.1 CPU Virtualization
2.2.2 Memory Virtualization
2.2.3 I/O Virtualization
2.2.4 Mainstream Compute Virtualization
2.3 KVM
2.4 FusionCompute
3 Network Basics for Cloud Computing
3.1 Network Architecture in Virtualization
3.1.1 Traffic on a Virtual Network
3.1.2 Basic Network Concepts
3.2 Physical Networks in Virtualization
3.3 Virtual Networks in Virtualization
3.3.1 Virtual Network Architecture
3.3.2 Network Features of Huawei Virtualization Products
4 Storage Basics for Cloud Computing
4.1 Mainstream Physical Disk Types
4.1.1 HDD
4.1.2 SSD
4.2 Centralized Storage and Distributed Storage
4.2.1 Centralized Storage
4.2.2 RAID
4.2.3 Distributed Storage and Replication
4.3 Virtualized Storage and Non-virtualized Storage
4.4 VM Disks
4.5 Storage Features of Huawei Virtualization Products
4.5.1 Storage Architecture of Huawei Virtualization Products
4.5.2 Characteristics of Huawei VM Disks
5 Virtualization Features
5.1 Virtualization Cluster Features
5.1.1 HA
5.1.2 Load Balancing
5.1.3 Scalability
5.1.4 Memory Overcommitment
5.2 VM Features
5.2.1 Quick VM Deployment
5.2.2 VM Resource Hot-Add
5.2.3 VM Console
5.2.4 VM Snapshot
5.2.5 NUMA
5.3 Huawei Virtualization Product Features
5.3.1 Cluster Features
5.3.2 VM Features
6 Cloud Computing Trends
6.1 Fields Related to Cloud Computing
6.2 Cloud-Enabling Technologies
6.2.1 Container
6.2.2 OpenStack
6.3 Other Emerging Technologies
6.3.1 Fog Computing
6.3.2 Edge Computing
6.3.3 Microservices
6.3.4 Serverless
7 Conclusion
8 Appendix
8.1 Verification 1
8.2 Verification 2
8.3 Verification 3
8.4 Verification 4

1 What's Cloud Computing?

1.1 Cloud Computing Is Around Us
People outside of the tech industry may have many questions when they so often come across the term cloud computing. What's cloud computing? What kind of services does it provide? Where and how do I acquire them? Cloud computing may be a technical term whose meaning is unclear to many, but there is a good chance that many of us are already using cloud services without being aware of it.
First, let's take a look at the Products menu on the HUAWEI CLOUD user portal, Huawei's public cloud service, as shown in Figure 1-1.
Figure 1-1 HUAWEI CLOUD user portal
Under Products, we can see several service categories, including Compute, Storage, Network, Application, and more. Each category further contains a variable number of services. Now, let's have a look at one of the most popular cloud services on HUAWEI CLOUD: Elastic Cloud Server (ECS). Figure 1-2 shows the available ECS flavors (or specifications).
Figure 1-2 ECS flavors
An ECS flavor is similar to the computer hardware specifications we see when we buy a new computer. They both contain CPU, memory, hard disk, and other parameters. To ready an ECS, we also need to install the required OS and configure an IP address for it. In fact, subscribing to an ECS is just like buying and setting up a personal computer, except that an ECS is a computer on the cloud. It can do almost anything that a conventional computer does, such as editing documents, sending emails, and enabling office collaboration, plus things that a conventional computer can't do.
For example, an ECS is accessible from a mobile phone or tablet, with a user experience similar to accessing it from a computer with a big monitor. Also, you can modify the configuration of your ECS at any time, for example, scaling the ECS memory from 1 GB to 2 GB.
In addition to ECS, you can subscribe to many other services on HUAWEI CLOUD. For example, you can buy the CloudSite service to quickly build websites, or the Object Storage Service (OSS) or Elastic Volume Service (EVS) to quickly expand storage space. HUAWEI CLOUD, as well as other mainstream cloud platforms, also provides AI services, such as facial, voice, image, and text recognition.
In short, cloud computing allows us to use IT services as conveniently as we use utilities like water and electricity. Think about how we use water and electricity: we simply turn on the tap or the power switch, because water and electricity are already on the grids. The same is true of IT services. Cloud computing delivers ready-to-use IT services over the Internet. By analogy, the Internet is the grid, and web portals or apps are the taps.
You may say: I have my own personal computer. Most applications I need are installed on my local disk, and I don't use services like facial or voice recognition. IT services are not water or electricity; I can live without them. So cloud computing has nothing to do with me. There is a chance that you might be wrong, for you may be using cloud computing already. As we said earlier, the "tap" for cloud computing, that is, how and where we access cloud resources, may be a web page or an app. Here are a few apps you might be using already: auto backup & restore on Huawei phones, Google Translate, and iReader.
Figure 1-3 Cloud computing apps
Backup & Restore is a default service on Huawei phones. Other brands have similar services, such as iCloud for iPhone. These services allow you to back up the local files on your phone to a remote data center. After you change to a new phone, you can easily restore your data to it using the account and password configured for this service. Google Translate is a free service that instantly translates words, phrases, and web pages between English and over 100 other languages. iReader is a popular online reading app that gives you access to a huge library of electronic books. These three apps are all powered by the cloud. Even if you have never used any of them, there is a good chance you are using other cloud-based apps without being aware of it.

1.2 Characteristics of Cloud Computing
As cloud computing services mature both commercially and technologically, it will become easier for companies as well as individuals to reap the benefits of cloud. Cloud computing has the following five well-recognized characteristics.

1.2.1 On-Demand Self-Service
A supermarket is a good example of on-demand self-service. In a supermarket, each consumer chooses the items they need by themselves. For similar goods of the same category, we make our choice by comparing prices, brands, and product descriptions. On-demand self-service is also one of the major characteristics of cloud computing: a consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider.
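To make the idea of provisioning "without requiring human interaction" concrete, the short Python sketch below requests a new cloud server and later resizes it through a provider's REST API. The endpoint URL, token handling, flavor names, and JSON field names are illustrative assumptions for this example rather than the actual HUAWEI CLOUD ECS API; real public clouds expose similar calls for choosing a flavor, an image, and a network.

    # A minimal self-service provisioning sketch. The URL, token, and JSON field
    # names below are illustrative assumptions, not a real provider's API.
    import requests

    API = "https://api.example-cloud.com/v1"          # hypothetical endpoint
    TOKEN = "my-auth-token"                           # obtained from the provider
    HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

    def create_server(name, flavor, image, network):
        """Ask the cloud platform for a new server; no administrator is involved."""
        body = {
            "server": {
                "name": name,
                "flavor": flavor,      # e.g. 1 vCPU / 2 GB, like an ECS flavor
                "image": image,        # OS installed automatically for us
                "network": network,    # network to attach; an IP is assigned for us
            }
        }
        resp = requests.post(f"{API}/servers", json=body, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json()["server"]["id"]

    def resize_server(server_id, new_flavor):
        """Scale the server later, for example from 1 GB to 2 GB of memory."""
        resp = requests.post(f"{API}/servers/{server_id}/resize",
                             json={"flavor": new_flavor}, headers=HEADERS, timeout=30)
        resp.raise_for_status()

    if __name__ == "__main__":
        sid = create_server("web-01", "small.1vcpu.2gb", "Ubuntu-20.04", "subnet-a")
        resize_server(sid, "medium.2vcpu.4gb")   # later, when the first flavor is no longer enough

The point of the sketch is simply that the whole workflow is an API call made by the consumer, which is what distinguishes self-service from filing a ticket with a data center administrator.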
The prerequisite for on-demand self-service is for the consumer to understand their own needs and know which product or products can accommodate those needs. A supermarket is comparable to a cloud service platform in the sense that both offer a huge variety of products (called cloud services in the case of the cloud platform). To deliver a good consumer experience, the supermarket or cloud provider may need to make recommendations or provide consumer guidance to help consumers make the right choices.

1.2.2 Broad Network Access
Cloud computing is computing power over the Internet, so network access is an innate characteristic of cloud computing. Today, the Internet has reached almost every inhabited corner of the world. We can use any electronic device, such as a personal computer, tablet, or cell phone, to connect to the Internet. This means we can access cloud services through any electronic device as long as there is network connectivity. When in the office, we can use cloud services from personal computers. In airports or train stations, we can use them from a mobile phone or tablet over Wi-Fi or mobile data. Figure 1-4 and Figure 1-5 both show the user interface of ModelArts, a one-stop development platform for AI developers offered by HUAWEI CLOUD: one on a personal computer and the other on a mobile phone. Despite the different user interfaces, the experiences are the same.
Figure 1-4 ModelArts on a personal computer
Figure 1-5 ModelArts on a mobile phone

1.2.3 Resource Pooling
Resource pooling is one of the prerequisites for on-demand self-service. In a supermarket, different categories of items are put into different areas, such as the fruit and vegetable area, the frozen food area, and so on, so that consumers can quickly find the items they need. Resource pooling, however, is not merely putting all resources of the same type onto the same rack, as is done in supermarkets. Resource pooling is also about breaking resources down to the finest granularity for flexible, on-demand provisioning. Instant noodles are a popular food among many people. The problem is that, for some people, one bag is too little but two are too much. In theory, manufacturers could solve this problem by reducing the minimum purchasable unit for instant noodles from one bag to something even smaller. This is one of the things resource pooling does. Another example is how drinks are served in many cafeterias: different types of fruit juice are put into different dispensers so that customers can take the exact amounts they need.
Another thing that resource pooling does is shield the differences in the underlying resources. Again, consider carbonated drinks served in a cafeteria. Customers may not know whether they are having Pepsi or Coca-Cola, because the dispenser says nothing about the brand. Resources that can be pooled include compute, storage, and network resources. Compute resources include CPU and memory. Pooled CPU resources are available to consumers per core, and consumers have no idea whether they are using AMD or Intel CPUs.
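The following toy Python sketch illustrates the pooling idea in code: capacity contributed by heterogeneous hosts is aggregated and handed out per core and per GB, and the consumer never learns which host or CPU vendor backs the allocation. It is only a sketch of the concept, with invented names, and not how any real scheduler or Huawei product implements pooling.

    # A toy compute pool: capacity from heterogeneous hosts is aggregated and
    # handed out per core / per GB, without exposing which host or CPU vendor
    # actually backs an allocation.
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        vendor: str        # "Intel" or "AMD"; hidden from consumers
        free_cores: int
        free_mem_gb: int

    class ComputePool:
        def __init__(self, hosts):
            self.hosts = hosts

        def allocate(self, cores, mem_gb):
            """Return an opaque allocation, or None if the pool is exhausted."""
            for host in self.hosts:
                if host.free_cores >= cores and host.free_mem_gb >= mem_gb:
                    host.free_cores -= cores
                    host.free_mem_gb -= mem_gb
                    # The consumer sees only the granted size, not the backing host.
                    return {"cores": cores, "mem_gb": mem_gb}
            return None

    pool = ComputePool([
        Host("server-01", "Intel", free_cores=32, free_mem_gb=256),
        Host("server-02", "AMD",   free_cores=64, free_mem_gb=512),
    ])
    vm = pool.allocate(cores=1, mem_gb=2)   # one core and 2 GB; the vendor stays hidden
    print(vm)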
Further details about pooled compute resources will be provided in the "Compute Virtualization" chapter of this book.

1.2.4 Rapid Elasticity
For applications that face fluctuations in demand, being able to rapidly and elastically provision computing resources is a desirable feature. When the load surges during peak hours or a major social or commercial event, more servers can be quickly provisioned, automatically in some cases. When the load drops, the servers can be quickly released for reallocation. Rapid elasticity, whether scaling in or out, can be achieved manually or based on predefined policies. Scaling can be done by increasing or decreasing the number of servers, or by increasing or decreasing the resources available to each server.
A good example of this characteristic, familiar to Chinese readers, is the primary weapon of the Monkey King (Sun Wukong), called Ruyi Jingu Bang. This weapon is a rod that he can shrink down to the size of a needle and keep in his ear, or expand to gigantic proportions, as needed. He can also pluck hairs from his body and blow on them to turn them into clones of himself to gain a numerical advantage in battle. In this case, the rod also multiplies so that each clone can have one.
The most significant benefit of rapid elasticity is cost reduction with guaranteed business continuity and reliability. For example, a startup can begin by acquiring only small amounts of IT resources and add more as its business grows. It can quickly obtain more resources to handle load surges and release them afterwards. This allows the company to spend its limited budget on higher-priority aspects of its business.

1.2.5 Measured Service
Measured service is how cloud systems control a user's or tenant's use of resources by leveraging a metering capability. Metering is not billing, although billing is based on metering. In cloud computing, most services come with a price while some are free. For example, Auto Scaling can be provisioned as a service, and most of the time this service is free of charge. Measured service ensures that all resource usage can be accurately measured, through technical or other means, based on predefined criteria, which can be the duration of usage, a resource quota, or the volume of data transmitted. With these measurements, cloud systems can automatically control and adjust resource configuration based on usage. For their part, cloud consumers can know the exact usage of the services they have subscribed to, and decide whether to scale up or down based on the current usage.
Let's again turn to the example of the Monkey King's rod. The Monkey King can change the size of the rod freely to suit his needs, which are mostly to combat demons of various kinds and shapes. For example, facing the Skeleton Queen, he can make the rod 3 meters long and maybe 10 cm thick and keep it in this state for 30 minutes. For a less powerful demon, he may make the rod smaller, for example, 2 meters long and 7 cm thick, and keep it this way for 15 minutes. When at rest, he can make the rod 1 cm long and 0.1 cm thick and keep it in his ear. This shows how cloud service instances can be scaled based on accurate measurement of service usage.
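As a minimal, purely illustrative sketch of how metering, elasticity, and billing connect, the Python snippet below turns metered CPU utilization into a scaling decision and computes a pay-as-you-go charge. The thresholds and unit prices are invented for this example; real cloud platforms meter many more dimensions, such as traffic, IOPS, and quotas.

    # A toy illustration of measured service driving elasticity and billing.
    # Thresholds and hourly prices are invented for the example.
    def scaling_decision(cpu_samples, scale_out_at=0.80, scale_in_at=0.20):
        """Decide, from metered CPU utilization, whether to add or remove a server."""
        avg = sum(cpu_samples) / len(cpu_samples)
        if avg > scale_out_at:
            return "scale out: add one server"
        if avg < scale_in_at:
            return "scale in: remove one server"
        return "keep current size"

    def monthly_charge(vcpus, mem_gb, disk_gb, hours,
                       vcpu_rate=0.02, mem_rate=0.01, disk_rate=0.0002):
        """Pay-as-you-go cost: metered duration multiplied by assumed unit prices."""
        hourly = vcpus * vcpu_rate + mem_gb * mem_rate + disk_gb * disk_rate
        return round(hourly * hours, 2)

    print(scaling_decision([0.91, 0.87, 0.95, 0.89]))                # busy, so scale out
    print(monthly_charge(vcpus=1, mem_gb=2, disk_gb=40, hours=730))  # 1 vCPU, 2 GB, 40 GB for one month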
Figure 1-6 shows an example of measured usage of the ECS service. For an ECS, the result can be, for example, 1 vCPU, 2 GB of memory, and 40 GB of disk space, with a validity period of one month.
Figure 1-6 Measured ECS service

1.3 Definition of Cloud Computing
The IT industry must have at least one hundred definitions of what cloud computing is. One of the most widely accepted is given by the National Institute of Standards and Technology (NIST) of the US: a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Note the following key points in this definition:
- Cloud computing is not a technology so much as a service delivery model.
- Cloud computing gives users convenient access to IT services, including networks, servers, storage, applications, and services, much like using utilities such as water and electricity.
- The prerequisite for convenient, on-demand access to cloud resources is network connectivity.
- Rapid resource provisioning and reclamation fall under the rapid elasticity characteristic of cloud computing, while minimal management effort and service provider interaction fall under the on-demand self-service characteristic.
In the term "cloud computing", "cloud" is a metaphor for the Internet. It is an abstraction of the Internet and the infrastructure underpinning it. "Computing" refers to computing services provided by a sufficiently powerful computer capable of providing a range of functionalities, resources, and storage. Put together, cloud computing can be understood as the delivery of on-demand, measured computing services over the Internet. The word "cloud" comes from the cloud symbol used by datacom engineers in flow charts and diagrams to represent the Internet or other types of networks.
The history of cloud computing consists of the histories of the Internet and of computing models. In the next section, we will talk about how cloud computing developed into what it is today. But before that, let's hear a story about a failed attempt to popularize cloud computing before the Internet reached its maturity.
This story is about Larry Ellison, co-founder and the executive chairman and chief technology officer of Oracle Corporation, and a legend in the IT industry. If you're interested, you can search for him on the Internet. Around the time Oracle was founded, two other legendary US companies, Apple and Microsoft, were also founded. To the general public, Larry Ellison always seemed to be a bit overshadowed by Bill Gates. In the beginning, Microsoft's business was computer operating systems, and Oracle's was databases. However, in 1988, Microsoft also launched the SQL Server database, which seemed a direct challenge to Oracle. In response, Larry Ellison launched an Internet computer without an OS or hard disk. Instead, the OS, user data, and computer programs all resided on servers located in a remote data center. This product also had a price advantage over computers running Microsoft OSs. However, there was a small miscalculation in Larry Ellison's plan: the year was 1995, and the Internet was still in its infancy. At that time, the Internet was still unavailable in most parts of the world, and it could not provide the bandwidth needed for the Internet computer to function properly.
This led to poor user experience, so the project was terminated two years later. The Internet computer launched by Oracle can be seen as an early form of cloud computing. The only problem was that it was way ahead of its time. In addition, the bursting of the dot-com bubble around 2000 also shook people's confidence in cloud-based applications. This situation lasted until Amazon launched AWS in 2006.

1.4 Origin and Development of Cloud Computing
By definition, cloud computing can be understood as the delivery of on-demand, measured computing services over the Internet. The history of cloud computing consists of the histories of the Internet and of computing models. This section will talk about all three.

1.4.1 A Brief History of the Internet
In the beginning, all computers were separated from each other. Data computation and transmission were all done locally. Later, the Internet was born, connecting these computers, and the world, together. The following are some of the milestone events in the history of the modern Internet.
1969: The Advanced Research Projects Agency Network (ARPANET) was born, and it is widely recognized as the predecessor of today's Internet. Like many technologies that underpin our modern society, the ARPANET was originally developed to serve military purposes. It is said that the ARPANET was launched by the US military to keep a fault-tolerant communications network active in the US in the event of a nuclear attack. In the beginning, only four nodes joined the ARPANET, all in the western US: the University of California, Los Angeles (UCLA), the Stanford Research Institute (SRI), the University of California, Santa Barbara (UC Santa Barbara), and the University of Utah. The birth of the ARPANET marked the beginning of the Internet era. In the following years, more and more nodes joined the ARPANET, the majority of them from non-military fields. In 1983, for security reasons, 45 nodes were removed from the ARPANET to form a separate military network called MILNET. The remaining nodes were used for civilian purposes.
1981: The complete specifications of the TCP/IP protocol suite were released for the first time, signaling the birth of the Internet's communications language. Why was TCP/IP needed? TCP/IP is in fact a suite of many protocols, including the Transmission Control Protocol (TCP), the Internet Protocol (IP), and others. The earliest protocol used on the ARPANET was the Network Control Protocol (NCP). However, as the ARPANET grew, NCP could not keep up with the demands of large-scale networks. Designed for large and very large networks, TCP/IP replaced NCP on the ARPANET on January 1, 1983.
1983: All three of the original networks, ARPANET, PRNET, and SATNET, switched to TCP/IP, which marked the beginning of the accelerated growth of the Internet.
1984: The Domain Name System (DNS) was invented. Since the adoption of TCP/IP, the development of the Internet had picked up speed, and more computers were added to the network. Computers used TCP/IP-compliant numerical IP addresses to identify each other. As the number of connected computers continued to increase, the inconvenience of using IP addresses to identify computers became evident: they are hard to memorize. This is comparable to using people's identity numbers, instead of their names, to identify them.
It's difficult to memorize such long numbers. This is where DNS came in. DNS translates between numerical IP addresses and more easily memorized domain names. In this way, computer users can locate their peers simply through domain names, leaving the translation work to domain name servers. A domain name consists of two parts: a name, for example, HUAWEI; and a category or purpose, for example, .com for commercial. Maintenance personnel can enter the domain name HUAWEI.com to reach the computer with the corresponding IP address. Today, domain names, used in URLs, can identify any web page across the globe.
1986: The modern email routing system MERS was developed.
1989: The first commercial network operator, PSINet, was founded. Before PSINet, most networks were funded by the government or military for military or industrial purposes, or for scientific research. PSINet marked the beginning of the commercial Internet.
1990: The first network search engine, Archie, was launched. As the Internet expanded, the amount of information on it grew at an explosive rate. A search engine or website was needed to index and search for the information users needed, to speed up their searches. Archie, the earliest search engine, was a tool for indexing FTP archives located on physically dispersed FTP servers. It was developed by Alan Emtage, then a student at McGill University, and allowed users to search for files by name.
1991: The WWW was officially opened to the public. The World Wide Web (WWW), or simply the Web, that most of us now use on a daily basis became publicly available only in 1991, less than 30 years ago. Tim Berners-Lee, a British scientist, invented the Web while working at CERN, the European Organization for Nuclear Research. The Web allows hypermedia, which can be documents, voice, video, and a lot more, to be transmitted over the Internet. It was only after the popularization of the Web that the great Internet companies were born and all kinds of Internet applications that have fundamentally changed the lives of ordinary people began to emerge.
1995: The e-commerce platforms Amazon and eBay were founded. Many great companies, such as Yahoo and Google, have emerged since the brief history of the Internet began. Here we will talk about Amazon alone, since it was the first Internet company to make commercial cloud computing a reality. In its early days, Amazon mainly sold books online. To process and store product and user information, Amazon built huge data centers. The US has a shopping festival called Black Friday, similar to the "Double Eleven" invented by Tmall of China. On this day, Amazon needed to process huge amounts of information, and all the servers in its data centers were used. After this day, however, most of the servers sat idle. To improve its return on investment, Amazon needed to lease out these idle servers. This was the reason why, in 2006, Amazon launched its first cloud computing product: Elastic Compute Cloud (EC2).
Many of the best-known companies, such as Amazon, Google, Alibaba, and Tencent, are Internet companies. Companies like IBM, Cisco, Huawei, and Lenovo are traditional IT companies.
2000: The dot-com bubble burst. The unprecedented growth of the Internet in the 1990s resulted in the dot-com bubble, which burst around 2000. It was during this period that PSINet, the first commercial network operator mentioned earlier, went bankrupt.
The Internet regained rapid growth after the dot-com bubble burst. In 2004, Facebook was founded, and with it came the phenomenon of social networking.

1.4.2 The History of Computing
1.4.2.1 Parallel Computing
Traditionally, software has been written for serial computation:
1. Each problem is broken into a discrete series of instructions.
2. Instructions are executed one after another on a single CPU.
3. Only one instruction may execute at any point in time.
Figure 1-7 Schematic diagram of serial computing
With serial computing, a complex problem takes a long time to process. For large-scale applications, especially when computer memory capacity is limited, a single-CPU architecture is impractical or even impossible. For example, search engines and networked databases process millions of requests per second, which is far beyond the capacity of serial computing. Limits to serial computing, both theoretical and practical, pose significant constraints to simply building ever faster serial computers:
- Transmission speeds: the speed of a serial computer depends directly on how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
- Limits to miniaturization: processor technology allows an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
- Economic limitations: it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.
Figure 1-8 Schematic diagram of parallel computing
- Each problem is broken into discrete parts that can be solved concurrently.
- Each part is further broken down into a series of instructions.
- Instructions from each part execute simultaneously on different CPUs.
- A unified control mechanism is added to coordinate the entire process.
Traditionally, parallel computing has been considered "the high end of computing" and has been used for cases such as scientific computing and numerical simulations of complex systems. Today, commercial applications provide an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. The reasons for using parallel computing include the following:
- Time and cost savings: in theory, using more compute resources leads to completing a task faster and saves potential costs. This is even more true considering that the resources can be inexpensive, even out-of-date, CPUs clustered together.
- Solving larger problems that cannot be handled using serial computing.
The CPUs used for parallel computing can come from the same computer, or from different computers residing on the same network.
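The following minimal Python sketch uses the standard multiprocessing module to show the pattern described above: a problem is broken into parts, the parts are solved concurrently on several CPU cores, and the partial results are combined. The problem itself (summing squares) is chosen only for simplicity.

    # A minimal parallel computing sketch: the problem (summing the squares of a
    # range of numbers) is split into parts, each part is handled by a separate
    # worker process on its own CPU core, and the partial results are combined.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(n * n for n in range(lo, hi))

    if __name__ == "__main__":
        N = 10_000_000
        step = 2_500_000
        parts = [(i, min(i + step, N)) for i in range(0, N, step)]
        with Pool(processes=4) as pool:           # four workers, four CPU cores
            total = sum(pool.map(partial_sum, parts))
        print(total)                              # same answer as a serial loop, computed faster

The same decomposition idea underlies distributed computing, except that the workers would run on different networked computers, each with its own memory, and would exchange results through messages rather than shared memory.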
1.4.2.2 Distributed Computing
Distributed computing is a field of computer science that studies distributed systems. A distributed system distributes its components across different networked computers, which communicate and coordinate their actions through a unified messaging mechanism. The components work together to achieve a common goal. Distributed computing provides the following benefits:
- Easier resource sharing
- Balanced load across multiple computers
- Running each program on the most suitable computer
The first two are the core rationale behind distributed computing.
Figure 1-9 Schematic diagram of distributed computing
Parallel and distributed computing both use parallelism to achieve higher performance. Their difference lies in how memory is used: in parallel computing, the computer can have shared or distributed memory, while in distributed computing, each computer has its own memory. Some people consider distributed computing a special case of parallel computing. In fact, in distributed computing, each task is independent: the result of one task, whether unavailable or invalid, has virtually no effect on other tasks. Therefore, distributed computing has low real-time requirements and can tolerate errors. (Each problem is divided into many tasks, each of which is solved by one or more computers. The uploaded results are compared and verified when there is a large discrepancy.) In parallel computing, there are no redundant tasks, and the results of all tasks affect one another. This requires the correct result to be obtained for each task, preferably in a synchronized manner. In distributed computing, many tasks are redundant and many useless data blocks are generated, so despite its advantage in speed, the actual efficiency may be low.
1.4.2.3 Grid Computing
Grid computing is the use of widely distributed computer resources to reach a common goal. It is a special type of distributed computing. According to IBM's definition, a grid aggregates compute resources dispersed across the local network or even the Internet, making end users (or client applications) believe that they have access to a single, virtual supercomputer. The vision of grid computing is to create a collection of virtual, dynamic resources so that individuals and organizations can have secure and coordinated access to them. Grid computing is usually implemented as a cluster of networked, loosely coupled computers.
1.4.2.4 Cloud Computing
Cloud computing is a new way of sharing infrastructure. It pools massive amounts of resources together to support a large variety of IT services. Many factors drive up the demand for such environments, including connected devices, real-time stream processing, the adoption of service-oriented architecture (SOA), and the rapid growth of Web 2.0 applications such as search, open collaboration, social networking, and mobile office. In addition, the improved performance of digital components has allowed even larger IT environments to be deployed, which also drives up the demand for unified clouds. Cloud computing is hailed as a revolutionary computing model by its advocates, as it allows the sharing of enormous computational power over the Internet. Enterprises and individual users no longer need to spend a fortune purchasing expensive hardware. Instead, they purchase on-demand computing power provisioned over the Internet.
In a narrow sense, cloud computing refers to a way of delivering and using IT infrastructure that enables access to on-demand, scalable resources (infrastructure, platform, software, etc.). The network over which the resources are provisioned is referred to as the cloud. To consumers, the resources on the cloud appear to be infinite. They are available and scalable on demand and use a pay-as-you-go (PAYG) billing model. These characteristics allow us to use IT services as conveniently as utilities like water and electricity. In a broad sense, cloud computing refers to the on-demand delivery and use of scalable services over the Internet. These services can be IT, software, Internet, or any other services. Cloud computing has the following typical characteristics:
- Hyperscale. Clouds are usually large. Google's cloud consists of over 1 million servers. The clouds of Amazon, IBM, Microsoft, and Yahoo each have hundreds of thousands of servers. The private cloud of an enterprise can have hundreds to thousands of servers. The cloud offers users computational power that was impossible with conventional methods.
- Virtualization. Cloud computing gives users access to applications and services regardless of their location or the device they use. The requested resources are provided by the cloud rather than by any tangible entity. Applications run somewhere in the cloud; users have no knowledge of, and do not need to concern themselves with, their locations. A laptop or mobile phone is all they need to access the services they require, or even to perform complex tasks like supercomputing.
- High reliability. With mechanisms such as multi-copy redundancy, fault tolerance, and fast, automated failover between homogeneous compute nodes, cloud computing can deliver higher reliability than local computers.
- General purpose. A single cloud is able to run a huge variety of workloads to meet wide-ranging customer needs.
- High scalability. Clouds are dynamically scalable to accommodate changing demands.
- On-demand. A cloud provides a huge resource pool from which on-demand resources can be provisioned. Cloud service usage can be metered much as utilities like water, electricity, and gas are metered.
- Cost savings. With a broad selection of fault tolerance mechanisms available for clouds, service providers and enterprises can use inexpensive nodes to build their clouds. With automated, centralized cloud management, enterprises no longer need to grapple with the high costs of managing a data center. By provisioning hardware-independent, general-purpose resources, cloud significantly improves resource utilization. All of this gives users quick access to cost-efficient cloud services and resources.

1.4.3 Development of Cloud Computing
There are three phases of cloud computing in terms of transforming the enterprise IT architecture from a legacy, non-cloud architecture to a cloud-based one.
Cloud Computing 1.0
This phase deals with the virtualization of IT infrastructure resources, with a focus on compute virtualization. Enterprise IT applications are completely decoupled from the infrastructure. With virtualization and cluster scheduling software, multiple enterprise IT application instances and runtime environments (guest operating systems) can share the same infrastructure, leading to high resource utilization and efficiency.
HCIA-Cloud Computing mainly covers the implementation and advantages of cloud computing in this phase.
Cloud Computing 2.0
Infrastructure resources are provisioned to cloud tenants and users as standardized services, and management is automated. These are made possible by introducing standard service provisioning and resource scheduling automation software on the management plane, and software-defined storage and networking on the data plane. The request, release, and configuration of infrastructure resources, which previously required the intervention of data center administrators, are now fully automated, as long as the right prerequisites (e.g., sufficient resource quotas, no approval process in place) are met. This transformation greatly improves the speed and agility of infrastructure resource provisioning for enterprise IT applications, and accelerates time to market (TTM) for applications by shortening the time needed to ready infrastructure resources. It transforms static, rolling planning of IT infrastructure resources into a dynamic, elastic, on-demand resource allocation process. With it, enterprise IT is able to deliver higher agility for the enterprise's core applications, enabling the enterprise to quickly respond and adapt to changing demands. In this phase, the infrastructure resources provisioned to tenants can be virtual machines (VMs), containers (lightweight VMs), or physical machines. This transformation has not yet touched the enterprise's application, middleware, or database software architectures above the infrastructure layer.
Cloud Computing 3.0
This phase is characterized by:
- A distributed, microservices-based enterprise application architecture.
- An enterprise data architecture redesigned using Internet technology and the intelligence unleashed by big data.
In this phase, the enterprise application architecture gradually transforms from a vertical, hierarchical architecture that:
- Relies on traditional commercial databases and middleware suites
- Is purpose-designed for each application domain, siloed, highly sophisticated, stateful, and large in scale
to:
- A distributed, stateless architecture featuring lightweight, fully decoupled functionalities and total separation of data from application logic
- Databases and middleware service platforms that are based on open-source but enterprise-enhanced architectures and fully shared across different application domains.
In this way, enterprise IT can deliver a new level of agility and intelligence for the enterprise business, further improve resource utilization, and lay a solid foundation for fast, iterative innovation. The majority of enterprises and industries have already passed Cloud Computing 1.0. Enterprises in some industries have already commercially deployed Cloud Computing 2.0, though some on a small scale, and are now considering scaling up or continuing to move towards Cloud Computing 3.0. Others are moving from Cloud Computing 1.0 to 2.0, and some are even considering implementing Cloud Computing 2.0 and 3.0 in parallel.
The content comes from the book Cloud Computing Architecture: Technologies and Practice, written by Gu Jiongjiong.
Difference 1: From non-critical IT applications on the cloud to telecom network applications and mission-critical enterprise applications on the cloud.
In the beginning, virtualization was used only for non-critical applications, such as desktop cloud and development & testing cloud. At this stage, applications were insensitive to the performance overheads caused by the virtualization software, and people's attention was mostly focused on the higher resource utilization and more efficient application deployment enabled by resource pooling. As cloud continues to grow in popularity, enterprises have begun to move more business-critical applications, even their core production systems, to the cloud. It has therefore become crucial for a cloud platform to be more efficient and reliable in order to support critical enterprise applications, which are mostly performance-demanding and latency-sensitive.
Besides tangible assets such as compute, storage, and network resources, the most valuable asset of an enterprise is its data. In the compute virtualization phase of cloud computing, the front-end and back-end I/O queues between the guest OS and host OS carry high throughput overheads, while traditional structured data has demanding requirements on I/O throughput and latency. This is why, in the beginning, the part of the infrastructure that handles mission-critical structured data was excluded from the scope of virtualization and even resource pooling. However, as Xen- and KVM-based virtualization engines keep improving I/O performance using techniques like single root input/output virtualization (SR-IOV) and multi-queue, virtualization platforms can now deliver the performance needed to run core enterprise applications, such as mission-critical relational databases and enterprise resource planning (ERP) systems.
In the past two to three years, cloud computing has extended beyond the IT sector to penetrate the telecom sector. The success of the Internet, in both business model and technology, has encouraged telecom operators to reconstruct existing networks and network functions using cloud technology, so as to decouple telecom software from the proprietary hardware supplied by a limited choice of vendors, while also enjoying the benefits brought by the cloud, such as lower total cost of ownership (TCO) for hardware, energy savings, accelerated innovation, and more efficient application deployment. A multinational operator running subsidiaries in different countries may be able to use cloud to enable fast, software-based customization of network functions and allow greater openness.
Difference 2: From compute virtualization to storage and network virtualization.
The early phases of cloud computing focused on compute virtualization to support on-demand, elastic resource allocation and the decoupling of software from hardware. In fact, the now well-known compute virtualization technology can be traced back to the days of the IBM System/370, when it was first implemented on IBM mainframe computers. The idea was to put a virtualization layer between the OS and the bare-metal hardware to simulate multiple runtime environments on top of the System/370 instruction set. This led the upper-layer applications to "believe" that they were running on a dedicated system. The compute virtualization engine enabled time-based CPU sharing between multiple virtual machines.
In addition, access requests to memory, I/O, and network resources were also intercepted and proxied by the virtualization engine. Compute virtualization became widely available for commercial use only after x86-based servers became the mainstream IT hardware platform, with different single-host compute virtualization implementations from VMware ESX, Xen, and KVM. Combined with small- and medium-sized cluster management software (such as vCenter/vSphere, XenCenter, and FusionSphere) capable of dynamic VM migration and HA scheduling, these implementations have become the mainstream compute virtualization offerings.
As data and information become the core assets of enterprise IT, data storage media have split off from servers to form a large, independent industry. Like the essential computing power provided by CPUs, storage plays an equally important role in data centers. Questions like "how to quickly accommodate an enterprise's changing storage needs" and "how to make the best use of legacy storage systems supplied by multiple vendors" can best be answered by storage virtualization. Also, the hardware of modern data centers no longer merely consists of IBM mainframes or midrange computers deployed in master/slave mode. North-south traffic between client and server, east-west traffic between servers, and the communication between an enterprise's internal network and the public network all travel over Ethernet and wide area networks (WANs) featuring peer-to-peer, open architectures. Therefore, the network became the third essential element of a data center's IT infrastructure, along with compute and storage. In the context of an end-to-end data center infrastructure solution, server virtualization alone can no longer adequately support hardware-independent, elastic, on-demand resource allocation. Instead, all three virtualization technologies, compute, storage, and network, must work together to get the job done.
Besides the unified processing of APIs and information models by the cloud management and scheduling software on the management and control plane, one important characteristic of virtualization is to intercept the original access requests, extract keywords, and simulate the requested resources, though possibly at granularities different from those of the underlying physical resources. The CPU and memory resources of commodity x86 servers are virtualized into VM CPU and memory specifications and then provisioned to consumers (upper-layer users or applications) on demand. With the rapid upgrade of compute components and the horizontal scalability enabled by software-based load balancing, compute virtualization only needs to deal with splitting resources into smaller units. For storage, however, due to limited single-disk capacities (SATA/SAS) in contrast with growing storage needs, it is necessary to aggregate the storage resources of multiple loosely coupled, distributed servers across the entire data center, including both the disks inside servers and external SAN/NAS devices, to form a unified storage resource pool. This storage pool may be homogeneous, consisting of storage software and hardware supplied by the same vendor, or it may consist of heterogeneous storage devices from multiple vendors. All storage pools can be accessed through standard formats, such as block, object, and file storage.
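The toy Python sketch below illustrates only the aggregation idea: free space from disks scattered across servers is gathered into one pool, and a volume larger than any single disk is carved out of it. It is a conceptual sketch with invented names, not a description of FusionStorage or any real distributed storage system, which must also handle striping, replication, and failures.

    # A toy storage pool: capacity from disks across many servers is aggregated,
    # and volumes are carved out regardless of which physical disk holds each slice.
    class StoragePool:
        def __init__(self):
            self.extents = []            # [server, disk, free_gb] entries

        def add_disk(self, server, disk, capacity_gb):
            self.extents.append([server, disk, capacity_gb])

        def create_volume(self, size_gb):
            """Carve a volume out of whatever free space exists, possibly spanning disks."""
            placement, needed = [], size_gb
            for extent in self.extents:
                if needed == 0:
                    break
                take = min(extent[2], needed)
                if take:
                    extent[2] -= take
                    placement.append((extent[0], extent[1], take))
                    needed -= take
            if needed:
                raise RuntimeError("pool exhausted")
            return placement             # the consumer just sees one volume of size_gb

    pool = StoragePool()
    pool.add_disk("server-01", "sda", 600)
    pool.add_disk("server-02", "sdb", 600)
    print(pool.create_volume(1000))      # larger than any single disk in the pool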
In a data center, network connectivity needs come from applications and are closely related to the compute and storage resources functioning as network nodes. Traditionally, however, network switching functions were implemented on physical switches and routers. To upper-layer applications, network functions were just isolated "boxes" connected by communication links. These "boxes" could not sense the dynamic connectivity needs of upper-layer applications, so networking and isolation requirements from the service layer had to be accommodated entirely by manual configuration. In a multi-tenant virtualization environment, different tenants may have vastly different requirements on the configuration and management of edge routing and gateway devices. The built-in multi-instance features of physical routers and firewalls cannot meet the multi-tenancy requirements of the cloud environment either. On the other hand, deploying physical routers and firewalls to match existing tenants in quantity is simply not an economically viable option for most customers.

This has led people to consider migrating network functions from proprietary, closed-architecture platforms to commodity x86 servers. In this way, instances can be created and destroyed on network nodes by the cloud OS platform in an automated manner, and virtual communication links, along with the necessary security isolation mechanisms, can be created between any two network nodes. The significance of this is that it enables application-driven, automated network management and configuration, significantly reducing the complexity of data center network management. From the perspective of resource utilization, the traffic between any two virtual network nodes is exchanged over the underlying physical network. An unlimited number of virtual nodes can be created and used as long as they do not exceed the total resource quotas of the physical network. (It is advisable to use a non-blocking Clos architecture for the physical network.) Network bandwidth resources are released as soon as the virtual network loads are released, which maximizes the dynamic sharing of physical network resources. To sum up, network virtualization allows multiple "box-like" network entities to be presented as a unified network resource pool to upper-layer applications, providing a unified collaboration mechanism for compute, storage, and network resources.
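The following sketch (not from the original text) illustrates what application-driven network provisioning can look like with the OpenStack SDK for Python: a tenant network and subnet are created and later released purely in software, with no manual switch configuration. The cloud name and addressing are assumptions for illustration.

```python
# Minimal sketch: create a tenant network and subnet programmatically,
# instead of manually configuring physical switches and routers.
# "mycloud" and the CIDR are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")

network = conn.network.create_network(name="app-tier-net")
subnet = conn.network.create_subnet(
    name="app-tier-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="192.168.10.0/24",
)
print(f"network {network.id} with subnet {subnet.cidr} is ready")

# Releasing the virtual network returns its bandwidth share to the pool.
conn.network.delete_subnet(subnet.id)
conn.network.delete_network(network.id)
```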
Difference 3: From internal services within an enterprise to multi-tenant infrastructure services and end-to-end IT services, with larger resource pools

An important goal of cloud computing is to allow users to consume computing power as conveniently as they use utilities like water and electricity. In the early days of cloud computing, virtualization technologies, such as VMware ESX, Microsoft Hyper-V, and Linux-based Xen and KVM, were widely used to implement server-centric virtualization and resource consolidation. At this stage, the servers in enterprise data centers, though virtualized and pooled, still formed partial silos. The concepts of multi-tenancy and service automation were understood and accepted by only a few people. Server-centric resource pooling serves only the personnel who administer and manage the data center's infrastructure hardware and software. Before virtualization, data center management personnel managed servers, storage devices, and network devices. After virtualization, they manage VMs, storage volumes, software-based switches, and even software-based firewalls. This allows multiple application instances and OSs to share server resources to the maximum extent possible, increasing resource utilization by evening out peaks and troughs. In addition, extra high availability (HA) and fault tolerance (FT) mechanisms are provided for applications. Power usage effectiveness (PUE) is improved through dynamic resource scheduling and power management: lightly loaded instances are aggregated onto a small number of servers and split apart again when the load increases, and idle servers are powered off.

However, virtualization only achieves higher resource utilization and better PUE. It is still a long way from a real cloud featuring multi-tenancy and automated cloud service provisioning. So the natural next step following virtualization is to extend the benefits of cloud computing to each tenant, rather than just to the data center administrators. On top of the existing infrastructure O&M, monitoring, and management portals, the cloud platform must be able to provision on-demand, customized infrastructure resources for each internal or external tenant by providing a portal for service subscription and daily maintenance and management, or an API portal. Permissions such as add, delete, modify, and query on virtual or physical resources must be delegated to each tenant while ensuring proper control and isolation. Each tenant is authorized to access only the compute and storage resources that they requested and created themselves, along with the OS and application software bound to those resources. In this way, tenants can have quick access to on-demand resources without purchasing any IT hardware, and enjoy a new level of automation and agility brought by the cloud. Thus, cloud benefits such as economy of scale and fast elasticity of on-demand resources can be fully exploited.

Difference 4: From small to huge datasets, and from structured data to unstructured or semi-structured data

With the increasing availability and popularity of smart devices and social networks, and the rise of the Internet of Things (IoT), the data transmitted over IT networks has changed from small-scale structured data to massive amounts of unstructured and semi-structured data, such as text, images, and videos. Data volumes have increased exponentially. The compute and storage capacities needed to process such massive amounts of unstructured and semi-structured data have far exceeded what traditional scale-up hardware architectures can provide. It has therefore become imperative to fully utilize the scale-out architecture enabled by cloud computing in order to create the large-scale resource pools needed for mass data processing. The massive data sets accumulated during daily enterprise transactions, as well as the data obtained from other sources, such as customer data from social networks or other websites, do not always require real-time processing, so the data processing system does not need to be ready at all times. Therefore, a shared massive storage platform, plus the ability to dynamically allocate and release batch, parallel compute resources, will be the most efficient means to support big data analytics needs.
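The scale-out idea can be illustrated with a toy example (mine, not from the original text): a batch job over unstructured text is split into independent chunks and processed in parallel, and the partial results are merged afterwards. The same pattern extends from local worker processes to hundreds of cloud nodes allocated on demand.

```python
# Toy sketch of scale-out batch processing: split unstructured text into
# chunks, count words in parallel, then merge the partial results.
# In a real cloud deployment the workers would be VMs or containers
# allocated on demand from the resource pool, not local processes.
from collections import Counter
from multiprocessing import Pool

def count_words(chunk: str) -> Counter:
    return Counter(chunk.split())

def parallel_word_count(documents: list[str], workers: int = 4) -> Counter:
    with Pool(processes=workers) as pool:
        partial_counts = pool.map(count_words, documents)
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

if __name__ == "__main__":
    docs = ["cloud computing on demand", "on demand elastic cloud", "big data on cloud"]
    print(parallel_word_count(docs).most_common(3))
```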
Difference 5: From on-premises, fixed-location computing and traditional man-machine interaction to cloud computing, smart mobile terminals, and immersive experiences for enterprise and consumer applications

As enterprise and consumer applications are increasingly moved to the cloud, the thin client is gaining popularity as a way of accessing cloud services. Compute and storage resources are also split from on-premises locations and moved to remote data centers for centralized deployment. In this circumstance, enterprises must find ways to guarantee a consistent (sometimes immersive) user experience for on-cloud applications regardless of where or how the access requests are initiated: from various thin clients or smart terminals, over a variety of networks such as the enterprise's internal local area network (LAN), external fixed or mobile broadband, or a wide area network (WAN). Facing problems like unstable packet-forwarding performance and latency, packet loss, and the absence of an end-to-end QoS mechanism for LAN or WAN communication, enterprises may find it exceptionally challenging to guarantee a cloud experience as good as, or nearly as good as, local computing. Cloud applications may be accessed using different methods and have different user experience standards. Common remote desktop protocols are becoming inadequate for delivering a user experience equivalent to that of local computing. Concerted efforts must be focused on optimizing the end-to-end QoS and QoE for IP multimedia (audio and video), and different methods may need to be used based on dynamically identifying different types of workloads. Typical scenarios include the following:

• Common office applications, where the response latency must be less than 100 ms and the average bandwidth usage must not exceed 150 kbit/s: GDI/DX/OpenGL rendering instructions are intercepted on the server, and network traffic is monitored and analyzed in real time. Based on this analysis, the optimal transmission methods and compression algorithms are selected, and the rendering instructions are redirected to thin clients or soft terminals, minimizing latency and bandwidth usage.

• VoIP in virtual desktops, which usually delivers less-than-ideal voice quality: the default desktop protocol transport, TCP, is not suitable for VoIP. To solve this problem, RTP over UDP is used instead of TCP, together with mature VoIP codecs such as G.729/AMR. When VoIP/UC clients are used, VoIP traffic usually bypasses the VMs to reduce the latency and voice-quality overheads caused by additional encoding and decoding. With these measures, the average mean opinion score (MOS) of voice services in virtual desktop scenarios has increased from 3.3 to 4.0.

• Remote access and playback of cloud-hosted HD (1080p/720p) videos: common thin clients usually perform poorly in decoding HD videos. When there are concurrent access requests from multiple cloud desktops and media stream redirection is supported, the desktop protocol client software should be able to call the hardware decoding capability of the thin client chip through a dedicated API. Some applications, such as Flash, and video software that reads from and writes to the GPU directly, rely on the concurrent codec capability of the GPU or a hardware DSP. Software codecs running on general-purpose CPUs will result in serious stalling and a poor user experience.
In this case, hardware GPU virtualization or a DSP acceleration card can be used to improve the user experience of cloud-based HD video applications, delivering an HD, smooth viewing experience on par with local video playback. The desktop protocol should also be able to intelligently identify how rapidly different areas of the image change, and enable bandwidth-intensive GPU data compression and redirection for image areas that change a lot, where the redirection of rendering instructions cannot keep up.

• Graphics-intensive cloud applications, such as engineering or mechanical drawing, hardware PCB design, 3D games, and the most recent VR simulations: these applications also require large amounts of virtualized GPU resources for hardware-assisted rendering and compression acceleration. They also require significantly higher bandwidths: dozens to hundreds of Mbit/s per channel, and 10 to 100 Gbit/s for concurrent access. The bandwidth between cloud access nodes and a centralized data center is finite. One way to provide higher, scalable bandwidth is to split the large, centralized data center into multiple distributed data centers that are logically centralized but physically dispersed. This allows applications with a heavy load of human-machine interaction, such as VDI/VR, to be deployed at service PoPs in close proximity to end users.

Global consumer IT is entering the post-PC era, and iOS and Android smart mobile devices are gradually replacing the PCs and even laptops used in offices. Enterprise users hope that, with smart devices, they can access not only traditional Windows desktop applications, but also the enterprise's internal web SaaS applications, third-party SaaS applications, and other Linux desktop applications. They also hope that each cloud application can deliver a consistent user experience, regardless of the user device or OS, and without any adaptation effort. The Web Desktop solution can meet all these requirements. Built on HTML5, it supports desktops running on multiple types of OSs, unified authentication and application aggregation, and zero installation, upgrade, and maintenance for applications, delivering a consistent user experience across various types of smart devices.

Difference 6: From homogeneous virtualization to heterogeneous virtualization, lightweight containers, and bare metal servers (physical machines)

In the process of transforming a traditional enterprise IT architecture into a modern, more agile one, enterprises need to be able to duplicate their applications quickly and in batches. Closed-source virtualization solutions offered by VMware and Hyper-V, and open-source ones such as Xen and KVM, were among the first to reach the maturity required for large-scale deployment. With virtualization, the best practices of application installation and configuration can be quickly duplicated in the form of VM templates and images, greatly simplifying the repetitive yet sometimes complex installation, provisioning, and configuration of IT applications, and reducing software deployment time to hours or even minutes.
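To illustrate template-based duplication, the following hedged sketch (not from the original text) uses the OpenStack SDK for Python to boot a server from a prepared image. The image, flavor, and network names are hypothetical placeholders.

```python
# Minimal sketch: clone a pre-configured application by booting a server
# from a VM image (template). Image, flavor, and network names are
# illustrative; replace them with values from your own cloud.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.image.find_image("erp-app-v1-template")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("app-tier-net")

server = conn.compute.create_server(
    name="erp-app-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)  # expected: ACTIVE within minutes
```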
However, as enterprise IT applications increasingly transform from small-scale, monolithic, and stateful ones into large-scale, distributed, and stateless ones with complete separation of data and application logic, people have begun to realize that while VMs allow efficient resource sharing for multi-instance applications within large IT data centers, they still cannot keep up with the elastic resource needs of applications like online web services and big data analytics. A single tenant may run multiple applications, with hundreds to thousands, or even tens of thousands, of concurrent application instances. With VMs, an OS instance must be created for each application instance, leading to huge overheads. Also, the speed at which VM-based application instances can be created, started, and upgraded is still not fast enough for applications with unpredictable, massive resource needs. Internet companies like Google, Facebook, and Amazon have widely adopted Linux container technologies (such as namespaces and cgroups). Based on the shared Linux kernel, these technologies isolate the runtime environments of application instances in containers. They also package the configuration information and runtime environment of each instance together, and use container cluster technology (such as Kubernetes, Mesos, and Swarm) to allow fast provisioning of highly concurrent, distributed multi-container instances and large-scale, dynamic orchestration and management of containers (a minimal orchestration sketch appears below). This yields an unprecedented level of efficiency and agility in large-scale software deployment and lifecycle management, as well as in iterative software development and rollout based on DevOps. In the long run, container technology, being more lightweight and agile, will eventually replace virtualization. In the foreseeable future, however, containers will continue to rely on cross-VM and cross-PM isolation mechanisms to isolate the runtime environments of different tenants and fulfill service level agreements, because containers by themselves cannot properly address cross-tenant security isolation and excessive resource competition when shared host resources are over-committed, at least in the short term.

Also, virtualization may not be suitable for some enterprise applications or middleware because of vendor support policies, enterprise-class performance standards, or compatibility requirements. For example, virtualization cannot accommodate the performance needs of commercial databases such as the Oracle RAC cluster database and the HANA in-memory database. Customers want these workloads, while running on physical machines, to also enjoy benefits such as on-demand infrastructure resource provisioning based on resource pooling and automated configuration, as is already available for virtualization and containers. To achieve this, the cloud platform and cloud management software must not only automate OS and application installation on physical machines, but also enable collaborative management of storage and network resource pools and automated configuration while guaranteeing secure tenant isolation.
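The orchestration sketch referenced above: a hedged example (not from the original text) using the Kubernetes Python client to declare fifty identical container instances, which the cluster then schedules across nodes and keeps running. The image name and replica count are assumptions for illustration.

```python
# Minimal sketch: declare 50 identical container instances with the
# Kubernetes Python client; the cluster schedules and maintains them.
# The image name and replica count are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=50,
        selector=client.V1LabelSelector(match_labels={"app": "web-frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-frontend"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
print("requested 50 container instances; the scheduler places them across nodes")
```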
Difference 7: From closed-source, closed-architecture cloud platform and cloud management software to open-source, open-architecture ones

In the early stages of cloud computing, closed-source cloud platform software, such as VMware vSphere/vCenter and Microsoft System Center/Hyper-V, was far more mature than its open-source counterparts, and was therefore the first choice for enterprises seeking to build private clouds. With the rapid advancement of open-source communities and technologies, Xen- and KVM-based cloud OSs, such as OpenStack, CloudStack, and Eucalyptus, are catching up in both influence and market presence. Take OpenStack as an example. Many leading software and hardware companies, such as IBM, HP, SUSE, Red Hat, and Canonical (Ubuntu), have become OpenStack platinum members. The community operates on a six-month, time-based release cycle with frequent development milestones. Since the release of the first version in 2010, a new OpenStack version has been released every six months, with active contributions from all community members, and the functionality of the community versions has been iterating quickly and steadily. In the first half of 2014, OpenStack caught up with vCloud/vSphere 5.0 in terms of maturity and was ready for basic commercial deployment. Judging by the current situation and trends, OpenStack is becoming "the Linux of the cloud computing world." Around 2001, the Linux OS was still relatively weak, while Unix systems dominated most production platforms of enterprise IT. Back then, it would have been unimaginable that open-source Linux would replace closed-source Unix in just 10 years to become the default choice of OS for enterprise IT servers. In addition, midrange computers and even mainframes are being replaced by commodity x86 servers.

1.5 Cloud Computing Models

Although not universally agreed upon, cloud computing is commonly categorized by deployment model and by service model.

1.5.1 By Deployment Model

Public Cloud

Public cloud is the earliest and best-known form of cloud computing. Our previous examples of cloud computing, including Backup & Restore on Huawei phones and Google Translate, are both examples of public cloud. A public cloud offers utility-like IT services over the Internet to the general public. Public clouds are usually built and run by cloud service providers. End users access cloud resources or services on a subscription basis, while the service provider takes on all the O&M and administration responsibilities.

Private Cloud

Private clouds are usually deployed for internal use within enterprises or other types of organizations. All the data of a private cloud is stored in the enterprise or organization's own data center, and the data center's ingress firewalls control access to that data. A private cloud can be built on the enterprise's legacy architecture, allowing most of the customer's existing hardware to be reused. A private cloud may therefore deliver a higher level of data security and allow the reuse of legacy equipment. However, this equipment will eventually need to be upgraded to keep up with growing demands, and doing so may entail high costs. On the other hand, stricter data access control also means less data sharing, even within the organization. In recent years, a different type of private cloud has emerged that encourages enterprises to deploy core applications on the public cloud: the Dedicated Cloud (DeC) on a public cloud.
This model offers dedicated compute and storage resources and reliable network isolation, meeting the high reliability, performance, and security standards of tenants' mission-critical applications.

Hybrid Cloud

Hybrid cloud is a flexible cloud deployment model. It may comprise two or more different types of clouds (public, private, and community, which we will discuss later) that remain distinct entities. Cloud users can move their workloads between the different clouds as needed. Enterprises may choose to keep core data assets on premises for maximum security while placing other data on public clouds for cost efficiency, hence the hybrid cloud model. With the pay-per-use model, public clouds offer a highly cost-efficient option for companies with seasonal data processing needs. For example, for some online retailers, demand for computing power peaks during holidays. Hybrid cloud also accommodates elasticity demands for other purposes, such as disaster recovery: a private cloud can use a public cloud as a disaster recovery destination and recover data from it when necessary. Another feasible option is to run applications on one public cloud while using another public cloud for disaster recovery. To sum up, a hybrid cloud allows users to enjoy the benefits of both public and private clouds, and it offers great portability for applications in a multi-cloud environment. In addition, the hybrid cloud model is cost-effective because enterprises can access cloud resources on demand on a pay-per-use basis. The downside is that a hybrid cloud usually requires more complex setup and O&M. A major challenge facing hybrid cloud is integration between different cloud platforms, different types of data, and applications. A hybrid cloud may also need to address compatibility issues between heterogeneous infrastructure platforms.

Community Cloud

A community cloud is a cloud platform whose infrastructure is built and managed by a leading organization of a specific community and shared among several organizations of that community. These organizations typically have common concerns, such as similar security, privacy, performance, and compliance requirements. The level of resource sharing may vary, and the services may be available with or without a fee. Community cloud is not a new concept. What differentiates it from public and private clouds is its industry attribute. For example, with a cloud built for the healthcare industry, patients' case files and records can be stored in the cloud, and doctors from every hospital can obtain patient information from it for diagnostic purposes. Community cloud can be a huge opportunity as well as a huge challenge. For example, with a community cloud for the healthcare industry, special efforts, including technical and administrative measures, must be made to ensure the security of personal information on the cloud.

1.5.2 By Service Model

In cloud computing, all deployed applications use some kind of layered architecture. Typically, there is the user-facing interface, where end users create and manage their own data; the underlying hardware resources; the OS on top of the hardware resources; and the middleware and application runtime environment on top of the OS. We call everything related to applications the software layer, the underlying virtualized hardware resources (network, compute, and storage) the infrastructure layer, and the part in between the platform layer. IaaS means that the cloud service provider provides and manages the infrastructure layer while the consumer manages the other two layers. PaaS means that the provider manages the infrastructure and platform layers while the consumer manages the application layer. SaaS means that all three layers are managed by the provider.
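A small, illustrative sketch (mine, not from the original text) makes this division of responsibility explicit; the layer and model names simply mirror the description above.

```python
# Illustrative sketch: who manages which layer under IaaS, PaaS, and SaaS.
LAYERS = ("infrastructure", "platform", "software")

MANAGED_BY_PROVIDER = {
    "IaaS": {"infrastructure"},
    "PaaS": {"infrastructure", "platform"},
    "SaaS": {"infrastructure", "platform", "software"},
}

def responsibilities(model: str) -> dict:
    provider_layers = MANAGED_BY_PROVIDER[model]
    return {layer: ("provider" if layer in provider_layers else "consumer")
            for layer in LAYERS}

for model in ("IaaS", "PaaS", "SaaS"):
    print(model, responsibilities(model))
```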
Let's explain these three cloud service models using the example of a video game.

Figure 1-10 Computer specifications required for a video game

The figure above shows the computer specifications required for this game. If we buy a computer with the required specifications, install an OS, and then install the game, that is not cloud computing. If we buy a cloud server of the same specifications from a public cloud provider, use an image to install an OS, and then download and install the game, we are using the IaaS model. When installing a large game such as this one, we are likely
