

# Chapter 3 Virtualization Technology

## 3.1 Introduction to Virtualization Technology

### 3.1.1 Definition of Virtualization

Virtualization is a broad and evolving concept, so it is not easy to give it a clear and accurate definition. The industry currently offers several definitions of virtualization:

- Virtualization is an abstract method of representing computer resources. Through virtualization, the abstracted resources can be accessed in the same way as the resources before abstraction. This method of abstraction is not limited by the implementation, geographic location, or physical configuration of the underlying resources.
- Virtualization is a virtual (as opposed to real) version created of something, such as an operating system, computer system, storage device, or network resource.

Although these definitions are not identical, careful analysis shows that they all convey three points:

- The objects of virtualization are resources of various kinds.
- The virtualized logical resources hide unnecessary details from users.
- Users can realize part or all of the functions available in the real environment within the virtual environment.

Virtualization is a logical representation of resources, unconstrained by physical limitations. In this definition, "resources" covers a wide range, as shown in the image. Resources can be hardware, such as CPU, memory, storage, and network, or software environments, such as operating systems, file systems, and applications. Under this definition, we can better understand the memory virtualization in the operating system mentioned in Sect. 2.1.2. Memory is a real resource, and virtualized memory is a substitute for this resource; the two have the same logical representation.
The virtualization layer hides from the upper layer the details of how unified addressing and swapping in/out between memory and the hard disk are achieved. Software that uses virtual memory can still operate on it with the same allocation, access, and release instructions, just as if it were accessing real physical memory. As the image shows, many kinds of resources can be virtualized. The main goal of virtualization is to simplify the representation, access, and management of IT resources, including infrastructure, systems, and software, and to provide standard interfaces for these resources to receive input and provide output. The users of virtualization can be end-users, programs, or services. Through standard interfaces, virtualization minimizes the impact on users when the IT infrastructure changes. End-users can keep using the original interface because the way they interact with virtual resources has not changed; even if the implementation of the underlying resources changes, they are not affected. Virtualization technology reduces the coupling between resource users and the concrete realization of the resource, so that users no longer depend on one particular realization. Exploiting this loose coupling, system administrators can reduce the impact on users when maintaining and upgrading IT resources.
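The standard-interface idea above can be sketched in Python. This is a minimal illustration, not from the text: `BlockDevice`, `PhysicalDisk`, and `VirtualDisk` are hypothetical names. The point is that a caller written against the abstract interface is unaffected when the underlying implementation changes.

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Standard interface: users program against this, not a concrete device."""
    @abstractmethod
    def read(self, offset: int, size: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

class PhysicalDisk(BlockDevice):
    """Stands in for a real device: a fixed, contiguous byte array."""
    def __init__(self, size: int = 1024):
        self._data = bytearray(size)
    def read(self, offset, size):
        return bytes(self._data[offset:offset + size])
    def write(self, offset, data):
        self._data[offset:offset + len(data)] = data

class VirtualDisk(BlockDevice):
    """Stands in for a virtualized device: sparse blocks that could live anywhere."""
    def __init__(self):
        self._bytes = {}
    def read(self, offset, size):
        return bytes(self._bytes.get(i, 0) for i in range(offset, offset + size))
    def write(self, offset, data):
        for i, b in enumerate(data):
            self._bytes[offset + i] = b

def save_and_load(dev: BlockDevice, payload: bytes) -> bytes:
    """User code: identical regardless of which implementation it is handed."""
    dev.write(0, payload)
    return dev.read(0, len(payload))

assert save_and_load(PhysicalDisk(), b"hello") == save_and_load(VirtualDisk(), b"hello")
```

Because `save_and_load` depends only on the interface, the implementation behind it can be swapped (local disk, remote volume, in-memory store) without the user noticing, which is exactly the loose coupling described above.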
### 3.1.2 Development of Virtualization Technology

With the rapid development of information technology, enterprises and individual users favor virtualization technology mainly because it helps solve problems of resource allocation and business management.

Firstly, the main function of a virtual computer is to exploit the idle capacity of high-performance computers, increasing server utilization without purchasing new hardware; at the same time, it enables rapid delivery and rapid recovery of customer system applications. This is the public's most basic and intuitive understanding of virtual computers. Secondly, virtualization technology is gradually playing a vital role in enterprise management and business operations. It enables rapid deployment and migration of servers and data centers and provides transparent behavior management. The importance of virtualization technology has made its development a focus of industry attention. At the technical level, virtualization faces four major trends: platform openness, connection-protocol standardization, client hardwareization, and public-cloud privatization.

- Platform openness refers to opening up the closed architecture of the basic platform: through virtualization management, virtual machines from multiple vendors can coexist on an open platform, and different vendors can build rich applications on it.
- Connection-protocol standardization aims to resolve the terminal-compatibility problems caused by the multiple connection protocols currently in use (VMware's PCoIP, Citrix's ICA, etc.) in public desktop clouds, thereby achieving broad compatibility between terminals and cloud platforms and optimizing the industrial chain.
- Client hardwareization addresses the current lack of hardware support for desktop virtualization and for the client multimedia experience; terminal chip technology is gradually improving, and virtualization is being implemented on mobile terminals.
- Public-cloud privatization turns the enterprise's IT architecture into a "private cloud" superimposed on the public cloud through VPN-like technology, ensuring the security of enterprise data without sacrificing the convenience of the public cloud.

### 3.1.3 Advantages of Virtualization Technology

Virtualization technology abstracts and transforms the various physical resources of a computer (CPU, memory, disk space, network adapters, etc.), dividing and recombining them into one or more virtual computer environments. It allows users to run multiple operating systems on one server simultaneously, with programs running in mutually independent spaces without affecting each other, thereby significantly improving the efficiency of the computer. The virtualization layer simulates a set of independent hardware devices for each virtual machine, including CPU, memory, motherboard, graphics card, and network card, and a guest operating system is installed on top of them. The end-user's programs run in the guest operating system. Virtual machines support the dynamic sharing of physical resources and resource pools and improve resource utilization, especially for workloads whose average demand is far lower than would justify dedicated resources. This way of operating has the following advantages.

1. **Reduce the number of terminal equipment**

Reducing the number of terminal devices reduces maintenance and management costs. Using virtualization technology can effectively reduce the number of managed physical resources such as servers and workstations, curb the growth of such equipment, and hide part of the complexity of physical resources. It simplifies common management tasks through automation, access to better information, and central management; it automates load management, supports the use of common tools across multiple platforms, and improves staff efficiency.
Integrating multiple systems onto one host through virtualization can still guarantee one (virtual) server per system. Thus, without affecting business use, the number of hardware devices can be effectively reduced, along with the power they consume. At the same time, it reduces the rack space the equipment requires and avoids the machine-room renovations that growth in equipment would otherwise force.

2. **Higher security**

Virtualization technology can achieve isolation and partitioning that simpler sharing mechanisms cannot, enabling controllable and secure access to data and services. By partitioning the host into internal virtual machines, one program can be prevented from degrading other programs' performance or crashing the system; even an unstable program or system can run safely in isolation. If a comprehensive virtualization strategy is implemented, system administrators can put fault-tolerance plans in place to ensure business continuity in the event of an accident. Converting operating systems and process instances into data files helps automate and streamline backup and replication, provides more robust business continuity, and speeds up recovery after failures or natural disasters. Further development of virtual cluster technology can keep business functions uninterrupted and realize multi-machine hot backup.

3. **Higher availability**

Virtualizing the entire computing infrastructure and then using specialized software to centrally manage the system and virtual hosts makes it possible to manage physical resources without affecting users. This reduces the resources and processes that must be managed, thereby reducing the complexity of the network management system's hardware architecture.
Through centralized, policy-based management, the advantages of end-to-end virtualization can be applied to both virtual and physical resources, allowing maintenance personnel to handle enterprise-level installation, configuration, and change management from a central location and significantly reducing the resources and time required to manage system hardware.

## 3.2 Basic Knowledge of Server Virtualization

### 3.2.1 System Virtualization

System virtualization is the most widely recognized and accepted virtualization technology. It separates the operating system from the physical machine, so that one or more virtual operating systems can be installed and run on a physical machine at the same time, as shown in the image. From the perspective of applications inside the operating system, there is no significant difference between a virtual operating system and one installed directly on a physical machine. The core idea of system virtualization is to use virtualization software to create one or more virtual machines on a physical machine. A virtual machine is a logical computer system, built with system virtualization technology, that runs in an isolated environment with complete hardware functions, including a guest operating system and its application programs. In system virtualization, multiple operating systems run simultaneously on the same physical machine without affecting each other, reusing the physical machine's resources. There are various system virtualization technologies, such as those applied to IBM z-series mainframes, to IBM p-series servers based on the Power architecture, and to x86-architecture personal computers. For these different types of system virtualization, the design and implementation of the virtual machine's operating environment differ.
However, the virtual operating environment of system virtualization must provide a virtual hardware environment for the virtual machines running on it, including virtual processors, memory, devices and I/O, and network interfaces, as shown in the image. It also provides many features for these operating systems, such as hardware sharing, system management, and system isolation. The greater value of system virtualization lies in server virtualization. At present, data centers use large numbers of x86 servers; a large data center often hosts tens of thousands of them. For safety, reliability, and performance reasons, these servers usually each run only one application service, leading to low server utilization. Since servers usually have strong hardware capabilities, virtualizing multiple virtual servers on the same physical server, each running a different service, increases server utilization, reduces the number of machines, lowers operating costs, and saves physical space and energy, achieving both economic and environmental benefits. Besides system virtualization on personal computers and servers, desktop virtualization can also run multiple different systems in the same terminal environment. Desktop virtualization decouples the desktop environment of a personal computer (including applications and files) from the physical machine. The virtualized desktop environment is stored on a remote server instead of the personal computer's local hard disk. This means that when users work on their desktop environments, all applications and data run, and are ultimately saved, on that remote server.
The user can access and use this desktop environment from any compatible device with sufficient display capabilities, such as a personal computer or smartphone.

### 3.2.2 Server Virtualization

Server virtualization applies system virtualization technology to servers, virtualizing one server into several. As shown in the image, before server virtualization was adopted, three applications ran on three independent physical servers; afterwards, the three applications run on three separate virtual servers hosted on the same physical server. Simply put, server virtualization makes it possible to run multiple virtual servers on a single physical server. Server virtualization provides each virtual server with the hardware resource abstraction needed to support its operation, including virtual BIOS, virtual processor, virtual memory, and virtual device I/O, and provides sound isolation and security for virtual machines. Server virtualization technology was first used in mainframes manufactured by IBM. VMware introduced it to the x86 platform in the 1990s, and it was quickly accepted by the industry after 2000, becoming a popular technology. Seeing the huge advantages of server virtualization, major IT vendors have increased their investments in related technologies. Microsoft's server operating system Windows Server 2008 includes the server virtualization software Hyper-V as an optional component, and Microsoft promises that Windows Server 2008 supports other existing mainstream virtualization platforms. At the end of 2007, Cisco announced a strategic investment in VMware through a purchase of shares. Many mainstream Linux distributions, such as Novell's SUSE Enterprise Linux and Red Hat's Red Hat Enterprise Linux, have added Xen or KVM virtualization software and encourage users to install and use it.
Virtualization technology is a key direction in the technology and strategic business planning of many mainstream technology companies, including Huawei, Cisco, Google, IBM, and Microsoft.

### 3.2.3 Typical Implementation

Server virtualization provides an abstraction of hardware devices and management of virtual servers through virtualization software. The industry usually uses two special terms when describing such software:

- **Virtual Machine Monitor (VMM):** responsible for providing the hardware resource abstraction for virtual machines and a running environment for guest operating systems.
- **Virtualization platform (hypervisor):** responsible for hosting and managing virtual machines. It runs directly on the hardware, so the underlying architecture directly constrains its implementation.

These two terms are usually not strictly distinguished, and "hypervisor" can also be translated as "virtual machine monitor." In server virtualization, the virtualization software needs to implement hardware abstraction, resource allocation, scheduling, and management, as well as isolation between virtual machines and the host operating system and among multiple virtual machines. The virtualization layer provided by this software sits above the hardware platform and below the guest operating systems. According to how the virtualization layer is implemented, server virtualization has two main implementation methods, compared in the image.

- **Residence (hosted) virtualization.** The VMM is an application program running on a host operating system, which uses the host operating system's functions to implement the abstraction of hardware resources and the management of virtual machines. Virtualization in this way is easier to implement, but because the virtual machine's resource operations must be completed by the host operating system, its performance is usually lower.
Typical implementations of this approach are VMware Workstation and Microsoft Virtual PC.

- **Bare-metal virtualization.** In bare-metal virtualization, it is the virtualization platform, not a host operating system, that runs directly on the hardware, and the virtual machines run on the virtualization platform. The virtualization platform provides instruction sets and device interfaces to support the virtual machines. This method usually has higher performance but is more difficult to implement. Typical implementations of this approach are Xen Server and Microsoft Hyper-V.

### 3.2.4 Full Virtualization

From the perspective of the guest operating system, a fully virtualized platform is the same as a real platform, and the guest operating system can run without any modification. The guest operating system operates the virtual processor, virtual memory, and virtual I/O devices just as it would a normal processor, memory, and I/O devices. From an implementation point of view, the VMM must correctly handle all possible guest behaviors. Since the guest's behavior is expressed through instructions, the VMM must correctly process all possible instructions: for full virtualization, this means all instructions defined in the processor's manual specification. Taking the x86 architecture as an example, full virtualization has gone through two stages: software-assisted full virtualization and hardware-assisted full virtualization.

1. **Software-assisted full virtualization**

In the early days of x86 virtualization, the x86 architecture did not support virtualization at the hardware level, so full virtualization could only be achieved through software. A typical approach is a combination of priority compression (ring compression) and binary code translation (binary translation).
The principle of priority compression is as follows. The VMM and the guest run at different privilege levels; on the x86 architecture, the VMM usually runs at Ring0, the guest operating system kernel at Ring1, and guest applications at Ring3. When the guest operating system kernel executes a privileged instruction, because it is at the non-privileged Ring1 level, an exception is usually triggered, and the VMM intercepts the privileged instruction and virtualizes it. Priority compression handles most privileged instructions correctly, but because the x86 instruction set was not designed with virtualization in mind, some instructions still cannot be processed this way: when they perform privileged operations at Ring1, no exception is triggered, so the VMM cannot intercept them and handle them accordingly. Binary code translation is therefore introduced to handle these virtualization-unfriendly instructions. Its principle is also simple: by scanning and modifying the guest's binary code, instructions that are difficult to virtualize are converted into instructions that support virtualization. The VMM usually scans the operating system's binary code and, once it finds an instruction that needs processing, translates it into an instruction block (cache block) that supports virtualization. These instruction blocks can cooperate with the VMM to access restricted virtual resources, or explicitly trigger exceptions for further processing by the VMM. In addition, because this technology can modify the guest's binary code, it is also widely used for performance optimization, that is, replacing instructions that cause performance bottlenecks with more efficient ones.
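The scan-and-translate step described above can be sketched as a toy model. The simplifications are obvious: instructions are plain strings rather than machine code, and the `vmm_emulate_*` targets are hypothetical names. A real translator works on binary code, caches translated blocks, and patches control flow between them.

```python
# Instructions that on real x86 silently misbehave at a lower privilege level
# instead of trapping (POPF and SGDT are classic examples).
SENSITIVE = {"POPF", "SGDT", "SIDT"}

def binary_translate(block):
    """Rewrite one basic block: sensitive instructions become explicit VMM calls."""
    translated = []
    for insn in block:
        if insn in SENSITIVE:
            # The replacement cooperates with the VMM instead of touching hardware.
            translated.append(f"CALL vmm_emulate_{insn.lower()}")
        else:
            # Innocuous instructions pass through unchanged and run natively.
            translated.append(insn)
    return translated

assert binary_translate(["MOV", "POPF", "ADD"]) == \
       ["MOV", "CALL vmm_emulate_popf", "ADD"]
```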
Although priority compression and binary code translation can achieve full virtualization, this patching approach is hard to keep architecturally complete. x86 vendors have therefore added virtualization support to the hardware, realizing virtualization in the hardware architecture itself.

2. **Hardware-assisted full virtualization**

Many problems that are difficult to solve at one level become easier when a new level is added beneath it. Hardware-assisted full virtualization is one such case. Since the operating system is the lowest layer of system software above the hardware, if the hardware itself adds sufficient virtualization functions, it can intercept the operating system's sensitive instructions and sensitive resource accesses and report them to the VMM as exceptions, which solves the virtualization problem. Intel's VT-x technology is representative of this approach. VT-x introduces a new execution mode on the processor for running virtual machines. When a virtual machine executes in this mode, it still sees a complete set of processor registers and execution environment, but any privileged operation is intercepted by the processor and reported to the VMM. The VMM itself runs in normal mode; after receiving the processor's report, it decodes the target instruction, finds the corresponding virtualization module for simulation, and reflects the final effect in the special-mode environment. Hardware-assisted full virtualization is a complete virtualization method: because interception happens at the processor instruction level, and instructions themselves carry the accesses to memory and peripherals, the VMM can simulate a virtual host identical to a real host. In this environment, any operating system that can run on an equivalent real host can run seamlessly in the virtual machine.
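The intercept-and-report cycle can be sketched as a toy trap-and-emulate loop. All names here (`VirtualCPU`, `run_guest`, the operation strings) are illustrative, not a real VT-x interface: the point is only that privileged operations exit to the VMM, which emulates them against virtual state, while ordinary operations run directly.

```python
class VirtualCPU:
    """Virtual processor state the VMM maintains on the guest's behalf."""
    def __init__(self):
        self.cr3 = 0                   # virtual page-table base register
        self.interrupts_enabled = True

def run_guest(vcpu, program):
    """Toy dispatch loop: privileged ops 'exit' to the VMM and are emulated."""
    for op, arg in program:
        if op == "mov_cr3":            # privileged: intercepted by the processor
            vcpu.cr3 = arg             # VMM emulates it against the virtual register
        elif op == "cli":              # privileged: disable interrupts
            vcpu.interrupts_enabled = False
        # anything else is unprivileged and would run directly on the hardware
    return vcpu

vcpu = run_guest(VirtualCPU(), [("add", None), ("mov_cr3", 0x1000), ("cli", None)])
assert vcpu.cr3 == 0x1000 and vcpu.interrupts_enabled is False
```

The guest never touches the real control registers; every privileged effect lands in the `VirtualCPU` structure, which is how the VMM keeps many guests isolated on one processor.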
### 3.2.5 Paravirtualization

Paravirtualization is also called quasi-virtualization. It enables the VMM to virtualize physical resources by modifying instructions at the source-code level to avoid virtualization vulnerabilities. As mentioned above, x86 has some instructions that are difficult to virtualize. Full virtualization uses binary code translation to avoid these vulnerabilities at the binary-code level. Paravirtualization takes another approach: modify the operating system kernel's code (i.e., at the API level) so that the kernel completely avoids the hard-to-virtualize instructions. The operating system normally uses all of the processor's facilities, such as privilege levels, address spaces, and control registers. The first problem paravirtualization must solve is how to insert the VMM. The typical approach is to modify the processor-related code of the operating system so that the operating system voluntarily surrenders its privilege level and runs at the next-lower one. In this way, when the operating system tries to execute a privileged instruction, a protection exception is triggered, providing an interception point at which the VMM can simulate it. Since the kernel code must be modified anyway, paravirtualization can further be used to optimize I/O. In other words, paravirtualization does not simulate real-world devices, because simulating too many registers reduces performance; instead, it can define highly optimized, fully transaction-based I/O protocols that approach the speed of a physical machine.

### 3.2.6 Mainstream Server Virtualization Technology

Mainstream virtualization technologies are generally divided into two types: open source and closed source.
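The hypercall substitution described above can be sketched as follows. The names (`Hypervisor`, `hypercall_update_page_table`, `map_page`) are hypothetical, not Xen's real interface: the point is that the modified guest kernel calls the hypervisor explicitly where an unmodified kernel would execute a privileged instruction.

```python
class Hypervisor:
    """Exposes a hypercall interface instead of letting guests touch hardware."""
    def __init__(self):
        self.guest_page_tables = {}

    def hypercall_update_page_table(self, guest_id, vaddr, maddr):
        # The hypervisor validates and applies the update on the guest's behalf.
        self.guest_page_tables.setdefault(guest_id, {})[vaddr] = maddr
        return 0  # success, in the style of a syscall return code

class ParavirtKernel:
    """A guest kernel modified at the source level to use hypercalls."""
    def __init__(self, guest_id, hv):
        self.guest_id, self.hv = guest_id, hv

    def map_page(self, vaddr, maddr):
        # An unmodified kernel would write its page table directly (privileged);
        # the paravirtualized kernel issues a hypercall instead.
        return self.hv.hypercall_update_page_table(self.guest_id, vaddr, maddr)

hv = Hypervisor()
guest = ParavirtKernel(guest_id=1, hv=hv)
assert guest.map_page(0x4000, 0x9000) == 0
assert hv.guest_page_tables[1][0x4000] == 0x9000
```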
Open-source virtualization technologies include KVM and Xen; closed-source virtualization technologies include Microsoft's Hyper-V, VMware's vSphere, and Huawei's FusionSphere. Open-source virtualization technology is free and can be used at any time; its source code is public, and users can customize special functions according to their needs. However, open-source virtualization technology places high technical demands on users: once the system has problems, you must rely on your own skills and experience to repair it. With closed-source virtualization technology, users cannot see the source code or perform personalized customization. Closed-source virtualization products are generally paid for and provide users with "out-of-the-box" service; during use, if there is a problem with the system, the manufacturer provides full support. For users, neither open source nor closed source is inherently better; the question is only which is more suitable. Among open-source virtualization technologies, KVM and Xen share the field. KVM is full virtualization, while Xen supports both paravirtualization and full virtualization. KVM is a module in the Linux kernel that virtualizes the CPU and memory; each virtual machine runs as a Linux process, and the other I/O devices (network cards, disks, etc.) are provided by QEMU. Xen differs from KVM in that it runs directly on the hardware, with virtual machines running on top of it. Virtual machines in Xen fall into two categories: Domain0 and DomainU. Domain0 is a privileged virtual machine that can directly access hardware resources and manages the ordinary DomainU virtual machines; it must be started before any other virtual machine. DomainU is an ordinary virtual machine and cannot directly access hardware resources.
All of a DomainU's device operations are forwarded to Domain0 through front-end/back-end drivers; Domain0 performs the actual operations and returns the results to the DomainU.

## 3.3 Supporting Technology of Server Virtualization

### 3.3.1 CPU Virtualization

CPU virtualization technology abstracts the physical CPU into virtual CPUs, and a physical CPU thread can run the instructions of only one virtual CPU at any given time. Each guest operating system can use one or more virtual CPUs. Between guest operating systems, the virtual CPUs are isolated from each other and do not interfere. Operating systems based on the x86 architecture were designed to run directly on physical machines, and they assume complete ownership of the underlying hardware, especially the CPU. In the x86 architecture, the processor has four operating levels: Ring0, Ring1, Ring2, and Ring3. Ring0 has the highest authority and can execute any instruction without restriction; the privilege level decreases from Ring0 to Ring3. Applications generally run at Ring3. The operating system's kernel-mode code runs at Ring0, because it needs to directly control and modify the state of the CPU, and such operations require privileged instructions that run at Ring0. To realize virtualization on the x86 architecture, a virtualization layer must be added below the guest operating system layer to share the physical resources. This virtualization layer runs at Ring0, and the guest operating system can only run at a level above Ring0, as shown in the image.
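The ring mechanism above can be modeled in a few lines. This is a toy sketch with made-up instruction names: executing a privileged instruction outside Ring0 raises a protection fault, and that fault is precisely the interception point a VMM relies on.

```python
class ProtectionFault(Exception):
    """Models the general-protection exception the CPU raises."""

PRIVILEGED = {"lgdt", "hlt", "mov_to_cr0"}   # illustrative instruction names

def execute(insn, ring):
    """Run one instruction at a given privilege ring (0 = most privileged)."""
    if insn in PRIVILEGED and ring != 0:
        # A deprivileged guest kernel lands here; the VMM intercepts and emulates.
        raise ProtectionFault(insn)
    return "ok"

assert execute("add", ring=3) == "ok"     # applications at Ring3 run freely
assert execute("hlt", ring=0) == "ok"     # the VMM itself holds Ring0
trapped = False
try:
    execute("hlt", ring=1)                # guest kernel pushed up to Ring1
except ProtectionFault:
    trapped = True
assert trapped
```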
However, the privileged instructions in the guest operating system, such as interrupt-handling and memory-management instructions, will have different semantics and produce different effects if not run at Ring0, or may not work at all. Because of these instructions, the x86 architecture is not easy to virtualize. The crux of the problem is that sensitive instructions executed in the virtual machine must not act directly on the real hardware but must be taken over and simulated by the virtual machine monitor. Full virtualization uses dynamic binary translation to solve the guest operating system's privileged-instruction problem. Dynamic binary translation means that, at run time, trapping instructions are inserted before sensitive instructions, so that execution traps into the virtual machine monitor; the monitor dynamically converts these instructions into a sequence of instructions that performs the same function before executing them. In this way, full virtualization converts sensitive instructions executed in the guest kernel into equivalent instruction sequences executed through the virtual machine monitor, while non-sensitive instructions run directly on the physical processor. Unlike full virtualization, paravirtualization solves the problem of virtual machines executing privileged instructions by modifying the guest operating system. In paravirtualization, a guest operating system hosted on the virtualization platform must be modified to replace all sensitive instructions with hypercalls to the underlying virtualization platform, and the platform provides the calling interface for these sensitive privileged operations.
Both full virtualization and paravirtualization, as described so far, are pure software CPU virtualization and require no changes to the x86 processor itself. However, pure software solutions have many limitations. Whether it is full virtualization's dynamic binary translation or paravirtualization's hypercalls, these intermediate steps inevitably increase the system's complexity and performance overhead. In addition, in paravirtualization, support for guest operating systems is limited by the capabilities of the virtualization platform. As a result, hardware-assisted virtualization came into being. This technology is a hardware solution: a CPU that supports virtualization adds a new instruction set and new processor operating modes to implement CPU virtualization. Intel and AMD have introduced the hardware-assisted virtualization technologies Intel VT and AMD-V, respectively, and have gradually integrated them into their newer microprocessor products. Taking Intel VT as an example, processors that support hardware-assisted virtualization add a set of virtual machine extensions (VMX), about ten instructions that support virtualization-related operations. Intel VT also defines two operating modes for the processor: root mode and non-root mode. The virtualization platform runs in root mode, and guest operating systems run in non-root mode. Since hardware-assisted virtualization lets the guest operating system run directly, there is no need for dynamic binary translation or hypercalls, which reduces the related performance overhead and simplifies the design of the virtualization platform.
### 3.3.2 Memory Virtualization

Memory virtualization technology manages the real physical memory of a physical machine in a unified manner and packages it into multiple virtual physical memories for use by several virtual machines, so that each virtual machine has its own independent memory space. Since memory is the device most frequently accessed by virtual machines in server virtualization, memory virtualization and CPU virtualization are equally important. In memory virtualization, the virtual machine monitor must manage the memory on the physical machine, divide the machine memory according to the memory requirements of each virtual machine, and keep the memory accesses of the virtual machines isolated from each other.

Essentially, a physical machine's memory is a contiguous address space, and access to memory by upper-level applications is mostly random. The virtual machine monitor therefore needs to maintain the mapping between memory address blocks on the physical machine and the continuous memory blocks seen inside each virtual machine, so that the virtual machine's view of memory remains continuous. Modern operating systems use segmentation, paging, segmented paging, multi-level page tables, caches, virtual memory, and other complex techniques for memory management. The virtual machine monitor must support these techniques so that they remain valid in a virtual machine environment while maintaining a high level of performance.

Before discussing memory virtualization, let us review classic memory management techniques. Memory as a storage device is indispensable to the operation of applications, because all applications must submit code and data to the CPU through memory for processing and execution. If too many applications run on a computer, they will exhaust the memory in the system, and memory becomes a bottleneck in improving computer performance.
People usually solve this problem by adding memory or optimizing programs, but these methods are costly. Therefore, virtual memory technology was born. To support virtual memory, all CPUs based on the x86 architecture are now equipped with a Memory Management Unit (MMU) and a Translation Lookaside Buffer (TLB) to optimize virtual memory performance. In short, classic memory management maintains the mapping between the virtual memory seen by applications and physical memory.

To run multiple virtual machines on a physical server, the virtual machine monitor must have a mechanism for managing virtual machine memory, that is, a virtual machine memory management unit. Because a new memory management layer is added, virtual machine memory management differs from classic memory management. The "physical" memory seen by the guest operating system is no longer the real physical memory, but "pseudo" physical memory managed by the virtual machine monitor. Corresponding to this "physical" memory is a newly introduced concept: machine memory. Machine memory refers to the real memory on the physical server hardware.

In memory virtualization there are thus three types of memory: process logical memory, virtual machine "physical" memory, and server machine memory, as shown in the image. The address spaces of these three types of memory are called logical addresses, "physical" addresses, and machine addresses, respectively. The mapping between process logical memory and server machine memory is taken care of by the virtual machine memory management unit, which can be realized in two main ways.

- The first is the shadow page table method, as shown in the image. The guest operating system maintains its own page table, and the memory addresses in this page table are the "physical" addresses seen by the guest operating system.
At the same time, the virtual machine monitor maintains a corresponding page table for each virtual machine, but this page table records the real machine addresses. The page table in the virtual machine monitor is established based on the page table maintained by the guest operating system and is updated along with it, like its "shadow," so it is called a "shadow page table." VMware Workstation, VMware ESX Server, and KVM all use the shadow page table method.
- The second is the page table writing method, as shown in the image. When the guest operating system creates a new page table, it must register the page table with the virtual machine monitor. The virtual machine monitor then deprives the guest operating system of write permission to the page table and writes the machine addresses it maintains into the page table. When the guest operating system accesses memory, it obtains the real machine addresses from its page table. Every modification of the page table by the guest operating system traps into the virtual machine monitor, which updates the page table to ensure that its entries always record real machine addresses. The page table writing method requires modifying the guest operating system; Xen is a typical representative of this method.

### 3.3.3 Device and I/O Virtualization

In addition to the CPU and memory, other vital components of the server that need to be virtualized include devices and I/O. Device and I/O virtualization technology manages the real devices of a physical machine in a unified manner, packages them into multiple virtual devices for use by several virtual machines, and responds to the device access requests and I/O requests of each virtual machine. At present, mainstream device and I/O virtualization is realized through software.
As the layer between shared hardware and virtual machines, the virtualization platform makes device and I/O management convenient and provides rich virtual device functions for the virtual machines. Take VMware's virtualization platform as an example. The platform virtualizes the devices of the physical machine, standardizes them into a series of virtual devices, and provides this set of virtual devices to the virtual machines, as shown in the image. It is worth noting that a virtualized device may not completely match the model, configuration, and parameters of the physical device. However, these virtual devices can effectively simulate the actions of the physical device, translate the virtual machine's device operations to the physical device, and return the physical device's results to the virtual machine.

Another benefit of this unified and standardized approach to virtual devices is that virtual machines do not depend on the implementation of the underlying physical devices, because a virtual machine always sees only the standard devices provided by the virtualization platform. In this way, as long as the virtualization platform remains consistent, virtual machines can be migrated across different physical platforms.

In server virtualization, the network interface is a special device that plays an important role. Virtual servers provide services to the outside world through the network. In server virtualization, each virtual machine becomes an independent logical server, and communication between them is carried out through network interfaces. Each virtual machine is assigned a virtual network interface, which appears as a virtual network card from inside the virtual machine. Server virtualization requires modification of the network interface driver of the host operating system.
After modification, the network interface of the physical machine is virtualized into a switch by software, as shown in the image. The virtual switch works at the data link layer; it forwards data packets arriving from the physical machine's external network to the virtual machine network interfaces and maintains the connections between the network interfaces of multiple virtual machines. When a virtual machine communicates with another virtual machine on the same physical machine, its data packets are sent out through its virtual network interface, and the virtual switch forwards them to the virtual network interface of the target virtual machine upon receipt. This forwarding does not occupy physical bandwidth, because the virtualization platform manages the network in software.

### 3.3.4 Storage Virtualization

With the continuous development of information services, network storage systems have become the core platform of enterprises. Much high-value data has accumulated, and the applications surrounding these data place ever higher requirements on the platform, not only in storage capacity but also in data access performance, data transmission performance, data management capabilities, storage expansion capabilities, and many other aspects.

RAID technology is the embryonic form of storage virtualization. It provides a unified storage space for the upper layer by combining multiple physical disks into an array. The operating system and upper-level users do not know how many disks the server contains; they see only one large "virtual" disk, that is, a logical storage unit.

NAS and SAN appeared after RAID. NAS decouples file storage from the local computer system and centralizes it in NAS storage units connected to the network, such as NAS file servers.
Heterogeneous devices elsewhere on the network can then use standard network file access protocols, such as NFS under the UNIX operating system and the Server Message Block (SMB) protocol under the Windows operating system, to access and update the files, subject to the permission restrictions set on them. SAN, although it also separates storage from the local system and concentrates it on the network for shared use, is generally composed of disk arrays connected by Fibre Channel, with servers and clients using the SCSI protocol for high-speed data communication. To SAN users, these storage resources appear the same as devices directly attached to the local system. Storage shared in a SAN is at the disk block level, while storage shared in NAS is at the file level.

At present, storage virtualization is no longer limited to RAID, NAS, and SAN, and has been given more meaning. Storage virtualization allows logical storage units to be integrated across a wide area network and moved from one disk array to another without downtime. In addition, storage virtualization can allocate storage resources according to users' actual usage. For example, if the operating system disk manager allocates 300 GB of space to a user, but the user's current usage is only 2 GB and remains stable for a while, the space actually allocated may be only 10 GB, far less than the nominal capacity provided to the user. When the user's actual usage grows, new storage space is allocated as appropriate, which improves resource utilization.

### 3.3.5 Network Virtualization

Network virtualization usually includes virtual local area networks and virtual private networks. A virtual local area network can divide a physical local area network into multiple virtual local area networks, or even combine nodes from multiple physical local area networks into one virtual local area network.
Communication within a virtual local area network is therefore similar to communication within a physical local area network and is transparent to users. The virtual private network abstracts network connections, allowing remote users to access the internal network of an organization as if they were physically connected to it. Virtual private networks help administrators protect the network environment, prevent threats from unrelated network segments on the Internet or intranet, and enable users to quickly and securely access applications and data. Virtual private networks are now used in a large number of office environments and have become an important supporting technology for mobile office.

Recently, various vendors have added new content to network virtualization technology. For network equipment providers, network virtualization means the virtualization of network equipment: traditional routers, switches, and other equipment are enhanced to support a large number of scalable applications, and the same network device can run multiple virtual network devices, such as firewalls, VoIP, and mobile services. The specific content of network virtualization will be introduced in detail in Chap. 4.

### 3.3.6 Desktop Virtualization

Before introducing desktop virtualization in detail, we must first clarify the difference between server virtualization and desktop virtualization. Server virtualization divides one physical server into multiple small virtual servers; with server virtualization, numerous virtual servers run on a single physical machine. The most common server virtualization method is to use a virtual machine, which makes a virtual server look like an independent computer. IT departments usually use server virtualization to support various tasks, such as databases, file sharing, graphics virtualization, and media delivery.
By consolidating servers onto less hardware, server virtualization reduces business costs and increases efficiency. Desktop virtualization, however, seldom involves this kind of consolidation, and its scope is wider. Desktop virtualization replaces the physical computer with a virtual computer environment and delivers it to the client. The virtual computer is stored on a remote server and can be delivered to the user's device, where it operates in the same way as a physical machine. One server can deliver multiple personalized virtual desktop images. There are many ways to achieve desktop virtualization, including terminal server virtualization, operating system streaming, virtual desktop infrastructure (VDI), and desktop as a service (DaaS). Servers are easier to know what to
