Virtualization Unit 2 PDF

Summary

This document is a unit on the topic of virtualization, including definitions, benefits, and various approaches to virtualization. It also discusses the use cases of virtualization in data centers and the different components of a virtualization system.

Full Transcript

UNIT - II Virtualization: Definition, Understanding and Benefits of Virtualization. Implementation Level of Virtualization, Virtualization Structure / Tools and Mechanisms, Issues with virtualization, Virtualization technologies and architectures, Internals of virtual machine monitors / hypervisors, Introduction to Various Hypervisors, virtualization of data centers, and Issues with Multi-tenancy.

Virtualization Virtualization enables users to decouple operating systems from the underlying hardware. Virtualization can be defined as the abstraction of the four computing resources: storage, processing power, memory, and network or I/O. It is a great opportunity for parallel, cluster, grid, cloud, and distributed computing. Virtualization is globally adopted in enterprise IT architecture and drives cloud computing economics, since it enables cloud users to purchase only the computing resources they need.

Virtualization "Virtualization in computing often refers to the abstraction of some physical component into a logical object." Virtual machines, virtual disks, virtual networks, etc. are some examples of what can be virtualized. Virtual versions of physical entities are generally expressed as one or more files. For example, a virtual machine can be described using a file that describes its configuration (CPU, memory, storage, network, etc.) and some other files that represent the virtual disks attached to the machine. Modifying a virtual resource often implies modifying one or more corresponding files.

Who does virtualization commercially? The idea and execution of virtualization is not new: IBM mainframes had implementations of the idea in the early 1970s. The first commercially available solution to provide virtualization for x86 computers came from VMware in 2001 (namely, ESX 1.0 and GSX 1.0), and a parallel open-source offering called Xen arrived two years later. Currently, there are a number of virtualization offerings available from different vendors, including VMware, IBM, Microsoft, Oracle, etc. Some examples of platform-specific solutions are Solaris Zones, BSD jails, and PowerVM on IBM machines. VMware Workstation and Oracle VirtualBox are examples of virtualization offerings that can run on a wider range of underlying platforms.

Why should we care for virtualization? Moore's law is an observation on the expansion of processing power over a period of time; simply stated, it says that processing power roughly doubles every 18 months. Moore's law applies not just to processing power but to many other related technologies, including memory capacities. Organizations generally replace their servers in three to five years' time because they may no longer be enough to fit their IT requirements. This happens because of rapidly growing databases and the applications built to process the data in them efficiently within acceptable timing constraints. Some organizations prefer leasing over purchasing, yet the overall constraints still apply to them as well.
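To make the "set of files" idea concrete, here is a minimal sketch; the directory layout, file names, and configuration fields are purely illustrative and do not correspond to any particular hypervisor's format.

```python
# Minimal sketch: a VM described as "a set of files" (all names are illustrative).
# A config file captures CPU/memory/network settings; separate files back the virtual disks.
import json
from pathlib import Path

vm_dir = Path("my-vm")
vm_dir.mkdir(exist_ok=True)

config = {
    "name": "my-vm",
    "vcpus": 2,
    "memory_mb": 4096,
    "nics": [{"network": "default"}],
    "disks": ["disk0.img"],
}
(vm_dir / "vm.json").write_text(json.dumps(config, indent=2))

# A sparse file standing in for the virtual disk.
with open(vm_dir / "disk0.img", "wb") as f:
    f.truncate(10 * 1024 * 1024 * 1024)   # 10 GiB of virtual capacity

# "Upgrading" the VM's memory is just editing a field in the config file.
config = json.loads((vm_dir / "vm.json").read_text())
config["memory_mb"] = 8192
(vm_dir / "vm.json").write_text(json.dumps(config, indent=2))
```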
This is where virtualization can be a saviour. Recall that, unlike their physical counterparts, virtual resources are generally "a set of files". Modifying or upgrading a virtual resource is almost always much easier and quicker than doing so with a physical resource; for example, updating the memory in a virtual machine simply means changing the value of a field in a configuration file. Organizations are thus increasingly preferring to deploy their applications on virtual servers rather than buying physical servers to host them.

When should we go for virtualization? Virtualization has its own drawbacks too. Since virtualization often involves a layer of abstraction between the application and the hardware, it can lead to performance degradation. However, if the applications running on a physical server do not make full use of the hardware resources, consolidation can lead to better hardware utilisation. Condensing multiple servers onto one (multiple VMs on a single physical host) is called consolidation, and the number of servers condensed is called the consolidation ratio (e.g. for 8 VMs running on a physical host, the consolidation ratio is 8:1). The consolidation ratios of the first generation of x86 hypervisors were in the range of 5:1. Even a modest consolidation ratio of 4:1 could remove three-quarters of the servers in a datacentre!

Where can we do virtualization? Virtualization is not just an option for datacentres; there are virtualization solutions which you can run on your desktops or laptops. Virtualization provides the underlying foundation to build cloud environments. Virtualization can also be limited to specific aspects such as "virtualizing only the desktop" and not the whole system: Citrix's XenDesktop and VMware's View are two popular solutions for desktop virtualization. There are also solutions available to support "application virtualization", such as Microsoft's App-V and VMware's ThinApp.

Virtualization Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are multiplexed on the same hardware machine. The purpose of a VM is to enhance resource sharing by many users and improve computer performance. In the typical architecture diagram, the VMs are shown in the upper boxes, where applications run with their own guest OS over the virtualized CPU, memory, and I/O resources. The main function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used exclusively by the VMs.

The Five Levels of Implementing Virtualization A computer runs an OS that is configured to that particular hardware, and running a different OS on the same hardware is not exactly feasible. To tackle this, a virtualization layer acts as a bridge between the guest OS and the hardware to enable the smooth functioning of the instance. There are five levels of virtualization that are most commonly used in the industry. These are as follows.

Instruction Set Architecture Level Virtualization is performed by emulating a given ISA by the ISA of the host machine. It enables applications and virtual machines designed for one processor to run on other processors with different instruction set architectures. Instruction set virtualization is delivered through a software framework that has the essential compiler, assembler, and other software libraries needed for emulating different instruction set architectures.
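As a toy illustration of ISA-level emulation, the following sketch interprets a made-up guest instruction set by mapping each instruction onto host operations; the instruction names and register model are invented for illustration.

```python
# Toy sketch of ISA-level virtualization by interpretation (instruction names are invented).
# A "guest" program in a made-up source ISA is executed one instruction at a time by
# mapping each source instruction onto operations of the host (here, plain Python).

guest_program = [
    ("LOAD", "r0", 5),           # r0 <- 5
    ("LOAD", "r1", 7),           # r1 <- 7
    ("ADD",  "r2", "r0", "r1"),  # r2 <- r0 + r1
    ("HALT",),
]

def interpret(program):
    regs = {}
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "LOAD":
            regs[args[0]] = args[1]
        elif op == "ADD":
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "HALT":
            return regs
        pc += 1   # each source instruction is decoded and emulated individually

# Dynamic binary translation would instead translate whole blocks once and cache the
# result, avoiding the per-instruction decode cost of this loop.
print(interpret(guest_program))   # {'r0': 5, 'r1': 7, 'r2': 12}
```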
The basic emulation method is code interpretation: an interpreter program interprets the source instructions into target instructions one by one, and this process is relatively slow. For better performance, dynamic binary translation is desired. A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific software translation layer to the compiler.

Hardware Abstraction Level Hardware-level virtualization is performed right on top of the bare hardware. The idea is to virtualize a computer's resources, such as its processors, memory, and I/O devices, with the intention of raising the hardware utilization rate by letting multiple users share the hardware concurrently. This level makes use of a hypervisor: the virtual machine is formed at this level, and the hypervisor manages the hardware through virtualization. The Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest OS applications. Virtualization of the CPU, memory, and I/O devices is handled at the hardware abstraction level.

Hardware Abstraction Level To support virtualization, processors such as the x86 employ a special running mode and instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run in different modes, and all sensitive instructions of the guest OS and its applications are trapped in the VMM. Modern operating systems and processors permit multiple processes to run simultaneously, and all processors have at least two modes, user mode and supervisor mode, to ensure controlled access to critical hardware. Instructions running in supervisor mode are called privileged instructions; the others are unprivileged instructions.

Hardware Abstraction Level A virtualized computing system holds thousands of resources, all of which need direction for processing, which is not an easy task. For this reason, instructions are classified into two primary forms to keep processing smooth. Non-privileged instructions execute directly without interfering with other tasks; privileged instructions require intervention by the virtualization layer before they execute.

Hardware Abstraction Level For processor virtualization, Intel offers the VT-x or VT-i technique. VT-x adds a privileged mode (VMX root mode) and some instructions to processors; this enhancement traps all sensitive instructions into the VMM automatically. For memory virtualization, Intel offers EPT, which translates guest addresses to the machine's physical addresses to improve performance. For I/O virtualization, Intel implements VT-d and VT-c.

Why OS-Level Virtualization? It is slow to initialize a hardware-level VM because each VM creates its own image from scratch, and in a cloud computing environment perhaps thousands of VMs need to be initialized simultaneously. Storing the VM images, repeated content among VM images, slow performance and low density, the need for para-virtualization, and the performance overhead of hardware-level virtualization are common problems with hardware-level virtualization. OS-level virtualization provides a feasible solution for these issues: it inserts a virtualization layer inside an operating system to partition a machine's physical resources.

Operating System Level This is an abstraction layer between the traditional OS and user applications. OS-level virtualization creates isolated containers on a single physical server, and the OS instances utilize the hardware and software in data centers.
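The partitioning idea behind OS-level virtualization can be sketched roughly as follows; this is a toy model only (real systems rely on kernel facilities such as namespaces and resource controllers), and all names and numbers are illustrative.

```python
# Rough sketch: one shared "kernel" object hands out isolated resource partitions
# (containers) instead of booting a full VM per user. All names are illustrative.

class SharedKernel:
    def __init__(self, total_cpus, total_mem_mb):
        self.free_cpus = total_cpus
        self.free_mem_mb = total_mem_mb
        self.containers = {}

    def create_container(self, name, cpus, mem_mb):
        # No guest OS image to boot: creating a container is just carving out
        # a slice of the host's resources under the existing kernel.
        if cpus > self.free_cpus or mem_mb > self.free_mem_mb:
            raise RuntimeError("insufficient host resources")
        self.free_cpus -= cpus
        self.free_mem_mb -= mem_mb
        self.containers[name] = {
            "cpus": cpus,
            "mem_mb": mem_mb,
            "fs_root": f"/containers/{name}",
        }
        return self.containers[name]

kernel = SharedKernel(total_cpus=16, total_mem_mb=65536)
web = kernel.create_container("web", cpus=2, mem_mb=4096)
db = kernel.create_container("db", cpus=4, mem_mb=16384)
# Each container sees only its own partition; all of them share the one kernel.
```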
The containers behave like real servers. OS-level virtualization is commonly used in creating virtual hosting environments to allocate hardware resources among a large number of mutually distrusting users. When the number of users is high and no one is willing to share hardware, this level of virtualization comes in handy: every user gets their own virtual environment with dedicated virtual hardware resources.

Why OS-Level Virtualization? It enables multiple isolated VMs within a single operating system kernel. This kind of VM is often called a virtual execution environment (VE), Virtual Private System (VPS), or simply a container. From the user's point of view, VEs look like real servers.

Advantages of OS Extensions The benefits of OS extensions are twofold: (1) VMs at the operating system level have minimal startup/shutdown costs, low resource requirements, and high scalability; and (2) for an OS-level VM, it is possible for a VM and its host environment to synchronize state changes when necessary. These benefits can be achieved via two mechanisms of OS-level virtualization: (1) all OS-level VMs on the same physical machine share a single operating system kernel; and (2) the virtualization layer can be designed in a way that allows processes in VMs to access as many resources of the host machine as possible, but never to modify them. These benefits can be used to overcome, respectively, the slow initialization of VMs at the hardware level and their unawareness of the current application state.

Disadvantages of OS Extensions All the VMs at the operating system level on a single container host must have the same kind of guest operating system. To implement OS-level virtualization, isolated execution environments (VMs) are created on top of a single OS kernel, and access requests from a VM are redirected to the VM's local resource partition on the physical machine. Operating system virtualization supports only a single operating system as the base on a single server, and the guest must match it: the client can use a single OS, either Linux or Windows, and the container should have the same OS version and the same patch level as the base OS. If the base OS crashes, all virtual containers become inaccessible.

Library Support Level Most applications use APIs exported by user-level libraries rather than lengthy system calls to the OS. Since most systems provide well-documented APIs, such an interface becomes another candidate for virtualization. Virtualization with library interfaces is possible by controlling the communication link between applications and the rest of the system through API hooks. These API hooks control the communication link from the system to the applications. The software tool WINE has implemented this approach to support Windows applications on top of UNIX hosts.

Library Support Level Library-level virtualization is also known as user-level Application Binary Interface (ABI) or API emulation. This type of virtualization can create execution environments for running alien programs on a platform rather than creating a VM to run an entire operating system. API call interception and remapping are the key functions performed.

Library Support Level Example: CUDA CUDA is a programming model and library for general-purpose GPUs. vCUDA employs a client-server model to implement CUDA virtualization.
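Before looking at vCUDA's structure, here is a purely illustrative sketch of the API interception and remapping mechanism that library-level virtualization relies on; the "alien" API and its method names are invented and merely stand in for a foreign library such as the Windows API or CUDA.

```python
# Purely illustrative sketch of API interception and remapping: an "alien" library call
# is hooked and forwarded to an equivalent native call. No real Windows/CUDA API is used.

def native_write_file(path, data):
    # The host's own facility that actually does the work.
    with open(path, "w") as f:
        f.write(data)

class AlienAPI:
    """Stand-in for a guest-side library (e.g. a foreign platform's file API)."""
    def WriteFile(self, path, data):
        raise NotImplementedError("no native implementation on this host")

class HookedAPI(AlienAPI):
    """The virtualization layer: intercepts the alien call and remaps it."""
    def WriteFile(self, path, data):
        # API hook: translate the foreign call into the host's equivalent.
        native_write_file(path, data)

api = HookedAPI()
api.WriteFile("hello.txt", "written through an intercepted API call\n")
```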
vCUDA consists of three user space components: the vCUDA library, a virtual GPU in the guest OS (which acts as a client), and the vCUDA stub in the host OS (which acts as a server). The vCUDA library resides in the guest OS as a substitute for the standard CUDA library.

User-Application Level Virtualization at the application level virtualizes an application as a VM. The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system, and the layer exports an abstraction of a VM that can run programs written and compiled to a particular abstract machine definition. Other forms of application-level virtualization are known as application isolation, application sandboxing, or application streaming; the process involves wrapping the application in a layer that is isolated from the host OS and other applications. Any program written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.

Relative Merits of Different Approaches The number of X's in the table cells reflects the advantage points of each implementation level: five X's implies the best case and one X implies the worst case.

Virtualization Structures/Tools and Mechanisms In general, the typical classes of VM architecture are the hypervisor architecture, full virtualization, para-virtualization, and host-based virtualization. Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating system; in such a case, the virtualization layer is responsible for converting portions of the real hardware into virtual hardware. The hypervisor is also known as the VMM (Virtual Machine Monitor); both terms refer to the same layer performing the same virtualization operations.

Hypervisor The hypervisor supports hardware-level virtualization. The hypervisor software sits directly between the physical hardware and its OS; this virtualization layer is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a micro-kernel architecture like Microsoft Hyper-V. A micro-kernel hypervisor includes only the basic and unchanging functions, whereas a monolithic hypervisor implements all the aforementioned functions, including those of the device drivers; therefore, the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor. Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated for the deployed VM to use.

Type 1 Hypervisors vs. Type 2 Hypervisors
 A Type 1 hypervisor runs over the host hardware directly, while a Type 2 hypervisor runs over the host operating system.
 Type 1 is efficient since there is no abstraction layer; Type 2 is slower because of translations from guest to host.
 With Type 1 the guests must have the same OS base, whereas with Type 2 the guests can run any OS base.
 With Type 1, problems in one guest generally remain isolated; with Type 2, problems in one guest can affect the hypervisor process, hence affecting other guests too.
 Examples of Type 1: VMware ESX, Microsoft Hyper-V, and many Xen variants. Examples of Type 2: VMware Workstation, Microsoft Virtual Server, and Oracle VirtualBox.

Xen Architecture Xen is an open source hypervisor program developed at Cambridge University. Xen is a microkernel hypervisor, which separates the policy from the mechanism.
The core components of a Xen system are the hypervisor, kernel, and applications, and the organization of the three components is important. Like other virtualization systems, many guest OSes can run on top of the hypervisor. The guest OS which has control ability is called Domain 0, and the others are called Domain U. Domain 0 has special privileges, such as being able to start new domains and to access the hardware directly, and it is responsible for running all of the device drivers for the hardware.

Xen Architecture Although any operating system can be ported to run on Xen as a DomU, only Linux has been given the tools and kernel patches necessary to run in Dom0. A DomU is the counterpart to Dom0: it is an unprivileged domain with (by default) no access to the hardware, and it must run a frontend driver for any multiplexed hardware it wishes to share with other domains. The hypervisor is Xen itself; it sits between the hardware and the operating systems of the various domains. The hypervisor is responsible for checking page tables, allocating resources for new domains, and scheduling domains. It presents the domains with a virtual machine that looks similar, but not identical, to the native architecture. It is also responsible for booting the machine far enough to start Dom0. The Dom0, the DomU, and the hypervisor make up the virtualization environment, and the whole system is able to run multiple operating systems simultaneously.

Binary Translation with Full Virtualization Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization. Full virtualization does not need to modify the host OS; it relies on binary translation to trap and to virtualize the execution of certain sensitive, non-virtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used, and a virtualization software layer is built between the host OS and guest OS. Noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated by software. Both the hypervisor and VMM approaches are considered full virtualization.

Ring Concepts in OS Protection rings are one of the key solutions for sharing resources and hardware. Processes are executed within these protection rings, where each ring has its own access rights to resources. The central ring has the highest privilege; the outer rings have fewer privileges than the inner ones. Ring 0: the kernel, which is at the heart of the operating system and has access to everything, runs in Ring 0. Code that runs here is said to be in kernel mode, and kernel-mode processes have the potential to affect the entire system. Ring 3: user processes running in user mode execute in Ring 3, making it the least privileged ring; this is where we'll find the majority of our computer applications. The OS uses Ring 1 to interact with the computer's hardware; this ring would run commands such as streaming a video through a camera to our monitor. Instructions that must interact with system storage, loading or saving files, run in Ring 2.

Ring Concepts in OS There are benefits to implementing protection rings in the OS. First and foremost, they protect the system against crashes. An application that we use on our computers can freeze or crash; however, we can recover it by restarting the application.
Errors like these in higher rings are recoverable. Protection rings also offer increased security: a process may require instructions that need more CPU resources, and in such a case the process must request permission from the OS. The OS can decide whether to grant or deny the request, which protects the system from malicious behavior. While Linux and Windows use only Ring 0 and Ring 3, some other operating systems can utilize three different protection levels.

Binary Translation of Guest OS Requests Using a VMM VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and identifies the privileged, control- and behavior-sensitive instructions. When these instructions are identified, they are trapped into the VMM, which emulates their behavior. The method used in this emulation is called binary translation. Full virtualization combines binary translation and direct execution. The guest OS is completely decoupled from the underlying hardware. The performance of full virtualization may not be ideal, because binary translation is rather time-consuming. Binary translation employs a code cache to store translated hot instructions to improve performance, but this increases the cost of memory usage.

Host-Based Virtualization

Para-Virtualization Architecture Para-virtualization needs to modify the guest operating systems. A para-virtualized VM provides special APIs requiring substantial OS modifications in user applications. Performance degradation is a critical issue of a virtualized system; para-virtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel. The guest operating systems are para-virtualized; they are assisted by an intelligent compiler to replace the non-virtualizable OS instructions with hypercalls.

Para-Virtualization Architecture Para-virtualization replaces non-virtualizable instructions with hypercalls that communicate directly with the hypervisor or VMM. Unlike the full virtualization architecture, which intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization handles these instructions at compile time. The performance advantage of para-virtualization varies greatly due to workload variations. Compared with full virtualization, para-virtualization is relatively easy and more practical. The main problem with full virtualization is its low performance due to binary translation, and speeding up binary translation is difficult; therefore, many virtualization products employ the para-virtualization architecture.

Para-Virtualization Architecture KVM (Kernel-Based VM) is a Linux para-virtualization system and has been part of the Linux kernel since version 2.6.20. Memory management and scheduling activities are carried out by the existing Linux kernel; KVM does the rest, which makes it simpler than a hypervisor that controls the entire machine. KVM is a hardware-assisted para-virtualization tool.

VMware ESX Server for Para-Virtualization The ESX server employs a para-virtualization architecture in which the VM kernel interacts directly with the hardware without involving the host OS. The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and disk controllers, and human interface devices. Every VM has its own set of virtual hardware resources.
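To tie these mechanisms together, the toy sketch below contrasts full virtualization, where sensitive instructions are trapped into the VMM and their translations cached, with para-virtualization, where the modified guest issues hypercalls directly; the instruction names and the VMM interface are invented for illustration.

```python
# Toy contrast of full virtualization vs. para-virtualization (all names invented).

SENSITIVE = {"WRITE_CR3", "OUT_PORT"}   # pretend these are sensitive/privileged ops

translation_cache = {}                  # "code cache" of already-translated instructions

def vmm_emulate(instr):
    return f"emulated<{instr}> in the VMM"

def full_virtualization(instr_stream):
    # Binary translation: sensitive instructions are trapped and emulated;
    # translations are cached so hot instructions are not re-translated.
    out = []
    for instr in instr_stream:
        if instr in SENSITIVE:
            if instr not in translation_cache:
                translation_cache[instr] = vmm_emulate(instr)
            out.append(translation_cache[instr])
        else:
            out.append(f"ran {instr} directly on hardware")
    return out

def hypercall(name):
    # Para-virtualization: the (modified) guest calls the hypervisor explicitly,
    # so no runtime scanning or translation is needed.
    return f"hypervisor handled hypercall {name}"

print(full_virtualization(["ADD", "WRITE_CR3", "OUT_PORT", "WRITE_CR3"]))
print(hypercall("update_page_table"))
```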
The ESX resource manager allocates CPU, memory, disk, and network bandwidth and maps them to the virtual hardware resource set of each VM created. The service console is responsible for booting the system, initiating the execution of the VMM and resource manager, and relinquishing control to those layers. It also facilitates processes for system administrators.

Full Virtualization vs. Paravirtualization
 In full virtualization, virtual machines permit the execution of instructions while running an unmodified OS in an entirely isolated way; in paravirtualization, a virtual machine does not implement full isolation of the OS but rather provides a different API, which is utilized when the OS is subjected to alteration.
 Full virtualization is less secure, while paravirtualization is more secure than full virtualization.
 Full virtualization uses binary translation and a direct approach as its technique for operations, while paravirtualization uses hypercalls at compile time.
 Full virtualization is slower in operation than paravirtualization; paravirtualization is faster.
 Full virtualization is more portable and compatible; paravirtualization is less portable and compatible.
 Examples of full virtualization are Microsoft and Parallels systems; examples of paravirtualization are Microsoft Hyper-V, Citrix Xen, etc.
 Full virtualization supports all guest operating systems without modification; in paravirtualization, the guest operating system has to be modified and only a few operating systems support it.
 In full virtualization, the guest operating system issues hardware calls; in paravirtualization, the guest operating system communicates directly with the hypervisor using drivers.
 Full virtualization is less streamlined compared to paravirtualization, which is more streamlined.
 Full virtualization provides the best isolation; paravirtualization provides less isolation compared to full virtualization.

Types of Virtualization in Cloud Computing: Operating System Virtualization, Hardware Virtualization, Server Virtualization, and Storage Virtualization.

Operating System Virtualization Operating system virtualization uses a modified form of a normal operating system so that different end users can run different applications, with the whole process performed on a single computer at a time. In OS virtualization, the virtualized environment accepts commands from any of the users operating it and performs different tasks on the same machine by running different applications. The kernel of an operating system allows more than one isolated user-space instance to exist. It is used to consolidate server hardware by moving services hosted on separate servers onto one, and it protects hardware resources from being harmed by distrusting users.

Operating System Virtualization Operating system virtualization provides security and allocates IT hardware resources among a large number of mutually distrusting users. Operating system virtualization uses software that allows the system hardware to run multiple operating systems concurrently. In operating system virtualization, the OS kernel runs a single operating system and provides the ability to replicate that operating system on each of the isolated platforms. Types of OS virtualization: Linux OS virtualization and Windows OS virtualization.

Advantages of OS Virtualization Operating system virtualization reduces the physical space used by the IT system. As everything is virtual, it requires less space and hence saves money.
As less hardware is required, maintenance is reduced, which saves both time and money. The number of machines will be lower, so there will be lower power consumption, lower cooling requirements, lower maintenance, and more electricity savings. It also allows companies to use their server hardware more efficiently, giving a greater return on investment (ROI) on the purchase and on operational work. Operating system virtualization also has quick deployment capability: in a traditional deployment, every machine needs to be loaded individually, which is not a problem in operating system virtualization.

Hardware Virtualization in Cloud Computing It refers to consolidating multiple physical servers into virtual servers that run on a single key physical server. Every small server has the capability to host a virtual machine, and if a task requires it, the entire server cluster can be treated as one device. It means creating a virtual platform of something, including virtual computer hardware, virtual storage devices, and a virtual computer network. The job of the hypervisor is to manage the physical hardware resources, which are shared between the customer and the provider. Hardware virtualization can be done by abstracting the physical hardware with the help of a virtual machine monitor (VMM). The hypervisor creates an abstraction layer between the software and the hardware in use. Types of hardware virtualization: full virtualization, emulation virtualization, and para-virtualization.

Server Virtualization in Cloud Computing Server virtualization is the partitioning of physical servers into multiple virtual servers. Here, each virtual server runs its own operating system and applications. It can be said that server virtualization in cloud computing is the masking of server resources, including the identity of individual physical servers: the single physical server is divided into multiple isolated virtual servers with the help of software. Today, companies own a large number of servers but do not fully use them, which results in a waste of expensive servers. Using server virtualization in the IT infrastructure can reduce cost by increasing the utilization of existing servers. Server virtualization generally benefits small to medium scale applications.

Server Virtualization in Cloud Computing Three types of server virtualization: Full Virtualization: full virtualization uses a hypervisor, a type of software that directly communicates with a physical server's disk space and CPU. The hypervisor monitors the physical server's resources and keeps each virtual server independent and unaware of the other virtual servers. It also relays resources from the physical server to the correct virtual server as it runs applications. The biggest limitation of full virtualization is that the hypervisor has its own processing needs, which can slow down applications and impact server performance. Para-Virtualization: unlike full virtualization, para-virtualization involves the entire network working together as a cohesive unit. Since each operating system on the virtual servers is aware of the others in para-virtualization, the hypervisor does not need to use as much processing power to manage the operating systems. OS-Level Virtualization: unlike full and para-virtualization, OS-level virtualization does not use a hypervisor.
Instead, the virtualization capability, which is part of the physical server's operating system, performs all the tasks of a hypervisor. However, all the virtual servers must run the same operating system in this server virtualization method.

Importance of Server Virtualization Server virtualization is a cost-effective way to provide web hosting services and effectively utilize existing resources in the IT infrastructure. Without server virtualization, servers use only a small part of their processing power. This results in servers sitting idle because the workload is distributed to only a portion of the network's servers. Data centers become overcrowded with underutilized servers, causing a waste of resources and power. By dividing each physical server into multiple virtual servers, server virtualization allows each virtual server to act as a unique physical device. Each virtual server can run its own applications and operating system. This process increases the utilization of resources by making each virtual server act as a physical server and increases the capacity of each physical machine.

Benefits of Server Virtualization Economical: this is one of the major benefits of server virtualization, because a single server can be divided into multiple virtual servers, which eliminates the cost of additional physical hardware. Quick deployment and provisioning. Disaster recovery. Increased productivity. Server virtualization in cloud computing helps the IT industry a great deal, as each virtual server runs its own operating system and is capable of performing complicated tasks. It saves cost, which can be used for other work.

Storage Virtualization in Cloud Computing It is the pooling of physical storage from multiple storage devices so that it appears to be a single storage device; it can also be described as a group of available storage devices managed from a central console. This virtualization provides numerous benefits such as easy backup, archiving, and recovery of data, and the whole process requires less time and works in an efficient manner. Storage virtualization separates software from the hardware infrastructure to provide flexibility and scalability of storage resources. More and more companies are adopting this technology because storage virtualization helps them consolidate and manage their scattered data under a single console. Storage virtualization hides the actual complexity of the Storage Area Network (SAN), and it is applicable to all levels of the SAN.

Storage Virtualization in Cloud Computing Block storage is a method that abstracts storage on a low-level storage device. Block storage devices are managed as a cluster of units called blocks. Each block stores a portion of a single file and is assigned a unique address, enabling files to be spread across multiple machines for more efficient storage use. When you want to retrieve a file, a request is made to the block device your file is stored on. Once the request is translated to a block request, the reassembled file is returned to your machine, just as if the device were a standard hard drive. The benefit of block storage is that it enables low-latency operations on a volume that functions like a plug-and-play storage disk. When you attach block storage to your services, you can format it to accept any file system you need, including NTFS, XFS, or ext4. Blocks are also typically duplicated across devices, ensuring that data is recoverable if one device is corrupted.
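A small sketch of the block-storage idea described above: data is split into fixed-size blocks, each block gets an address, and the file is reassembled from those addresses on read. The block size and data structures are illustrative; a real block device operates at the storage-driver level.

```python
# Illustrative sketch of block storage: split data into fixed-size blocks, address them,
# and reassemble on read. A real block device works at the driver level, not in Python.

BLOCK_SIZE = 4096                      # illustrative block size in bytes
block_store = {}                       # address -> block contents
next_address = 0

def write_file(data: bytes):
    """Store data as blocks; return the list of block addresses (the 'file map')."""
    global next_address
    addresses = []
    for offset in range(0, len(data), BLOCK_SIZE):
        block_store[next_address] = data[offset:offset + BLOCK_SIZE]
        addresses.append(next_address)
        next_address += 1
    return addresses

def read_file(addresses):
    """Reassemble the file from its block addresses."""
    return b"".join(block_store[a] for a in addresses)

file_map = write_file(b"x" * 10000)    # spans three blocks
assert read_file(file_map) == b"x" * 10000
```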
Storage Virtualization in Cloud Computing File storage is a method of storing data in a hierarchical system. File storage is the standard method of storage that most users are familiar with. With file storage, your data is stored in the same format it is retrieved in. You can access file storage through the Server Message Block (SMB) protocol in Windows or the Network File System (NFS) protocol in Unix or Linux. SMB and NFS are protocols that enable you to store files on your server in the same manner as data is stored on a client computer. You can mount all or part of your file system and share access across multiple client devices. These protocols are also commonly used with network-attached storage (NAS) devices. NAS devices are often used to scale file storage and may also be used in the form of NAS backups, providing redundancy for file storage. These devices make it possible to scale file storage, which is otherwise limited to individual disks or physically connected storage devices.

Storage Virtualization in Cloud Computing One benefit of using file storage is that it is easier to use. Most people are familiar with file system navigation, as opposed to the storage volumes found in block-level storage, where more knowledge about partitioning is required to create volumes. Partitioning is the creation of sections on a disk that are set aside for certain files or software. Based on the operating system used, the partitions are assigned a name or letter; for example, the letter C: is given to the main partition on Windows devices.

Storage Virtualization in Cloud Computing Types of Storage Virtualization:  Hardware Assisted Virtualization  Kernel Level Virtualization  Hypervisor Virtualization  Para-Virtualization  Full Virtualization. i. Hardware Assisted Virtualization: this type of virtualization requires hardware support and is similar to full and para-virtualization. Here, the unmodified OS can run because the hardware support for virtualization is used to handle hardware access requests and protected operations. ii. Kernel Level Virtualization: it runs a separate version of the Linux kernel and allows running multiple servers in a single host. It uses a device driver to communicate between the main Linux kernel and the virtual machine. This virtualization is a special form of server virtualization. iii. Hypervisor Virtualization: a hypervisor is a layer between the operating system and hardware. With the help of the hypervisor, multiple operating systems can work, and it provides the features and necessary services which help the OS to work properly. iv. Para-Virtualization: it is based on a hypervisor, which handles emulation and trapping in software. Here, the guest operating system is modified before it is installed on any further machine. The modified guest communicates directly with the hypervisor, which improves performance. v. Full Virtualization: this virtualization is similar to para-virtualization. In this, the hypervisor traps the machine operations which the operating system uses to perform its work; after trapping the operations, it emulates them in software and returns the status codes.

Storage Virtualization in Cloud Computing There are different ways storage virtualization can be applied: host-based, network-based, and array-based.
i. Host-Based Storage Virtualization: here, all the virtualization and management is done at the host level with the help of software, and the physical storage can be any device or array. The host layer, which may be made up of multiple hosts, presents a set of virtual drives to the guest machines, whether they are VMs in an enterprise or PCs. ii. Network-Based Storage Virtualization: network-based storage virtualization is the most common form in use nowadays. Devices such as a smart switch or a purpose-built server connect to all the storage devices in a Fibre Channel storage network and present the storage as a virtual pool. iii. Array-Based Storage Virtualization: here the storage array provides different types of physical storage which are used as storage tiers, and software handles the storage tiers, which are made up of solid-state drives and hard drives.

VIRTUALIZATION FOR DATA-CENTER AUTOMATION Data center virtualization is the process of creating a modern data center that is highly scalable, available, and secure. Data center virtualization is the transfer of physical data centers into digital data centers using a cloud software platform, so that companies can remotely access information and applications. With data center virtualization we can increase IT agility and create a seamless foundation to manage private and public cloud services alongside traditional on-premises infrastructure. The latest virtualization developments highlight several benefits: high availability (HA), backup services, workload balancing, data mobility, and further increases in client bases. The main areas are server consolidation in data centers, virtual storage management, cloud OSes for virtualized data centers, and trust management in virtualized data centers.

Server Consolidation in Data Centers Consolidation enhances hardware utilization: many underutilized servers are consolidated into fewer servers to enhance resource utilization. Consolidation also facilitates backup services and disaster recovery. This approach enables more agile provisioning and deployment of resources; in a virtual environment, the images of the guest OSes and their applications are readily cloned and reused. The total cost of ownership is reduced: server virtualization defers purchases of new servers and leads to a smaller data-center footprint, lower maintenance costs, and lower power, cooling, and cabling requirements. This approach also improves availability and business continuity: the crash of a guest OS has no effect on the host OS or any other guest OS, and it becomes easier to transfer a VM from one server to another because virtual servers are unaware of the underlying hardware.

Virtual Storage Management Virtual storage is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. It allows IT administrators to migrate workloads easily across data centers independently of the physical hardware. Storage virtualization pools multiple storage resources, eliminating the need to manage separate storage systems. Historically, storage virtualization was largely used to describe the aggregation and repartitioning of disks at very coarse time scales for use by physical machines. The storage primitives used by VMs are not nimble.
Hence, operations such as remapping volumes across hosts and checkpointing disks are frequently clumsy and esoteric, and sometimes simply unavailable. In data centers, there are often thousands of VMs, which causes the number of VM images to balloon. The main purpose of virtual storage management is to make management easy while enhancing performance and reducing the amount of storage occupied by the VM images.

Virtual Storage Management Parallax is a distributed storage system customized for virtualization environments, and Content Addressable Storage (CAS) is a solution to reduce the total size of VM images and therefore support a large set of VM-based systems in data centers. Parallax supports all popular system virtualization techniques, such as paravirtualization and full virtualization. For each physical machine, Parallax customizes a special storage appliance VM. The storage appliance VM acts as a block virtualization layer between individual VMs and the physical storage device, and it provides a virtual disk for each VM on the same physical machine.

Cloud OS for Virtualized Data Centers Eucalyptus is an open source software system intended mainly for supporting Infrastructure as a Service (IaaS) clouds. The system primarily supports virtual networking and the management of VMs; virtual storage is not supported. Its purpose is to build private clouds that can interact with end users through Ethernet or the Internet. The system also supports interaction with other private clouds or public clouds over the Internet.  The Instance Manager controls the execution, inspection, and termination of VM instances on the host where it runs.  The Group Manager gathers information about and schedules VM execution on specific instance managers, and also manages the virtual instance network.  The Cloud Manager is the entry point into the cloud for users and administrators. It queries node managers for information about resources, makes scheduling decisions, and implements them by making requests to group managers.

Cloud OS for Virtualized Data Centers vSphere is primarily intended to offer virtualization support and resource management of data-center resources for building private clouds. VMware claims the system is the first cloud OS that supports availability, security, and scalability in providing cloud computing services. vSphere 4 is built with two functional software suites: infrastructure services and application services. It also has three component packages intended mainly for virtualization purposes: vCompute, supported by the ESX, ESXi, and DRS virtualization libraries from VMware; vStorage, supported by the VMFS and thin provisioning libraries; and vNetwork, which offers distributed switching and networking functions. These packages interact with the hardware servers, disks, and networks in the data center.
These infrastructure functions also communicate with other external clouds.

Trust Management in Virtualized Data Centers VM-Based Intrusion Detection The VM-based IDS contains a policy engine and a policy module. The policy framework can monitor events in different guest VMs through an operating system interface library and PTrace, which trace the monitored host according to the security policy. It is difficult to predict and prevent all intrusions without delay. Computer systems use logs to analyze attack actions, but it is hard to ensure the credibility and integrity of a log. The IDS log service is based on the operating system kernel; thus, when an operating system is invaded by attackers, the log service should remain unaffected.

Trusted Zones for Virtual Clusters In the trusted-zone figure, the physical infrastructure is shown at the bottom and marked as a cloud provider. The virtual clusters or infrastructures are shown in the upper boxes for two tenants, and the public cloud is associated with the global user communities at the top. The arrowed boxes on the left, and the brief descriptions between the arrows and the zoning boxes, are security functions and actions taken at the four levels from the users to the providers. The small circles between the four boxes refer to interactions between users and providers and among the users themselves. The arrowed boxes on the right are the functions and actions applied between the tenant environments, the provider, and the global communities. Almost all available countermeasures, such as anti-virus, worm containment, intrusion detection, and encryption and decryption mechanisms, are applied here to insulate the trusted zones and isolate the VMs of private tenants. The main innovation is to establish trust zones among the virtual clusters.

Multi-Tenancy in Cloud Computing Multi-tenancy means that many tenants or users can use the same resources. The users can independently use resources provided by the cloud computing company without affecting other users. Multi-tenancy is a crucial attribute of cloud computing, and it applies to all three layers of the cloud, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Multi-tenancy issues in cloud computing are a growing concern, especially as the industry expands. Cloud computing brings many benefits for its users, such as flexibility, scalability, no need to worry about maintaining the cloud, and the ability to expand and shrink resources according to the needs of the workload.

Multi-Tenancy Issues Security: this is one of the most challenging and risky issues in multi-tenant cloud computing. There is always a risk of data loss, data theft, and hacking, and a database administrator can accidentally grant access to an unauthorized person. Despite software and cloud computing companies saying that client data is safer than ever on their servers, there are still security risks. Performance: SaaS applications are hosted at different places, and this affects the response time. SaaS applications usually take longer to respond and are much slower than server applications.
This slowness affects the overall performance of the systems and makes them less efficient. Less powerful: many cloud services run on Web 2.0, with new user interfaces and the latest templates, but they lack many essential features. Without the necessary and adequate features, multi-tenant cloud computing services can be a nuisance for clients.

Multi-Tenancy Issues Noisy neighbor effect: if one tenant uses a lot of the computing resources, other tenants may suffer because of the reduced computing power left to them. However, this is a rare case and only happens if the cloud architecture and infrastructure are inappropriate. Interoperability: users remain restricted by their cloud service providers. Users cannot go beyond the limitations set by the cloud service providers to optimize their systems; for example, users cannot interact with other vendors and service providers and cannot even communicate with local applications. Monitoring: constant monitoring is vital for cloud service providers to check whether there is an issue in the multi-tenant cloud system. Multi-tenant cloud systems require continuous monitoring, as computing resources are shared with many users simultaneously. If any problem arises, it must be solved immediately so as not to disturb the system's efficiency. Capacity optimization: before giving users access, database administrators must know which tenant to place on which network. The tools applied should be modern and up to date, offering correct allocation of tenants. Sufficient capacity must be provisioned, or else the multi-tenant cloud system will incur increased costs. As demands keep changing, multi-tenant cloud systems must keep upgrading and providing sufficient capacity in the cloud system.
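To illustrate the capacity-optimization and noisy-neighbor concerns above, here is a toy tenant-placement sketch: each new tenant is placed on the node with the most free capacity, and a tenant that no node can hold is rejected rather than allowed to degrade its neighbors. All names, numbers, and the placement policy are illustrative.

```python
# Toy tenant-placement sketch for a multi-tenant cloud (all names/numbers illustrative).
# Each node has a fixed capacity; a new tenant goes to the node with the most free
# capacity, and is rejected if no node can hold it (rather than squeezing neighbors).

nodes = {"node-a": 100, "node-b": 100}          # capacity units per node
placements = {"node-a": [], "node-b": []}       # tenants placed on each node

def free_capacity(node):
    used = sum(demand for _, demand in placements[node])
    return nodes[node] - used

def place_tenant(tenant, demand):
    best = max(nodes, key=free_capacity)
    if free_capacity(best) < demand:
        raise RuntimeError(f"no capacity for {tenant}; add nodes before accepting it")
    placements[best].append((tenant, demand))
    return best

print(place_tenant("tenant-1", 60))   # lands on the emptier node
print(place_tenant("tenant-2", 60))   # goes to the other node
# place_tenant("tenant-3", 60) would raise: admitting it would starve existing tenants.
```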
