Full Transcript

Defining a cloud

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

A different and more practical characterization comes from Reese, who defines three criteria for discriminating whether a service is delivered in the cloud computing style:
1. The service is accessible via a Web browser (nonproprietary) or a Web services application programming interface (API).
2. Zero capital expenditure is necessary to get started.
3. You pay only for what you use as you use it.

According to Buyya et al., a cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources, based on service-level agreements established through negotiation between the service provider and consumers.

Leading Examples of Cloud Service Users
1. Large enterprises can offload some of their activities to cloud-based systems.
2. Small enterprises and start-ups can afford to translate their ideas into business results more quickly, without excessive up-front costs.
3. System developers can concentrate on the business logic rather than dealing with the complexity of infrastructure management and scalability.
4. End users can have their documents accessible from everywhere and on any device.

Characteristics and benefits
Cloud computing has some interesting characteristics that bring benefits to both cloud service consumers (CSCs) and cloud service providers (CSPs). These characteristics are:
- No up-front commitments
- On-demand access
- Nice pricing
- Simplified application acceleration and scalability
- Efficient resource allocation
- Energy efficiency
- Seamless creation and use of third-party services

Challenges ahead
1. Legal issues may arise. These are specifically tied to the ubiquitous nature of cloud computing, which spreads computing infrastructure across diverse geographical locations. Differing privacy legislation in different countries may create disputes over the rights that third parties have to data stored in the cloud.
2. Security, in terms of confidentiality, secrecy, and protection of data in a cloud environment, is another important challenge. Organizations do not own the infrastructure they use to process data and store information. This condition poses challenges for confidential data, which organizations cannot afford to reveal.
3. Besides the practical aspects related to configuration, networking, and sizing of cloud computing systems, a new set of challenges concerns the dynamic provisioning of cloud services and resources. For example, in the Infrastructure-as-a-Service domain: how many resources need to be provisioned, and for how long should they be used, in order to maximize the benefit? Technical challenges also arise for cloud service providers in managing large computing infrastructures and using virtualization technologies on top of them.

History of Cloud Computing
In tracking the historical evolution, we briefly review five core technologies that played an important role in the realization of cloud computing. These technologies are:
1. Distributed systems
2. Virtualization
3. Web 2.0
4. Service orientation
5. Utility computing

Distributed systems
Three major milestones have led to cloud computing:
1. Mainframe computing
2. Cluster computing
3. Grid computing

Mainframes
1. These were the first examples of large computational facilities leveraging multiple processing units.
2. Mainframes were powerful, highly reliable computers specialized for large data movement and massive input/output (I/O) operations.
3. They were mostly used by large organizations for bulk data processing tasks such as online transactions, enterprise resource planning, and other operations involving the processing of significant amounts of data.
4. One of the most attractive features of mainframes was their high reliability: they were "always on" and capable of tolerating failures transparently.

Cluster computing
1. Cluster computing started as a low-cost alternative to the use of mainframes and supercomputers.
2. The technology advancement that created faster and more powerful mainframes and supercomputers eventually generated an increased availability of cheap commodity machines as a side effect. These machines could then be connected by a high-bandwidth network and controlled by specific software tools that manage them as a single system.
3. One of the attractive features of clusters was that the computational power of commodity machines could be leveraged to solve problems that were previously manageable only on expensive supercomputers.
4. Clusters could be easily extended if more computational power was required.

Grid computing
1. Grid computing proposed a new approach to accessing large computational power, huge storage facilities, and a variety of services.
2. Users can "consume" resources in the same way as they use other utilities such as power, gas, and water.
3. Different from a "large cluster," a computing grid was a dynamic aggregation of heterogeneous computing nodes, and its scale was nationwide or even worldwide.

Several developments made possible the diffusion of computing grids: (a) clusters became quite common resources; (b) they were often underutilized; (c) new problems were requiring computational power that went beyond the capability of single clusters; and (d) the improvements in networking and the diffusion of the Internet made possible long-distance, high-bandwidth connectivity.

Virtualization
1. Virtualization is another core technology for cloud computing. It encompasses a collection of solutions allowing the abstraction of some of the fundamental elements of computing, such as hardware, runtime environments, storage, and networking.
2. Virtualization confers the degree of customization and control that makes cloud computing appealing for users and, at the same time, sustainable for cloud service providers.
3. These environments are called virtual because they simulate the interface that is expected by a guest.
4. The most common example is hardware virtualization. This technology allows simulating the hardware interface expected by an operating system. Hardware virtualization allows the coexistence of different software stacks on top of the same hardware.
Web 2.0
1. The Web is the primary interface through which cloud computing delivers its services. At present, the Web encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition.
2. Web 2.0 brings interactivity and flexibility into Web pages, providing an enhanced user experience by giving Web-based access to all the functions that are normally found in desktop applications. These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web Services, and others.
3. Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate by following the usage trends of the community.

Service-oriented computing
1. Service-oriented computing (SOC) supports the development of rapid, low-cost, flexible, interoperable, and evolvable applications and systems.
2. A service is an abstraction representing a self-describing and platform-agnostic component that can perform anything from a simple function to a complex business process.
3. Service-oriented computing introduces and diffuses two important concepts, which are also fundamental to cloud computing: quality of service (QoS) and Software-as-a-Service (SaaS).

Quality of service (QoS)
1. Quality of service (QoS) identifies a set of functional and nonfunctional attributes that can be used to evaluate the behavior of a service from different perspectives.
2. These could be performance metrics such as response time, or security attributes, transactional integrity, reliability, scalability, and availability.
3. QoS requirements are established between the client and the provider via a service-level agreement (SLA) that identifies the minimum values (or an acceptable range) for the QoS attributes that need to be satisfied upon the service call.
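To make the SLA idea concrete, the sketch below checks a set of measured QoS attributes against agreed bounds. It is a minimal illustration, not any provider's API; the attribute names and threshold values are invented for the example.

```python
# Minimal sketch: validating measured QoS attributes against an SLA.
# The attribute names and threshold values are hypothetical.

# SLA: for each attribute, the acceptable range (min, max); None = unbounded.
sla = {
    "response_time_ms": (None, 200),   # must not exceed 200 ms
    "availability_pct": (99.9, None),  # must be at least 99.9%
}

def violations(measured: dict) -> list[str]:
    """Return the attributes whose measured value falls outside the SLA range."""
    out = []
    for attr, (lo, hi) in sla.items():
        value = measured.get(attr)
        if value is None:
            out.append(f"{attr}: not reported")
        elif (lo is not None and value < lo) or (hi is not None and value > hi):
            out.append(f"{attr}: {value} outside [{lo}, {hi}]")
    return out

print(violations({"response_time_ms": 250, "availability_pct": 99.95}))
# -> ['response_time_ms: 250 outside [None, 200]']
```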
Software-as-a-Service
1. The term has been inherited from the world of application service providers (ASPs), which deliver software-based services across the wide area network from a central data center and make them available on a subscription or rental basis.
2. The ASP is responsible for maintaining the infrastructure and making the application available, and the client is freed from maintenance costs and difficult upgrades.
3. This software delivery model is possible because economies of scale are reached by means of multitenancy.
4. The SaaS approach reaches its full development with service-oriented computing (SOC), where loosely coupled software components can be exposed and priced singularly, rather than entire applications. This allows the delivery of complex business processes and transactions as a service, while allowing applications to be composed on the fly and services to be reused from everywhere and by anybody.

Renewed interest in virtualization
Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena:
1. Increased performance and computing capacity
2. Underutilized hardware and software resources
3. Lack of space
4. Greening initiatives
5. Rise of administrative costs

Increased performance and computing capacity. The average end-user desktop PC is powerful enough to meet almost all the needs of everyday computing, with extra capacity that is rarely used. Almost all these PCs have enough resources to host a virtual machine manager and execute a virtual machine with acceptable performance. The same consideration applies to the high-end side of the PC market, where supercomputers can provide immense compute power that can accommodate the execution of hundreds or thousands of virtual machines.

Underutilized hardware and software resources. Hardware and software underutilization occurs due to (1) increased performance and computing capacity and (2) the effect of limited or sporadic use of resources. Using these resources for other purposes after hours could improve the efficiency of the IT infrastructure.

Lack of space. The continuous need for additional capacity, whether storage or compute power, makes data centers grow quickly. This condition, along with hardware underutilization, has led to the diffusion of a technique called server consolidation, for which virtualization technologies are fundamental.

Greening initiatives
1. Companies are increasingly looking for ways to reduce the amount of energy they consume and to reduce their carbon footprint.
2. Data centers are among the major power consumers; they contribute consistently to the impact that a company has on the environment.
3. Maintaining a data center operation not only involves keeping servers on; a great deal of energy is also consumed in keeping them cool.
4. Cooling infrastructure has a significant impact on the carbon footprint of a data center. Hence, reducing the number of servers through server consolidation will reduce both the cooling and the power consumption of a data center.
5. Virtualization technologies can provide an efficient way of consolidating servers.

Rise of administrative costs. Computers, and servers in particular, do not operate all on their own; they require care and feeding from system administrators. Common system administration tasks include hardware monitoring, defective hardware replacement, server setup and updates, server resource monitoring, and backups. These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher the administrative costs. Virtualization can help reduce the number of servers required for a given workload, thus reducing the cost of the administrative personnel.

Characteristics of virtualized environments
There are three major components: guest, host, and virtualization layer. The guest represents the system component that interacts with the virtualization layer rather than with the host, as would normally happen. The host represents the original environment where the guest is supposed to be managed. The virtualization layer is responsible for recreating the same or a different environment where the guest will operate.

The following advantages have always been characteristics of virtualized solutions.

Increased security
The ability to control the execution of a guest in a completely transparent manner opens new possibilities for delivering a secure, controlled execution environment. The virtual machine represents an emulated environment in which the guest is executed. All the operations of the guest are generally performed against the virtual machine, which then translates and applies them to the host. This level of indirection allows the virtual machine manager to control and filter the activity of the guest, thus preventing some harmful operations from being performed. Resources exposed by the host can then be hidden or simply protected from the guest.
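The sketch below illustrates the indirection just described: guest operations pass through a mediating layer that applies a policy before touching the host. It is a toy model assuming invented operation names and a simple deny-list policy; real VMMs enforce this at the instruction and device level.

```python
# Toy sketch of VMM-style mediation: every guest operation is inspected
# before it reaches the host. Operation names and the policy are invented.

DENIED = {"write_raw_disk", "reprogram_firmware"}  # hypothetical harmful ops

class Host:
    def execute(self, operation, *args):
        return f"host executed {operation}{args}"

class VirtualMachine:
    def __init__(self, host):
        self.host = host

    def perform(self, operation, *args):
        # The guest never talks to the host directly; the VM layer decides
        # whether to translate and forward the operation or to block it.
        if operation in DENIED:
            raise PermissionError(f"operation '{operation}' blocked by VMM policy")
        return self.host.execute(operation, *args)

vm = VirtualMachine(Host())
print(vm.perform("read_file", "/tmp/data"))   # allowed, forwarded to host
# vm.perform("write_raw_disk")                # would raise PermissionError
```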
Managed execution
Virtualization allows the creation of separate computing environments within the same host. In this way it is possible to fully exploit the capabilities of a powerful guest that would otherwise be underutilized. A group of separate hosts can also be tied together and presented to guests as a single virtual host. This function is naturally implemented in middleware for distributed computing, a classical example being cluster management software, which harnesses the physical resources of a homogeneous group of machines and represents them as a single resource.

Taxonomy of virtualization techniques

Execution virtualization
1. It includes all techniques that aim to emulate an execution environment that is separate from the one hosting the virtualization layer.
2. All these techniques concentrate on providing support for the execution of programs, whether these are an operating system, a binary specification of a program compiled against an abstract machine model, or an application.

Virtualization techniques fall into two major categories according to the type of host they require:
1. Process-level techniques are implemented on top of an existing operating system, which has full control of the hardware.
2. System-level techniques are implemented directly on hardware and do not require, or require minimal, support from an existing operating system.

Machine reference model
1. At the bottom layer, the model for the hardware is expressed in terms of the Instruction Set Architecture (ISA), which defines the instruction set for the processor, registers, memory, and interrupt management.
2. The ISA is the interface between hardware and software, and it is important to the operating system (OS) developer (System ISA) and to developers of applications that directly manage the underlying hardware (User ISA).
3. The application binary interface (ABI) separates the operating system layer from the applications and libraries, which are managed by the OS. The ABI covers details such as low-level data types, alignment, and calling conventions, and defines a format for executable programs.
4. System calls are defined at this level. This interface allows portability of applications and libraries across operating systems that implement the same ABI. For any operation to be performed at the application level (API), the ABI and ISA are responsible for making it happen.
5. The high-level abstraction is converted into machine-level instructions to perform the actual operations supported by the processor.
6. The machine-level resources, such as processor registers and main memory capacities, are used to perform the operation at the hardware level of the central processing unit (CPU).

The instruction set exposed by the hardware is divided into different security classes that define who can operate with them.

Nonprivileged instructions are those that can be used without interfering with other tasks because they do not access shared resources. This category contains, for example, all the floating-point, fixed-point, and arithmetic instructions.

Privileged instructions are those that are executed under specific restrictions and are mostly used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the privileged state. For instance, behavior-sensitive instructions are those that operate on the I/O, whereas control-sensitive instructions alter the state of the CPU registers.

Some architectures feature more than one class of privileged instructions and implement a finer-grained control of how these instructions can be accessed.
For instance, a possible implementation features a hierarchy of privileges in the form of ring-based security: Ring 0 is the most privileged level and Ring 3 the least privileged. Ring 0 is used by the kernel of the OS, Rings 1 and 2 are used by OS-level services, and Ring 3 is used by user applications. Recent systems support only two levels, with Ring 0 for supervisor mode and Ring 3 for user mode.

All current systems support at least two different execution modes:
1. Supervisor mode denotes an execution mode in which all instructions (privileged and nonprivileged) can be executed without any restriction. This mode is also called master mode or kernel mode.
2. Supervisor mode is generally used by the operating system (or the hypervisor) to perform sensitive operations on hardware-level resources.
3. In user mode, there are restrictions on controlling machine-level resources. If code running in user mode invokes a privileged instruction, a hardware interrupt occurs and traps the potentially harmful execution of the instruction.
4. Despite this, there might be some instructions that can be invoked as privileged instructions under some conditions and as nonprivileged instructions under others.

Hardware-level virtualization
Hardware-level virtualization is a virtualization technique that provides an abstract execution environment in terms of computer hardware, on top of which a guest operating system can be run. In this model, the guest is represented by the operating system, the host by the physical computer hardware, the virtual machine by its emulation, and the virtual machine manager by the hypervisor. Hardware-level virtualization is also called system virtualization, since it provides an ISA to virtual machines, which is the representation of the hardware interface of a system. Process virtualization, in contrast, exposes an ABI to virtual machines.

Hypervisors
A fundamental element of hardware virtualization is the hypervisor, or virtual machine manager (VMM). It recreates a hardware environment in which guest operating systems are installed. There are two major types of hypervisor: Type I and Type II.

Type I hypervisors run directly on top of the hardware. Therefore, they take the place of the operating system and interact directly with the ISA interface exposed by the underlying hardware, and they emulate this interface in order to allow the management of guest operating systems.

Type II hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems.

A virtual machine manager is internally organized into three modules:
1. The dispatcher constitutes the entry point of the monitor and reroutes the instructions issued by the virtual machine instance to one of the two other modules.
2. The allocator is responsible for deciding the system resources to be provided to the VM; it is invoked whenever a virtual machine tries to execute an instruction that results in changing the machine resources associated with that VM.
3. The interpreter module consists of interpreter routines. These are executed whenever a virtual machine executes a privileged instruction: a trap is triggered and the corresponding routine is executed.
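The sketch below models the dispatcher/allocator/interpreter structure just described. It is a schematic illustration with an invented toy instruction set, not a real VMM: privileged instructions trap to interpreter routines, and resource-changing instructions are routed to the allocator.

```python
# Schematic sketch of the internal organization of a VMM.
# The instruction names are invented for illustration.

PRIVILEGED = {"set_cr3", "out_port"}       # trap to interpreter routines
RESOURCE_CHANGING = {"grow_memory"}        # routed to the allocator

class VMM:
    def __init__(self):
        # one interpreter routine per privileged instruction
        self.routines = {
            "set_cr3": lambda vm, arg: f"emulated page-table switch for {vm}",
            "out_port": lambda vm, arg: f"emulated I/O write for {vm}",
        }

    def dispatch(self, vm, instr, arg=None):
        """Entry point: reroute each trapped instruction to the right module."""
        if instr in RESOURCE_CHANGING:
            return self.allocate(vm, instr, arg)
        if instr in PRIVILEGED:
            return self.routines[instr](vm, arg)        # trap -> interpret
        return f"{instr} executed directly on hardware"  # nonprivileged

    def allocate(self, vm, instr, arg):
        return f"allocator grants {arg} to {vm}"

vmm = VMM()
print(vmm.dispatch("vm0", "add"))                   # runs natively
print(vmm.dispatch("vm0", "set_cr3", 0x1000))       # trapped and emulated
print(vmm.dispatch("vm0", "grow_memory", "512MB"))  # handled by the allocator
```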
The criteria that need to be met by a virtual machine manager to efficiently support virtualization are:
1. Equivalence: a guest running under the control of a virtual machine manager should exhibit the same behavior as when it is executed directly on the physical host.
2. Resource control: the virtual machine manager should be in complete control of virtualized resources.
3. Efficiency: a statistically dominant fraction of the machine instructions should be executed without intervention from the virtual machine manager.

Theorem 1. For any conventional third-generation computer, a VMM may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions. (A small set-based illustration of this condition appears after the hardware virtualization techniques below.)

Hardware virtualization techniques

Hardware-assisted virtualization. This term refers to a scenario in which the hardware provides architectural support for building a virtual machine manager able to run a guest operating system in complete isolation. This technique was originally introduced in the IBM System/370. Present-day examples are the extensions to the x86-64 architecture introduced with Intel VT.

Full virtualization refers to the ability to run a program, most likely an operating system, directly on top of a virtual machine without any modification, as though it were run on the raw hardware. To make this possible, virtual machine managers are required to provide a complete emulation of the entire underlying hardware. The principal advantage of full virtualization is complete isolation, which leads to enhanced security, ease of emulation of different architectures, and coexistence of different systems on the same platform. While it is a desired goal for many virtualization solutions, full virtualization poses important concerns related to performance and technical implementation.

Paravirtualization
1. This is a non-transparent virtualization solution that allows implementing thin virtual machine managers. Paravirtualization techniques expose a software interface to the virtual machine that is slightly modified from the host, and, as a consequence, guests need to be modified.
2. The aim of paravirtualization is to provide the capability to demand the execution of performance-critical operations directly on the host, thus preventing performance losses that would otherwise be experienced in managed execution.
3. This allows a simpler implementation of the virtual machine manager, which simply has to transfer the execution of these operations, which were hard to virtualize, directly to the host.
4. To take advantage of such an opportunity, guest operating systems need to be modified and explicitly ported by remapping the performance-critical operations through the virtual machine software interface.

Partial virtualization provides a partial emulation of the underlying hardware, thus not allowing the complete execution of the guest operating system in complete isolation. Partial virtualization allows many applications to run transparently, but not all the features of the operating system can be supported, as happens with full virtualization. An example of partial virtualization is the address space virtualization used in time-sharing systems; this allows multiple applications and users to run concurrently in separate memory spaces, but they still share the same hardware resources (disk, processor, and network).
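As a small illustration of the condition in Theorem 1, the sketch below encodes the sensitive and privileged instruction sets of two made-up machines and tests the subset relation. The instruction names are invented; the second machine mimics the classic x86 situation discussed later, where some sensitive instructions are not privileged.

```python
# Illustrative check of the condition in Theorem 1: a VMM can be
# constructed if sensitive instructions are a subset of privileged
# instructions. The machine definitions here are made up.

def virtualizable(sensitive: set[str], privileged: set[str]) -> bool:
    return sensitive <= privileged  # subset test

# A well-behaved machine: every sensitive instruction traps in user mode.
machine_a = {"sensitive": {"load_ptbl", "io_out"},
             "privileged": {"load_ptbl", "io_out", "halt"}}

# An x86-like machine: some sensitive instructions (e.g., reading flags)
# execute silently in user mode instead of trapping.
machine_b = {"sensitive": {"load_ptbl", "io_out", "read_flags"},
             "privileged": {"load_ptbl", "io_out", "halt"}}

print(virtualizable(**machine_a))  # True  -> classic trap-and-emulate works
print(virtualizable(**machine_b))  # False -> needs binary translation or
                                   #          hardware assistance
```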
Operating system-level virtualization
Operating system-level virtualization offers the opportunity to create different and separated execution environments for applications that are managed concurrently. Differently from hardware virtualization, there is no virtual machine manager or hypervisor; the virtualization is done within a single operating system, where the OS kernel allows for multiple isolated user-space instances. The kernel is also responsible for sharing the system resources among instances and for limiting the impact of instances on each other. A user-space instance in general contains a proper view of the file system, which is completely isolated, as well as separate IP addresses, software configurations, and access to devices. Operating systems supporting this type of virtualization are general-purpose, time-shared operating systems with the capability to provide stronger namespace and resource isolation.

Programming language-level virtualization
1. Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed execution, and portability across different platforms and operating systems.
2. It consists of a virtual machine executing the byte code of a program, which is the result of the compilation process.
3. Compilers implemented and used this technology to produce a binary format representing the machine code for an abstract architecture.

Application-level virtualization
1. Application-level virtualization is a technique allowing applications to be run in runtime environments that do not natively support all the features required by such applications.
2. Emulation can also be used to execute program binaries compiled for different hardware architectures. In this case, one of the following strategies can be implemented (a sketch contrasting the two appears after this list):
3. Emulation (interpretation): every source instruction is interpreted by an emulator that executes native ISA instructions, leading to poor performance. Interpretation has a minimal startup cost but a huge overhead, since each instruction is emulated.
4. Binary translation: every source instruction is converted to native instructions with equivalent functions. After a block of instructions is translated, it is cached and reused. Binary translation has a large initial overhead cost but better performance over time, since previously translated instruction blocks are executed directly.
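The sketch below contrasts the two strategies on a made-up two-instruction "source ISA": the interpreter pays a decode cost on every instruction of every run, while the translator converts each block once, caches it as a native (here, Python) function, and reuses it. All instruction names are invented.

```python
# Contrast of interpretation vs. binary translation on a toy source ISA.
# Instructions are (opcode, operand) pairs; all names are invented.

def interpret(block, state):
    """Emulation: decode and execute every instruction on every run."""
    for op, arg in block:
        if op == "inc":
            state[arg] += 1
        elif op == "dbl":
            state[arg] *= 2
    return state

translation_cache = {}  # block id -> compiled native function

def translate(block):
    """Binary translation: convert a whole block once into native code."""
    body = "\n".join(
        f"    state['{arg}'] {'+= 1' if op == 'inc' else '*= 2'}"
        for op, arg in block
    )
    ns = {}
    exec(f"def compiled(state):\n{body}\n    return state", ns)
    return ns["compiled"]           # one-time translation cost paid here

def run_translated(block_id, block, state):
    if block_id not in translation_cache:       # translate on first use only
        translation_cache[block_id] = translate(block)
    return translation_cache[block_id](state)   # cached code reused afterward

block = [("inc", "r0"), ("dbl", "r0")]
print(interpret(block, {"r0": 1}))              # {'r0': 4}
print(run_translated("b0", block, {"r0": 1}))   # {'r0': 4}, now cached
```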
Storage virtualization
1. Storage virtualization is a system administration practice that allows decoupling the physical organization of the hardware from its logical representation. Using this technique, users do not have to worry about the specific location of their data, which can be identified using a logical path.
2. Storage virtualization allows us to harness a wide range of storage facilities and represent them under a single logical file system. There are different techniques for storage virtualization, one of the most popular being network-based virtualization by means of a storage area network (SAN).

Network virtualization combines hardware appliances and specific software for the creation and management of a virtual network. Network virtualization can aggregate different physical networks into a single logical network, or provide network-like functionality to an operating system partition.

Desktop virtualization
Similarly to hardware virtualization, desktop virtualization makes a different system accessible as though it were natively installed on the host, but this system is remotely stored on a different host and accessed through a network connection.

Application server virtualization
Application server virtualization abstracts a collection of application servers that provide the same services as a single virtual application server, by using load-balancing strategies and providing a high-availability infrastructure for the services hosted in the application server. This is a particular form of virtualization and serves the same purpose as storage virtualization: providing a better quality of service rather than emulating a different environment.

Virtualization and cloud computing
Virtualization maps onto the cloud service models as follows:
- Hardware virtualization -> IaaS
- Programming language-level virtualization -> PaaS

1. Virtualization gives the opportunity to design more efficient computing systems by means of consolidation, which is performed transparently to cloud computing service users.
2. Since virtualization allows us to create isolated and controllable environments, it is possible to serve these environments with the same resource without them interfering with each other.
3. This opportunity is particularly attractive when resources are underutilized, because it allows reducing the number of active resources by aggregating virtual machines over a smaller number of resources that become fully utilized.
4. This practice is called server consolidation, and the movement of virtual machines is called virtual machine migration (a first-fit consolidation sketch follows this list).
5. Server consolidation and virtual machine migration are principally used in the case of hardware virtualization, even though they are also technically possible in the case of programming language-level virtualization.
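As a concrete picture of server consolidation, the sketch below packs VM CPU demands onto as few hosts as possible using a simple first-fit heuristic. Real consolidation engines also consider memory, I/O, affinity, and migration cost; the capacities and demands here are invented.

```python
# First-fit sketch of server consolidation: place each VM on the first
# host with enough spare capacity, opening a new host only when needed.
# Capacities and demands (in abstract CPU units) are invented.

HOST_CAPACITY = 16

def consolidate(vm_demands: list[int]) -> list[list[int]]:
    hosts: list[list[int]] = []            # each host = list of VM demands
    for demand in vm_demands:
        for host in hosts:
            if sum(host) + demand <= HOST_CAPACITY:
                host.append(demand)        # fits on an existing host
                break
        else:
            hosts.append([demand])         # keep one more host powered on
    return hosts

vms = [6, 4, 8, 2, 5, 3]
placement = consolidate(vms)
print(placement)                           # [[6, 4, 2, 3], [8, 5]]
print(len(placement), "active hosts instead of", len(vms))
```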
Pros and cons of virtualization

Advantages of virtualization
1. Managed execution and isolation are perhaps the most important advantages of virtualization. In the case of techniques supporting the creation of virtualized execution environments, these two characteristics allow building secure and controllable computing environments.
2. A virtual execution environment can be configured as a sandbox, thus preventing any harmful operation from crossing the borders of the virtual host.
3. Moreover, the allocation of resources and their partitioning among different guests is simplified, since the virtual host is controlled by a program. This enables fine-tuning of resources, which is very important in a server consolidation scenario and is also a requirement for effective quality of service.
4. Portability is another advantage of virtualization, especially for execution virtualization techniques. Virtual machine instances are normally represented by one or more files that can be easily transported, compared with physical systems.
5. Moreover, they also tend to be self-contained, since they have no dependencies other than the virtual machine manager for their use. Portability and self-containment simplify their administration.
6. Portability and self-containment also contribute to reducing the costs of maintenance, since the number of hosts is expected to be lower than the number of virtual machine instances. In addition, since the guest program is executed in a virtual environment, there is very limited opportunity for the guest program to damage the underlying hardware.
7. Finally, by means of virtualization it is possible to achieve a more efficient use of resources. Multiple systems can securely coexist and share the resources of the underlying host without interfering with each other.

The other side of the coin: disadvantages

Performance degradation
Performance is definitely one of the major concerns in using virtualization technology. Since virtualization interposes an abstraction layer between the guest and the host, the guest can experience increased latencies. For instance, in the case of hardware virtualization, where the intermediate layer emulates a bare machine on top of which an entire system can be installed, the causes of performance degradation can be traced back to the overhead introduced by the following activities:
- Maintaining the status of virtual processors
- Support of privileged instructions (trapping and simulating privileged instructions)
- Support of paging within the VM
- Console functions

Inefficiency and degraded user experience
Virtualization can sometimes lead to an inefficient use of the host. In particular, some of the specific features of the host cannot be exposed by the abstraction layer and thus become inaccessible. In the case of hardware virtualization, this could happen for device drivers: the virtual machine can sometimes simply provide a default graphics card that maps only a subset of the features available in the host. In the case of programming-level virtual machines, some features of the underlying operating system may become inaccessible unless specific libraries are used. For example, in the first version of Java the support for graphics programming was very limited, and the look and feel of applications was poor compared with native applications. These issues were resolved by providing a new framework called Swing for designing the user interface, and further improvements were made by integrating support for the OpenGL libraries in the software development kit.

Security holes and new threats
1. Virtualization opens the door to a new and unexpected form of phishing. The capability of emulating a host in a completely transparent manner has led the way to malicious programs that are designed to extract sensitive information from the guest.
2. In the case of hardware virtualization, malicious programs can preload themselves before the operating system and act as a thin virtual machine manager toward it. The operating system is then controlled and can be manipulated to extract sensitive information of interest to third parties.
3. The same considerations apply to programming-level virtual machines: modified versions of the runtime environment can access sensitive information or monitor the memory locations utilized by guest applications while they are executed.
VMware: full virtualization
VMware implements full virtualization in both Type I and Type II hypervisors.

Full virtualization and binary translation
The x86 architecture design does not satisfy the first theorem of virtualization, since the set of sensitive instructions is not a subset of the privileged instructions. This causes a different behavior when such instructions are not executed in Ring 0, which is the normal case in a virtualization scenario, where the guest OS is run in Ring 1. Generally, a trap is generated, and the way it is managed differentiates the solutions in which virtualization is implemented for x86 hardware. In the case of dynamic binary translation, the trap triggers the translation of the offending instructions into an equivalent set of instructions that achieves the same goal without generating exceptions. Moreover, to improve performance, the equivalent set of instructions is cached so that translation is no longer necessary for further occurrences of the same instructions.

Virtualization solutions
VMware is a pioneer in virtualization technology and offers a collection of virtualization solutions covering the entire range of the market, from desktop computing to enterprise computing and infrastructure virtualization:
1. End-user (desktop) virtualization
2. Server virtualization

End-user (desktop) virtualization
VMware supports virtualization of operating system environments and single applications on end-user computers. The first option is the most popular and allows installing a different operating system and applications in an environment completely isolated from the hosting operating system. The virtualization environment is created by an application installed on the host operating system, which provides guest operating systems with full virtualization of the underlying hardware. This is done by installing a specific driver in the host operating system that provides two main services:
- It deploys a virtual machine manager that can run in privileged mode.
- It provides hooks for the VMware application to process specific I/O requests, eventually relaying such requests to the host operating system via system calls.

Server virtualization
VMware provided solutions for server virtualization with different approaches over time. Initial support was provided by VMware GSX Server, which replicates the approach used for end-user computers and introduces remote management and scripting capabilities. The architecture is mostly designed to serve the virtualization of Web servers. A daemon process called serverd controls and manages VMware application processes. These applications are then connected to the virtual machine instances by means of the VMware driver installed on the host operating system. Virtual machine instances are managed by the VMM as described previously. User requests for virtual machine management and provisioning are routed from the Web server through the VMM by means of serverd.

Microsoft Hyper-V

Hypervisor. The hypervisor is the component that directly manages the underlying hardware.

Hypercalls interface. This is the entry point for all the partitions for the execution of sensitive instructions. This is an implementation of the paravirtualization approach already discussed with Xen. This interface is used by drivers in the partitioned operating system to contact the hypervisor using the standard Windows calling convention. The parent partition also uses this interface to create child partitions.

Memory service routines (MSRs). These are the set of functionalities that control memory and its access from partitions. By leveraging hardware-assisted virtualization, the hypervisor uses the I/O memory management unit (IOMMU) to fast-track access to devices from partitions by translating virtual memory addresses.

Advanced programmable interrupt controller (APIC). This component represents the interrupt controller, which manages the signals coming from the underlying hardware when some event occurs (timer expired, I/O ready, exceptions and traps).
Each virtual processor is equipped with a synthetic interrupt controller (SynIC), which constitutes an extension of the local APIC. The hypervisor is responsible for dispatching, when appropriate, the physical interrupts to the synthetic interrupt controllers.

Scheduler. This component schedules the virtual processors to run on the available physical processors. The scheduling is controlled by policies that are set by the parent partition.

Address manager. This component is used to manage the virtual network addresses that are allocated to each guest operating system.

Partition manager. This component is in charge of performing partition creation, finalization, destruction, enumeration, and configuration. Its services are available through the hypercalls interface API previously discussed.

Enlightened I/O and synthetic devices
Enlightened I/O provides an optimized way to perform I/O operations, allowing guest operating systems to leverage an inter-partition communication channel rather than traversing the hardware emulation stack provided by the hypervisor. There are three fundamental components (a toy sketch of the channel appears at the end of this section):
1. VMBus
2. Virtual Service Providers (VSPs)
3. Virtual Service Clients (VSCs)

VMBus implements the channel and defines the protocol for communication between partitions. VSPs are kernel-level drivers that are deployed in the parent partition and provide access to the corresponding hardware devices. These interact with VSCs, which represent the virtual device drivers seen by the guest operating systems in the child partitions. Operating systems supported by Hyper-V utilize this preferred communication channel to perform I/O for storage, networking, graphics, and input subsystems. This also results in enhanced performance in child-to-child I/O as a result of virtual networks between guest operating systems.

Parent partition
1. The parent partition executes the host operating system and implements the virtualization stack that complements the activity of the hypervisor in running guest operating systems.
2. This partition always hosts an instance of Windows Server 2008 R2, which manages the virtualization stack made available to the child partitions.
3. This partition is the only one that directly accesses device drivers; it mediates access to them by the child partitions by hosting the VSPs.
4. The parent partition also manages the creation, execution, and destruction of child partitions.

Child partitions
1. Child partitions are used to execute guest operating systems. These are isolated environments that allow secure and controlled execution of guests.
2. Two types of child partition exist; they differ on whether the guest operating system is supported by Hyper-V or not: enlightened and unenlightened partitions.
3. The former can benefit from Enlightened I/O; the latter are executed by leveraging hardware emulation from the hypervisor.
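The sketch below is a toy model of the VSP/VSC pairing over a shared channel: the client in a child partition posts requests on a queue standing in for the VMBus, and the provider in the parent partition services them against the device it owns. Class and method names are invented; the real VMBus is a kernel-level, shared-memory mechanism, not a Python queue.

```python
# Toy model of Enlightened I/O: a Virtual Service Client (child partition)
# talks over a channel (standing in for the VMBus) to a Virtual Service
# Provider (parent partition) that owns the device. Names are invented.

import threading
from queue import Queue

class VMBusChannel:
    """Stands in for the VMBus: request/response queues between partitions."""
    def __init__(self):
        self.requests: Queue = Queue()
        self.responses: Queue = Queue()

class VirtualServiceProvider:
    """Kernel-level driver in the parent partition with real device access."""
    def __init__(self, channel: VMBusChannel, device: dict):
        self.channel, self.device = channel, device

    def serve_forever(self):
        while True:
            op, block = self.channel.requests.get()
            if op == "read":
                self.channel.responses.put(self.device.get(block))

class VirtualServiceClient:
    """The virtual device driver seen by the guest in a child partition."""
    def __init__(self, channel: VMBusChannel):
        self.channel = channel

    def read(self, block: int):
        # Bypasses any hardware emulation path; goes straight over the channel.
        self.channel.requests.put(("read", block))
        return self.channel.responses.get()

channel = VMBusChannel()
vsp = VirtualServiceProvider(channel, device={0: b"boot sector"})
threading.Thread(target=vsp.serve_forever, daemon=True).start()

vsc = VirtualServiceClient(channel)
print(vsc.read(0))   # b'boot sector', served by the parent partition
```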
