Khandesh College Education Society's
COLLEGE OF ENGINEERING AND MANAGEMENT, JALGAON
Affiliated to Dr. Babasaheb Ambedkar Technological University, Maharashtra
FINAL YEAR B.TECH COMPUTER ENGINEERING
CLOUD COMPUTING (Code: BTCOE702)

UNIT 1: INTRODUCTION TO CLOUD

Topics covered:
Introduction to Cloud: Cloud Computing at a Glance; The Vision of Cloud Computing; Defining a Cloud; A Closer Look; Cloud Computing Reference Model; Characteristics and Benefits; Challenges Ahead; Historical Developments.
Virtualization: Introduction; Characteristics of Virtualized Environments; Taxonomy of Virtualization Techniques; Virtualization and Cloud Computing; Pros and Cons of Virtualization Technology; Examples: VMware and Microsoft Hyper-V.

Introduction to Cloud: Cloud Computing at a Glance

Cloud computing provides a means of accessing applications as utilities over the Internet. It allows us to create, configure, and customize applications online.

What is Cloud?
The term "cloud" refers to a network or the Internet; in other words, the cloud is something present at a remote location. A cloud can provide services over public and private networks (WAN, LAN, or VPN). Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.

What is Cloud Computing?
Cloud computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications. Cloud computing offers platform independence, as software is not required to be installed locally on the PC. Hence, cloud computing makes business applications mobile and collaborative.

Basic Concepts
Certain services and models work behind the scenes to make cloud computing feasible and accessible to end users.
The working models for cloud computing are:
1. Deployment models
2. Service models

Deployment Models
Deployment models define the type of access to the cloud, i.e., how the cloud is located. A cloud can have any of four types of access: public, private, hybrid, and community.

Public Cloud: allows systems and services to be easily accessible to the general public. A public cloud may be less secure because of its openness.
Private Cloud: allows systems and services to be accessible within an organization. It is more secure because of its private nature.
Community Cloud: allows systems and services to be accessible to a group of organizations.
Hybrid Cloud: a mixture of public and private clouds, in which critical activities are performed using the private cloud while non-critical activities are performed using the public cloud.

Service Models
Cloud computing is based on service models, categorized into three basic models:
1. Infrastructure-as-a-Service (IaaS)
2. Platform-as-a-Service (PaaS)
3. Software-as-a-Service (SaaS)

Anything-as-a-Service (XaaS) is yet another service model, which includes Network-as-a-Service, Business-as-a-Service, Identity-as-a-Service, Database-as-a-Service, and Strategy-as-a-Service.

IaaS is the most basic level of service. Each service model inherits the security and management mechanisms of the underlying model, as shown in the following diagram:

Infrastructure-as-a-Service (IaaS): provides access to fundamental resources such as physical machines, virtual machines, and virtual storage.
Platform-as-a-Service (PaaS): provides the runtime environment for applications, along with development and deployment tools.
Software-as-a-Service (SaaS): allows end users to use software applications as a service.
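The division of responsibility across the three service models can be sketched as a small lookup table. This is an illustrative simplification, not tied to any particular provider; the layer names below are a common textbook decomposition:

```python
# Illustrative responsibility matrix for the three service models.
# In IaaS the consumer manages the most layers; in SaaS the provider
# manages everything. Layer names are a common simplification.
LAYERS = ["application", "runtime", "operating system",
          "virtualization", "servers", "storage", "networking"]

# Index of the first layer managed by the PROVIDER; layers above it
# (lower index) are managed by the consumer.
PROVIDER_BOUNDARY = {"IaaS": 3, "PaaS": 1, "SaaS": 0}

def managed_by(model: str, layer: str) -> str:
    """Return 'consumer' or 'provider' for a service model and layer."""
    boundary = PROVIDER_BOUNDARY[model]
    return "provider" if LAYERS.index(layer) >= boundary else "consumer"

if __name__ == "__main__":
    print(managed_by("IaaS", "operating system"))  # consumer
    print(managed_by("PaaS", "application"))       # consumer
    print(managed_by("SaaS", "application"))       # provider
```

This also makes the "inheritance" point above concrete: moving from IaaS to PaaS to SaaS simply moves the provider/consumer boundary upward through the same stack of layers.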
History of Cloud Computing

The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones and from software to services. The following diagram explains the evolution of cloud computing:

(Diagram: Cloud Computing History)

Vision of Cloud Computing

In the simplest terms, cloud computing means storing and accessing data and programs on remote servers hosted on the Internet instead of on a computer's hard drive or a local server. Cloud computing is also referred to as Internet-based computing.

The vision of cloud computing includes the following points:
- Cloud computing provides the ability to provision virtual hardware, runtime environments, and services to anyone willing to pay for them, for as long as they are needed.
- The whole computing system is transformed into a collection of utilities that can be provisioned and composed together to deploy systems in hours rather than days, with no maintenance cost.
- The long-term vision of cloud computing is that IT services are traded as utilities in an open market, without technological and legal barriers.
- In the future, it will be possible to find a solution that matches our requirements by simply entering our request in a global digital market that trades cloud computing services. The existence of such a market will enable the automation of the discovery process and its integration into existing software systems.
- A global platform for trading cloud services will also help service providers increase their revenue. A cloud provider can even become a consumer of a competitor's service in order to fulfill its promises to its own customers.
Defining a Cloud

"Cloud computing is a virtualization-based technology that allows us to create, configure, and customize applications via an Internet connection. The cloud technology includes a development platform, hard disk, software application, and database."

Cloud Computing Reference Model

What is the Cloud Computing Reference Model?
The cloud computing reference model is an abstract model that divides a cloud computing environment into abstraction layers and cross-layer functions in order to characterize and standardize its functions. It divides cloud computing activities and functions into three cross-layer functions and five logical layers. Each layer describes things that might be present in a cloud computing environment, such as computing systems, networking, storage equipment, virtualization software, security measures, and control and management software, and the model also explains the connections between these elements. The five layers are the physical layer, virtual layer, control layer, service orchestration layer, and service layer.

The cloud computing reference model is divided into three major service models:
1. Software as a Service (SaaS)
2. Platform as a Service (PaaS)
3. Infrastructure as a Service (IaaS)

Cloud Computing Reference Model Overview
IaaS, PaaS, and SaaS are the three most prevalent cloud delivery models, and together they have been widely adopted and formalized.
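The five logical layers and three cross-layer functions described above can be captured in a small data model. Note that this section counts three cross-layer functions without naming them; the names used below (business continuity, security, service management) follow a common formulation of this layered model and are an assumption here:

```python
# The five logical layers of the cloud reference model, bottom-up,
# with the kind of component found at each layer (paraphrased from
# the description in this section).
LOGICAL_LAYERS = [
    ("physical", "compute, storage, and network hardware"),
    ("virtual", "hypervisors and virtual resources carved from hardware"),
    ("control", "provisioning, monitoring, and resource management"),
    ("service orchestration", "workflows that assemble resources into services"),
    ("service", "the IaaS/PaaS/SaaS catalog exposed to consumers"),
]

# Cross-layer functions span all five layers rather than living in one.
# These three names are a common convention, assumed here for illustration.
CROSS_LAYER_FUNCTIONS = ["business continuity", "security", "service management"]

def layer_below(name):
    """Return the layer immediately beneath the named layer, if any."""
    names = [n for n, _ in LOGICAL_LAYERS]
    i = names.index(name)
    return names[i - 1] if i > 0 else None

print(layer_below("control"))   # virtual
print(layer_below("physical"))  # None
```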
A cloud delivery model is a specific, preconfigured combination of IT resources made available by a cloud service provider. However, the functionality and the degree of administrative control that each of the three delivery models offers to cloud users varies. These abstraction layers can also be considered a tiered architecture, where services from one layer can be combined with services from another; for example, IaaS can supply the infrastructure on which services from a higher layer are built.

Let us look at the layers of the cloud computing reference model.

1. SaaS
Software as a Service (SaaS) is a form of application delivery that relieves users of the burden of software maintenance while making development and testing easier for service providers. Applications sit at the top layer of the cloud delivery model, and end customers access the services of this tier through web portals. Because online software services provide the same functionality as locally installed programs, consumers are rapidly switching away from the latter. Today, an ILMS and other application software can be accessed via the web as a service. For data access, collaboration, editing, storage, and document sharing, SaaS is unquestionably a crucial service. E-mail in a web browser is the most well-known and widely used example of SaaS, but SaaS applications are becoming steadily more collaborative and advanced.

Features of the SaaS layer:
- The cloud provider has full control over the software applications delivered as services.
- The provider likewise controls the implementation and management of those services.
- The consumer has limited control over the implementation, typically confined to application configuration and their own data.

2. PaaS
Platform as a Service (PaaS) offers a high level of abstraction to make a cloud readily programmable, going beyond infrastructure-oriented clouds that offer only basic compute and storage capabilities.
Developers can construct and deploy apps on a cloud platform without needing to know how many processors or how much memory their applications will use. Google App Engine, for instance, is a PaaS offering that provides a scalable environment for creating and hosting web applications.

Features of the PaaS layer:
- The cloud provider has full control over the provisioning of cloud services to consumers.
- The cloud consumer has selective control, based on the resources they need or have opted for, over the application server, database, or middleware.
- Consumers get environments in which they can develop their applications or databases. These environments are usually highly visual and easy to use.
- Provides options for scalability and security of the user's resources.
- Provides services to create workflows and websites.
- Provides services to connect the user's cloud platform to other, external platforms.

3. IaaS
Infrastructure as a Service (IaaS) offers storage and compute resources that developers and IT organizations use to deliver custom business solutions. IaaS delivers computer hardware (servers, networking technology, storage, and data-center space) as a service. It may also include the delivery of operating systems and the virtualization technology used to manage the resources. The key point is that IaaS customers rent computing resources instead of buying and installing them in their own data centers. The service is typically paid for on a usage basis, and it may include dynamic scaling, so that customers who need more resources than expected can get them immediately.

Control in the IaaS layer is as follows:
- The consumer has full or partial control over the cloud infrastructure: servers, storage, and databases.
- The consumer controls the implementation and maintenance of the virtual machines.
- The consumer can choose from already-available VM images with pre-installed operating systems.
- The cloud provider has full control over the data centers and the other hardware in them. The provider can scale resources based on usage and can replicate data worldwide so that it can be accessed from anywhere as quickly as possible.

You can learn about these layers in depth in an AWS Certified Cloud Practitioner course.

Types of Cloud Computing Reference Model

Various types of cloud computing reference models are used, depending on the requirements of consumers. The most important is the NIST cloud computing reference model. The National Institute of Standards and Technology (NIST) is a US government (USG) agency responsible for the adoption and development of cloud computing standards.

The principles of the NIST cloud computing reference architecture are:
1. Create a vendor-neutral architecture that adheres to the NIST standard.
2. Create a solution that does not inhibit innovation by mandating a particular technological solution.
3. Provide characteristics such as elasticity, self-service, and pooling of resources.

The service models involved in this architecture are:
1. Software as a Service (SaaS)
2. Platform as a Service (PaaS)
3. Infrastructure as a Service (IaaS)

NIST cloud computing also defines four deployment models:
1. Public: cloud infrastructure and resources are made available to the public over a public network. These clouds are generally owned by companies that sell cloud services.
2. Private: cloud infrastructure and resources are accessible only to a single cloud consumer. These clouds are generally owned by the consumer organization itself or by a third party.
3. Community: a group of cloud consumers shares cloud infrastructure and resources because they have common goals and policies.
These clouds are owned by the participating organizations or a third party.
4. Hybrid: a mixture of deployment models (public, private, or community), enabling the exchange of data and applications between them.

Examples of cloud computing reference models apart from NIST:
1. IBM architecture
2. Oracle architecture
3. HP architecture
4. Cisco reference architecture

Major Actors of the Cloud Computing Reference Model

There are five major actors in the NIST cloud computing reference architecture:
1. Cloud Consumer
2. Cloud Provider
3. Cloud Carrier
4. Cloud Auditor
5. Cloud Broker

1. Cloud Consumer
The cloud consumer is the end user that a cloud computing service is designed to support: an individual or organization that maintains a business relationship with a cloud provider and uses its services. A cloud consumer browses a cloud provider's service catalog, makes the appropriate service request, enters into a service agreement with the provider, and then uses the service. The consumer may be billed for the service, in which case payment arrangements must be made, and should have a cloud Service Level Agreement (SLA) in place.

2. Cloud Provider
A cloud provider is any individual, group, or other entity responsible for making a service available to cloud consumers. A cloud provider builds the requested software, platform, and infrastructure services, manages the technical infrastructure needed to supply them, provisions the services at agreed-upon service levels, and safeguards their security and privacy. For infrastructure services, the provider implements the cloud software that makes computing resources accessible to consumers through service interfaces and virtual network interfaces that support resource abstraction.

3. Cloud Carrier
A cloud carrier serves as an intermediary between cloud providers and consumers, providing connectivity and transport of cloud services. Consumers access the cloud through the network, telecommunication, and other access equipment provided by cloud carriers, for instance through network access devices such as laptops, mobile phones, PCs, and mobile Internet devices (MIDs). Network and telecommunication carriers typically handle the distribution of cloud services, while a transport agent is a business that arranges the physical delivery of storage media such as high-capacity hard drives. Note that a cloud provider will establish SLAs with a cloud carrier to provide services at a level consistent with the SLAs it offers to cloud consumers, and may require the carrier to provide dedicated and encrypted connections between consumers and providers.

4. Cloud Auditor
A cloud auditor performs an independent assessment of cloud services, information system operations, performance, and the security of a cloud computing implementation. A cloud auditor can evaluate a cloud provider's services with respect to performance, SLA compliance, privacy impact, and security controls. Security controls are the management, operational, and technical safeguards or countermeasures employed within an organizational information system to protect the confidentiality, integrity, and availability of the system and its data. For a security audit, a cloud auditor can assess the information system's security controls to determine how well they are implemented, whether they operate as intended, and whether they achieve the desired outcome with respect to the system's security requirements. Verifying compliance with regulation and security policy should be part of the security audit.

5. Cloud Broker
A cloud broker is an organization that manages the use, performance, and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers. As cloud computing evolves, the integration of cloud services may become too complex for consumers to manage themselves; instead of contacting a provider directly, a consumer may request cloud services through a broker. A cloud broker offers a single point of access for managing multiple cloud services. What distinguishes a cloud broker from a cloud service provider is the ability to offer a single consistent interface to multiple differing providers, whether for business or technical purposes.

Cloud brokers provide services in three categories:
- Intermediation: a broker enhances a given service by improving a specific capability and providing value-added services to consumers, e.g., identity management, performance reporting, or enhanced security.
- Aggregation: a broker combines and integrates multiple services into one or more new services. The broker provides data and service integration and ensures the secure movement of data between the consumer and the various providers.
- Arbitrage: similar to aggregation, except that the services being aggregated are not fixed; service arbitrage means the broker has the flexibility to choose services from multiple providers.

Interactions Between Actors in the Cloud Security Reference Model

1. Instead of contacting a cloud provider directly, a cloud consumer may request service through a cloud broker. The broker may combine several services into a new service or enhance an existing one. In this case, the consumer interacts directly with the broker and may be unaware of the actual cloud providers.
2. A cloud auditor conducts an independent assessment of the operation and security of a cloud service implementation. The audit may involve interactions with both the cloud consumer and the cloud provider.
3. Cloud carriers provide the connectivity and transport of cloud services from providers to consumers. Figure 4 shows how a cloud provider arranges and participates in two distinct SLAs, one with a cloud carrier (e.g., SLA2) and one with a cloud consumer (e.g., SLA1). To ensure that cloud services are delivered at a level consistent with its contractual obligations to consumers, the provider negotiates SLAs with the carrier and may ask for dedicated, encrypted connections. In this situation, the provider may specify its requirements on functionality, capability, and flexibility in SLA2 in order to meet the fundamental requirements of SLA1.

Security Reference Model in Cloud Computing

The formal model for the NIST Cloud Computing Security Reference Architecture is given in NIST SP 500-299: a connected collection of security components derived from the CSA TCI Reference Architecture and the NIST Cloud Computing Reference Architecture, together with a methodology for using the formal model and the security components to orchestrate a secure cloud ecosystem. The cloud security reference model is agnostic to the deployment model, and its methodology can equally be applied to private, community, or hybrid clouds. It comprises a formal model, a collection of security components, and a methodology for applying a cloud-adapted Risk Management Framework. Because a public cloud deployment best supports illustrative examples of all the NCC-SRA security components and security considerations, the NIST document uses it to describe the methodology for illustration purposes.
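The SLA1/SLA2 relationship described above has a simple quantitative consequence: the availability a provider can promise a consumer (SLA1) is bounded by the product of its own availability and the carrier's (SLA2), since both must be up for the service to be reachable. A rough sketch, with illustrative numbers that are not drawn from any real SLA:

```python
def end_to_end_availability(*component_availabilities: float) -> float:
    """Availability of a serial chain of independent components:
    every component must be up for the end-to-end service to be up."""
    result = 1.0
    for a in component_availabilities:
        result *= a
    return result

def monthly_downtime_minutes(availability: float,
                             month_minutes: float = 30 * 24 * 60) -> float:
    """Expected downtime per 30-day month implied by an availability figure."""
    return (1.0 - availability) * month_minutes

# Invented figures: provider stack at 99.95%, carrier link at 99.9%.
combined = end_to_end_availability(0.9995, 0.999)
print(f"end-to-end availability: {combined:.4%}")
print(f"downtime per month: {monthly_downtime_minutes(combined):.1f} min")
```

This is why a provider may demand a stricter SLA2 from the carrier than the SLA1 it sells: any slack in the carrier's availability is subtracted directly from what the consumer experiences.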
The cloud security reference model introduces a risk-based methodology to establish each cloud actor's responsibility for implementing particular controls throughout the life cycle of the cloud ecosystem. The security components are examined for each instance of the cloud ecosystem to determine the degree to which each actor participates in implementing them. The main goal of the document is to demystify the process of describing, identifying, categorizing, analyzing, and selecting cloud-based services for cloud consumers who are trying to determine which cloud service offering best addresses their cloud computing needs and supports their business- and mission-critical processes and services in the most secure and effective way.

Characteristics of Cloud Computing

Cloud computing has many characteristics; here are a few of them:

On-demand self-service: Cloud computing services do not require human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
Broad network access: Computing services are provided over standard networks to heterogeneous devices.
Rapid elasticity: IT resources can scale out and in quickly, on an as-needed basis. Whenever the user requires a service it is provided, and it is scaled back in as soon as the requirement is over.
Resource pooling: IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in a non-dedicated manner; multiple clients are served from the same physical resource.
Measured service: Resource utilization is tracked per application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for reasons such as billing, monitoring, and effective use of resources.
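The measured-service characteristic boils down to metering usage per tenant and turning it into a bill. A toy sketch, in which the resource names and per-unit rates are invented purely for illustration:

```python
from collections import defaultdict

# Invented per-unit rates: resource -> price in dollars per unit.
RATES = {"vm_hours": 0.05, "gb_storage_days": 0.002, "gb_egress": 0.09}

class Meter:
    """Accumulates per-tenant usage and prices it, pay-per-use style."""

    def __init__(self):
        # tenant -> resource -> accumulated amount
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant: str, resource: str, amount: float) -> None:
        """Add a metered usage sample for one tenant."""
        self.usage[tenant][resource] += amount

    def bill(self, tenant: str) -> float:
        """Price the tenant's accumulated usage at the published rates."""
        return round(sum(RATES[r] * amt
                         for r, amt in self.usage[tenant].items()), 2)

meter = Meter()
meter.record("acme", "vm_hours", 720)          # one VM, 30-day month
meter.record("acme", "gb_storage_days", 3000)  # 100 GB for 30 days
meter.record("acme", "gb_egress", 10)
print(meter.bill("acme"))  # 720*0.05 + 3000*0.002 + 10*0.09 = 42.9
```

Real providers meter far more dimensions, but the shape is the same: both sides can reconstruct the bill from the usage records, which is exactly the transparency this characteristic promises.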
Multi-tenancy: Cloud providers can support multiple tenants (users or organizations) on a single set of shared resources.
Virtualization: Cloud providers use virtualization technology to abstract the underlying hardware resources and present them as logical resources to users.
Resilient computing: Cloud services are typically designed with redundancy and fault tolerance in mind, which ensures high availability and reliability.
Flexible pricing models: Cloud providers offer a variety of pricing models, including pay-per-use, subscription-based, and spot pricing, allowing users to choose the option that best suits their needs.
Security: Cloud providers invest heavily in security measures to protect their users' data and ensure the privacy of sensitive information.
Automation: Cloud services are often highly automated, allowing users to deploy and manage resources with minimal manual intervention.
Sustainability: Cloud providers are increasingly focused on sustainable practices, such as energy-efficient data centers and the use of renewable energy sources, to reduce their environmental impact.

Benefits of Cloud Computing

Cloud computing benefits organizations in many ways; the benefits are so numerous that it is almost impossible not to consider moving business operations to a cloud-based platform. Yet many organizations still rely on outdated and inefficient processes because they do not understand the benefits. This section breaks down the top 10 benefits of cloud computing for organizations considering adopting a cloud-based system.

Accessibility anywhere, with any device
Employees can access applications and data from each branch or office across various states or countries. The improved accessibility does not just affect employees; clients and customers can also log in to an account and access their information. This ensures everyone has up-to-date information, whether at the office or on the go.
Ability to get rid of most or all hardware and software
With cloud computing, you no longer need your own servers, cables, network switches, backup generators, redundant routers, and so on. Depending on the cloud provider you choose, they can manage all of this for a monthly fee. Reducing expenses is essential in any business model, and every cloud-based platform benefits from this factor alone.

Centralized data security
With cloud computing, data backups are centralized in the cloud provider's data centers, removing the need for individual users or teams to maintain their own backups on-site or off-site. This lowers the risk of data loss should any one backup fail or be destroyed by a disaster: the provider can restore the data from another copy maintained in cloud storage, which is continuously updated with every piece of data added. Teams can also take advantage of cloud security technologies such as data encryption and two-factor authentication for greater privacy than they would have relying on their own equipment at home or in the office. Oracle, for example, uses a security-first cloud architecture with automated protection built in.

Higher performance and availability
By pooling cloud computing resources, you gain greater performance than with your own dedicated server hardware. Cloud computing increases input/output operations per second (IOPS); Oracle Cloud claims as much as 20x the IOPS of Amazon Web Services. Cloud services also offer high availability, with minimal downtime, because they are distributed across multiple cloud facilities. Cloud providers are responsible for updating cloud systems and fixing bugs and security issues in cloud software, transparently to end users.

Quick application deployment
Unpredictable business needs often require cloud computing resources on short notice.
Because cloud applications are readily available, you can deploy them quickly, without needing to procure additional hardware or wait for IT staff to set up servers. In addition, you can choose from a broad range of services that support different types of cloud infrastructure technologies.

Instant business insights
Cloud-based platforms provide a unique opportunity to access data as soon as it is collected. This facilitates better decision-making, as well as insight into what the future may hold for your organization based on predictions from historical data.

Business continuity
In the event of a disaster or other unforeseen circumstances, do you have an effective backup plan? If not, relying on cloud computing services can benefit your organization. Cloud computing offers effectively unlimited data storage and systems that can be activated remotely if necessary to ensure business continuity.

Price-performance and cost savings
Although an initial financial investment is required to implement a cloud strategy, organizations save substantial amounts in the long run because they do not have to maintain expensive hardware or local data centers. Also, since there are no upfront costs to use cloud-based systems, businesses can test them at their own pace before committing. Oracle, for instance, emphasizes price-performance and flexible sizing.

Virtualized computing
Cloud computing suits virtualized computing environments because cloud resources can be allocated instantly to absorb significant increases in demand, greatly reducing the risk of downtime. With cloud computing, your business can expand its capabilities almost effortlessly to meet growing demands without increasing staff or capital expenditures.

Cloud computing is greener
Cloud computing is a greener technology than traditional IT solutions. By moving to the cloud, businesses can reduce their energy consumption and carbon footprint by up to 90%.
Rather than having in-house servers and software, businesses can use cloud-based services to access the same applications and data from any computer or device with an internet connection. This eliminates the need for businesses to purchase and maintain their own IT infrastructure. Challenges of Cloud Computing Cloud computing is a hot topic at the moment, and there is a lot of ambiguity when it comes to managing its features and resources. Technology is evolving, and as companies scale up, their need to use the latest Cloud frameworks also increases. Some of the benefits introduced by cloud solutions include data security, flexibility, efficiency, and high performance. Smoother processes and improved collaboration between enterprises while reducing costs are among its perks. However, the Cloud is not perfect and has its own set of drawbacks when it comes to data management and privacy concerns. Thus, there are various benefits and challenges of cloud computing. The list below discusses some of the key challenges in the adoption of cloud computing. 1. Data Security and Privacy Data security is a major concern when working with Cloud environments. It is one of the major challenges in cloud computing as users have to take accountability for their data, and not all Cloud providers can assure 100% data privacy. Lack of visibility and control tools, no identity access management, data misuse, and Cloud misconfiguration are the common causes behind Cloud privacy leaks. There are also concerns with insecure APIs, malicious insiders, and oversights or neglect in Cloud data management. Solution: Configure network hardware and install the latest software updates to prevent security vulnerabilities. Using firewalls, antivirus, and increasing bandwidth for Cloud data availability are some ways to prevent data security risks. 2. 
Multi-Cloud Environments Common cloud computing issues and challenges with multi-cloud environments are - configuration errors, lack of security patches, data governance, and no granularity. It is difficult to track the security requirements of multi-clouds and apply data management policies across various boards. Solution: Using a multi-cloud data management solution is a good start for enterprises. Not all tools will offer specific security functionalities, and multi-cloud environments grow highly sophisticated and complex. Open-source products like Terraform provide a great deal of control over multi-cloud architectures. 3. Performance Challenges The performance of Cloud computing solutions depends on the vendors who offer these services to clients, and if a Cloud vendor goes down, the business gets affected too. It is one of the major challenges associated with cloud computing. Solution: Sign up with Cloud Service Providers who have real-time SaaS monitoring policies. 4. Interoperability and Flexibility Interoperability is a challenge when you try to move applications between two or multiple Cloud ecosystems. It is one of the challenges faced in cloud computing. Some common issues faced are: Rebuilding application stacks to match the target cloud environment's specifications Handling data encryption during migration Setting up networks in the target cloud for operations Managing apps and services in the target cloud ecosystem Solution: Setting Cloud interoperability and portability standards in organizations before getting to work on projects can help solve this problem. The use of multi-layer authentication and authorization tools is also encouraged for account verifications in public, private, and hybrid cloud ecosystems. 5. High Dependence on Network Lack of sufficient internet bandwidth is a common problem when transferring large volumes of information to and from Cloud data servers. It is one of the various challenges in cloud computin g. 
Data is highly vulnerable, and there is a risk of sudden outages. Enterprises that want to lower hardware costs without sacrificing performance need to ensure there is high bandwidth, which will help prevent business losses from sudden outages.
Solution: Pay for higher bandwidth and focus on improving operational efficiency to address network dependencies.

6. Lack of Knowledge and Expertise
Organizations find it tough to hire the right cloud talent. There is a shortage of professionals with the required qualifications in the industry. Workloads are increasing, and the number of tools on the market keeps growing. Enterprises need good expertise to use these tools and to find out which ones are ideal for them.
Solution: Hire cloud professionals with specializations in DevOps and automation.

7. Reliability and Availability
Unavailability of cloud services and a lack of reliability are two major concerns in these ecosystems. Organizations are forced to seek additional computing resources to keep up with changing business requirements. If a cloud vendor gets hacked, the data of the organizations using its services is compromised.
Solution: Implementing the NIST Framework standards in cloud environments can greatly improve both aspects.

8. Password Security
Account managers often use the same passwords to manage all their cloud accounts. Password management is a critical problem, and it is often found that users resort to weak and reused passwords.
Solution: Use a strong password management solution to secure all your accounts. To further improve security, use Multifactor Authentication (MFA) in addition to a password manager. Good cloud-based password managers alert users to security risks and leaks.

9. Cost Management
Even though cloud service providers (CSPs) offer pay-as-you-go subscriptions for services, the costs can add up. Hidden costs appear in the form of underutilized resources in enterprises.
Solution: Auditing systems regularly and implementing resource-utilization monitoring tools are ways organizations can fix this; it is one of the most effective ways to manage budgets.

10. Lack of Expertise
Cloud computing is a highly competitive field, and many professionals lack the skills and knowledge required to work in the industry. There is also a huge gap between supply and demand for certified individuals, with many job vacancies.
Solution: Companies should retrain their existing IT staff and help them upskill by investing in cloud training programs.

11. Control or Governance
Good IT governance ensures that the right tools are used and assets are implemented according to procedures and agreed-to policies. Lack of governance is a common problem, and companies end up using tools that do not align with their vision. IT teams do not get total control over compliance, risk management, and data-quality checks, and there are many uncertainties when migrating to the cloud from traditional infrastructures.
Solution: Traditional IT processes should be adapted to accommodate cloud migrations.

12. Compliance
Cloud service providers (CSPs) are not always up to date when it comes to having the best data compliance policies. Whenever a user transfers data from internal servers to the cloud, they can run into compliance issues with state laws and regulations.
Solution: The General Data Protection Regulation (GDPR) is expected to streamline compliance for CSPs in the future.

13. Multiple Cloud Management
Enterprises depend on multiple cloud environments due to scaling up and provisioning resources.
Most companies follow a hybrid cloud strategy, and many resort to multi-cloud. The problem is that infrastructures grow increasingly complex and difficult to manage as more cloud providers are added, especially due to technological differences between them.
Solution: Creating strong data management and privacy policies is a starting point for managing multi-cloud environments effectively.

14. Migration
Migration of data to the cloud takes time, and not all organizations are prepared for it. Some report increased downtime during the process, face security issues, or have problems with data formatting and conversions. Cloud migration projects can get expensive and are often harder than anticipated.
Solution: Organizations will have to employ in-house professionals to handle cloud data migration and increase their investment. Experts should analyze cloud computing issues and solutions before investing in the latest platforms and services offered by CSPs.

15. Hybrid-Cloud Complexity
Hybrid-cloud complexity refers to the challenges that arise from mixing computing, storage, and services across private clouds, public clouds (for example, Microsoft Azure and Amazon Web Services), and on-premises infrastructure, all orchestrated on various platforms.
Solution: Using centralized cloud management solutions, increasing automation, and hardening security are good ways to mitigate hybrid-cloud complexity.

Historical Developments

1. In the 1950s, the mainframe and time-sharing were born, introducing the concept of shared computer resources.
2. At this time, the word "cloud" was not yet in use.
3. Cloud computing is widely traced back to Joseph Carl Robnett Licklider and his 1960s work on ARPANET, which aimed to connect people and data from anywhere at any time.
4. In 1969, the first working prototype of ARPANET was launched.
5. In 1970, the term "client-server" came into use.
6. Client-server describes the computing model in which clients access data and applications from a central server.
7. In 1995, cloud imagery started appearing in network diagrams to help non-technical people understand the network.
8. At that time, AT&T had already begun to develop an architecture and system in which data would be located centrally.
9. In 1999, salesforce.com was launched, the first company to make enterprise applications available from a website.
10. In 1998, the search engine Google was launched.
11. In 1999, Netflix launched its subscription service, introducing a new revenue model.
12. Around 2003, Web 2.0 was born, characterized by rich multimedia; users could now generate content.
13. In 2004, Facebook launched, giving users a platform to share content about themselves.
14. In 2006, Amazon launched Amazon Web Services (AWS), giving users a new way to consume computing resources on demand.
15. In 2006, Google CEO Eric Schmidt used the word "cloud" at an industry event.
16. In 2007, Apple launched the iPhone, bringing mobile access to online applications.
17. In 2007, Netflix launched its streaming service, and watching video over the Internet was born.
18. In 2008, the private cloud came into existence.
19. In 2009, browser-based applications such as Google Apps were introduced.
20. In 2010, the hybrid cloud (private + public cloud) came into existence.
21. In 2012, Google launched Google Drive with free cloud storage.
22. Today, cloud adoption is widespread, which makes cloud computing stronger than ever.
23. IT services progressed over the decades through models such as Internet Service Providers (ISPs) and Application Service Providers (ASPs).

Virtualization in Cloud Computing

Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something, such as computer hardware. It was initially developed during the mainframe era.
It involves using specialized software to create a virtual, software-created version of a computing resource rather than the actual version of that resource. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware. In other words, virtualization is one of the main cost-effective, hardware-reducing, and energy-saving techniques used by cloud providers. Virtualization allows sharing of a single physical instance of a resource or an application among multiple customers and organizations at one time. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand. The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization technologies provide a virtual environment not only for executing applications but also for storage, memory, and networking.

Host Machine: The machine on which the virtual machine is built is known as the host machine.
Guest Machine: The virtual machine itself is referred to as the guest machine.

How Virtualization Works in Cloud Computing
Virtualization has a prominent impact on cloud computing. In cloud computing, users store data in the cloud, but with the help of virtualization, users gain the extra benefit of sharing the infrastructure. Cloud vendors take care of the required physical resources, but they charge a significant amount for these services, which affects every user and organization. Virtualization lets users and organizations obtain the services they need from external (third-party) providers instead of maintaining them in-house, which helps reduce costs. This is how virtualization works in cloud computing.
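The core mechanism described above — assigning a logical name to a physical resource and handing out a pointer to it on demand — can be illustrated with a minimal Python sketch. The class and method names here are invented for this example; they are not part of any real hypervisor or cloud API.

```python
# Illustrative sketch only: a virtualization layer that maps logical
# resource names to physical locations and resolves them on demand.

class VirtualStorage:
    def __init__(self):
        self._mapping = {}  # logical name -> physical location

    def attach(self, logical_name, physical_location):
        # Assign a logical name to a physical resource.
        self._mapping[logical_name] = physical_location

    def resolve(self, logical_name):
        # Provide a "pointer" to the physical resource on demand.
        return self._mapping[logical_name]

store = VirtualStorage()
store.attach("tenant-a/data", "/dev/disk3/partition7")
print(store.resolve("tenant-a/data"))  # -> /dev/disk3/partition7
```

Because tenants only ever see the logical name, the provider is free to relocate the physical resource (for load balancing or maintenance) by updating the mapping, without the tenant noticing.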
Benefits of Virtualization
- More flexible and efficient allocation of resources.
- Enhanced development productivity.
- Lower cost of IT infrastructure.
- Remote access and rapid scalability.
- High availability and disaster recovery.
- Pay-per-use of the IT infrastructure, on demand.
- Enables running multiple operating systems.

Drawbacks of Virtualization
High initial investment: The cloud requires a high initial investment, although it ultimately helps reduce company costs.
Learning new infrastructure: As companies shift from their own servers to the cloud, they need highly skilled staff who can work with the cloud easily, which means hiring new staff or training current staff.
Risk of data: Hosting data on third-party resources can put the data at risk of being attacked by a hacker or cracker.

Characteristics of Virtualization
Virtualization is a technology that allows multiple virtual machines to run on a single physical machine. It is a powerful tool that has revolutionized the way we use computers and has become an essential component of modern IT infrastructure. The concept of virtualization has been around for decades, but it has only recently become mainstream as technology has advanced and costs have dropped. In this section, we will explore the characteristics of virtualization and how it is being used today.

Abstracting Physical Resources
One of the most significant characteristics of virtualization is the ability to abstract physical resources. This means that virtual machines can be created that are completely independent of the underlying physical hardware. This allows multiple virtual machines to run on the same physical machine, each with its own operating system and applications. This is known as server virtualization, and it is the most common use of virtualization today.
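The idea of server virtualization — one physical host carving its CPU and memory into independent virtual machines — can be sketched as a simple allocation model. This is a toy illustration, not how a real hypervisor schedules resources, and all names are hypothetical.

```python
# Illustrative sketch: a physical host partitions its fixed CPU and RAM
# among virtual machines, refusing to over-commit in this simple model.

class Host:
    def __init__(self, cpus, ram_gb):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Each VM gets a dedicated slice of the host's physical resources.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Host(cpus=16, ram_gb=64)
host.create_vm("web-server", cpus=4, ram_gb=16)
host.create_vm("db-server", cpus=8, ram_gb=32)
print(host.free_cpus, host.free_ram_gb)  # -> 4 16
```

Real hypervisors typically allow over-commitment (assigning more virtual resources than physically exist) because VMs rarely peak simultaneously; the strict check above just makes the partitioning idea explicit.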
For example, a single physical server can host multiple virtual machines, each running its own operating system and applications. This allows for efficient use of resources, as a single physical machine can be used to run multiple applications, rather than having to purchase and maintain multiple physical servers. Isolation of Resources Another key characteristic of virtualization is the isolation of resources. This means that each virtual machine is isolated from the others, and they cannot access each other's resources. This provides security and stability, as a problem with one virtual machine will not affect the others. For example, a company may use virtualization to run their email server, web server, and database server on the same physical machine. If the email server were to be compromised, the web server and database server would still be protected and continue to function properly. Flexibility Virtualization also provides flexibility in terms of resource allocation. Virtual machines can be easily created, deleted, and modified as needed. This allows for easy scaling, as more resources can be allocated to a virtual machine as needed. It also allows for easier testing and development, as virtual machines can be created to test new software and configurations without affecting the production environment. For example, a company may use virtualization to create a test environment for their new software. They can create a virtual machine with the same specifications as their production environment and test the software without affecting their live systems. Once the software has been tested and is ready for production, the virtual machine can be deleted, and the software can be deployed to the production environment. Portability Virtualization also provides portability, as virtual machines can be easily moved between physical machines. 
This allows for easy disaster recovery, as virtual machines can be quickly moved to a different physical machine in the event of a disaster. It also allows for easy migration between physical machines, as virtual machines can be moved to new hardware without affecting the applications or data. For example, a company may use virtualization to create a disaster recovery plan. They can create a virtual machine that contains all of their important data and applications and store it on a separate physical machine. In the event of a disaster, the virtual machine can be quickly moved to a new physical machine, and the company can continue to operate as normal. Networking Virtualization also provides networking capabilities, as virtual machines can be connected to virtual networks. This allows for easy communication between virtual machines, as well as the ability to connect to physical networks. This allows for easy integration of virtual machines into existing networks and the ability to create isolated networks for specific purposes. For example, a company may use virtualization to create a virtual network for their development team. They can create a virtual network that connects all of their development virtual machines, allowing them to easily communicate and share resources. They can also connect this virtual network to their physical network, allowing the development team to access the internet and other resources. Additionally, this virtual network can be isolated from the rest of the company's network for added security. Snapshots and Backup Virtualization also provides the ability to create snapshots of virtual machines. This allows for easy backup and recovery of virtual machines, as well as the ability to quickly revert to a previous state. This is especially useful for testing and development, as it allows for easy experimentation without the risk of losing data or compromising the production environment. 
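The snapshot-and-revert workflow just described can be sketched in a few lines of Python. This is purely illustrative — a real hypervisor snapshots disk and memory state, not a dictionary — and the class name is invented for this example.

```python
# Illustrative sketch: capture a VM's state before a risky change,
# then restore it if the change misbehaves.
import copy

class VirtualMachine:
    def __init__(self):
        self.state = {"os": "Linux", "packages": ["webapp-1.0"]}
        self._snapshots = []

    def snapshot(self):
        # Deep-copy the current state so later changes do not affect it.
        self._snapshots.append(copy.deepcopy(self.state))

    def revert(self):
        # Restore the most recent snapshot.
        self.state = self._snapshots.pop()

vm = VirtualMachine()
vm.snapshot()
vm.state["packages"].append("webapp-2.0-beta")  # risky update
vm.revert()  # the update misbehaved; roll back
print(vm.state["packages"])  # -> ['webapp-1.0']
```

The deep copy is the essential detail: a shallow copy would share the inner objects, so "reverting" would not actually undo the change.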
For example, a company may use virtualization to test a new software update. They can create a snapshot of their virtual machine before installing the update, and if the update causes any issues, they can easily revert to the previous snapshot. This eliminates the need to manually restore data and configurations, saving both time and resources.

Desktop Virtualization
Desktop virtualization is another form of virtualization that allows multiple virtual desktops to run on a single physical machine. This allows for easy deployment and management of desktops, as well as the ability to access desktops remotely. This is especially useful for companies with a mobile workforce, as it allows employees to access their desktops from any location.

For example, a company may use desktop virtualization to provide remote access for its sales team. The sales team can access their virtual desktops from anywhere, allowing them to work on presentations, access customer data, and communicate with the rest of the team. This eliminates the need for remote employees to carry a laptop or access company data on a personal computer, improving security and productivity.

Taxonomy of Virtualization Techniques
Virtualization covers a wide range of emulation techniques that are applied to different areas of computing. A classification of these techniques helps us better understand their characteristics and uses. The first classification is based on the service or entity that is being emulated. Virtualization is mainly used to emulate execution environments, storage, and networks. Among these categories, execution virtualization constitutes the oldest, most popular, and most developed area; it therefore deserves further investigation and categorization. We can divide execution virtualization techniques into two major categories by considering the type of host they require.
Process-level techniques are implemented on top of an existing operating system, which has full control of the hardware. System-level techniques are implemented directly on hardware and do not require, or require only minimal support from, an existing operating system.

Within these two categories we can list various techniques that offer the guest a different type of virtual computation environment:
- bare hardware
- operating system resources
- low-level programming language
- application libraries

Execution Virtualization
Execution virtualization includes all techniques that aim to emulate an execution environment that is separate from the one hosting the virtualization layer. All of these techniques concentrate on providing support for the execution of programs, whether these are an operating system, a binary specification of a program compiled against an abstract machine model, or an application. Therefore, execution virtualization can be implemented directly on top of the hardware by the operating system, an application, or libraries dynamically or statically linked to an application image.

Hardware-Level Virtualization
Hardware-level virtualization is a virtualization technique that provides an abstract execution environment, in terms of computer hardware, on top of which a guest operating system can run. It is also called system virtualization, since it provides an ISA (instruction set architecture) to virtual machines, which is the representation of the hardware interface of a system.

Hypervisors
A fundamental element of hardware virtualization is the hypervisor, or virtual machine manager (VMM). It recreates a hardware environment in which guest operating systems are installed. There are two major types of hypervisors: Type I and Type II.

Type I hypervisors run directly on top of the hardware. They take the place of the operating system and interact directly with the underlying hardware.
This type of hypervisor is also called a native virtual machine, since it runs natively on the hardware.

Type II hypervisors require the support of an operating system to provide virtualization services. This means they are programs managed by the operating system, which interacts with the hardware on behalf of the guest operating systems. This type of hypervisor is also called a hosted virtual machine, since it is hosted within an operating system.

Hardware Virtualization Techniques

Full virtualization: Full virtualization refers to the ability to run a program, most likely an operating system, directly on top of a virtual machine without any modification, as though it were running on the raw hardware. To make this possible, the virtual machine manager is required to provide a complete emulation of the entire underlying hardware.

Paravirtualization: This is a non-transparent virtualization solution that allows implementing thin virtual machine managers. Paravirtualization techniques expose a software interface to the virtual machine that is slightly modified from the host, and as a consequence, guests need to be modified. The aim of paravirtualization is to provide the capability to demand the execution of performance-critical operations directly on the host.

Partial virtualization: Partial virtualization provides a partial emulation of the underlying hardware, and thus does not allow the complete execution of the guest operating system in complete isolation.
Partial virtualization allows many applications to run transparently, but not all the features of the operating system can be supported, as happens with full virtualization.

Operating System-Level Virtualization
This offers the opportunity to create different and separated execution environments for applications that are managed concurrently. Unlike hardware virtualization, there is no virtual machine manager or hypervisor; the virtualization is done within a single operating system, whose kernel allows for multiple isolated user-space instances.

Programming Language-Level Virtualization
Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed execution, and portability across different platforms and operating systems. The main advantage of programming-level virtual machines, also called process virtual machines, is the ability to provide a uniform execution environment across different platforms. Programs compiled into bytecode can be executed on any operating system and platform for which a virtual machine able to execute that code has been provided.

Application-Level Virtualization
Application-level virtualization is used when there is a desire to virtualize only one application. Application virtualization software allows users to access and use an application from a computer other than the one on which the application is installed.

Other Types of Virtualization
Beyond execution virtualization, other types of virtualization provide an abstract environment to interact with. These mainly cover storage, networking, and client/server interaction.

Storage virtualization: This is a system administration practice that allows decoupling the physical organization of the hardware from its logical representation. Using this technique, users do not have to worry about the specific location of their data, which can be identified using a logical path.
Storage virtualization allows us to harness a wide range of storage facilities and represent them under a single logical file system.

Network virtualization: Network virtualization is the process of logically grouping physical networks and making them operate as a single network or as multiple independent networks called virtual networks. It combines hardware appliances and specific software for the creation and management of a virtual network.

Pros and Cons of Virtualization

Pros of virtualization in cloud computing:

Efficient hardware utilization – With the help of virtualization, hardware is used efficiently by both the user and the cloud service provider. The user needs less physical hardware, which reduces cost. From the service provider's point of view, hardware virtualization decreases the amount of physical hardware needed to serve users. Before virtualization, companies and organizations had to set up their own servers, which required extra space, engineers to monitor performance, and extra hardware cost. With virtualization, these limitations are removed by cloud vendors, who provide the equivalent services without the customer setting up any physical hardware.

Increased availability – One of the main benefits of virtualization is that it provides advanced features which allow virtual instances to be available at all times. It also has the capability to move a virtual instance from one virtual server to another, which is a tedious and risky task in a purely server-based system. During migration of data from one server to another, its safety is ensured. Also, we can access information from any location, at any time, from any device.

Efficient and easy disaster recovery – With the help of virtualization, data recovery, backup, and duplication become very easy.
With traditional methods, if a server system is damaged in a disaster, there is little guarantee of data recovery. With virtualization tools, real-time data backup, recovery, and mirroring become easy tasks, bringing data loss close to zero.

Virtualization saves energy – Moving from physical servers to virtual servers reduces the number of servers, so monthly power and cooling costs decrease, which saves money as well. As the cooling requirement drops, so does the carbon footprint of the devices, resulting in a cleaner, less polluting environment.

Quick and easy setup – With traditional methods, setting up physical systems and servers is very time-consuming: hardware must be purchased in bulk, shipped, set up, and loaded with the required software. With virtualization, the entire process is completed in far less time, resulting in a productive setup.

Cloud migration becomes easy – Many companies that have already invested heavily in servers hesitate to shift to the cloud. But it is often more cost-effective to shift to cloud services, because all the data present on their servers can be migrated to a cloud server, saving on maintenance charges, power consumption, cooling costs, and the cost of server-maintenance engineers.

Cons of virtualization:

Data can be at risk – Working on virtual instances on shared resources means that our data is hosted on third-party resources, which puts it in a vulnerable position. A hacker can attack the data or attempt unauthorized access. Without a security solution, the data is under threat.

Learning new infrastructure – As organizations shift from their own servers to the cloud, they require skilled staff who can work with the cloud easily.
They must either hire new IT staff with the relevant skills or provide training, which increases the company's costs.

High initial investment – It is true that virtualization reduces companies' costs, but it is also true that the cloud requires a high initial investment. Cloud providers offer numerous services, and an inexperienced organization setting up in the cloud may purchase unnecessary services it does not actually need.

Technology Examples: VMware and Microsoft Hyper-V

What Is Hyper-V and How Does It Work?
Microsoft's hardware virtualization product, Hyper-V, enables you to create and run a software version of a computer, called a virtual machine (VM). Hyper-V can host multiple virtual machines, each with its own operating system (OS), on one computer, allowing the VMs to run multiple OSes alongside each other. This eliminates the need to dedicate a single machine to a specific OS. Microsoft Hyper-V is a Type 1 hypervisor. In Hyper-V, there is a parent partition and any number of child partitions. The host OS runs in the parent partition. Each child partition is a VM: a complete virtual computer with a guest OS (which need not be from Microsoft) and programs running on it. The VMs use the same hardware resources as the host. A single Hyper-V host can have many VMs created on it.

Why Is Hyper-V Used?
Hyper-V allows you to use your physical hardware more effectively by running multiple workloads on a single machine. It lets you use fewer physical servers, thereby reducing hardware costs and saving space, power, and cooling costs. With Hyper-V, you can set up and scale your own private cloud environment. Many organizations use Hyper-V to centralize the management of server farms. This allows them to control their VMs efficiently and reduce the time spent on IT infrastructure management.

What Does Hyper-V Consist Of?
Hyper-V includes multiple components that make up the Microsoft virtualization platform.
These include:
- Windows hypervisor
- Hyper-V Virtual Machine Management Service
- Virtualization WMI provider
- Virtual machine bus (VMbus)
- Virtualization service provider (VSP)
- Virtual infrastructure driver (VID)

Additional Hyper-V tools that need to be installed include:
- Hyper-V Manager
- Hyper-V module for Windows PowerShell
- Virtual Machine Connection (VMConnect)
- Windows PowerShell Direct

Hyper-V is available in three versions:
- Hyper-V on Windows 10
- Hyper-V Server
- Hyper-V on Windows Server

What Are the Benefits of Hyper-V?
There are many benefits of Hyper-V, a few of them being:

High scalability and flexibility – By installing Hyper-V in a private cloud environment, organizations can be more flexible with their on-demand IT services and expand when required. Hyper-V puts existing hardware to maximum use, ultimately reducing costs and increasing efficiency.

Minimized downtime – Having multiple instances of virtual servers minimizes the impact of sudden downtime, which means system availability increases and companies can improve business continuity.

Improved security – Hyper-V safeguards VMs from malware and unauthorized access, making your IT environment and your data more secure.

What Is VMware?
VMware is also a hypervisor-based virtualization technology that allows you to run multiple virtual machines on the same physical hardware. Each VM can run its own OS and applications. As a leader in virtualization software, VMware allows multiple copies of the same operating system, or several different operating systems, to run on the same x86-based machine.

Hyper-V vs. VMware: What Are the Differences?
Hyper-V and VMware each have their own advantages and disadvantages, and choosing between the two depends on your specific business requirements. Let's take a look at a few of the noticeable differences when it comes to their product maturity, complexity, and pricing.
Operating system support: Hyper-V supports Windows, Linux, and FreeBSD guest operating systems, while VMware supports Windows, Linux, Unix, and macOS.

Pricing: Hyper-V's pricing depends on the number of cores on the host and may be preferred by smaller companies, while VMware charges per processor, a pricing structure that might appeal to larger organizations.

Clustering and storage: Hyper-V's Cluster Shared Volumes are somewhat more complex and more difficult to use than VMware's storage deployment system; VMware's Virtual Machine File System (VMFS) holds a slight edge when it comes to clustering.

Memory management: Hyper-V uses a single memory technique called "Dynamic Memory," through which a virtual machine's memory can be added or released back to the Hyper-V host. VMware implements a variety of techniques, such as memory compression and transparent page sharing, to ensure that RAM use in the VM is optimized; this is a more complex system than Hyper-V's.
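The "Dynamic Memory" behavior described for Hyper-V — growing or shrinking a VM's memory between configured bounds and releasing unneeded memory back to the host — can be modeled with a toy sketch. This is not Hyper-V's actual algorithm; the class and parameter names are invented for illustration only.

```python
# Toy model of a Dynamic Memory-style scheme: the host assigns a VM enough
# memory for its current demand, clamped between a minimum and a maximum.

class DynamicMemoryVM:
    def __init__(self, startup_mb, minimum_mb, maximum_mb):
        self.assigned_mb = startup_mb
        self.minimum_mb = minimum_mb
        self.maximum_mb = maximum_mb

    def adjust(self, demand_mb):
        # Grow toward demand up to the cap; shrink when demand falls,
        # releasing memory back to the host, but never below the floor.
        self.assigned_mb = max(self.minimum_mb,
                               min(demand_mb, self.maximum_mb))
        return self.assigned_mb

vm = DynamicMemoryVM(startup_mb=1024, minimum_mb=512, maximum_mb=4096)
print(vm.adjust(3000))  # demand spike -> 3000
print(vm.adjust(256))   # idle: memory released, clamped to the 512 floor
```

The clamp captures the trade-off the comparison table hints at: a single simple policy per VM (Hyper-V's approach) versus a collection of host-wide optimizations such as compression and page sharing (VMware's approach).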