Unit 1 Mastering Cloud Computing PDF

Summary

This document provides an introduction to cloud computing and describes its core features. It details the concept of dynamic provisioning and pay-per-use access to computing resources, along with insights into its evolution from previous computing paradigms.

Full Transcript

1 Introduction

Computing is being transformed into a model consisting of services that are commoditized and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. In such a model, users access services based on their requirements, regardless of where the services are hosted. Several computing paradigms, such as Grid computing, have promised to deliver this utility computing vision. Cloud computing is the most recent emerging paradigm promising to turn the vision of “computing utilities” into a reality. Cloud computing is a technological advancement that focuses on the way in which we design computing systems, develop applications, and leverage existing services for building software. It is based on the concept of dynamic provisioning, which is applied not only to services but also to compute capability, storage, networking, and Information Technology (IT) infrastructure in general. Resources are made available through the Internet and offered on a pay-per-use basis by Cloud computing vendors. Today, anyone with a credit card can subscribe to Cloud services, deploy and configure servers for an application in hours, grow or shrink the infrastructure serving the application according to demand, and pay only for the time these resources have actually been used. This chapter provides a brief overview of the Cloud computing phenomenon by presenting its vision, discussing its core features, and tracking the technological developments that have made it possible. The chapter also introduces some of its key technologies, as well as some insights into the development of Cloud computing environments.
1.1 CLOUD COMPUTING AT A GLANCE

In 1969, Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET), which seeded the Internet, said:

“As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of ‘computer utilities’ which, like present electric and telephone utilities, will service individual homes and offices across the country.”

This vision of computing utilities based on a service provisioning model anticipated the massive transformation of the entire computing industry in the 21st century, whereby computing services would be readily available on demand, just as other utility services such as water, electricity, telephone, and gas are available in today’s society. Similarly, users (consumers) need to pay providers only when they access the computing services. In addition, consumers no longer need to invest heavily, or encounter difficulties, in building and maintaining complex IT infrastructure. In such a model, users access services based on their requirements without regard to where the services are hosted. This model has been referred to as utility computing or, more recently (since 2007), as Cloud computing. The latter term often denotes the infrastructure as a “Cloud” from which businesses and users can access applications as services from anywhere in the world, on demand. Hence, Cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services, supported by state-of-the-art data centers employing virtualization technologies for consolidation and effective utilization of resources. Cloud computing allows renting infrastructure, runtime environments, and services on a pay-per-use basis. This principle finds several practical applications and consequently presents a different image of Cloud computing to different people.
Chief information and technology officers of large enterprises see opportunities for scaling their infrastructure on demand and sizing it according to their business needs. End users leveraging Cloud computing services can access their documents and data anytime, anywhere, and from any device connected to the Internet. Many other points of view exist.¹ One of the most widespread views of Cloud computing can be summarized as follows:

“I don’t care where my servers are, who manages them, where my documents are stored, or where my applications are hosted. I just want them always available and to access them from any device connected through the Internet. And I am willing to pay for this service for as long as I need it.”

The concept expressed above has strong similarities with the way we make use of other services, such as water and electricity. In other words, Cloud computing turns IT services into utilities. Such a delivery model is made possible by the effective composition of several technologies, which have reached the appropriate maturity level. Web 2.0 technologies play a central role in making Cloud computing an attractive opportunity for building computing systems. They have transformed the Internet into a rich application and service delivery platform, mature enough to serve complex needs. Service orientation allows Cloud computing to deliver its capabilities with familiar abstractions, while virtualization confers on Cloud computing the necessary degree of customization, control, and flexibility for building production and enterprise systems. Besides being an extremely flexible environment for building new systems and applications, Cloud computing also provides an opportunity for integrating additional capacity, or new features, into existing systems. The use of dynamically provisioned IT resources is a more attractive option than buying additional infrastructure and software, whose sizing can be difficult to estimate and whose need may be limited in time.
This is one of the most important advantages of Cloud computing, and it has made Cloud computing a popular phenomenon. With the wide deployment of Cloud computing systems, the foundation technologies and systems enabling them are becoming consolidated and standardized. This is a fundamental step in the realization of the long-term vision for Cloud computing, which provides an open environment where computing, storage, and other services are traded as computing utilities.

1.1.1 The Vision of Cloud Computing

Cloud computing allows anyone with a credit card to provision virtual hardware, runtime environments, and services. These are used for as long as needed, and no upfront commitments are required. The entire stack of a computing system is transformed into a collection of utilities, which can be provisioned and composed together to deploy systems in hours, rather than days, and with virtually no maintenance costs. This opportunity, initially met with skepticism, has now become a practice across several application domains and business sectors (see Fig. 1.1). The demand has fast-tracked technical development and enriched the set of services offered, which have also become more sophisticated and cheaper. Despite this evolution, the usage of Cloud computing is often limited to a single service at a time or, more commonly, a set of related services offered by the same vendor.

¹ An interesting perspective on how Cloud computing evokes different things to different people can be found in a series of interviews made by Rob Boothby, vice president and platform evangelist of Joyent, at the Web 2.0 Expo in May 2007. CEOs, CTOs, founders of IT companies, and IT analysts were interviewed, and all of them gave their personal perception of the phenomenon, which at that time was starting to spread. The video of the interviews can be found on YouTube at the following link: http://www.youtube.com/watch?v=6PNuQHUiV3Q.
The lack of effective standardization efforts has made it difficult to move hosted services from one vendor to another. The long-term vision of Cloud computing is that IT services will be traded as utilities in an open market, without technological and legal barriers. In this Cloud marketplace, Cloud service providers and consumers, trading Cloud services as utilities, play a central role. Many of the technological elements contributing to this vision already exist. Different stakeholders leverage Clouds for a variety of services. The need for ubiquitous storage and compute power on demand is the most common reason to consider Cloud computing. A scalable runtime for applications is an attractive option for application and system developers that do not have infrastructure or cannot afford any further expansion of the existing one. The capability of Web-based access to documents, and their processing using sophisticated applications, is one of the appealing factors for end users.

[Fig. 1.1. Cloud-Computing Vision: stakeholders around a Global Cloud Marketplace express needs such as “I have a lot of infrastructure that I want to rent”, “I need to grow my infrastructure, but I do not know for how long”, “I have a surplus of infrastructure that I want to make use of”, “I cannot invest in infrastructure, I just started my business”, “I have infrastructure and middleware, and I can host applications”, “I want to focus on application logic, and not maintenance and scalability issues”, “I have infrastructure to provide application services”, and “I want to access and edit my documents and photos from everywhere”.]

In all these cases, the discovery of such services is mostly done through human intervention: a person (or a team of people) looks over the Internet to identify offerings that meet his or her needs. In the near future, we imagine that it will be possible to find the solution that matches our needs by simply entering our request into a global digital market that trades Cloud-computing services.
The existence of such a market will enable the automation of the discovery process and its integration into existing software systems, thus allowing users to transparently leverage Cloud resources in their applications and systems. The existence of a global platform for trading Cloud services will also help service providers become more visible, and therefore potentially increase their revenue. A global Cloud market also reduces the barriers between service consumers and providers: it is no longer necessary to belong to only one of these two categories. For example, a Cloud provider might become a consumer of a competitor’s service in order to fulfill its promises to customers. These are all possibilities introduced by the establishment of a global Cloud computing marketplace and by the definition of effective standards for the unified representation of Cloud services, as well as for the interaction among different Cloud technologies. A considerable shift towards Cloud computing has already been registered, and its rapid adoption facilitates its consolidation. Moreover, by concentrating the core capabilities of Cloud computing into large datacenters, it is possible to reduce or remove the need for any technical infrastructure on the service consumer’s side. This approach provides opportunities for optimizing datacenter facilities and fully utilizing their capabilities to serve multiple users. This consolidation model will reduce energy waste and carbon emissions, thus contributing to greener IT on the one hand, and increase revenue on the other.

1.1.2 Defining a Cloud

Cloud computing has become a popular buzzword, and it has been widely used to refer to different technologies, services, and concepts. It is often associated with virtualized infrastructure or hardware on demand, utility computing, IT outsourcing, platform and software as a service, and many other things that are now the focus of the IT industry.
Figure 1.2 depicts the plethora of different notions that come up when defining Cloud computing.

[Fig. 1.2. Cloud Computing Technologies, Concepts, and Ideas: a word cloud of terms associated with Cloud computing, including SaaS, PaaS, IaaS, virtualization, utility computing, elasticity, pay as you go, no capital investments, quality of service, Service Level Agreement, billing, IT outsourcing, cloudbursting, provisioning on demand, scalability, virtual data centers, green computing, privacy and trust, and security.]

The term “Cloud” has historically been used in the telecommunications industry as an abstraction of the network in system diagrams. It then became the symbol of the most popular computer network: the Internet. This meaning also applies to Cloud computing, which refers to an Internet-centric way of doing computing. The Internet plays a fundamental role in Cloud computing, since it represents either the medium or the platform through which many Cloud computing services are delivered and made accessible. This aspect is also reflected in the definition given by Armbrust et al.:

“Cloud computing refers to both the applications delivered as services over the Internet, and the hardware and system software in the datacenters that provide those services.”

This definition describes Cloud computing as a phenomenon touching the entire stack: from the underlying hardware to the high-level software services and applications. It introduces the concept of everything as a service, mostly referred to as XaaS,² where the different components of a system can be delivered, measured, and consequently priced as a service: IT infrastructure, development platforms, databases, and so on. This new approach significantly influences not only the way in which we build software, but also the way in which we deploy it, make it accessible, and design our IT infrastructure, and even the way in which companies allocate the costs for IT needs.
The approach fostered by Cloud computing is global: it covers both the needs of a single user hosting documents in the Cloud and those of a CIO deciding to deploy part of, or the entire, IT infrastructure in a public Cloud. This notion of multiple parties using a shared Cloud computing environment is highlighted in a definition proposed by the American National Institute of Standards and Technology (NIST):

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

Another important aspect of Cloud computing is its utility-oriented approach. More than any other trend in distributed computing, Cloud computing focuses on delivering services with a given pricing model; in most cases, a “pay-per-use” strategy. It makes it possible to access online storage, rent virtual hardware, or use development platforms and pay only for their effective usage, with no or minimal upfront costs. All these operations can be performed and billed simply by entering credit card details and accessing the exposed services through a Web browser. This helps us provide a different and more practical characterization of Cloud computing. According to Reese, we can define three criteria to discriminate whether a service is delivered in the Cloud computing style:

1. The service is accessible via a Web browser (non-proprietary) or a Web services API.
2. Zero capital expenditure is necessary to get started.
3. You pay only for what you use as you use it.

Even though many Cloud computing services are freely available to single users, enterprise-class services are delivered according to a specific pricing scheme.
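Reese's three criteria can be restated as a tiny checklist. The sketch below is purely illustrative: the attribute names and the two example services are hypothetical, not drawn from the text or any real catalog.

```python
# Reese's three criteria, restated as a checklist.
# The attribute names and example services are hypothetical.

def is_cloud_style(service: dict) -> bool:
    """A service is Cloud-style only if it meets all three criteria."""
    return (
        service["web_accessible"]          # 1. browser or Web services API
        and service["upfront_cost"] == 0   # 2. zero capital expenditure
        and service["metered_billing"]     # 3. pay only for what you use
    )

hosted_crm = {"web_accessible": True, "upfront_cost": 0, "metered_billing": True}
boxed_erp = {"web_accessible": False, "upfront_cost": 50_000, "metered_billing": False}

print(is_cloud_style(hosted_crm))  # True
print(is_cloud_style(boxed_erp))   # False
```

Note how the second criterion is the discriminating one for traditional licensed software: even a Web-accessible product fails the test if it requires capital expenditure to get started.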
In this case, users subscribe to the service and establish with the service provider a Service Level Agreement (SLA) defining the quality-of-service parameters under which the service is delivered. The utility-oriented nature of Cloud computing is clearly expressed by Buyya et al.:

“A Cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers.”

² XaaS is an acronym standing for X-as-a-Service, where the letter X can be replaced by almost anything: S for software, P for platform, I for infrastructure, H for hardware, D for database, and so on.

1.1.3 A Closer Look

Cloud computing is helping enterprises, governments, public and private institutions, as well as research organizations shape more effective and demand-driven computing systems. Access to, as well as integration of, Cloud computing resources and systems is now as easy as performing a credit card transaction over the Internet. Practical examples of such systems exist across all market segments: (a) Large enterprises can offload some of their activities to Cloud-based systems. Recently, the New York Times converted its digital library of past editions into a Web-friendly format. This required a considerable amount of computing power for a short period of time. By renting Amazon EC2 and S3 Cloud resources, it performed this task in 36 hours and then relinquished these resources, without any additional costs. (b) Small enterprises and start-ups can afford to translate their ideas into business results more quickly, without excessive upfront costs. Animoto is a company that creates videos out of images, music, and video fragments submitted by users.
The process involves a considerable amount of storage and backend processing required for producing the video, which is finally made available to the user. Animoto does not own a single server and bases its computing infrastructure entirely on Amazon Web Services, which is sized on demand according to the overall workload to be processed. Such workload can vary considerably and requires instant scalability.³ Upfront investment is clearly not an effective solution, and Cloud computing systems become an appropriate alternative. (c) System developers can concentrate on the business logic rather than dealing with the complexity of infrastructure management and scalability. Little Fluffy Toys is a company in London that has developed a widget providing users with information about nearby bicycle rental services. The company managed to back the widget’s computing needs with Google AppEngine and be on the market in only one week. (d) End users can have their documents accessible from everywhere, on any device. Apple iCloud is a service that allows users to have their documents stored in the Cloud and to access them from any device they connect to it. This makes it possible to take a picture with a smartphone, go back home and edit the same picture on a laptop, and have it show up, updated, on a tablet. This process is completely transparent to users, who do not have to set up cables and connect these devices with each other. How is all of this made possible? The same concept of IT services on demand (whether computing power, storage, or runtime environments for applications) on a pay-as-you-go basis accommodates all four of these scenarios. Cloud computing not only provides the opportunity of easily accessing IT services on demand, but also introduces a new way of thinking about how IT services and resources should be perceived: as utilities. A bird’s-eye view of a Cloud computing environment is shown in Fig. 1.3.
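The instant scalability that the Animoto example relies on can be illustrated with a toy autoscaling rule: size the fleet to the pending workload, within fixed bounds. This is only a sketch; the per-server capacity is an assumed figure, and just the 70 and 8,500 server counts come from the reported Animoto episode.

```python
import math

# Toy autoscaling rule: size the fleet to the pending workload.
# CAPACITY_PER_SERVER is an assumed figure; the 70 and 8,500 bounds
# are the server counts reported for Animoto in the text.

CAPACITY_PER_SERVER = 10             # concurrent render jobs per server (assumed)
MIN_SERVERS, MAX_SERVERS = 70, 8500  # baseline fleet and upper bound

def servers_needed(pending_jobs: int) -> int:
    """Scale out with demand, clamped to the fleet's minimum and maximum size."""
    needed = math.ceil(pending_jobs / CAPACITY_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

for load in (100, 50_000, 200_000):
    print(load, "->", servers_needed(load))  # 70, 5000, 8500
```

The point of the sketch is the shape of the curve, not the numbers: below the baseline the fleet idles at its minimum, in between it tracks demand linearly, and at the cap further demand queues up. With pay-per-use pricing, the cost tracks the same curve.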
The three major models for deployment and accessibility of Cloud computing environments are public Clouds, private/enterprise Clouds, and hybrid Clouds (see Fig. 1.4). Public Clouds are the most common deployment model, in which the necessary IT infrastructure (e.g., a virtualized Data Center) is established by a third-party service provider that makes it available to any consumer on a subscription basis. Such Clouds are appealing to users because they allow them to quickly leverage compute, storage, and application services. In this environment, users’ data and applications are deployed on Cloud Data Centers on the vendor’s premises.

³ It has been reported that Animoto, in one single week, scaled from 70 to 8,500 servers because of user demand.

[Fig. 1.3. A Bird’s Eye View of Cloud Computing: clients access public Clouds, which offer applications, a development and runtime platform, compute, and storage through a Cloud manager; private Clouds, government Cloud services, and other Cloud services combine with public Clouds into hybrid Clouds.]

[Fig. 1.4. Major Deployment Models for Cloud Computing: public/Internet Clouds (third-party, multi-tenant Cloud infrastructure and services, available on a subscription basis to all); private/enterprise Clouds (a Cloud model deployed within a company’s own Data Center/infrastructure for internal and/or partner use); and hybrid/inter Clouds (mixed usage of private and public Clouds, leasing public Cloud services when private Cloud capacity is insufficient).]

Large organizations, owning massive computing infrastructures, can still benefit from Cloud computing by replicating the Cloud IT service delivery model in-house. This has given birth to the concept of the private Cloud, as opposed to the public Cloud. In 2010, the U.S.
federal government, one of the world’s largest consumers of IT, spending around $76 billion on more than 10,000 systems, started a Cloud computing initiative aimed at providing government agencies with a more efficient use of their computing facilities. The use of Cloud-based in-house solutions is also driven by the need to keep confidential information within the organization’s premises. Institutions such as governments and banks, which have high security, privacy, and regulatory concerns, prefer to build and use their own private or enterprise Clouds. Whenever private Cloud resources are unable to meet users’ quality-of-service requirements, such as deadlines, hybrid computing systems, partially composed of public Cloud resources and privately owned infrastructure, are created to serve the organization’s needs. These are often referred to as hybrid Clouds, and they are becoming a common way for many stakeholders to start exploring the possibilities offered by Cloud computing.

1.1.4 Cloud-Computing Reference Model

A fundamental characteristic of Cloud computing is the capability of delivering, on demand, a variety of IT services that are quite diverse from each other. This variety creates different perceptions of what Cloud computing is among users. Despite this, it is possible to classify Cloud computing service offerings into three major categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). These categories are related to each other as described in Fig. 1.5, which provides an organic view of Cloud computing. We refer to this diagram as the “Cloud-Computing Reference Model”, and we will use it throughout the book to explain the technologies and introduce the relevant research on this phenomenon. The model organizes the wide range of Cloud computing services into a layered view that walks the computing stack from bottom to top.
[Fig. 1.5. Cloud-Computing Reference Model. Top layer, Software as a Service (exposed through Web 2.0 interfaces): end-user applications, scientific applications, office automation, photo editing, CRM, and social networking; examples: Google Documents, Facebook, Flickr, and Salesforce. Middle layer, Platform as a Service: runtime environments for applications, development and data-processing platforms; examples: Windows Azure, Hadoop, Google AppEngine, and Aneka. Bottom layer, Infrastructure as a Service: virtualized servers, storage, and networking; examples: Amazon EC2, S3, RightScale, and vCloud.]

At the base of the stack, Infrastructure-as-a-Service solutions deliver infrastructure on demand in the form of virtual hardware, storage, and networking. Virtual hardware is utilized to provide compute on demand in the form of virtual machine instances. These are created at users’ request on the provider’s infrastructure, and users are given tools and interfaces to configure the software stack installed in the virtual machine. The pricing model is usually defined in terms of dollars per hour, where the hourly cost is influenced by the characteristics of the virtual hardware. Virtual storage is delivered in the form of raw disk space or object store. The former complements a virtual hardware offering that requires persistent storage. The latter is a higher-level abstraction for storing entities rather than files. Virtual networking identifies the collection of services that manage the networking among virtual instances and their connectivity to the Internet or private networks. Platform-as-a-Service solutions are the next step in the stack. They deliver scalable and elastic runtime environments on demand that host the execution of applications. These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed.
It is the responsibility of the service provider to provide scalability and to manage fault tolerance, while users are asked to focus on the logic of the application, developed by leveraging the provider’s APIs and libraries. This approach raises the level of abstraction at which Cloud computing is leveraged, but also constrains the user within a more controlled environment. At the top of the stack, Software-as-a-Service solutions provide applications and services on demand. Most of the common functionalities of desktop applications, such as office automation, document management, photo editing, and customer relationship management (CRM) software, are replicated on the provider’s infrastructure, made more scalable, and made accessible through a browser on demand. These applications are shared across multiple users, with each user’s interaction isolated from the others. The SaaS layer is also the home of social networking Websites, which leverage Cloud-based infrastructures to sustain the load generated by their popularity. Each layer provides a different service to users. IaaS solutions are sought by users who want to leverage Cloud computing to build dynamically scalable computing systems requiring a specific software stack. IaaS services are therefore used to develop scalable Web sites or for background processing. PaaS solutions provide scalable programming platforms for developing applications and are more appropriate when new systems have to be developed. SaaS solutions target mostly end users, who want to benefit from the elastic scalability of the Cloud without doing any software development, installation, configuration, or maintenance. This solution is appropriate when there are existing SaaS services that fit the user’s needs (e.g., email, document management, CRM, etc.) and only a minimum level of customization is needed.
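The IaaS pricing model described above (dollars per hour, with the hourly cost driven by the characteristics of the virtual hardware) can be sketched as a small cost function. The instance types and rates below are hypothetical assumptions, not any real provider's price list; only the "many machines for 36 hours" shape of the example echoes the New York Times conversion described earlier.

```python
# Hypothetical IaaS price list: real providers publish their own rates.
HOURLY_RATE = {           # dollars per instance-hour (assumed values)
    "small": 0.05,        # e.g., 1 vCPU, 2 GB RAM
    "medium": 0.10,       # e.g., 2 vCPU, 8 GB RAM
    "large": 0.40,        # e.g., 8 vCPU, 32 GB RAM
}

def rental_cost(instance_type: str, instances: int, hours: float) -> float:
    """Pay-per-use: cost grows with hardware class, instance count, and time used."""
    return HOURLY_RATE[instance_type] * instances * hours

# A short, massively parallel batch job: many machines rented for 36 hours,
# then relinquished, with nothing further to pay.
print(rental_cost("large", 100, 36))
```

The same three knobs (hardware class, count, duration) are exactly what the user gives back to zero when the job finishes, which is why short bursts of large capacity are where this pricing model is most attractive.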
1.1.5 Characteristics and Benefits

Cloud computing has some interesting characteristics that bring benefits to both Cloud Service Consumers (CSCs) and Cloud Service Providers (CSPs). These include: no upfront commitments; on-demand access; attractive pricing; simplified application acceleration and scalability; efficient resource allocation; energy efficiency; and seamless creation and use of third-party services. The most evident benefit of Cloud computing systems and technologies is the increased economic return due to reduced maintenance and operational costs related to IT software and infrastructure. This is mainly because IT assets, namely software and infrastructure, are turned into utility costs, which are paid for as long as the assets are used, not upfront. Capital costs are costs associated with assets that need to be paid for in advance to start a business activity. Before Cloud computing, IT infrastructure and software generated capital costs, since they were paid for upfront to provide a computing infrastructure enabling the business activities of an organization. The revenue of the business is then used to compensate over time for these costs. Organizations always try to minimize capital costs, since these are often associated with depreciable assets. This is the case with hardware: a server bought today for 1,000 dollars will have a market value lower than its original price when it is replaced by new hardware. To make a profit, organizations also have to compensate for this depreciation over time, which reduces the net gain obtained from revenue. Minimizing capital costs is therefore fundamental. Cloud computing transforms IT infrastructure and software into utilities, thus contributing significantly to increasing the net gain. Moreover, it also provides an opportunity for small organizations and start-ups: these do not need large investments to start their business, and they can comfortably grow with it.
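The capital-cost argument can be made concrete with a small, illustrative calculation: an owned server sinks its full price upfront and recovers only its depreciated resale value, while rented capacity is paid for only during the hours it is actually used. Every figure below except the 1,000-dollar server price mentioned in the text is an assumption.

```python
# Illustrative numbers: all assumed except the $1,000 server price from the text.
PURCHASE_PRICE = 1000.0   # capital cost, paid upfront
RESALE_VALUE = 200.0      # assumed market value at replacement time
HOURLY_RENT = 0.04        # assumed cloud rate for comparable hardware

def owned_net_cost() -> float:
    """Capital model: the full price is sunk now; depreciation eats the rest."""
    return PURCHASE_PRICE - RESALE_VALUE

def rented_cost(hours_used: float) -> float:
    """Utility model: pay only for the hours the resource is actually used."""
    return HOURLY_RENT * hours_used

hours = 8 * 365  # a workload that runs 8 hours a day for one year
print("own:", owned_net_cost(), "rent:", rented_cost(hours))
```

Under these assumed numbers the rented option wins because the machine would sit idle two-thirds of the time; a workload busy around the clock shifts the comparison back towards ownership, which is exactly the capacity-planning trade-off the chapter describes.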
Finally, maintenance costs are significantly reduced: by renting the infrastructure and the application services, organizations are no longer responsible for their maintenance. This task is the responsibility of the Cloud service provider, who, thanks to economies of scale, can bear the maintenance costs. Increased agility in defining and structuring software systems is another significant benefit. Since organizations rent IT services, they can compose their software systems more dynamically and flexibly, without being constrained by capital costs for IT assets. There is a reduced need for capacity planning, since Cloud computing allows organizations to react quite rapidly to unplanned surges in demand. For example, organizations can add more servers to process workload spikes and dismiss them when they are no longer needed. Ease of scalability is another advantage. By leveraging the potentially huge capacity of Cloud computing, organizations can extend their IT capability more easily. Scalability can be leveraged across the entire computing stack. Infrastructure providers offer simple methods to provision customized hardware and integrate it into existing systems. Platform-as-a-Service providers offer runtime environments and programming models that are designed to scale applications. Software-as-a-Service offerings can be elastically sized on demand, without requiring users to provision hardware or to program applications for scalability. End users can benefit from Cloud computing by having their data, and the capability of operating on it, always available, from anywhere, at any time, and through multiple devices. Information and services stored in the Cloud are exposed to users through Web-based interfaces that make them accessible from portable devices as well as from desktops at home.
Since the processing capabilities (i.e., office automation features, photo editing, information management, and so on) also reside in the Cloud, end users can perform the same tasks that previously required considerable software investments. The cost of such opportunities is generally very limited, since the Cloud service provider shares its costs across all the tenants that it is servicing. Multi-tenancy allows for a better utilization of the shared infrastructure, which is kept operational and fully active. The concentration of IT infrastructure and services into large datacenters also provides an opportunity for considerable optimization in terms of resource allocation and energy efficiency, which can eventually lead to a smaller impact on the environment. Finally, service orientation and on-demand access create new opportunities for composing systems and applications with a flexibility not possible before Cloud computing. New service offerings can be created by aggregating existing services and concentrating on the added value. Since it is possible to provision any component of the computing stack on demand, it is easier to turn ideas into products, with limited costs and with the technical effort concentrated on what matters: the added value.

1.1.6 Challenges Ahead

As with any new technology that develops and becomes popular, new issues have to be faced. Cloud computing is no exception, and new, interesting problems and challenges are posed to the Cloud community, including IT practitioners, managers, governments, and regulators. Besides the practical aspects, which are related to configuration, networking, and sizing of Cloud computing systems, a new set of challenges arises concerning the dynamic provisioning of Cloud computing services and resources. For example, in the Infrastructure-as-a-Service domain: how many resources need to be provisioned, and for how long should they be used, in order to maximize the benefit?
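That provisioning question can be phrased as a toy optimization: choose the fleet size that finishes a job by its deadline at the lowest rental cost. Every number below (work size, per-instance throughput, hourly rate, deadline) is an illustrative assumption; real formulations also weigh data transfer, startup latency, and variable demand.

```python
import math

# A toy version of the provisioning question above: how many instances,
# and for how long, to finish a job by its deadline at least cost?
# Every number here (work size, throughput, rate, deadline) is assumed.

WORK_UNITS = 1200     # total work to process
UNITS_PER_HOUR = 10   # throughput of a single instance
RATE = 0.10           # dollars per instance-hour
DEADLINE_HOURS = 6

def plan(instances: int):
    """Hours needed and total rental cost for a given fleet size."""
    hours = math.ceil(WORK_UNITS / (instances * UNITS_PER_HOUR))
    return hours, instances * hours * RATE

best = None
for n in range(1, 101):
    hours, cost = plan(n)
    if hours <= DEADLINE_HOURS and (best is None or cost < best[1]):
        best = (n, cost, hours)

print(best)  # cheapest (instances, cost, hours) that meets the deadline
```

Even this toy shows the tension the text points at: billing by whole hours means adding instances beyond the minimum feasible fleet buys no speedup per dollar, so over-provisioning wastes money while under-provisioning misses the deadline.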
Technical challenges also arise for Cloud service providers in the management of large computing infrastructures and the use of virtualization technologies on top of them. Issues and challenges concerning the integration of real and virtual infrastructure also need to be taken into account from different perspectives, such as security and legislation. Security, in terms of confidentiality, secrecy, and protection of data in a Cloud environment, is another important challenge. Organizations do not own the infrastructure they use to process data and store information. This condition poses challenges for confidential data, which organizations cannot afford to reveal. Therefore, assurances on the confidentiality of data, and compliance with security standards that give minimum guarantees on the treatment of information in Cloud computing systems, are sought. The problem is subtler than it seems: even though cryptography can help secure the transit of data from the private premises to the Cloud infrastructure, the information must be decrypted in memory before it can be processed. This is the weak point of the chain: since virtualization allows the memory pages of an instance to be captured almost transparently, these data could easily be obtained by a malicious provider. Legal issues may also arise. These are specifically tied to the ubiquitous nature of Cloud computing, which spreads computing infrastructure across diverse geographical locations. Differences in privacy legislation across countries may create disputes over the rights that third parties (including government agencies) have over data. American legislation is known to give extreme powers to government agencies to acquire confidential data when operations threatening national security are suspected. European countries are more restrictive and protect the right to privacy.
An interesting scenario comes up when an American organization uses Cloud services that store their data in Europe. In this case, should this organization be suspected by the government, it would become difficult or even impossible for the American government to take control of the data stored in a Cloud datacenter located in Europe. 1.2 HISTORICAL DEVELOPMENTS The idea of renting computing services by leveraging large distributed computing facilities has been around for a long time. It dates back to the days of the mainframes in the early fifties. From there on, technology has evolved and been refined. This process has created a series of favourable conditions for the realization of Cloud computing. Figure 1.6 provides an overview of the evolution of the technologies for distributed computing that have influenced Cloud computing. In tracking the historical evolution, we briefly review five core technologies that played an important role in the realization of Cloud computing. These are: distributed systems, virtualization, Web 2.0, service-oriented computing, and utility computing. Fig. 1.6. Evolution of Distributed Computing Technologies. The figure traces a timeline from mainframes through clusters and grids to Clouds: 1951, UNIVAC I, the first mainframe; 1960, Cray's first supercomputer; 1966, Flynn's taxonomy (SISD, SIMD, MISD, MIMD); 1969, ARPANET; 1970, DARPA's TCP/IP; 1975, Xerox PARC invents Ethernet; 1984, IEEE 802.3 (Ethernet and LAN) and DEC's VMScluster; 1989, TCP/IP in IETF RFC 1122; 1990, Berners-Lee and Cailliau's WWW, HTTP, and HTML; 1997, IEEE 802.11 (Wi-Fi); 1999, Grid computing; 2004, Web 2.0; 2005, Amazon AWS (EC2, S3); 2007, Manjrasoft Aneka; 2008, Google AppEngine; 2010, Microsoft Azure. 1.2.1 Distributed Systems Clouds are essentially large distributed computing facilities that make their services available to third parties on demand. As a reference, we consider the characterization of a distributed system proposed by Tanenbaum et al.
: “A distributed system is a collection of independent computers that appears to its users as a single coherent system.” This is a general definition that includes a variety of computer systems, but it highlights two very important elements characterizing a distributed system: the fact that it is composed of multiple independent components, and that these components are perceived as a single entity by users. This is particularly true in the case of Cloud computing, where Clouds hide the complex architecture they rely on and provide a single interface to users. The primary purpose of distributed systems is to share resources and utilize them better. This is true in the case of Cloud computing, where this concept is taken to the extreme and resources (infrastructure, runtime environments, and services) are rented to users. In fact, one of the driving factors for Cloud computing has been the availability of the large computing facilities of IT giants (Amazon, Google, etc.), who found that offering their computing capabilities as a service was an opportunity to better utilize their infrastructure. Distributed systems often exhibit other properties such as heterogeneity, openness, scalability, transparency, concurrency, continuous availability, and independent failures. To some extent, these also characterize Clouds, especially in the context of scalability, concurrency, and continuous availability. Three major milestones have led to Cloud computing: mainframe computing, cluster computing, and Grid computing. (a) Mainframes. These were the first examples of large computational facilities leveraging multiple processing units. Mainframes were powerful, highly reliable computers specialized for large data movement and massive I/O operations. They were mostly used by large organizations for bulk data processing, such as online transactions, enterprise resource planning, and other operations involving the processing of significant amounts of data.
Even though mainframes cannot be considered distributed systems, they offered large computational power by using multiple processors, which were presented as a single entity to users. One of the most attractive features of mainframes was their high reliability: they were “always on” and capable of tolerating failures transparently. No system shutdown was required to replace failed components, and the system could work without interruption. Batch processing was the main application of mainframes. Their popularity and deployments have now declined, but evolved versions of such systems are still in use for transaction processing (e.g., online banking, airline ticket booking, supermarkets, telcos, and government services). (b) Clusters. Cluster computing started as a low-cost alternative to the use of mainframes and supercomputers. The technology advancement that created faster and more powerful mainframes and supercomputers eventually generated, as a side effect, an increased availability of cheap commodity machines. These machines could then be connected by a high-bandwidth network and controlled by specific software tools that manage them as a single system. Starting from the 1980s, clusters became the standard technology for parallel and high-performance computing. Being built from commodity machines, they were cheaper than mainframes and made high-performance computing available to a large number of groups, including universities and small research labs. Cluster technology contributed considerably to the evolution of tools and frameworks for distributed computing, including Condor, Parallel Virtual Machine (PVM), and Message Passing Interface (MPI)4. One of the attractive features of clusters was that the computational power of commodity machines could be leveraged to solve problems previously manageable only on expensive supercomputers. Moreover, clusters could easily be extended if more computational power was required. (c) Grids. Grid computing appeared in the early 90s as an evolution of cluster computing. In analogy with the power grid, Grid computing proposed a new approach to access large computational power, huge storage facilities, and a variety of services. Users can “consume” resources in the same way as they use other utilities such as power, gas, and water. Grids initially developed as aggregations of geographically dispersed clusters connected by means of the Internet. These clusters belonged to different organizations, and arrangements were made among them to share computational power. Unlike a “large cluster”, a computing grid was a dynamic aggregation of heterogeneous computing nodes, and its scale was nationwide or even worldwide. Several reasons made the diffusion of computing grids possible: (i) clusters had become quite common resources; (ii) they were often under-utilized; (iii) new problems required computational power beyond the capability of single clusters; and (iv) improvements in networking and the diffusion of the Internet made long-distance, high-bandwidth connectivity possible. All these elements led to the development of grids, which now serve a multitude of users across the world. Cloud computing is often considered the successor of Grid computing. In reality, it embodies aspects of all three of these major technologies. Computing Clouds are deployed on large datacenters hosted by a single organization that provides services to others. 4 MPI is a specification for an API that allows many computers to communicate with one another. It defines a language-independent protocol that supports point-to-point and collective communication. MPI has been designed for high performance, scalability, and portability. At present, it is one of the dominant paradigms for developing parallel applications.
Clouds are characterized by virtually infinite capacity, are tolerant to failures, and are always on, as in the case of mainframes. In many cases, the computing nodes that form the infrastructure of computing Clouds are commodity machines, as in the case of clusters. The services made available by a Cloud vendor are consumed on a pay-per-use basis, and Clouds fully implement the utility vision introduced by Grid computing. 1.2.2 Virtualization Virtualization is another core technology for Cloud computing. It encompasses a collection of solutions allowing the abstraction of some of the fundamental elements of computing, such as hardware, runtime environments, storage, and networking. Virtualization has been around for more than 40 years, but its application has always been limited by technologies that did not allow an efficient use of virtualization solutions. Today these limitations have been substantially overcome, and virtualization has become a fundamental element of Cloud computing. This is particularly true for solutions that provide IT infrastructure on demand. Virtualization confers the degree of customization and control that makes Cloud computing appealing for users and, at the same time, sustainable for Cloud service providers. Virtualization is essentially a technology that allows the creation of different computing environments. These environments are called virtual because they simulate the interface that is expected by a guest. The most common example of virtualization is hardware virtualization. This technology allows simulating the hardware interface expected by an operating system. Hardware virtualization allows the co-existence of different software stacks on top of the same hardware. These stacks are contained inside virtual machine instances, which operate completely isolated from each other.
A high-performance server can host several virtual machine instances, thus creating the opportunity of having customized software stacks on demand. This is the base technology that enables Cloud computing solutions delivering virtual servers on demand, such as Amazon EC2, RightScale, VMware vCloud, and others. Together with hardware virtualization, storage and network virtualization complete the range of technologies for the emulation of IT infrastructure. Virtualization technologies are also used to replicate runtime environments for programs. In the case of process virtual machines, which are the foundation of technologies such as Java or .NET, applications are run not by the operating system but by a specific program called a virtual machine. This technique allows isolating the execution of applications and providing finer control over the resources they access. Process virtual machines offer a higher level of abstraction than hardware virtualization, since the guest is constituted only by an application rather than a complete software stack. This approach is used in Cloud computing to provide a platform for scaling applications on demand, such as Google AppEngine and Windows Azure. Having isolated and customizable environments with minor impact on performance is what makes virtualization an attractive technology. Cloud computing is realized through platforms that leverage the basic concepts described above and provide on-demand virtualization services to a multitude of users across the globe. 1.2.3 Web 2.0 The Web is the primary interface through which Cloud computing delivers its services. At present, it encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition. This has transformed the Web into a rich platform for application development.
Such evolution is known as “Web 2.0”. This term captures a new way in which developers architect applications, deliver services through the Internet, and provide a new experience for their users. Web 2.0 brings interactivity and flexibility into Web pages, which provide an enhanced user experience through Web-based access to all the functions that are normally found in desktop applications. These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web Services, and others. These technologies allow building applications that leverage the contribution of users, who now become providers of content. Also, the capillary diffusion of the Internet opens new opportunities and markets for the Web, whose services can now be accessed from a variety of devices: mobile phones, car dashboards, TV sets, and others. These new scenarios require an increased dynamism for applications, which is another key element of this technology. Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate, following the usage trends of the community. There is no need to deploy new software releases on the installed base at the client side. Users can take advantage of new software features simply by interacting with Cloud applications. Lightweight deployment and programming models are very important for effective support of such dynamism. Loose coupling is another fundamental property. New applications can be “synthesized” simply by composing existing services and integrating them together, thus providing added value. By doing this, it becomes easier to follow the interests of users. Finally, Web 2.0 applications aim to leverage the long tail of Internet users by making themselves available to everyone in terms of both media accessibility and cost.
Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, del.icio.us, Blogger, and Wikipedia. In particular, social networking Websites benefit the most from Web 2.0. The level of interaction in Websites like Facebook or Flickr would not have been possible without the support of AJAX, RSS, and other tools that make the user experience incredibly interactive. Moreover, community Websites harness the collective intelligence of the community, which provides content to the applications themselves: Flickr provides advanced services for storing digital pictures and videos, Facebook is a social networking Website leveraging user activity to provide content, and Blogger, like any other blogging Website, provides an online diary that is fed by its users. This idea of the Web as a transport that enables and enhances interaction was introduced in 1999 by Darcy DiNucci5 and started to be fully realized in 2004. Today, it is a mature platform supporting the needs of Cloud computing, which strongly leverages Web 2.0. Applications and frameworks for delivering Rich Internet Applications (RIAs) are fundamental for making Cloud services accessible to the wider public. From a social perspective, Web 2.0 applications definitely contributed to making people more accustomed 5 Darcy DiNucci, in a column for Design & New Media magazine, describes the Web as follows: “The Web we know now, which loads into a browser window in essentially static screenfulls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfulls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...]
maybe even your microwave oven.” to the use of the Internet in their everyday lives, and opened the path to the acceptance of Cloud computing as a paradigm in which even the IT infrastructure is offered through a Web interface. 1.2.4 Service-Oriented Computing Service orientation is the core reference model for Cloud computing systems. This approach adopts the concept of services as the main building blocks of application and system development. Service-Oriented Computing (SOC) supports the rapid, low-cost development of flexible, interoperable, and evolvable applications and systems. A service is an abstraction representing a self-describing, platform-agnostic component that can perform any function: this can be anything from a simple function to a complex business process. Virtually any piece of code that performs a task can be turned into a service and expose its functionality through a network-accessible protocol. A service is supposed to be loosely coupled, reusable, programming-language independent, and location transparent. Loose coupling allows services to serve different scenarios more easily and makes them reusable. Independence from a specific platform increases service accessibility: a wider range of clients can look up services in global registries and consume them in a location-transparent manner. Services are composed and aggregated into a Service-Oriented Architecture (SOA), which is a logical way of organizing software systems to provide end users, or other entities distributed over the network, with services through published and discoverable interfaces. Service-Oriented Computing introduces and diffuses two important concepts that are also fundamental for Cloud computing: Quality of Service (QoS) and Software as a Service (SaaS). Quality of Service identifies a set of functional and non-functional attributes that can be used to evaluate the behavior of a service from different perspectives.
These could be performance metrics such as response time, or security attributes, transactional integrity, reliability, scalability, and availability. QoS requirements are established between the client and the provider through a Service Level Agreement (SLA) that identifies the minimum values (or an acceptable range) for the QoS attributes that must be satisfied when the service is invoked. The concept of Software as a Service introduces a new delivery model for applications. It has been inherited from the world of Application Service Providers (ASPs), which deliver software-based service solutions across a wide area network from a central datacenter and make them available on a subscription or rental basis. The ASP is responsible for maintaining the infrastructure and making the application available, and the client is freed from maintenance costs and difficult upgrades. This software delivery model is possible because economies of scale are reached by means of multi-tenancy. The SaaS approach reaches its full development with Service-Oriented Computing, where loosely coupled software components, rather than entire applications, can be exposed and priced individually. This allows the delivery of complex business processes and transactions as a service, while allowing applications to be composed on the fly and services to be reused from everywhere by anybody. One of the most popular expressions of service orientation is represented by Web Services (WS). These introduce the concepts of SOC into the World Wide Web, making it consumable by applications and not only by humans. Web services are software components exposing functionalities accessible through a method invocation pattern that goes over the HTTP protocol.
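The method-invocation-over-HTTP pattern can be sketched by building the XML message that carries such a call. The service namespace and the "Add" operation below are hypothetical, invented only for illustration; in practice a toolkit generates these messages from the service's interface description:

```python
# Minimal sketch of a SOAP 1.1-style envelope carrying a method
# invocation. The service namespace and the "Add" operation are made up
# for illustration; real clients derive them from the service metadata.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/calculator"   # hypothetical service namespace

def build_envelope(method: str, **params) -> bytes:
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}{method}")
    for name, value in params.items():
        # Each parameter becomes a child element of the method element.
        ET.SubElement(call, f"{{{SVC_NS}}}{name}").text = str(value)
    return ET.tostring(envelope)

# The resulting XML document would be POSTed to the service endpoint.
message = build_envelope("Add", a=2, b=3)
```

Because both the request and the response are plain XML exchanged over HTTP, any platform that can parse XML and speak HTTP can act as a client, which is precisely what makes Web services platform independent.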
The interface of a Web service can be programmatically inferred from metadata expressed through the Web Service Description Language (WSDL); this is an XML language that defines the characteristics of the service and all the methods it exposes, together with parameter descriptions and return types. The interaction with Web services happens through the Simple Object Access Protocol (SOAP). This is an XML language defining how to invoke a Web service method and collect the result. By using SOAP and WSDL over HTTP, Web services become platform independent and as accessible as the rest of the World Wide Web. The standards and specifications concerning Web services are controlled by the W3C; among the most popular frameworks for developing Web services, we can note ASP.NET and Axis. The development of systems in terms of distributed services that can be composed together is the major contribution of SOC to the realization of Cloud computing. Web service technologies have provided the right tools to make such composition straightforward and easy to integrate with the mainstream World Wide Web (WWW) environment. 1.2.5 Utility-Oriented Computing Utility computing is a vision of computing defining a service provisioning model in which resources such as storage, compute power, applications, and infrastructure are packaged and offered on a pay-per-use basis. The idea of providing computing as a utility, like natural gas, water, power, and telephone connection, has a long history but has become a reality today with the advent of Cloud computing. Among the earliest forerunners of this vision, we can include the American scientist John McCarthy, who, in a speech for the MIT centennial in 1961, observed: “If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility...
The computer utility could become the basis of a new and important industry.” The first traces of this service provisioning model can be found in the mainframe era. IBM and other mainframe providers offered mainframe power to organizations such as banks and government agencies through their datacenters. The business model introduced with utility computing brought new requirements and led to improvements in mainframe technology: additional features such as operating systems, process control, and user-metering facilities. The idea of computing as a utility remained, and extended from the business domain to academia with the advent of cluster computing. Not only businesses but also research institutes became acquainted with the idea of leveraging an external IT infrastructure on demand. Computational science, which was one of the major driving factors for building computing clusters, still required huge compute power to address Grand Challenge problems, and not all institutions were able to satisfy their computing needs internally. Access to external clusters remained a common practice. The capillary diffusion of the Internet and the Web provided the technological means to realize utility computing on a world-wide scale and through simple interfaces. As already discussed, computing grids provided a planet-scale distributed computing infrastructure that was accessible on demand. Computing grids brought the concept of utility computing to a new level: market orientation. With access available on a wider scale, it became easier to provide a trading infrastructure in which Grid products (storage, computation, and services) could be bid for or sold. Moreover, e-Commerce technologies provided the infrastructure support for utility computing. In the late nineties, a significant interest in buying all kinds of goods online spread among the general public: food, clothes, multimedia products, and also online services such as storage space and Web hosting.
After the dot-com bubble6, this interest shrank, but the phenomenon left the general public keener to buy online services. As a result, infrastructures for online payment by credit card became easily accessible and well proven. From an application and system development perspective, service-oriented computing and Service-Oriented Architectures (SOAs) introduced the idea of leveraging external services to perform a specific task within a software system. Applications were not only distributed, but started to be composed as a mesh of services provided by different entities. These services, accessible through the Internet, were made available by charging according to usage. Service-oriented computing broadened the concept of what could be accessed as a utility in a computer system: not only compute power and storage, but also services and application components could be utilized and integrated on demand. Together with this trend, Quality of Service became an important topic to investigate. 6 The dot-com bubble is a phenomenon that started in the second half of the nineties and reached its peak in the year 2000. During this period, a large number of companies basing their business on online services and e-Commerce started and quickly expanded without later being able to sustain their growth. As a result, they suddenly went bankrupt, partly because their revenues were not enough to cover their expenses and partly because they never reached the number of customers required to sustain their enlarged business. All these factors contributed to the development of the concept of utility computing, and marked important steps in the realization of Cloud computing, in which the vision of “computing utilities” comes to its full expression.
1.3 BUILDING CLOUD-COMPUTING ENVIRONMENTS The creation of Cloud-computing environments encompasses both the development of applications and systems that leverage Cloud-computing solutions and the creation of frameworks, platforms, and infrastructures delivering Cloud-computing services. 1.3.1 Application Development Applications that leverage Cloud computing benefit from its capability to scale dynamically on demand. One class of applications that takes the greatest advantage of this feature is Web applications. Their performance is mostly influenced by the workload generated by varying user demands. With the diffusion of Web 2.0 technologies, the Web has become a platform for developing rich and complex applications, including enterprise applications that now leverage the Internet as the preferred channel for service delivery and user interaction. These applications are characterized by complex processes that are triggered by the interaction with users and develop through the interaction between several tiers behind the Web front-end. These are the applications that are most sensitive to inappropriate sizing of infrastructure and service deployment, or to variability in workload. Another class of applications that can potentially gain considerable advantage by leveraging Cloud computing is represented by resource-intensive applications. These can be either data-intensive or compute-intensive applications. In both cases, a considerable amount of resources is required to complete execution in a reasonable time frame. It is worth noting that this large amount of resources is not needed constantly or for a long duration. For example, scientific applications may require huge computing capacity to perform large-scale experiments once in a while, so it is not feasible to buy the infrastructure supporting them. In this case, Cloud computing can be the solution. Resource-intensive applications are not interactive and are mostly characterized by batch processing.
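The claim that occasional large-scale experiments make buying infrastructure infeasible can be illustrated with a back-of-the-envelope rent-versus-buy comparison. Every figure below is a made-up assumption chosen only to show the shape of the argument:

```python
# Back-of-the-envelope comparison behind the "rent, don't buy" argument
# for occasional large-scale experiments. All figures are hypothetical.

PURCHASE_COST = 50_000.0     # buying and hosting a small cluster ($)
RENT_PER_HOUR = 25.0         # renting equivalent capacity ($/hour)

def renting_cost(experiments_per_year: int, hours_each: int,
                 years: int) -> float:
    """Total pay-per-use spend for occasional experiment campaigns."""
    return experiments_per_year * hours_each * RENT_PER_HOUR * years

# Four 50-hour experiment campaigns a year, over three years:
rented = renting_cost(4, 50, 3)          # 600 rented hours in total
buying_is_cheaper = PURCHASE_COST < rented
```

Under these assumptions the rented hours cost a fraction of the purchase price; ownership only pays off when utilization is sustained, which is exactly what batch-style scientific workloads lack.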
Cloud computing provides solutions for on-demand and dynamic scaling across the entire computing stack. This is achieved by (a) providing methods for renting compute power, storage, and networking; (b) offering runtime environments designed for scalability and dynamic sizing; and (c) providing application services that mimic the behavior of desktop applications but are completely hosted and managed on the provider side. All these capabilities leverage service orientation, which allows simple and seamless integration into existing systems. Developers access such services via simple Web interfaces, often implemented through REST Web services. These have become well-known abstractions, making the development and management of Cloud applications and systems practical and straightforward. 1.3.2 Infrastructure and System Development Distributed computing, virtualization, service orientation, and Web 2.0 form the core technologies enabling the provisioning of Cloud services from anywhere on the globe. Developing applications and systems that leverage the Cloud requires knowledge across all these technologies. Moreover, new challenges need to be addressed from design and development standpoints. Distributed computing is a foundational model for Cloud computing, because Cloud systems are distributed systems. Besides administrative tasks, mostly connected to the accessibility of resources in the Cloud, the extreme dynamism of Cloud systems, where new nodes and services are provisioned on demand, constitutes the major challenge for engineers and developers. This characteristic is peculiar to Cloud computing solutions and is mostly addressed at the middleware layer of the computing system. Infrastructure-as-a-Service solutions provide the capabilities to add and remove resources, but it is up to those who deploy systems on this scalable infrastructure to make use of this opportunity wisely and effectively.
Platform-as-a-Service solutions embed into their core offering algorithms and rules that control the provisioning process and the lease of resources. These can be either completely transparent to developers or subject to fine control. Integration between Cloud resources and existing system deployments is another element of concern. Web 2.0 technologies constitute the interface through which Cloud computing services are delivered, managed, and provisioned. Besides interaction with rich interfaces through the Web browser, Web services have become the primary access point to Cloud computing systems from a programmatic standpoint. Therefore, service orientation is the underlying paradigm that defines the architecture of a Cloud computing system. Cloud computing is often summarized with the acronym XaaS (everything as a service), which clearly underlines the centrality of service orientation. Despite the absence of a unique standard for accessing the resources serviced by different Cloud providers, the commonality of technology smooths the learning curve and simplifies the integration of Cloud computing into existing systems. Virtualization is another element that plays a fundamental role in Cloud computing. This technology is a core feature of the infrastructure used by Cloud providers. As discussed before, virtualization is a concept more than 40 years old, but Cloud computing introduces new challenges, especially in the management of virtual environments, whether they are abstractions of virtual hardware or of a runtime environment. Developers of Cloud applications need to be aware of the limitations of the selected virtualization technology and its implications for the volatility of some components of their systems. These are all considerations that influence the way in which we program applications and systems based on Cloud computing technologies.
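One common defensive pattern that this volatility forces on developers is retrying operations that may hit a transient failure, for example when a virtual node is being reclaimed or is not yet ready, with an increasing delay between attempts. The sketch below is generic and not tied to any provider's SDK:

```python
# Retry-with-backoff sketch for volatile cloud components. The error
# type and the flaky operation are illustrative stand-ins.
import time

class TransientError(Exception):
    """A failure worth retrying, e.g. a temporarily unreachable node."""

def call_with_retry(operation, attempts: int = 4, base_delay: float = 0.01):
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise                                   # give up at last attempt
            time.sleep(base_delay * 2 ** attempt)       # exponential backoff

# A flaky operation that succeeds on the third try:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("node not ready")
    return "ok"

result = call_with_retry(flaky)
```

Designing the application so that any individual call may fail and be repeated is what allows it to keep working while the underlying virtual infrastructure changes beneath it.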
Cloud computing essentially provides mechanisms to address surges in demand by replicating the required components of computing systems under stress (i.e., heavily loaded). Dynamism, scale, and the volatility of such components are the main elements that should guide the design of such systems.

1.4 COMPUTING PLATFORMS AND TECHNOLOGIES

Development of a Cloud computing application happens by leveraging platforms and frameworks that provide different types of services, from bare-metal infrastructure to customizable applications serving specific purposes.

1.4.1 Amazon Web Services (AWS)

AWS offers comprehensive Cloud IaaS services, ranging from virtual compute, storage, and networking to complete computing stacks. AWS is mostly known for its on-demand compute and storage services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3). EC2 provides users with customizable virtual hardware that can be used as the base infrastructure for deploying computing systems on the Cloud. It is possible to choose from a large variety of virtual hardware configurations, including GPU and cluster instances. EC2 instances are deployed either by using the AWS console, which is a comprehensive Web portal for accessing AWS services, or by using the Web services API available for several programming languages. EC2 also provides the capability of saving a specific running instance as an image, thus allowing users to create their own templates for deploying systems. These templates are stored in S3, which delivers persistent storage on demand. S3 is organized into buckets: containers of objects that are stored in binary form and can be enriched with attributes. Users can store objects of any size, from simple files to entire disk images, and have them accessible from anywhere.
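The bucket/object organization of S3 can be mirrored with a toy in-memory model. This sketch captures only the data layout (named buckets holding binary objects with user-defined attributes); real access goes through the S3 REST API or an SDK, and the class below is purely illustrative.

```python
# Toy in-memory model of S3's data organization: a bucket is a container of
# objects; each object is binary data plus user-defined attributes (metadata).
# Structure only -- this is not an S3 client.

class Bucket:
    def __init__(self, name):
        self.name = name
        self._objects = {}

    def put(self, key, data: bytes, **attributes):
        """Store a binary object under a key, with optional attributes."""
        self._objects[key] = {"data": data, "attributes": attributes}

    def get(self, key) -> bytes:
        return self._objects[key]["data"]

    def attributes(self, key) -> dict:
        return self._objects[key]["attributes"]

backups = Bucket("my-backups")
backups.put("disk-image.raw", b"\x00" * 16,
            content_type="application/octet-stream")
print(len(backups.get("disk-image.raw")))  # 16
```

Note that keys are flat names within a bucket: S3 has no real directory hierarchy, which is why objects "of any size, from simple files to entire disk images" fit the same model.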
Besides EC2 and S3, a wide range of services can be leveraged to build virtual computing systems, including networking support, caching systems, DNS, and database support (both relational and non-relational), among others.

1.4.2 Google AppEngine

Google AppEngine is a scalable runtime environment mostly devoted to executing Web applications. These take advantage of Google's large computing infrastructure to dynamically scale as demand varies over time. AppEngine provides both a secure execution environment and a collection of services that simplify the development of scalable and high-performance Web applications. These services include in-memory caching, a scalable data store, job queues, messaging, and cron tasks. Developers can build and test applications on their own machines by using the AppEngine SDK, which replicates the production runtime environment and helps test and profile applications. Once development is complete, developers can easily migrate their application to AppEngine, set quotas to contain the costs generated, and make it available to the world. The languages currently supported are Python, Java, and Go.

1.4.3 Microsoft Azure

Microsoft Azure is a Cloud operating system and a platform for developing applications in the Cloud. It provides a scalable runtime environment for Web applications and distributed applications in general. Applications in Azure are organized around the concept of roles, which identify a distribution unit for applications and embody the application's logic. Currently, there are three types of role: Web role, worker role, and virtual machine role. The Web role is designed to host a Web application; the worker role is a more generic container of applications and can be used to perform workload processing; and the virtual machine role provides a virtual environment in which the computing stack can be fully customized, including the operating system.
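Both AppEngine and Azure's Web role ultimately host request handlers that the platform replicates as load grows. The sketch below is a generic WSGI application of the kind such runtimes execute; it is not tied to either platform's SDK, and the greeting and fake invocation are illustrative only.

```python
# A minimal WSGI request handler -- the unit of work that a scalable Web
# runtime (an AppEngine app, an Azure Web role) replicates under load.
# Generic WSGI, not tied to any platform SDK.

def application(environ, start_response):
    """Respond to any request with a plain-text greeting."""
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from the Cloud, you requested {path}".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the handler directly, the way a WSGI server (or a test) would.
status_seen = []
def fake_start_response(status, headers):
    status_seen.append(status)

result = application({"PATH_INFO": "/ping"}, fake_start_response)
print(status_seen[0], result[0].decode())
```

Because the handler is stateless, the platform can run any number of copies behind a load balancer, which is precisely what makes this programming model scale "as demand varies over time."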
Besides roles, Azure provides a set of additional services that complement application execution, such as support for storage (relational data and blobs), networking, caching, content delivery, and others.

1.4.4 Hadoop

Apache Hadoop is an open-source framework that is suited for processing large data sets on commodity hardware. Hadoop is an implementation of MapReduce, an application programming model developed by Google, which provides two fundamental operations for data processing: map and reduce. The former transforms and synthesizes the input data provided by the user, while the latter aggregates the output obtained by the map operations. Hadoop provides the runtime environment, and developers need only provide the input data and specify the map and reduce functions that need to be executed. Yahoo! is the sponsor of the Apache Hadoop project and has put considerable effort into transforming the project into an enterprise-ready Cloud computing platform for data processing. Hadoop is an integral part of the Yahoo! Cloud infrastructure and supports several business processes of the company. Currently, Yahoo! manages the largest Hadoop cluster in the world, which is also available to academic institutions.

1.4.5 Force.com and Salesforce.com

Force.com is a Cloud computing platform for developing social enterprise applications. The platform is the basis of Salesforce.com—a Software-as-a-Service solution for customer relationship management. Force.com allows creating applications by composing ready-to-use blocks: a complete set of components supporting all the activities of an enterprise is available. It is also possible to develop your own components or integrate those available in AppExchange into your applications. The platform provides complete support for developing applications: from the design of the data layout, to the definition of business rules and workflows, to the definition of the user interface.
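The map and reduce operations described above for Hadoop (Section 1.4.4) can be illustrated with the canonical word-count example. This single-process sketch mirrors only the programming model (map emits key-value pairs, the framework groups them by key, reduce aggregates each group); it does not use Hadoop's distributed runtime.

```python
from collections import defaultdict

# Word count in the MapReduce style that Hadoop implements: map emits
# (key, value) pairs, pairs are grouped by key, reduce aggregates each group.
# Single-process sketch of the model only, not Hadoop itself.

def map_phase(line):
    """Transform one input record into (word, 1) pairs."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(word, counts):
    """Aggregate all values emitted for one key."""
    return (word, sum(counts))

def map_reduce(lines):
    groups = defaultdict(list)
    for line in lines:                 # map + shuffle (group by key)
        for word, one in map_phase(line):
            groups[word].append(one)
    return dict(reduce_phase(w, c) for w, c in groups.items())  # reduce

counts = map_reduce(["the cloud", "the grid and the cloud"])
print(counts["the"], counts["cloud"])  # 3 2
```

In Hadoop, the developer supplies exactly these two functions while the framework handles partitioning the input across nodes, grouping intermediate pairs, and re-executing failed tasks.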
The Force.com platform is completely hosted on the Cloud, and provides access to its functionality, and to that implemented in the hosted applications, through Web services technologies.

1.4.6 Manjrasoft Aneka

Manjrasoft Aneka is a Cloud application platform for the rapid creation of scalable applications and their deployment on various types of Clouds in a seamless and elastic manner. It supports a collection of programming abstractions for developing applications and a distributed runtime environment that can be deployed on heterogeneous hardware (clusters, networked desktop computers, and Cloud resources). Developers can choose different abstractions to design their applications: tasks, distributed threads, and map-reduce. These applications are then executed on the distributed service-oriented runtime environment, which can dynamically integrate additional resources on demand. The service-oriented architecture of the runtime has a great degree of flexibility and simplifies the integration of new features, such as the abstraction of a new programming model and its associated execution management environment. Services manage most of the activities happening at runtime: scheduling, execution, accounting, billing, storage, and quality of service.

These platforms are key examples of technologies available for Cloud computing. They mostly fall into the three major market segments identified in the reference model: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. In this book, we use Aneka as a reference platform for discussing practical implementations of distributed applications. We present different ways in which Clouds can be leveraged by applications built using the various programming models and abstractions provided by Aneka.

Summary

In this chapter, we discussed the vision and opportunities of Cloud computing along with its characteristics and challenges.
The Cloud computing paradigm emerged as a result of the maturity and convergence of several of its supporting models and technologies, namely distributed computing, virtualization, Web 2.0, service orientation, and utility computing.

There is no single view of this phenomenon. Throughout the book, we explore different definitions, interpretations, and implementations of this idea. The only element shared among all the different views of Cloud computing is that Cloud systems support the dynamic provisioning of IT services (whether they are virtual infrastructure, runtime environments, or application services) and adopt a utility-based cost model to price these services. This concept is applied across the entire computing stack and enables the dynamic provisioning of IT infrastructure and runtime environments in the form of Cloud-hosted platforms for the development of scalable applications and their services. This vision is what inspires the Cloud Computing Reference Model. This model identifies three major market segments (and service offerings) for Cloud computing: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). These map directly to the broad classification of the different types of services offered by Cloud computing.

The long-term vision of Cloud computing is to fully realize the utility model that drives its service offering. It is envisioned that new technological developments and increased familiarity with Cloud computing delivery models will lead to the establishment of a global market for trading computing utilities. This area of study is called Market-Oriented Cloud Computing, where the term "market-oriented" further stresses the fact that Cloud computing services are traded as utilities. The realization of this vision is still far from reality, but Cloud computing has already brought economic, environmental, and technological benefits.
By turning IT assets into utilities, Cloud computing allows organizations to reduce operational costs and increase their revenue. These and other advantages also come with downsides of a diverse nature. Security and legislation are two of the challenging aspects of Cloud computing that lie beyond the technical sphere.

From the perspective of software design and development, new challenges arise in engineering computing systems. Cloud computing offers a rich mixture of different technologies, and harnessing them is a challenging engineering task. It introduces both new opportunities and new techniques and strategies for architecting software applications and systems. Some of the key elements that have to be taken into account are virtualization, scalability, dynamic provisioning, big datasets, and cost models. In order to provide a practical grasp of such concepts, we will use Aneka as a reference platform for illustrating Cloud systems and application programming environments.

Review Questions

1. What is the innovative characteristic of Cloud computing?
2. Which are the technologies that Cloud computing relies on?
3. Provide a brief characterization of a distributed system.
4. Define Cloud computing and identify its core features.
5. What are the major distributed computing technologies that led to Cloud computing?
6. What is virtualization?
7. What is the major revolution introduced by Web 2.0?
8. Give some examples of Web 2.0 applications.
9. Describe the main characteristics of service orientation.
10. What is utility computing?
11. Describe the vision introduced by Cloud computing.
12. Briefly summarize the Cloud computing reference model.
13. What is the major advantage of Cloud computing?
14. Briefly summarize the challenges still open in Cloud computing.
15. How does Cloud development differ from traditional software development?
