Open Issues in Cloud Computing

Summary

This document comprises slides from a cloud computing course examining open issues in cloud computing. The course discusses topics such as performance, reliability, security, compliance, disaster recovery, migration and scalability, and covers concepts related to cloud solution design, data governance, and resource scheduling.

Full Transcript

OPEN ISSUES IN CLOUD: CLOUD COMPUTING COURSE OVERVIEW

Cloud is not a solution for all consumers of IT services, nor is it suitable for all applications. Cloud computing involves a number of issues which are not necessarily unique to Cloud: hardware failures and security compromises are possible in any complex computing system. We shall discuss the issues related to Cloud computing in the coming modules, as highlighted by NIST (USA).

COMPUTING PERFORMANCE

Real-time applications require high performance and a high degree of predictability. Cloud computing shows some performance issues similar to those of other forms of distributed computing:
○ Latency: As measured through round-trip time (the time from sending a message to receiving a response), latency is not predictable for Internet-based communications.
○ Offline Data Synchronization: Synchronizing offline updates with all copies of the data on the Cloud is a problem. Solving it requires mechanisms for version control, group collaboration and other synchronization capabilities.
○ Scalable Programming: Legacy applications have to be updated to fully benefit from the scalable computing capacity of Cloud computing.
○ Data Storage Management: Consumers require control over the data life cycle and information regarding any intrusion or unauthorized access to the data.

CLOUD RELIABILITY

Reliability is the probability that a system will offer failure-free service for a specified period of time in a specified environment. It depends upon the Cloud infrastructure of the provider and the connectivity to the subscribed services. Measuring the reliability of a specific Cloud is difficult due to the complexity of Cloud procedures. Several factors affect Cloud reliability:
○ Network Dependence: The unreliability of the Internet and the associated attacks affect Cloud reliability.
○ Safety-Critical Processing: Critical applications and hardware, such as the controls of avionics, nuclear material and medical devices, may harm human life and/or cause loss of property if they fail. These are not suitable to be hosted over the Cloud.

CLOUD RELIABILITY

Economic Goals: The Cloud provides economic benefits such as saving upfront costs, eliminating maintenance costs, and giving consumers economies of scale. However, there are a number of economic risks associated with Cloud computing:
○ SLA Evaluation: The lack of automated mechanisms for verifying the provider's SLA compliance requires the development of a common template that could cover the majority of SLA clauses and give an overview of SLA compliance. This would be useful when deciding whether to invest time and money in a manual audit.
○ Portability of Workloads: Reliable and secure mechanisms for transferring data to the Cloud, and for porting workloads to other providers, are initial barriers to Cloud adoption and remain open issues.
○ Interoperability between Cloud Providers: Consumers face, or fear, vendor lock-in due to the lack of interoperability among different providers.
○ Disaster Recovery: Physical and/or electronic disaster recovery requires the implementation of recovery plans for hardware-based as well as software-based disasters, so that the provider and consumers can be saved from economic and performance losses.

COMPLIANCE

Compliance is with respect to applicable laws and preferred security procedures. NIST and other US government agencies are evolving methods to solve compliance issues between consumers and providers. Although the consumer is responsible for compliance, the implementation is actually performed by the provider. The consumer lacks visibility into the actual security procedures adopted and/or applied by the provider; however, the consumer may request the deployment of monitoring procedures.
Consumers having their data processed on a provider's premises need to acquire assurance from the provider regarding compliance with the various applicable laws, for example in the US: the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS). Forensics support regarding any incident should also be provided, to evaluate the type of attack and the extent of the associated damage, and to collect information for possible legal action in the future. Forensic analysis for SaaS is the responsibility of the provider, while forensic analysis for IaaS is the responsibility of the consumer.

INFORMATION SECURITY

Information security relates to the confidentiality and integrity of data, and is also linked with the assurance of availability. The following measures can be used by an organization for data security:
○ Administrative controls specifying the authority of specific users to create, update, delete, disclose and transport the data.
○ Physical controls for the security of data storage devices.
○ Technical controls for Identity and Access Management (IAM), data encryption, and data audit-handling requirements according to any regulatory need.
Public and private Clouds have their own typical security exposures. The provider may also offer physical separation of consumers' data in addition to logical separation. According to NIST, the provider should provide a monitoring mechanism to satisfy the consumer regarding security compliance.

DISASTER RECOVERY IN CLOUD COMPUTING

UNDERSTANDING THE THREATS - DISK FAILURE

Disk drives are electro-mechanical devices which wear out and eventually fail. Failure can also be due to a disaster such as fire or flood, or due to theft. All mechanical devices have a mean time between failures (MTBF); the MTBF values given by manufacturers are usually generic values calculated for a set of devices.
Therefore, instead of relying upon the MTBF, there must be a disaster recovery plan for disk failure. The following strategies can be utilized:
○ Traditional approach: Keep a backup on separate storage. If the disk fails due to any disaster, the data can be recovered onto a new disk from the backup. But if the backup is also destroyed or stolen, the data is completely lost; the recovery process is also time consuming.
○ Redundant Array of Independent Disks (RAID): A system consisting of multiple disk drives, in which multiple copies of the data are maintained and stored in a distributed way across the disks. If one disk fails, it is simply replaced and the RAID system copies the data onto the new disk. A backup is still required, because if the entire RAID array is destroyed or stolen, the data is completely lost.
○ Cloud-based data storage and backup: The Cloud not only provides data access over the Internet but also provides enhanced data replication, sometimes performed by default without extra charge. A Cloud-based backup is stored at a remote site, an extra advantage compared to onsite backup placement. Further, a Cloud-based backup is readily available and thus reduces downtime compared to recovery from a traditional tape-based backup.

UNDERSTANDING THE THREATS - POWER FAILURE OR DISRUPTION

Computers can be damaged by a power surge caused by a storm or a fault in the power supply system; a power surge may permanently damage disk storage. The user loses all unsaved data when a power blackout happens. A few disaster recovery plans are as follows:
○ Traditionally, surge protector devices are used, but these devices cannot save the (unsaved) data in case of a blackout.
○ In-house data centers can use large and expensive uninterruptible power supply (UPS) devices and/or generators.
○ Another solution is to shift the data to another site.
But this is expensive and time consuming.
○ The best option is to move the data center to the Cloud. Cloud providers have better (and expensive) power backups whose cost is divided among the consumers. The Cloud mechanism may also automatically shift the data to a remote site on another power grid in case of power failures of longer duration.

UNDERSTANDING THE THREATS - COMPUTER VIRUSES

While surfing the web, users may download and install software and/or share drives such as thumb drives across their computing devices. These devices are at risk of attack through computer viruses and spyware. Traditionally, the following techniques have been used to safeguard against virus attacks:
○ Making sure each computer has antivirus software installed and set to auto-update, to get the most recent virus and spyware signatures.
○ Restricting the users' privilege to install software.
○ Using a firewall on the router, on the computer, or around the LAN.
Cloud computing makes it difficult for non-Cloud-based viruses to penetrate, because of the complexity of virtualization technologies. The Cloud providers also ensure reasonable security measures for the consumers' data and software.

UNDERSTANDING THE THREATS - FIRE, FLOOD AND DISGRUNTLED EMPLOYEES

Fire, as well as fire-extinguishing practices, can destroy computing resources, data and backups. Similarly, heavy and/or unexpected rainfall may flood an entire block or a whole city, including the computing equipment. Likewise, an angry employee can cause harm by launching a computer virus, deleting files or leaking passwords. Traditionally, office equipment is insured to lower the monetary damage, backups are used for data protection, and data centers use special mechanisms for fire extinguishing without water sprinklers.
By hosting the data center over the Cloud, the consumer is freed from the effort and expenditure of fire prevention systems as well as data recovery; the Cloud provider manages all these procedures and includes the cost as a minimal part of the rental. Unlike fire, floods cannot be avoided or put off. The only way to avoid flood damage is to avoid setting up the data center in a flood zone; similarly, choose a Cloud provider located outside any flood zone. Companies apply access control and backups to limit both access to data and the damage caused by disgruntled employees. In the Cloud, Identity as a Service (IDaaS) based single sign-on revokes the access privileges of terminated employees as quickly as possible to prevent any damage.

UNDERSTANDING THE THREATS - LOST EQUIPMENT & DESKTOP FAILURE

The loss of equipment such as a laptop may immediately lead to the loss of data and a possible loss of identity. If the data stored on the lost device is confidential, the damage may be even greater. Traditionally, the risk of damage due to lost or stolen devices is reduced by keeping backups, and sensitive data is safeguarded by logins and strong passwords on the devices. But even strong passwords are not difficult for experienced hackers to break, although most criminals are still prevented from accessing the data. With Cloud computing, the data can be synchronized across multiple devices using the Cloud service, so the user can get the data from an online interface or from other synced devices. In case of desktop failure, the user (such as an employee of a company) is offline until the worn-out desktop is replaced. If there was no backup, the data stored on the failed desktop may be unrecoverable. Traditionally, data backups are kept for the desktops in an enterprise, stored on a separate computer.
In case of desktop failure, the maintenance staff tries to provide an alternative desktop and restore the data as soon as possible. In the Cloud, employees work on instances of IaaS or Desktop as a Service through their local desktops; in case of desktop failure, an employee can just walk to another computer and log in to the Cloud service to resume work.

UNDERSTANDING THE THREATS - SERVER FAILURE & NETWORK FAILURE

Just like desktops, servers can also fail. The replacement of a blade server is a relatively simple process, and blade servers are mostly preferred by users; of course, there has to be a replacement server in stock to swap in for the failed one. Traditionally, enterprises keep redundant servers to quickly replace a failed server. In Cloud computing, the providers of IaaS and PaaS manage to provide 99.9% uptime through server redundancy and failover systems, so Cloud consumers do not have to worry about server failure. A network failure can occur due to a faulty device and will cause downtime. Traditionally, users keep 3G and 4G wireless hotspot devices as a backup, while enterprises obtain redundant Internet connections from different providers. Since Cloud consumers access Cloud IT resources through the Internet, the consumers need redundant connections and/or backup devices for connectivity. The same is true for the Cloud service provider: the 99.9% uptime is assured through backup/redundant network connections.

UNDERSTANDING THE THREATS - DATABASE SYSTEM FAILURE & PHONE SYSTEM FAILURE

Most companies rely upon database systems to store a wide range of data, and many applications in a corporate environment depend upon databases, such as customer record keeping, sale-purchase and HR systems. The failure of a database obviously makes the dependent applications unavailable. Traditionally, companies either use a backup or replicate the database instances.
The former results in downtime of the database system, while the latter results in minimal or no downtime but is more complicated to implement. Cloud-based storage and database systems use replication to minimize downtime with the help of failover systems. Many companies maintain phone systems for conference calling, voicemail and call forwarding. Although employees can switch to mobile phones if the phone system fails, customers are left not knowing which phone number to use to reach the company until the phone system recovers. Traditionally, various solutions are applied to reduce the impact of a phone failure. Cloud-based phone systems, on the other hand, provide reliable and failure-safe telephone service; internally, redundancy is used in the implementation.

MEASURING BUSINESS IMPACT, DISASTER RECOVERY PLAN TEMPLATE

The process of reducing risks will often have some cost, for example resource redundancy and backups. This indicates that investment in risk-reduction mechanisms will be limited. The IT staff should therefore evaluate and classify each risk according to its impact upon the routine operations of the company. A tabular representation of the risks, their probability of occurrence and their business continuity impact can be drawn up. The next step is to formally document the disaster recovery plan (DRP). A DRP template can contain the plan overview, goals and objectives, the types of events covered, the risk analysis, and the mitigation techniques for each type of risk identified in the earlier step.
○ Standard Programming Languages: Whenever possible, consumers should prefer Clouds which work with standardized programming languages and tools.
○ Right from the start, there should be a clearly documented plan for returning data/resources to the consumer at the time of termination of service usage.
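The tabular risk representation described above can be sketched as a simple scoring exercise. The risks, probabilities and impact weights below are illustrative assumptions, not figures from the course material:

```python
# Sketch of a risk table for disaster recovery planning.
risks = [
    # (risk, probability of occurrence per year, business impact on a 1-10 scale)
    ("Disk failure",    0.30, 7),
    ("Power outage",    0.20, 5),
    ("Fire",            0.02, 10),
    ("Flood",           0.01, 9),
    ("Network failure", 0.25, 6),
]

# Rank risks by expected impact (probability x impact) to decide where
# the limited risk-reduction investment should go first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, prob, impact in ranked:
    print(f"{name:<16} p={prob:.2f} impact={impact:>2} score={prob * impact:.2f}")
```

A real DRP would replace these guesses with probabilities and impact classifications agreed upon by the IT staff and business stakeholders.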
MEASURING BUSINESS IMPACT, DISASTER RECOVERY PLAN TEMPLATE

Continuity of Operations: The consumer should ensure that the Cloud provider acts upon the requested parameters of the consumer's disaster recovery plan for the business-critical software and data hosted on the Cloud.
○ In case of service interruption, the consumer should demand compensation in addition to reversal of service charges from the provider.
○ Otherwise, the consumer should host such critical applications/data/processing locally.
Compliance: The consumer should determine that the provider has implemented the necessary procedures and controls to comply with the various legal, ISO and audit standards required of the provider and/or required for the consumer to fulfill.
Administrative Staff: The consumer should make sure that the internal procedures and policies of the provider are sufficient to protect against malicious insiders.
Licensing: The consumer should make sure that proper licensing is obtained and provided by the provider for proprietary software.

DATA GOVERNANCE

Data Access Standards: Before developing Cloud-based applications, consumers should make sure that the application interfaces provided in the Cloud are generic, and/or that data adaptors can be developed, so that portability and interoperability of the Cloud applications are possible when required.
Data Separation: The consumer should make sure that proactive measures are implemented at the provider's end for the separation of sensitive and non-sensitive data.
Data Integrity: Consumers should use checksum and replication techniques to detect any violations of data integrity.
Data Regulations: The consumer is responsible for ensuring that the provider complies with all regulations applicable to the consumer regarding data storage and processing.
Data Disposition: The consumer should make sure that the provider offers mechanisms which delete the consumer's data whenever the consumer requests it, and that evidence or proof of the data deletion is generated.
Data Recovery: The consumer should examine the data backup, archiving and recovery procedures of the provider and make sure they are satisfactory.

SECURITY & RELIABILITY

Consumer-side Vulnerabilities: Consumers should ensure the implementation of proper security and hardening of consumer platforms to avoid attacks based on the browser or other client devices.
Encryption: The consumer should require that strong encryption is applied for web sessions, data transfer and data storage.
Physical Security: Consumers should consider the appropriateness of the physical security implementations and procedures of the provider. There should be a recovery plan for a physical attack on the provider's site. Providers having multiple installations in different geographical regions should be preferred.
Authentication: Consumers should consider using the advanced authentication procedures offered by some providers to avoid account hijacking or identity theft.
Identity & Access Management: Consumers should have visibility into the capabilities of the provider regarding:
○ The authentication and access management procedures supported by the provider.
○ The tools available for consumers to extract authentication information.
○ The tools available to the consumer for granting authorization to the consumer's users and other applications without the involvement of the provider.
Performance Requirements: Consumers should benchmark the performance of an application before deploying it to the Cloud, in terms of responsiveness and the input/output performance for bulk data. These scores should be used to establish key performance requirements after deploying the application over the Cloud.
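The checksum technique mentioned under Data Integrity can be sketched as follows. This is a minimal illustration; the payload contents and the choice of SHA-256 are assumptions, not prescribed by the course:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 digest that can be stored alongside the data."""
    return hashlib.sha256(data).hexdigest()

# Before uploading, the consumer records a checksum of the data.
original = b"customer-records-2024"
stored_digest = checksum(original)

# Later, the copy retrieved from the Cloud (or from a replica) is
# re-hashed; any mismatch signals an integrity violation.
retrieved = b"customer-records-2024"
assert checksum(retrieved) == stored_digest   # integrity intact

tampered = b"customer-records-2024X"
assert checksum(tampered) != stored_digest    # violation detected
```

The same comparison can be run periodically against each replica to catch silent corruption across copies.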
VMs, SOFTWARE & APPLICATIONS

VM Vulnerabilities: When the provider offers Cloud IT resources in the form of VMs, the consumer should make sure that the provider has implemented sufficient mechanisms to prevent attacks from other VMs, the physical host and the network. Also make sure that IDS/IPS systems and network segmentation techniques such as VLANs are in place.
VM Migration: Consumers should plan for VM migration across different providers, just in case.
Time-critical Software: Since public Clouds have unreliable response times, consumers should avoid using the Cloud for the deployment of time-critical software.
Safety-critical Software: Due to the unconfirmed reliability of Cloud subsystems, the use of the Cloud for the deployment of safety-critical software is discouraged.
Application Development Tools: When using the application development tools provided by the service provider, preference should be given to tools which support the application development lifecycle with security features integrated.
Application Runtime Support: Before deploying an application over a Cloud, the consumer should make sure that the library calls used in the application work correctly and that all those libraries are dependable in terms of performance and functionality.
Application Configuration: The consumer should make sure that the applications being deployed over the Cloud can be configured to run in a secured environment, such as in a VLAN segment. Also make sure that various security frameworks can be integrated with the applications according to the requirements of the consumer's security policies.
Standard Programming Languages: Whenever possible, consumers should prefer Clouds which work with standardized programming languages and tools.

MIGRATION TO THE CLOUD

DEFINE SYSTEM GOALS AND REQUIREMENTS

The migration to the Cloud should be well planned. The first step should be to define the system goals and requirements.
The following considerations are important:
○ Data security and privacy requirements
○ Site capacity plan: the Cloud IT resources needed initially for the application to operate
○ Scalability requirements at runtime
○ System uptime requirements
○ Business continuity and disaster recovery requirements
○ Budget requirements
○ Operating system and programming language requirements
○ Type of Cloud: public, private or hybrid
○ Single-tenant or multi-tenant solution requirements
○ Data backup requirements
○ Client device support requirements, such as for desktop, tablet or smartphone
○ Training requirements
○ Programming API requirements
○ Data export requirements
○ Reporting requirements

PROTECTING EXISTING DATA AND KNOWING YOUR APPLICATION CHARACTERISTICS

It is highly recommended that before migrating to the Cloud, the consumer back up the data; this will help in restoring the data to a certain point in time. The consumer should discuss with the provider and agree upon a periodic backup plan. The data life cycle and disposal terms and conditions should be finalized at the start. If the consumer is required to fulfill any regulatory requirements regarding data privacy, storage and access, then this should be discussed with the provider and included in the legal document of the Cloud agreement. The consumer should know the IT resource requirements of the application being deployed over the Cloud. The following important characteristics should be known:
○ High and low demand periods in terms of time
○ Average number of simultaneous users
○ Disk storage requirements
○ Database and replication requirements
○ RAM usage
○ Bandwidth consumption by the application
○ Any requirement related to data caching

ESTABLISH A REALISTIC DEPLOYMENT SCHEDULE, REVIEW BUDGET AND IDENTIFY IT GOVERNANCE ISSUES

Many companies use a planned schedule for Cloud migration to provide enough time for training and for testing the application after deployment.
Some companies use a beta release to allow employees to interact with the Cloud-based version, provide feedback and perform testing. Many companies use key budget factors such as the running cost of the in-house data center, the payroll of the IT staff, software licensing costs and hardware maintenance costs; this helps in calculating the total cost of ownership (TCO) of the Cloud-based solution in comparison. Many Cloud providers offer solutions at a lower price than in-house deployments. Regarding the IT governance requirements, the following points are important:
○ Identify how to align the Cloud solution with the company's business strategy.
○ Identify the controls needed within and outside the Cloud-based solution so that the application can work correctly.
○ Describe the access control policies for the various users.
○ Describe how the Cloud provider logs errors and system events, and how to access the log and performance monitoring tools made available to the consumer.

DESIGNING CLOUD BASED SOLUTION

Identify functional and non-functional requirements: Before beginning the design phase of a Cloud application, the system requirements must be obtained and finalized. Personal meetings may be very helpful in this regard. Identification of errors and omissions at an early stage will save considerable cost and time later. The system requirements are of two types:
○ Functional requirements: define the specific tasks the system will perform. These are provided by the system analyst to the designer.
○ Non-functional requirements: usually related to quality metrics such as performance, reliability and maintainability.

DESIGNING CLOUD BASED SOLUTION

Identify Cloud Solution Design Metrics:
○ Accessibility: The Cloud solution should consider either maximizing user access or restricting access to authorized users only.
○ Audit: The Cloud solution should contain logging and exception handling at critical processing points so that the log data can be used for audit later.
○ Availability: Identify the uptime requirements of the Cloud application being designed, and use redundant deployment accordingly.
○ Backup: The cost of backup and the time to restore from backup should be considered while designing the Cloud-based solution. If the data is handled by the provider, the backup policies of the provider should be reviewed, and renegotiated if found inappropriate.

CLOUD SOLUTION DESIGN METRICS

Existing & Future Capacity: If the application is being migrated to the Cloud, the current requirement of IT resources should be evaluated and used for the initial deployment as well as for the horizontal or vertical scaling configuration.
Configuration Management: Since Cloud-based solutions are accessed through any OS, browser and device, the interfaces of the application should be able to render the contents appropriately for each OS, browser and user device.
Deployment: Deployment issues related to OS, browser and devices should be addressed for the initial deployment as well as for future updates.
Environment (Green Computing): The design of the Cloud-based solution should include considerations for power-efficient design, in order to reduce the environmental effect (carbon footprint) of the Cloud-based solution.
Disaster Recovery: The Cloud solution design should consider disaster recovery mechanisms. The potential risks to business continuity should be identified, and cost-effective mitigation techniques should be configured for these risks.
Interoperability: The design should consider the possibility of interoperability between different Cloud solutions in terms of the exchange of data.
Maintainability: The Cloud solution should be designed to increase the reusability of code through loose coupling of the modules.
This will lower the maintenance cost.

CLOUD SOLUTION DESIGN METRICS

Performance: To enhance performance, various measures should be used, such as reducing graphics on key pages, reducing network operations, and using data and application caching. Potential performance bottlenecks should be identified and addressed.
Price: The design of the Cloud solution should be considered with respect to price and budget in the short and long term. A solution that is inexpensive at the time of deployment may prove expensive in the long run. Features such as scalability, disaster recovery and performance should be implemented according to the budget.
Privacy: Privacy assurance should be implemented. Solutions related to healthcare, educational and monetary data in particular should have privacy assurance against internal as well as external unauthorized access. The same care applies to replicated databases and backups.
Portability: The Cloud solution should be designed to be portable to other platforms when required. For that, instead of using provider-specific APIs, generic or open-source APIs and programming environments should be preferred for developing the Cloud solution.
Recovery: In addition to disaster recovery, the solution should contain features to recover from other events not covered by disaster recovery, such as user error, programming bugs and power outages.

CLOUD SOLUTION DESIGN METRICS

Reliability: The design should include consideration of hardware failure events. A redundant configuration might be applied according to the mean time between failures (MTBF) for each hardware device, or a reasonable downtime established.
Response Time: The response time should be as low as possible, specifically for online form submissions and reports.
Robustness: Robustness refers to the capability of the solution to keep working despite errors or system failures. It can be complemented with Cloud resource usage monitoring for timely alarms on critical events.
Security: The developer should consider Cloud-based security and privacy issues while designing.
Testability: Test cases should be developed to test the fulfilment of the functional and non-functional requirements of the solution.
Usability: The design can be improved for usability by implementing a prototype and collecting users' reviews to enhance the ease of use of the Cloud solution.

CLOUD APPLICATION SCALABILITY AND RESOURCE SCHEDULING

CLOUD APPLICATION SCALABILITY

Review the Load Balancing Process: Cloud-based solutions should be able to scale up or down according to demand.
○ Remember, scaling out and scaling up mean acquiring new resources and upgrading existing resources, respectively; scaling in and scaling down are exactly the reverse.
○ There should be a load balancer module, especially in the case of horizontal scaling.
○ Load balancing, or load allocating, is performed by distributing the workload (which can be in the form of clients' requests) across the Cloud IT resources acquired by the Cloud solution.
○ The allocation pattern can be round robin, random, or a more complex algorithm with multiple parameters (more on this in a later module).
Application Design: Cloud-based solutions should have neither no scaling nor unlimited scaling.
○ There should be a balanced design of the Cloud application regarding scaling, with reasonable expectations.
○ Both horizontal and vertical scaling options should be explored, either individually or in combination.

CLOUD APPLICATION SCALABILITY

Minimize Objects on Key Pages: Identify the key pages, such as the home page, forms and frequently visited pages of the Cloud-based solution.
○ Reduce the number of objects such as graphics, animation and audio on these pages so that they load quickly.
Selecting Measurement Points: Remember the rule that 20% of the code usually performs 80% of the processing.
○ Identify such code and apply scaling to it.
○ Otherwise, applying scaling may not yield the desired performance improvements.
Analyze Database Operations: The read/write operations should be analyzed to improve performance.
○ Read operations are non-conflicting and hence can be performed on replicated databases (horizontal scaling).
○ But a write operation on one replica requires synchronization of all database instances, so horizontal scaling becomes time consuming.
○ The statistics of database operations should be used in the decision about horizontal scaling.
Evaluate the System's Data Logging Requirements: The monitoring system for performance and event logging may consume disk space and CPU.
○ Evaluate the necessity of the logging operations before applying them, or periodically afterwards, and tune them to reduce disk storage and CPU wastage.

CLOUD APPLICATION SCALABILITY

Capacity Planning vs Scalability: Capacity planning is planning for the resources needed at a specific time by the application.
○ Scalability means acquiring additional resources to process an increasing workload.
○ Capacity planning and scalability should be performed in harmony.
Diminishing Returns: Scaling should not be performed beyond the point where there is no corresponding improvement in performance.
Performance Tuning: In addition to scaling, the application's performance should be tuned by reducing graphics, page load time and response time.
○ Additionally, caching should be applied: the use of faster hard disks, serving content from RAM, and optimizing the code using the 20/80 rule.

CLOUD RESOURCE SCHEDULING OVERVIEW

Effective resource scheduling reduces:
○ Execution cost
○ Execution time
○ Energy consumption
It also fulfills QoS requirements:
○ Reliability
○ Security
○ Availability
○ Scalability
Remember that the provider wants to earn more profit and maximize resource usage, while the Cloud consumer wants to minimize the cost and time of execution of the workload.
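As a minimal sketch of the round-robin allocation pattern mentioned under the load balancing process (the worker names are illustrative assumptions):

```python
from itertools import cycle

# Hypothetical pool of Cloud IT resources (e.g., VM instances)
# acquired by the solution; the names are made up for illustration.
workers = ["vm-1", "vm-2", "vm-3"]

# Round robin: hand each incoming request to the next worker in turn,
# wrapping around when the pool is exhausted.
rotation = cycle(workers)

def assign(request_id: int) -> str:
    """Return the worker that should handle this request."""
    return next(rotation)

assignments = [assign(i) for i in range(7)]
print(assignments)
# ['vm-1', 'vm-2', 'vm-3', 'vm-1', 'vm-2', 'vm-3', 'vm-1']
```

A random or weighted scheme would replace `cycle` with a choice that considers parameters such as current load or capacity, as the slides note for more complex algorithms.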
The Cloud resource scheduling can be performed on various grounds, as discussed in the coming modules.

COST-BASED RESOURCE SCHEDULING

This type of resource scheduling is performed on the basis of cost and budget constraints.
○ The users' requests are processed on a first come, first served basis, along with QoS and time constraint considerations.
○ The cost constraint may take priority over other constraints and thus may introduce starvation for some tasks.

TIME-BASED RESOURCE SCHEDULING

This type of resource scheduling prioritizes the processing deadlines of the users' requests.
○ Resources are allocated to the jobs whose deadlines are approaching sooner than those of other requests.
○ It is evaluated through statistics such as the number of deadlines missed, the overall cost etc.

COST & TIME-BASED RESOURCE SCHEDULING

Time based scheduling may miss some tasks' deadlines, or may prove expensive if over provisioning of IT resources is used to meet deadlines.
○ Cost based scheduling may miss some deadlines and/or cause starvation for some tasks.
○ It is better to use a hybrid approach for resource scheduling to reduce cost as well as to minimize task deadline violations.

BARGAIN-BASED RESOURCE SCHEDULING

This type of scheduling assumes that a resource market exists, with providers making offers to the users.
○ The users can negotiate the processing cost.
○ Bargain-based scheduling can achieve low cost and meet deadlines if the negotiation is successful.
○ It is still an evolving technique.

PROFIT-BASED RESOURCE SCHEDULING

This type of scheduling aims at increasing the profit of the Cloud provider.
○ This can be done either by reducing the cost or by increasing the number of simultaneous users.
○ SLA violations are to be considered while making profit based scheduling decisions.
○ The penalties of SLA violations may nullify the profit gained.

SLA & QoS BASED RESOURCE SCHEDULING

In this scheduling, SLA violations are avoided and QoS is maintained.
○ The more load put on IT resources, the more tasks may be completed in a unit of time.
○ Yet it may cause SLA violations when IT resources are overloaded.
○ Hence QoS considerations are applied to ensure the SLA is not violated.
○ It is suitable for homogeneous tasks, for which the expected workload and expected time of completion can be estimated.

ENERGY-BASED RESOURCE SCHEDULING

The objective is to save energy at the data center level to decrease the running cost and to contribute towards the environment.
○ An energy consumption estimate is required for each scheduling decision. There can be a number of possible task distributions across servers and VMs.
○ Only the distribution that shows the least energy consumption for the batch of tasks at hand is preferred.

OPTIMIZATION-BASED RESOURCE SCHEDULING

Optimization is the process of refining an algorithm to make it as effective as possible. An optimized solution to a problem is the best possible option (under some constraints) available for the problem at hand.
○ Popular objectives used by researchers are revenue maximization, lowering communication overhead, output efficiency, energy efficiency, reducing the completion time of tasks etc.
○ Popular techniques used by researchers include Bayesian assumptions, Stochastic Integer Programming etc.
○ The only drawback is the computational time of finding the optimal solution. It may result in missed deadlines for some time-critical jobs.

PRIORITY-BASED RESOURCE SCHEDULING

In this type of scheduling, task starvation can be avoided, especially in situations of resource contention.
○ It requires a task classification, e.g., on the basis of type, user, resource requirement etc., so that a high priority task can cause the resource scheduler to preempt lower priority tasks.
○ This may cause the low priority tasks to suffer starvation.
○ An aging factor can be applied to increase the priority of low-priority tasks over time, to avoid or reduce starvation.
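The aging factor described above can be sketched as follows. This is an illustrative toy scheduler, not a real Cloud scheduler: the round model, task names and `aging_step` parameter are assumptions made for the example, and lower numbers mean higher priority.

```python
def run_with_aging(initial, arrivals, aging_step=1, rounds=10):
    """Priority scheduling with an aging factor (lower value = higher priority).

    `arrivals` maps a round number to the (priority, name) tasks arriving in
    that round. Each round the best-priority task runs, and every task still
    waiting has its effective priority improved by `aging_step`, so a
    low-priority task cannot be starved forever by high-priority arrivals.
    """
    pending = list(initial)
    order = []
    for r in range(rounds):
        pending.extend(arrivals.get(r, []))
        if not pending:
            continue
        pending.sort(key=lambda task: task[0])  # best (lowest) value first
        _, name = pending.pop(0)
        order.append(name)
        # Age the tasks left waiting: raise their effective priority.
        pending = [(p - aging_step, n) for p, n in pending]
    return order

# One low-priority batch task competes with a new high-priority task every round.
arrivals = {r: [(1, f"rt{r}")] for r in range(6)}
with_aging = run_with_aging([(5, "batch")], arrivals, aging_step=1, rounds=6)
without_aging = run_with_aging([(5, "batch")], arrivals, aging_step=0, rounds=6)
```

With aging, the batch task's effective priority improves each round until it finally runs; with `aging_step=0` it never runs within the six rounds, which is exactly the starvation the aging factor is meant to prevent.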
VM-BASED RESOURCE SCHEDULING

Since VMs host Cloud based applications and VMs can be migrated, resource scheduling can be performed at the VM level.
○ The overall demand of all applications hosted on a VM is considered for scheduling. If a VM is facing resource starvation, it can be migrated to another server with available IT resources.
○ The disadvantage is that there is no guarantee that the destination host will not also run out of IT resources due to already deployed VMs.
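The migration decision above can be sketched as a simple placement check. This is a minimal sketch under assumed units: host names and capacity numbers are hypothetical, and real schedulers would consider CPU, memory, network and migration cost together.

```python
def pick_migration_target(vm_demand, hosts):
    """Choose a destination host for a VM that is facing resource starvation.

    `hosts` maps a host name to its free capacity (arbitrary units). The host
    with the most free capacity that can still fit the VM's demand is chosen;
    None is returned when no host fits, reflecting the caveat that a
    destination may itself be short of resources.
    """
    candidates = {h: free for h, free in hosts.items() if free >= vm_demand}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

free_capacity = {"host-a": 2, "host-b": 6, "host-c": 5}
target = pick_migration_target(4, free_capacity)     # host-b has the most headroom
no_target = pick_migration_target(8, free_capacity)  # nothing fits
```

Preferring the host with the most headroom (a "worst-fit" choice) leaves slack for the migrated VM's demand to grow, but it still cannot guarantee the destination will not be exhausted later by other VMs.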
