FALLSEM2024-25 CSI3001 ETH Reference Material I PDF
Vellore Institute of Technology
Dr. C.R.Dhivyaa
Summary
This document is reference material for Module 4, Cloud Environments, and includes a case study. It covers various aspects of cloud computing, including AWS and its compute services (EC2), providing an overview of key concepts and topics related to cloud environments. This document is from a Computer Science course, specifically the CSI3001 course.
Full Transcript
Module 4: Cloud Environments – Case Study
Dr. C.R.Dhivyaa, Assistant Professor, School of Computer Science and Engineering, Vellore Institute of Technology, Vellore

Contents: Cloud Environments; Case study: one cloud service provider per service model (e.g. Amazon EC2, Google App Engine, Salesforce, Microsoft Azure, open-source tools)

Choosing a Cloud Service Provider
The following list gives some particularly important considerations (applicable to all) while choosing a cloud provider:
- Evaluate stability. That means availability of regular releases, continuous performance, dispersed platforms, and load balancing.
- Find a reliable provider. This goes beyond name recognition to include emphasis on security and feedback from real customers.
- Consider economies of scale. What is the ratio between the cost of running an in-house server versus the available resources of an enterprise cloud?
- Look for standardized service. Does the company offer cost-effective bundles of apps and the resources you need? Bundled services can save 40 percent over purchasing IaaS, SaaS, and other digital products separately.
- Evaluate flexibility. The last thing you want is to be locked into a contract with a provider that inhibits agility and growth.

Amazon Web Services (AWS)
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. It is a secure cloud platform that offers a broad set of global cloud-based products, called services, that are designed to work together. AWS allows the development of flexible applications by providing solutions for elastic infrastructure scalability, messaging, and data storage. It provides a web-based console where users can handle administration and monitoring of the resources required, as well as their expenses, computed on a pay-as-you-go basis.
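The pay-as-you-go model above can be illustrated with a small sketch. Note that the hourly rates below are hypothetical placeholders used for the arithmetic, not actual AWS prices.

```python
# Sketch of pay-as-you-go billing: you pay only for the hours each
# resource actually runs. Rates here are made-up example values.
HOURLY_RATES = {
    "t3.micro": 0.0104,   # assumed on-demand rate, USD/hour
    "m5.large": 0.096,
}

def monthly_cost(usage_hours: dict) -> float:
    """Sum the cost of each instance type for the hours it ran."""
    return sum(HOURLY_RATES[itype] * hours
               for itype, hours in usage_hours.items())

# A server that ran only 200 hours is billed for 200 hours,
# not for the whole month.
bill = monthly_cost({"t3.micro": 200, "m5.large": 50})
print(round(bill, 2))  # 6.88
```

The same function also shows why idle capacity costs nothing: an empty usage dict yields a bill of zero.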
Figure: Amazon Web Services ecosystem

AWS – Compute Services (EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a core compute service offered by AWS that provides resizable compute capacity in the cloud. It allows you to run virtual servers (known as instances) on demand, which can be used for a wide range of applications, from hosting websites and web applications to running complex data-processing tasks. AWS compute is Infrastructure as a Service (IaaS). Amazon EC2 reduces the time required to obtain and boot new server instances (called Amazon EC2 instances) to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for the capacity that you actually use.

Put simply, AWS compute is the means to provision and manage infrastructure (virtual machines/containers) for your use case. AWS provides many flexible computing services to meet the requirements of business organizations, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), Amazon Lightsail, AWS Lambda, and many more. This infrastructure as a service can be considered the processing power required by your applications, to host applications or run computation-intensive tasks.

CSI3001 – Cloud Computing Methodologies

Steps to use EC2:
1. Create/choose an AMI
2. Create an instance
3. Configure the instance
4. Add additional storage
5. Add tags
6. Configure a security group
7. Review and launch

Amazon Machine Images (AMI): These are templates from which it is possible to create a virtual machine. An AMI provides the information required to launch an instance.

EC2 Instances
EC2 instances represent virtual machines.
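As a rough illustration of steps 1–7, the sketch below assembles the kind of parameter set an EC2 launch request carries (with boto3, a dict like this would be passed to `run_instances`). The AMI ID, security-group ID, and tag values are hypothetical placeholders, not real resources.

```python
# Assemble launch parameters mirroring the console steps above.
# All identifiers below are made-up placeholders.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",      # step 1: the AMI template
    "InstanceType": "t2.micro",              # steps 2-3: instance size
    "MinCount": 1, "MaxCount": 1,
    "BlockDeviceMappings": [{                # step 4: additional storage
        "DeviceName": "/dev/xvdb",
        "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"},
    }],
    "TagSpecifications": [{                  # step 5: tags
        "ResourceType": "instance",
        "Tags": [{"Key": "Course", "Value": "CSI3001"}],
    }],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # step 6: firewall rules
}

# Step 7 is review and launch, e.g. (with credentials configured):
#   boto3.client("ec2").run_instances(**launch_params)
print(sorted(launch_params))
```

Keeping the launch request as plain data like this makes step 7 ("review") explicit: everything about the instance is visible before anything is provisioned.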
They are created using an AMI as a template and are specialized by selecting the number of cores, their computing power, and the installed memory. The processing power is expressed in terms of virtual cores and EC2 Compute Units.

Five major categories of EC2 instances:
1. Standard instances (general purpose)
2. Micro instances
3. High-CPU instances (compute optimized)
4. High-memory instances (memory/storage optimized)
5. Cluster GPU instances (GPU optimized)

1. General-purpose instances
These provide a balance of compute, memory, and networking resources and can be used for a variety of diverse workloads, e.g. gaming servers, small databases, and personal projects. General-purpose instance types (selected depending on memory, CPUs, storage, and network speed) include t2.micro, M6a, and M5 instances; typical applications are web servers, development and test environments, and content delivery.

2. Micro instances
A low-cost instance type designed for lower-throughput applications and websites. Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to burst CPU capacity when additional cycles are available.

3. High-CPU instances (compute optimized)
Compute-optimized instances are ideal for compute-bound applications that benefit from high-performance processors: batch-processing workloads, media transcoding, high-performance web servers, and high-performance computing (HPC). Instance types include c5d.24xlarge and large/extra-large sizes; typical applications are machine learning and gaming.

4. High-memory instances (memory/storage optimized)
Memory-optimized instances are designed to deliver fast performance for workloads that process large data sets in memory. Instance types include Im4gn (a storage-optimized family); typical applications are distributed file systems, data-warehousing applications, and high-frequency online transaction processing (OLTP).

5. Cluster GPU instances (GPU optimized)
These use hardware accelerators, or co-processors, to perform functions such as floating-point calculations, graphics processing, or data pattern matching. Accelerated-computing instance families include P4, Inf2, G5, G5g, G4dn, G4ad, G3, F1, and VT1. For example, P4 instances offer 3.0 GHz 2nd-generation Intel Xeon processors, 8 GPUs, 96 vCPUs, 1152 GiB of memory, and 400 Gbps of network bandwidth (ENA and EFA).

EC2 Instance Lifecycle
When you launch an instance, it enters the pending state. The instance type that you specified at launch determines the hardware of the host computer for your instance. Amazon EC2 uses the Amazon Machine Image (AMI) you specified at launch to boot the instance. After the instance is ready for you, it enters the running state. You can connect to your running instance and use it the way that you'd use a computer sitting in front of you. EBS (Elastic Block Store) is used to store persistent data, which means that data is kept on the AWS EBS servers even when the EC2 instances are shut down.

AWS Auto Scaling
AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it's easy to set up application scaling for multiple resources across multiple services in minutes.
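The "adjusts capacity to maintain performance" behaviour can be sketched with the proportional idea behind target tracking: scale the fleet by the ratio of the observed metric to its target, clamped to configured bounds. This is a simplified illustration of the concept, not the service's exact algorithm.

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     min_cap: int = 1, max_cap: int = 10) -> int:
    """Target-tracking sketch: if average CPU is twice the target,
    roughly double the instance count; clamp to the min/max bounds."""
    wanted = math.ceil(current * metric / target)
    return max(min_cap, min(max_cap, wanted))

# Fleet of 4 at 80% average CPU with a 50% target -> scale out.
print(desired_capacity(4, metric=80.0, target=50.0))  # 7
# Fleet of 4 at 20% average CPU -> scale in.
print(desired_capacity(4, metric=20.0, target=50.0))  # 2
```

The clamp is what keeps a traffic spike from provisioning (and billing for) an unbounded number of instances.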
The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora replicas. (See https://aws.amazon.com/autoscaling/)

AWS Elastic Load Balancing
The Elastic Load Balancer is a service provided by Amazon in which incoming traffic is efficiently and automatically distributed across a group of backend servers in a manner that increases speed and performance. It helps to improve the scalability of your application and secures your applications. The load balancer allows you to configure health checks for the registered targets. If any registered target (e.g. in an Auto Scaling group) fails the health check, the load balancer will not route traffic to that unhealthy target, thereby ensuring your application is highly available and fault tolerant. (See https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)

AWS – Storage Services
Amazon S3 Bucket
AWS provides a collection of services for data storage and information management, e.g. Amazon Simple Storage Service (S3). Buckets represent virtual containers that store objects; objects represent the content that is actually stored. S3 has been designed to provide a storage service that is accessible through a Representational State Transfer (REST) interface. All operations on the storage are performed in the form of HTTP requests. Buckets, objects, and attached metadata are identified by uniform resource identifiers (URIs). A bucket provides users with a flat store to which they can add objects. A bucket is located in a specific geographic location, and users can select the location at which to create buckets.
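Since every S3 operation is an HTTP request against a URI, the bucket/key model maps directly onto URLs. The sketch below builds the virtual-hosted-style URL for an object; the bucket name, region, and key are made-up examples.

```python
from urllib.parse import quote

def s3_object_url(bucket: str, region: str, key: str) -> str:
    """Virtual-hosted-style S3 URL: the bucket is part of the host name,
    the object key is the path. Given permissions, a GET on this URL
    reads the object, a PUT writes it, and a DELETE removes it."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

url = s3_object_url("csi3001-demo-bucket", "ap-south-1", "notes/module 4.pdf")
print(url)
```

Note how the key is percent-encoded: keys may contain spaces and slashes, and the flat namespace means the "folder" in the key is just part of the name.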
An object is identified by a name (key) that must be unique within the bucket in which the content is stored.

The Amazon S3 bucket is the fundamental storage container in the AWS S3 service. It provides a secure and scalable repository for storing objects such as text data, images, audio, and video files in the AWS cloud. Each S3 bucket name must be globally unique and can be configured with an ACL (access control list). Each bucket has its own set of policies and configurations, which gives users more control over their data. A bucket can be thought of as a parent folder for data. There is a limit of 100 buckets per AWS account, but it can be increased on request through AWS Support.

Objects are the fundamental entities stored in AWS S3. You can store as many objects as you want; the maximum size of a single S3 object is 5 TB. An object consists of the following: key, version ID, value, metadata, sub-resources, access control information, and tags.

Amazon S3 Versioning and Access Control
Versioning means always keeping a record of previously uploaded files in S3. Points to note about versioning:
- Versioning is not enabled by default. Once enabled, it applies to all objects in a bucket.
- Versioning keeps all copies of your file, so it adds cost for storing multiple copies of your data. For example, 10 copies of a 1 GB file will be charged as 10 GB of S3 space.
- Versioning is helpful to prevent unintended overwrites and deletions.
- Objects with the same key can be stored in a bucket if versioning is enabled, since each copy has a unique version ID.
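A toy model of the versioning semantics above (names and version-ID format are illustrative only): each key maps to a list of versions, a new upload appends rather than overwrites, and a plain read returns the latest version.

```python
import itertools

class VersionedBucket:
    """Toy model of an S3 bucket with versioning enabled."""
    _ids = itertools.count(1)

    def __init__(self):
        self._store = {}  # key -> list of (version_id, value)

    def put(self, key, value):
        vid = f"v{next(self._ids)}"
        self._store.setdefault(key, []).append((vid, value))
        return vid

    def get(self, key, version_id=None):
        versions = self._store[key]
        if version_id is None:             # plain GET -> latest version
            return versions[-1][1]
        return dict(versions)[version_id]  # GET of a specific version

    def storage_used(self):
        # Every version is billed: 10 copies of a 1 GB file cost 10 GB.
        return sum(len(vs) for vs in self._store.values())

b = VersionedBucket()
b.put("report.txt", "draft")
b.put("report.txt", "final")   # same key: the old copy is kept
print(b.get("report.txt"))     # final
print(b.storage_used())        # 2
```

The `storage_used` count is what makes the billing point concrete: the "overwritten" draft still occupies (and is charged as) storage until its version is explicitly deleted.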
Access control lists (ACLs): a document for controlling access to S3 buckets from outside your AWS account. An ACL is specific to each bucket. You can use S3 Object Ownership, an Amazon S3 bucket-level feature, to manage who owns the objects you upload to your bucket and to enable or disable ACLs.

How S3 Works
Amazon S3 stores data as objects within buckets. An object is a file plus any metadata that describes the file; a bucket is a container for objects. To store your data in Amazon S3, you first create a bucket, specifying a bucket name and AWS Region. Then you upload your data to that bucket as objects. Each object has a key (or key name), which is the unique identifier for the object within the bucket.

Amazon Elastic Block Store (EBS)
Amazon Elastic Block Store (EBS) allows AWS users to provide EC2 instances with persistent storage in the form of volumes that can be mounted at instance startup. It is a block storage service that provides storage volumes for Amazon EC2 instances. You can use Amazon EBS volumes to store files and install applications, similar to a local hard drive. EBS is a block-type, durable, and persistent storage that can be attached to EC2 instances for additional storage. Unlike EC2 instance-store volumes, which are suitable for holding temporary data, EBS volumes are highly suitable for essential and long-term data. EBS volumes are specific to availability zones and can only be attached to instances within the same availability zone.
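The availability-zone constraint can be expressed as a simple precondition check. The instance and volume records below are hypothetical; this only illustrates the rule, not the EC2 API.

```python
def attach_volume(instance: dict, volume: dict) -> str:
    """EBS volumes are zonal: attaching is only valid when the volume
    lives in the same availability zone as the instance."""
    if volume["az"] != instance["az"]:
        raise ValueError(
            f"volume in {volume['az']} cannot attach to "
            f"instance in {instance['az']}"
        )
    instance.setdefault("volumes", []).append(volume["id"])
    return f"{volume['id']} attached to {instance['id']}"

web = {"id": "i-web01", "az": "ap-south-1a"}
data = {"id": "vol-data01", "az": "ap-south-1a"}
print(attach_volume(web, data))   # same zone: succeeds

try:
    attach_volume(web, {"id": "vol-far", "az": "ap-south-1b"})
except ValueError as e:
    print("rejected:", e)         # different zone: refused
```

This is why a recovery plan that moves a workload to another availability zone must copy the volume (e.g. via a snapshot) rather than re-attach it.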
Amazon ElastiCache
Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. ElastiCache is an implementation of an elastic in-memory cache based on a cluster of EC2 instances. It is a popular choice for use cases that require frequently accessed data to be in memory, such as e-commerce, mobile apps, gaming, and ad tech. It provides fast data access from other EC2 instances through a Memcached-compatible protocol.

Preconfigured EC2 AMIs
Preconfigured EC2 AMIs are predefined templates featuring an installation of a given database management system. Available AMIs include installations of IBM DB2, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Sybase, and Vertica.

Amazon Relational Database Service (RDS)
RDS is a relational database service that relies on the EC2 infrastructure and is managed by Amazon. It handles high availability, failover strategies, and keeping the servers up to date with patches. Amazon RDS is a managed database service that helps users set up, operate, and scale relational databases in the AWS cloud.

Amazon SimpleDB
Amazon SimpleDB is a highly available NoSQL data store that offloads the work of database administration. Developers simply store and query data items via web service requests, and Amazon SimpleDB does the rest. It is a lightweight, highly scalable, and flexible data storage solution that supports semi-structured data organized as domains, items, and attributes. SimpleDB uses domains as the top-level elements that organize a data store.
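The domain/item/attribute model can be mimicked with nested dictionaries. This is only an illustration of the data model (with made-up data), not the SimpleDB API.

```python
# A domain is roughly a table; an item is a named row; attributes are
# free-form key/value pairs. Semi-structured means items in the same
# domain need not share the same attributes.
domains = {
    "students": {                                      # domain
        "item-001": {"name": "Asha", "year": "3"},     # item + attributes
        "item-002": {"name": "Ravi", "club": "robotics"},
    }
}

def query(domain: str, attr: str, value: str):
    """Return the ids of items whose attribute matches the value."""
    return [item for item, attrs in domains[domain].items()
            if attrs.get(attr) == value]

print(query("students", "name", "Ravi"))  # ['item-002']
```

Note that `item-002` has no "year" attribute at all; a relational table would force a NULL there, which is the practical difference the semi-structured model buys.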
Each domain can grow up to 10 GB of data, and by default a single user can allocate a maximum of 250 domains. Domains are roughly comparable to tables in the relational model. Clients can create, delete, modify, and make snapshots of domains, and can insert, modify, delete, and query items and attributes.

AWS – Communication Services
Amazon provides facilities to structure and facilitate communication among existing applications and services residing within the AWS infrastructure. These facilities can be organized into two major categories: virtual networking and messaging.

Virtual networking
Virtual networking is a collection of services that allow AWS users to control the connectivity to and between compute and storage services, for example Amazon Virtual Private Cloud (VPC), AWS Direct Connect, and Route 53. Amazon VPC provides a great degree of flexibility in creating virtual private networks within the Amazon infrastructure. AWS Direct Connect allows AWS users to create dedicated network connections between the user's private network and Direct Connect locations, called ports. Amazon Route 53 implements dynamic DNS services that allow AWS resources to be reached through domain names different from the amazon.com domain, and lets you create and manage your public DNS records.

Messaging
The three messaging services offered are Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), and Amazon Simple Email Service (SES).
- SQS: exchanging messages between applications by means of message queues.
- SNS: a publish-subscribe method for connecting heterogeneous applications.
- SES: provides AWS users with a scalable email service that leverages the AWS infrastructure.

Elastic IP addresses
An Elastic IP address is a static public IPv4 address associated with your AWS account in a specific Region. Unlike an auto-assigned public IP address, an Elastic IP address is preserved after you stop and start your instance in a virtual private cloud (VPC). For example, if you have an EC2 instance with an Elastic IP address and that instance is stopped or terminated, you can remap the address and re-associate it with another instance in your account. There is a default limit of 5 Elastic IP addresses per Region per AWS account. Elastic IP addresses are most commonly used with fault-tolerant instances, and they increase availability.

Basic structure of AWS EC2
EC2 allows users to use virtual machines of different configurations as per their requirements, with various configuration options, mapping to individual servers, various pricing options, and so on. AWS provides the Elastic Load Balancing service: it distributes traffic to EC2 instances across multiple available sources, with dynamic addition and removal of Amazon EC2 hosts from the load-balancing rotation. Amazon EC2 provides a feature called security groups, which is similar to an inbound network firewall: you specify the protocols, ports, and source IP ranges that are allowed to reach your EC2 instances. Each EC2 instance can be assigned one or more security groups, each of which routes the appropriate traffic to the instance. Security groups can be configured using specific subnets or IP addresses, which limits access to EC2 instances.
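Conceptually, a security-group rule is a (protocol, port range, source network) triple, and inbound traffic is allowed only if some rule matches. A minimal sketch of that check, with made-up rules:

```python
import ipaddress

# Inbound rules: protocol, port range, allowed source network.
RULES = [
    ("tcp", 80, 80, "0.0.0.0/0"),        # HTTP from anywhere
    ("tcp", 22, 22, "203.0.113.0/24"),   # SSH only from one subnet
]

def allowed(proto: str, port: int, src_ip: str) -> bool:
    """Security groups are allow-lists: deny unless some rule matches."""
    ip = ipaddress.ip_address(src_ip)
    return any(
        proto == r_proto and lo <= port <= hi
        and ip in ipaddress.ip_network(cidr)
        for r_proto, lo, hi, cidr in RULES
    )

print(allowed("tcp", 80, "198.51.100.7"))   # True: HTTP open to all
print(allowed("tcp", 22, "198.51.100.7"))   # False: SSH restricted
print(allowed("tcp", 22, "203.0.113.9"))    # True: inside allowed range
```

The allow-list default-deny behaviour is the key property: traffic that no rule mentions is dropped, so a freshly created instance exposes nothing you did not explicitly open.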
Amazon RDS allows users to install the RDBMS (relational database management system) of their choice, such as MySQL, Oracle, SQL Server, or DB2, on an EC2 instance and manage it as required. Amazon EC2 uses Amazon EBS (Elastic Block Store), which is similar to network-attached storage. All data and logs of applications running on EC2 instances should be placed on Amazon EBS volumes, which remain available even if the database host fails. Amazon EBS volumes automatically provide redundancy within the availability zone, which increases their availability compared to simple disks. The AWS cloud provides various options for storing, accessing, and backing up web application data and assets. Amazon S3 (Simple Storage Service) provides a simple web-services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. Amazon S3 stores data as objects within resources called buckets. Users can store as many objects as required within a bucket, and can read, write, and delete objects in the bucket. As noted above, an Elastic IP address can be remapped and re-associated with another instance when an instance is stopped or terminated, which helps build fault-tolerant software and increases availability.

Microsoft Azure
Microsoft Windows Azure is a cloud operating system built on top of Microsoft datacenters' infrastructure that provides developers with a collection of services for building applications with cloud technology.
Services range from compute, storage, and networking to application connectivity, access control, and business intelligence. The Windows Azure platform is made up of a foundation layer and a set of developer services that can be used to build scalable applications. These services are compute, storage, networking, and identity management, which are tied together by middleware called AppFabric.

Figure: Microsoft Windows Azure Platform Architecture

Compute Services
Azure compute services are the hosting services responsible for hosting and running application workloads:
- Azure Virtual Machines (VMs): on-demand, scalable computing resources.
- Azure Container Service: the fastest and simplest way to run a container in Azure, without having to provision any virtual machines.
- Azure App Services: a managed service for hosting web apps, mobile app back ends, RESTful APIs, or automated business processes.
- Azure Batch: a managed service for running large-scale parallel and high-performance computing (HPC) applications.
- Azure Service Fabric: a distributed systems platform that can run in many environments, including Azure or on premises.

A role is a runtime environment that is customized for a specific compute task. Roles are managed by the Azure operating system and instantiated on demand in order to address surges in application demand. Currently, there are two different roles:
1. Web role: automatically deploys and hosts your app through IIS (Internet Information Services).
2. Worker role: does not use IIS, and runs your app standalone.
For example, a simple application might use just a single web role, serving a website. A more complex application might use a web role to handle incoming requests from users and then pass those requests on to a worker role for processing. An application can use both web and worker roles at the same time in the same Azure instance.
For example, a web role can collect requests from end users and then pass them to a worker role for processing. The web role is designed to implement scalable web applications. Web roles represent the units of deployment of web applications within the Azure infrastructure; they are hosted on the IIS web server (Internet Information Services). When Azure detects peak loads, it instantiates multiple web roles for that application. Worker roles are designed to host general compute services on Azure. They can be used to quickly provide compute power or to host services that do not communicate with the external world through HTTP.

Storage Services
Compute resources are equipped with local storage in the form of a directory on the local file system that can be used to temporarily store information. Azure provides different types of storage solutions that complement compute services with a more durable and redundant option compared to local storage.
1. Blobs: Azure allows storing large amounts of data in the form of binary large objects. There are two types: block blobs and page blobs. Block blobs are composed of blocks and are optimized for sequential access. Page blobs are made of pages that are identified by an offset from the beginning of the blob; a page blob can be split into multiple pages or constituted of a single page.
2. Azure Drive: Page blobs can be used to store an entire file system in the form of a single Virtual Hard Drive (VHD) file. This can then be mounted as part of the NTFS file system by Azure compute resources, thus providing persistent and durable storage. NTFS (NT File System, or New Technology File System) is the file system that the Windows NT operating system uses for storing and retrieving files on hard disk drives (HDDs) and solid-state drives (SSDs). A page blob mounted as part of an NTFS tree is called an Azure Drive.
3. Queue: allows applications to communicate by exchanging messages through durable queues. Applications enter messages into a queue, and other applications can read them in a first-in, first-out (FIFO) style.
4. Table: a semi-structured storage solution, allowing users to store information in the form of entities with a collection of properties.

Core infrastructure: AppFabric
AppFabric is a comprehensive middleware for developing, deploying, and managing applications on the cloud, or for integrating existing applications with cloud services. It includes:
1. Access control: the ability to secure components of the application and define access-control policies for users and groups.
2. Service Bus: the messaging and connectivity infrastructure for building distributed and disconnected applications in the Azure cloud.
3. Azure Cache: a service that allows developers to quickly access data persisted on Windows Azure storage.

Google App Engine
Google Cloud Platform
Google Cloud is a suite of public cloud computing services offered by Google. Google Cloud Platform enables developers to build, test, and deploy applications on Google's highly scalable and reliable infrastructure, choosing from computing, storage, and application services for web, mobile, and backend solutions. Google Cloud Platform is a set of modular cloud-based services that allow you to create anything from simple websites to complex applications.

Google Compute – Google App Engine
Google App Engine is a PaaS implementation that provides services for developing and hosting scalable web applications. It is a distributed and scalable runtime environment that leverages Google's distributed infrastructure to scale out applications facing a large number of requests by allocating more computing resources to them and balancing the load among them.

Why App Engine?
- Lower total cost of ownership
- Rich set of APIs
- Fully featured SDK for local development
- Ease of deployment

Google App Engine (GAE) features:
- Popular languages: users can build the application using language runtimes such as Java, Python, C#, Ruby, and PHP.
- Open and flexible: custom runtimes allow users to bring any library and framework to App Engine by supplying a Docker container.
- Powerful application diagnostics: App Engine uses Cloud Monitoring and Cloud Logging to monitor the health and performance of the app, and Cloud Debugger and Error Reporting to diagnose and fix bugs quickly.
- Application versioning: it easily hosts different versions of the app, and can create development, test, staging, and production environments.

App Engine components:
- Frontend: receives incoming requests from users and routes them to the appropriate service or application.
- Load balancer: distributes incoming traffic among multiple instances of a service or application.
- Memcache: provides in-memory caching for the application, to improve performance.
- Task queue: provides a way for the application to queue background tasks for execution.
- Cloud SQL: provides a way for the application to use a MySQL or PostgreSQL database for storage.

Services provided by App Engine include:
- Platform as a Service (PaaS) to build and deploy scalable applications
- Hosting in fully managed data centers
- A fully managed, flexible environment for managing application servers and infrastructure
- Support for popular development languages and developer tools

The building blocks of Google's cloud computing applications include the Google File System, the MapReduce programming framework, and Bigtable. With these building blocks, Google has built many cloud applications.
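To make the MapReduce building block concrete, here is a minimal in-process word count written in the map/shuffle/reduce shape. The real framework distributes these same phases across many machines; this sketch only illustrates the programming model.

```python
from collections import defaultdict

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values; here, sum the counts."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["cloud compute cloud", "compute storage"]
pairs = [p for d in docs for p in map_phase(d)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'cloud': 2, 'compute': 2, 'storage': 1}
```

Because map runs independently per document and reduce independently per key, each phase parallelizes naturally, which is what lets the pattern scale out across a cluster.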
Google Compute -Google App Engine The platform is logically divided into four major components: 1. infrastructure, 2. the run- time environment, 3. the underlying storage and 4. the set of scalable services that can be used to develop applications. Architecture and coreconcepts 1. Infrastructure Figure: Google AppEngine platform architecture Architecture and coreconcepts 1. Infrastructure Architecture and coreconcepts 1. Infrastructure Architecture and coreconcepts 2.Runtime environment Sandboxing ✓To provide the application environment with an isolated and protected context in which it can execute without causing a threat to the server and without being influenced by other applications. ✓Sandboxing is achieved by means of modified runtimes for applications that disable some of the common features normally available with their default implementations. ✓If an application tries to perform any operation that is considered potentially harmful, an exception is thrown and the execution is interrupted. Supported runtimes AppEngine applications are developed using three different languages and related technologies:Java,Python, Go etc. Supports JSP, Java Servlet Support for Python is provided by an optimized Python 2.5.2 interpreter. Go runtime environment allows applications developed with the Go programming language to be hosted and executed inAppEngine 3.Storage ✓AppEngine provides various types of storage, which operate differently depending on the volatility of thedata. ✓Web applications are composed of dynamic and static data ✓Static Web servers are optimized for serving static content, and users can specify how dynamic content should be served when uploading their applications toAppEngine. There are three different levels of storage: in memory-cache, storage for semi-structured data, and long-term storage for static data 4.Application Services Extensible Messaging and Presence Protocol 4.Application Services i. 
UrlFetch
UrlFetch is a service that allows a GAE application to make HTTP requests to external resources, such as other web services or APIs.
Mail and instant messaging
Mail: GAE provides an email service that allows an application to send and receive email messages.
Instant messaging: Google App Engine does not provide a built-in instant messaging service, but applications can integrate with third-party messaging services such as Firebase Cloud Messaging (FCM) or other chat APIs.
Account management
Account management in the context of GAE typically refers to user authentication and authorization services. Google Cloud Identity Platform is often used for these purposes.
Image manipulation
Image manipulation in GAE involves processing and modifying images within the application.
Compute Services
Task queues
- Task Queues allow applications to submit a task for later execution.
- They are useful for long computations that cannot be completed within the maximum response time.
- Users can have up to 10 queues that can execute tasks.
Cron jobs
The required operation can be scheduled at the desired time by using the Cron Jobs service.
Application Lifecycle
AppEngine provides support for almost all the phases characterizing the life cycle of an application: testing and development, deployment, and monitoring.
Cost model
AppEngine provides a free service with limited quotas that get reset every 24 hours. An application is measured against billable quotas, fixed quotas, and per-minute quotas.
Google Compute - Google App Engine (GAE): Pros and Cons
Pros:
- Very economical for low-traffic apps.
- Auto-scaling is fast.
- Version management and traffic splitting are fast and convenient.
- Minimal management; developers need to focus only on their app.
- Access to Datastore is fast.
- Access to Memcache is supported.
- The App Engine sandbox is very secure, compared with development on GCE or other virtual machines.
Cons:
- Generally more constrained. Although this is good for rapid autoscaling, many apps can benefit from larger instances, such as GCE instance sizes up to 96 cores.
- Networking is not integrated; you cannot put App Engine behind a Google Cloud Load Balancer.
- Limited to supported runtimes: Python 2.7, Java 7 and 8, Go 1.6-1.9, and PHP 5.5. In Java, there is partial support for Servlets but not the full J2EE standard.
Sales Force
Salesforce is one of the leading CRM platforms, providing various customized services to its customers, partners, and employees. It also provides a platform to build custom apps, pages, components, etc.
Although Salesforce started as a Software as a Service (SaaS) company, it has grown into a Platform as a Service (PaaS) company.
Salesforce services allow businesses to use cloud technology to better connect with partners, customers, and potential customers. Using the Salesforce CRM, companies can track customer activity, market to customers, and access many more services.
Sales Force CRM
In the multilayer Salesforce architecture, the users are at the topmost layer. The layer below the user layer contains the various clouds offered by Salesforce, such as Sales Cloud, Service Cloud, AppExchange, etc. The third layer is the Salesforce1 App, which allows the user to access Salesforce on a mobile device. The last layer contains various other Salesforce platforms, such as Force.com, Heroku, ExactTarget Fuel, etc.
Sales cloud
Sales Cloud is one of the core Salesforce products designed for the management, automation, and analysis of sales processes. Both sales reps and sales managers can use Salesforce Sales Cloud functionality to complete tasks of different priorities with high efficiency.
Lead and Contact Management: Efficiently track and manage customer information and interactions.
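The lead-and-contact tracking just described reduces, at its core, to a simple data model: contacts plus a logged history of interactions. A toy illustration in Python follows; the class names are invented for the sketch and are not Salesforce objects.

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    """A customer record with a running log of interactions
    (calls, emails, meetings) - the heart of contact management."""
    name: str
    company: str
    interactions: list = field(default_factory=list)

    def log(self, kind, note):
        self.interactions.append((kind, note))

class SalesPipeline:
    """Tiny stand-in for Sales Cloud-style contact tracking."""

    def __init__(self):
        self.contacts = {}

    def add(self, contact):
        self.contacts[contact.name] = contact

    def history(self, name):
        # Everything ever logged against this contact, in order.
        return self.contacts[name].interactions

pipeline = SalesPipeline()
alice = Contact("Alice", "Acme Corp")
pipeline.add(alice)
alice.log("email", "Sent pricing sheet")
alice.log("call", "Discussed renewal")
print(pipeline.history("Alice"))
```

What Sales Cloud adds on top of this skeleton is the shared, multitenant persistence, reporting, and automation around the same basic record-plus-history model.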
Sales reps manage contacts and accounts, tracking all interactions and customer information.
Service cloud
Service Cloud is a Salesforce product designed to help businesses manage and improve their customer service operations.
One of the key strengths of Salesforce Service Cloud is its flexibility and customizability.
In addition, Salesforce Service Cloud integrates seamlessly with other Salesforce products, such as Salesforce Sales Cloud, Salesforce Marketing Cloud, and Salesforce Commerce Cloud, to provide a comprehensive customer engagement solution.
ExactTarget Marketing Cloud
Salesforce's ExactTarget Marketing Cloud (now simply known as Marketing Cloud) automates a company's marketing activities.
It is a powerful suite of tools that combines the capabilities of multiple social media and digital marketing providers.
Marketing Cloud features several useful marketing tools, including:
Email Studio – to create personalized, targeted emails
Data Studio – to help monetize customer data
Social Studio – to connect social data with marketing activities
Advertising Studio – to engage with customers via social media, customer service, and other channels
Mobile Studio – to manage push notifications and SMS messages
Interaction Studio – to connect online and offline touchpoints
Terminologies used in Salesforce Architecture
App: An app in the Salesforce architecture is a visual container that groups related components, such as tabs and objects.
Instance: An instance is the software configuration that appears in front of the user on logging in to the Salesforce system. It identifies the server of the particular Salesforce organization on which the org runs.
Superpod: A superpod is a set of infrastructure frameworks and load balancers supporting a group of pods. A "pod" in Salesforce terminology is essentially a cluster of servers that handles the data storage and processing for specific Salesforce customers.
Org: An org, or organization, is a particular customer of a Salesforce application.
When a new user starts a trial on salesforce.com or developer.force.com, a new org is generated in the system.
Sandbox: A sandbox is a copy of the production instance. The sandbox allows the developer to test various conditions during development so as to meet the client's expectations for the application.
Architecture of Salesforce
1. Multi-Tenant Layer
Salesforce architecture is so popular because of its multitenancy. Multitenant architecture means one common application for multiple groups or clients. In such an architecture, multiple clients use the same server, but their orgs are isolated from each other: the data of one client is secure and isolated from other groups or clients.
Because of multitenancy, any developer can develop an application, upload it to the cloud, and easily share it with multiple clients or groups. Multiple users share the same server and applications, hence it is very cost-effective. In Salesforce, because of this multitenant architecture, all customers' data is saved in a single database.
Multitenant Architecture
The multitenant architecture is much more efficient than single-tenant architecture. Some differences between the two architectures are given below:
The development cost is much higher in single-tenant architecture than in multitenant because, in single-tenant, each user owns a dedicated copy of the application and also bears its maintenance cost.
To make any update in the application, the developer needs to do it for each client manually, whereas in multitenant, the developer needs to do it in one place and each client automatically receives the updated version.
2. Metadata
The Salesforce platform follows the metadata-driven development model. Metadata means data about the data. Salesforce stores the metadata in the shared database along with the data itself: it stores both the data and the description of what the data does.
3. API Services
The Salesforce metadata-driven model allows developers to create their applications easily with the help of various tools. But sometimes developers need more functionality for their apps and must make modifications. To support such modifications, Salesforce provides a powerful suite of APIs.
These APIs help developers customize the Salesforce mobile application, and they allow the various bits of programming to interface with each other and exchange data. Without knowing many details, we can connect our apps with other apps.
The APIs provide a simple, powerful, and open way to programmatically access the data and any app running on the Salesforce platform. They help developers access apps from any location, using any programming language that supports Web services, such as Java, PHP, C#, or .NET.
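The per-org isolation described in the Multi-Tenant Layer section above can be sketched as a store that scopes every read and write by an org ID. This is a hypothetical illustration in Python, not Salesforce's actual storage layer.

```python
class MultiTenantStore:
    """All tenants share one underlying dict (one 'database'),
    but every operation is scoped to an org_id, so one client's
    data is invisible to all others."""

    def __init__(self):
        self._db = {}  # (org_id, key) -> value: the shared physical store

    def put(self, org_id, key, value):
        self._db[(org_id, key)] = value

    def get(self, org_id, key):
        # A tenant can only address rows tagged with its own org_id.
        return self._db.get((org_id, key))

store = MultiTenantStore()
store.put("org_acme", "account", {"plan": "enterprise"})
store.put("org_beta", "account", {"plan": "trial"})
print(store.get("org_acme", "account"))  # {'plan': 'enterprise'}
print(store.get("org_beta", "account"))  # {'plan': 'trial'}
print(store.get("org_beta", "secret"))   # None - no cross-tenant access
```

Note the cost argument from the text: one store, one upgrade path, yet each org behaves as if it had a private database.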
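Programmatic access of the kind described above typically goes through Salesforce's REST API, where a SOQL query is sent to the org's `/services/data/<version>/query` endpoint with an OAuth bearer token. The sketch below only constructs the request and never sends it; the instance URL, token, and API version are placeholders, not real credentials.

```python
import urllib.parse
import urllib.request

def build_soql_request(instance_url, access_token, soql):
    """Construct (but do not send) a Salesforce REST API query request.
    instance_url and access_token are hypothetical placeholders here."""
    query = urllib.parse.urlencode({"q": soql})
    url = f"{instance_url}/services/data/v58.0/query?{query}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {access_token}",  # OAuth bearer token
            "Content-Type": "application/json",
        },
    )

req = build_soql_request(
    "https://example.my.salesforce.com",  # placeholder org instance
    "FAKE_TOKEN",                         # placeholder token
    "SELECT Name FROM Account LIMIT 5",   # a simple SOQL query
)
print(req.full_url)
print(req.get_header("Authorization"))
```

In a real client the request would be executed with `urllib.request.urlopen` (or any HTTP library) and the JSON response parsed; this is what lets Java, PHP, C#, or .NET programs interoperate with the platform equally well, since only HTTP and JSON are required.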