Summary

This document provides an overview of serverless architecture, focusing on AWS Lambda. It explains the concepts behind serverless computing and describes AWS Lambda as a key serverless service.

Full Transcript

Serverless (AWS Lambda)

Learning Objectives
Understand what serverless is and how AWS Lambda works. Know the configurations, limits, and best practices for AWS Lambda.

What is Serverless?
Serverless is a cloud development paradigm (model) that allows developers to build and run applications without having to manage servers. For example, you may need to publish just a single function, such as validation or two-factor authentication; provisioning an entire server for that is not worthwhile. Serverless doesn't mean there are no servers; there are, but developers don't need to worry about them. It is very close to FaaS (Function as a Service), but serverless is a complete architectural pattern that makes use of FaaS. FaaS is a specific type of service, such as AWS Lambda, Google Cloud Functions, and Azure Functions, that enables developers to deploy functions.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and you only pay for what you use. Lambda is an ideal compute service for application scenarios that need to scale up rapidly and scale down to zero when not in demand. For example, you can use Lambda for file processing: use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload.

Attributes of Serverless
Small, discrete units of code. Services written using serverless architecture are often comprised of a single function.
Event-based execution. The infrastructure needed to run a function doesn't exist until the function is triggered. Once an event is received, an ephemeral compute environment is created to execute that request. The environment may be destroyed immediately or, more commonly, stays active for a short period of time, commonly 5 minutes.
Scale to zero.
Once a function stops receiving requests, the infrastructure is taken down and completely stops running. This saves on cost, since the infrastructure only runs when there is usage; if there's no usage, the environment scales down to zero.

Ephemeral computing is the practice of creating a virtual computing environment as a need arises and then destroying that environment when the need is met and the resources are no longer in demand. You pay only for what's used, when it's used. Ephemeral computing, also known as transient computing, refers to the use of temporary, short-lived computing resources that exist only for the duration of a specific task or session. These resources are then discarded or recycled, leaving no trace of the user's data or activity.

Ephemeral environments improve the efficiency of your SDLC by giving your teams access to features as they are being built. This greatly reduces bugs, which are caught through early E2E testing and manual testing by product teams before merge, resulting in faster, more frequent deployments. An ephemeral environment is a lightweight, short-lived, fully functional instance of your UAT or production environment. You can use it for testing, validation, and gathering feedback on your product features and bugs. An ephemeral environment is also known as an "on-demand", "preview", or "dynamic" environment.

Attributes of Serverless
Scale to infinity. The FaaS provider takes care of monitoring load and creating additional instances when needed, in theory up to infinity (think of traffic at the scale of Facebook). This virtually eliminates the need for developers to think about scale as they design applications. A single deployed function can handle one or one billion requests without any change to the code.
Use of managed services. Serverless architectures often make use of cloud-provided services for the elements of an application that are non-differentiated heavy lifting, such as file storage, databases, queueing, etc.
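The event-driven handler model described above can be sketched in a few lines. This is a minimal illustration in Python; the event shape and greeting logic are made up, but the function follows the standard Lambda handler signature.

```python
import json

# Minimal sketch of a Lambda-style handler. Lambda calls this function
# once per event; the execution environment is created on demand, may be
# reused for subsequent events, and eventually scales back to zero.
def lambda_handler(event, context):
    # 'event' is the trigger payload; 'context' holds runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Outside AWS, the handler can be exercised locally with a fake event:
response = lambda_handler({"name": "serverless"}, None)
print(response["statusCode"])  # 200
```

Keeping the handler a plain function like this also makes it trivial to unit-test without any AWS infrastructure.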
Non-differentiated heavy lifting is everything that an application needs to do but that doesn't increase its competitive advantage in the eyes of its customers. It can include extremely difficult and crucial things, like managing servers for a scaling application or authentication security. For example, Google's Firebase is popular in the serverless community as a database and state-management service that connects to other Google services like Cloud Functions. Firebase provides detailed documentation and cross-platform app development SDKs to help you build and ship apps for iOS, Android, the web, Flutter, Unity, and C++. Firebase differs from AWS in that many of its services are free, such as user authentication and the ability to enable push notifications. In building real-time applications, Firebase is faster and cheaper than AWS: it updates in real time without much oversight on your part. Top Firebase Realtime Database alternatives include SQL Server, MongoDB Atlas, Oracle Database, Teradata Vantage, Amazon Redshift, SAP HANA, Snowflake Data Cloud, and Db2.

Why use Serverless (AWS Lambda)?
Amazon EC2:
- Virtual servers in the cloud
- Limited by RAM and CPU
- Continuously running
AWS Lambda:
- Virtual functions: no servers to manage!
- Limited by time: short executions
- Run on demand
- Scaling is automated!

Benefits of AWS Lambda
- Easy pricing: pay per request and compute time, with a free tier of 1M AWS Lambda requests per month and 400,000 GB-seconds of compute time per month.
- Integrated with all cloud services.
- Integrated with most programming languages.
- Easy monitoring through AWS CloudWatch.
- Increasing RAM will also improve CPU and network.

The AWS Free Tier
The AWS Free Tier provides customers the ability to explore and try out AWS services free of charge, up to specified limits for each service. The Free Tier is comprised of three different types of offerings: a 12-month Free Tier, an Always Free offer, and short-term trials.
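As a sanity check on the free-tier figures above, the cost of exactly the free-tier allowance can be computed from the long-standing public Lambda rates (about $0.20 per million requests and $0.0000166667 per GB-second; these rates are an assumption here and vary by region and over time).

```python
# Rough Lambda cost model. The rates are the classic public figures and
# are assumptions for illustration; check current regional pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD
PRICE_PER_GB_SECOND = 0.0000166667  # USD

def monthly_cost(requests, gb_seconds):
    """Monthly cost in USD, ignoring the free tier."""
    return (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS \
        + gb_seconds * PRICE_PER_GB_SECOND

# The free-tier allowance itself: 1M requests and 400,000 GB-seconds.
print(round(monthly_cost(1_000_000, 400_000), 2))  # 6.87
```

This matches the transcript's claim that the free-tier allowance would cost "just under $7 a month" without the free tier.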
Unlike the EC2 free tier, which expires 12 months after account signup, you can use the benefits of the Lambda free tier indefinitely. The free tier for Lambda includes 1 million requests per month and 400,000 GB-seconds of compute per month. Without the free tier, this would cost you just under $7 a month.

CloudWatch
CloudWatch enables you to monitor your complete stack (applications, infrastructure, network, and services) and use alarms, logs, and event data to take automated actions and reduce mean time to resolution (MTTR). You can get started with Amazon CloudWatch for free. Most AWS services (EC2, S3, Kinesis, etc.) automatically send metrics to CloudWatch for free, and many applications should be able to operate within these free-tier limits. CloudWatch monitors your AWS resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.

Kinesis
Amazon Kinesis Data Analytics is used to transform and analyze streaming data in real time, leveraging the open-source framework and engine of Apache Flink. It is designed to reduce the complexity of building, managing, and integrating Flink applications with other AWS services. Amazon Kinesis Data Streams is a serverless streaming data service that simplifies the capture, processing, and storage of data streams at any scale. Apache Kafka and Amazon Kinesis are both technologies that can help organizations manage real-time data streams, but they are quite different; for one, Kinesis is an AWS managed service, whereas Kafka can be installed anywhere.

What is Apache Flink vs. Kafka? Kafka is a distributed event store, or a buffer, while Flink is a stream processing framework that can act on a buffer or any other data source. On that note, Flink can be an upstream or downstream application to Kafka in architectures where both are present.
When Kafka is upstream, Flink processes the data present in Kafka. (Reference: Monitor and trigger alerts using Amazon CloudWatch for Amazon Connect | AWS Contact Center.)

Example: Serverless Thumbnail Creation
A new image is pushed to S3, which triggers an AWS Lambda function. The function creates a thumbnail, pushes it to another S3 bucket, and stores metadata (image name, image size, creation date, etc.) in DynamoDB.

A thumbnail is a small and approximate version of a full-size image or brochure layout, used as a preliminary design step. It is also referred to as a preview image: a smaller version of the original image, usually intended to make it easier and faster to look at or manage a group of larger images. Graphic designers and photographers typically use this term. A thumbnail service is a web container that takes big pictures and creates thumbnails out of them. In the Google Cloud version of this pattern, as a picture is uploaded to Cloud Storage, a notification is sent via Cloud Pub/Sub to a Cloud Run web container, which then resizes the image and saves it back to another bucket. In the AWS version ("Using an Amazon S3 trigger to create thumbnail images"), when you add an image file to your bucket, Amazon S3 invokes your Lambda function; the function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

AWS Lambda Configuration
- Timeout: default 3 seconds, max of 15 mins
- Environment variables supported (ex: env=QA/Prod)
- Allocated memory (128 MB to 3 GB)
- Ability to deploy within a VPC and assign security groups
- An IAM execution role must be attached to the Lambda function

AWS Lambda Concurrency and Throttling
In Lambda, concurrency is the number of requests your function can handle at the same time. There are two types of concurrency controls available:
Reserved concurrency: guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency.
There is no charge for configuring reserved concurrency for a function.
Provisioned concurrency: initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.

Concurrency is limited to 1,000 concurrent executions by default (this can be increased through a support ticket). Each unit of concurrency is consumed individually, one invocation at a time; after the function completes, it becomes available to be consumed again. Any of the services you have configured to trigger Lambda can consume multiple units of concurrency, based on the business logic implemented and its needs. So, to properly control the consumption of concurrency, we can set throttles per function.

AWS Lambda Limits
Execution:
- Memory allocation: 128 MB – 3 GB
- Max execution time: 15 mins
- Disk capacity in the "function container" (in /tmp): 512 MB
- Concurrency limit: 1,000
Deployment:
- Lambda function deployment size (compressed .zip): 50 MB
- Size of environment variables: 4 KB

AWS Lambda Best Practices
Perform heavy-duty work outside of your function handler:
- Connect to databases outside of your function handler
- Initialize the AWS SDK outside the function handler
- Pull dependencies or datasets outside the function handler
Use environment variables for:
- Database connection strings, S3 bucket names, etc.; don't put these values directly inside the code.
- Passwords and other sensitive values, which can be encrypted using KMS.
Minimize your deployment package size to its runtime necessities:
- Break down the function if needed.
- Remember the AWS Lambda limits.
Avoid recursive code; never have a Lambda function call itself. Don't put your Lambda function in a VPC unless you have to.

Cloud Services Monitoring
What is the AWS CloudWatch service?
Monitoring tools: Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance. CloudWatch enables you to monitor your complete stack (applications, infrastructure, network, and services) and use alarms, logs, and event data to take automated actions and reduce mean time to resolution (MTTR). This frees up important resources and allows you to focus on building applications and business value.

Infrastructure as code: Infrastructure as Code (IaC) is the managing and provisioning of infrastructure through code instead of through manual processes. With IaC, configuration files are created that contain your infrastructure specifications, which makes it easier to edit and distribute configurations. Examples: AWS CloudFormation, Red Hat Ansible, Chef, Puppet, SaltStack, and HashiCorp Terraform.

AWS CloudWatch
- Metrics: collect and track key metrics
- Logs: collect, monitor, analyze, and store log files
- Events: send notifications when certain events happen in your AWS account
- Alarms: react in real time to metrics and events
Key concepts: namespaces, metrics, dimensions, resolution, statistics, percentiles, alarms.

AWS CloudWatch Metrics
CloudWatch provides metrics for every service in AWS. A metric is a variable to monitor (ex: CPUUtilization, NetworkIn, ...). A dimension is an attribute of a metric (instance id, environment, ...); up to 10 dimensions are allowed per metric. You can create CloudWatch dashboards of metrics. EC2 instances send metrics every 5 minutes; with detailed monitoring (at additional cost), you get data every 1 minute. Developers can define and send their own custom metrics to CloudWatch, with the ability to use dimensions (attributes) to segment metrics, for example Instance.id or Environment.name (ex: QA, Production).

Cloud Design Patterns
What is a design pattern?
In software engineering, a design pattern is a general, repeatable solution to a commonly occurring problem in software design. A design pattern isn't a finished design that can be transformed directly into code; it is a description or template for how to solve a problem that can be used in many different situations.

Uses of design patterns
Design patterns can speed up the development process by providing tested, proven development paradigms. Effective software design requires considering issues that may not become visible until later in the implementation. Reusing design patterns helps to prevent hidden issues that can cause major problems. Often, people only understand how to apply certain software design techniques to certain problems, and these techniques are difficult to apply to a broader range of problems. Design patterns provide general solutions, documented in a format that doesn't require specifics tied to a particular problem.

Types of design patterns
Creational design patterns
These design patterns are all about class instantiation. They can be further divided into class-creation patterns, which use inheritance effectively in the instantiation process, and object-creation patterns, which use delegation effectively to get the job done. Examples: Abstract Factory (creates an instance of several families of classes), Factory Method (creates an instance of several derived classes), Singleton (a class of which only a single instance can exist).

The phrase "instantiating a class" means the same thing as "creating an object": when you create an object, you are creating an "instance" of a class, and therefore "instantiating" that class. The new operator requires a single, postfix argument: a call to a constructor.
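The Singleton pattern named above can be sketched in a few lines. This Python version uses one common idiom among several (overriding `__new__`); it is an illustration, not the only way to implement the pattern.

```python
class Singleton:
    """A class of which only a single instance can exist."""
    _instance = None

    def __new__(cls):
        # Create the instance on first use; afterwards, always
        # return the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the one shared instance
```

Every call to `Singleton()` yields the same object, which is exactly the guarantee the pattern describes.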
Abstraction, encapsulation, polymorphism, and inheritance are the four main theoretical principles of object-oriented programming.

Creational design patterns: Singleton

Structural design patterns
These design patterns are all about class and object composition. Structural class-creation patterns use inheritance to compose interfaces (e.g., a Car class). Structural object patterns define ways to compose objects to obtain new functionality (e.g., a Motor object inside a Car class). Examples: Adapter (matches the interfaces of different classes), Decorator (adds responsibilities to objects dynamically), Façade (a single class that represents an entire subsystem).

Structural design patterns: Adapter

Behavioral design patterns
These design patterns are all about communication between a class's objects. Behavioral patterns are the patterns most specifically concerned with communication between objects. Examples: Observer (a way of notifying a number of classes of a change, as with followers on social media), State (alters an object's behavior when its state changes).

Behavioral design patterns: Observer

Monolithic Architecture
The monolith pattern includes everything, from the user interface and business code to the database calls, in the same codebase. All application artifacts (output files, or published files) are contained in a single huge deployment.

Here are the main advantages of the monolith approach:
- Since it is a single codebase, it's easy to pull and start the project.
- Since the whole structure is contained in one project, it is easy to debug interactions across different modules.
- With fewer moving parts, it's less complex to maintain and troubleshoot.

Here are the main disadvantages of the monolith approach:
- The codebase becomes too large with time; managing it is challenging.
- It is difficult to work in parallel in the same codebase.
- It is hard to implement new features in large legacy monolithic applications.
- Any change requires deploying a new version of the entire application, and so on.
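The Observer example mentioned above (followers on social media being notified of a change) can be sketched as follows; the class and method names are made up for illustration.

```python
class Account:
    """Subject: notifies registered followers (observers) of new posts."""
    def __init__(self, name):
        self.name = name
        self.followers = []

    def follow(self, follower):
        self.followers.append(follower)

    def post(self, message):
        # Push the change to every registered observer.
        for follower in self.followers:
            follower.notify(self.name, message)

class Follower:
    """Observer: records the notifications it receives."""
    def __init__(self):
        self.feed = []

    def notify(self, author, message):
        self.feed.append(f"{author}: {message}")

alice = Account("alice")
bob = Follower()
alice.follow(bob)
alice.post("hello")
print(bob.feed)  # ['alice: hello']
```

The subject never needs to know what its observers do with the notification, which is what keeps the classes loosely coupled.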
The KISS principle, which stands for "Keep It Simple, Stupid", emphasizes the importance of simplicity in software design and development. The goal is to prioritize straightforward solutions over complex ones. By keeping our code simple, we enhance comprehensibility, usability, and maintainability: designs and systems should be as simple as possible, and wherever possible, complexity should be avoided, as simplicity encourages the greatest levels of user acceptance and interaction.

YAGNI: "You aren't gonna need it" (YAGNI) is a principle that arose from extreme programming (XP) and states that a programmer should not add functionality until it is deemed necessary.

Consider a big e-commerce application with the following components:
- Shopping UI
- Catalog Service
- Shopping Cart Service
- Discount Service
- Order Service
As you can see, in this traditional web application, all modules are deployed as a single artifact in one container.

Microservices Architecture
Microservices are small business services that can work together and can be deployed autonomously and independently. This is a cloud-native architectural approach in which applications are composed of many loosely coupled and independently deployable smaller components.

Microservices Characteristics
Microservices are small, independent, and loosely coupled; a single small team of developers can write and maintain a service. Each service is a separate codebase (repo), which a small development team can manage. Simply stated, a repository is a directory or folder where code is stored; we can also call it a codebase, or "repo" for short. A repository is generally maintained by a version control system tool (e.g., Git). Services can be deployed independently: a team can update an existing service without rebuilding and redeploying the entire application. Each service is responsible for persisting its own data or external state.
This capability differs from the traditional model, where a separate data layer handles data persistence.

Benefits of Microservices Architecture
- Agility: one of the essential characteristics of microservices is that the services are smaller and independently deployable.
- Small, focused teams: a microservice should be small enough for a single feature team to build, test, and deploy.
- Scalability: microservices can be scaled independently, so you can scale out only the sub-services that require more resources, without scaling out the entire application.

Microservices Communications
Changing the communication mechanism is one of the biggest challenges when moving to a microservices-based application. By nature, microservices are distributed: they communicate with each other through inter-service communication at the network level. Each microservice has its own instance and process, so services must interact using an inter-service communication protocol, such as HTTP or a message-broker protocol. (A message broker is an intermediary that lets services exchange messages asynchronously instead of calling each other directly.) Since microservices decompose a complex structure into independently developed and deployed services, we should consider the communication types and manage them during the design phase.

Microservices Communication Design Patterns: API Gateway Pattern
The API gateway pattern is recommended if you want to design and build complex, large microservices-based applications with multiple client applications. An API Gateway provides a single endpoint for the client applications and internally maps the requests to internal microservices. We should use API Gateways between clients and internal microservices.
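The routing-plus-aggregation behaviour of an API gateway can be illustrated with a toy sketch. The "microservices" here are plain functions and the paths and payloads are hypothetical; a real gateway (e.g., Amazon API Gateway) is a managed reverse proxy, but the shape of the idea is the same.

```python
# Toy API gateway: a single entry point that routes client requests to
# internal "microservices" (plain functions here) and can aggregate
# several backend calls into one response.
def catalog_service(request):
    return {"product": request["product"], "price": 9.99}

def inventory_service(request):
    return {"product": request["product"], "in_stock": True}

ROUTES = {
    "/catalog": catalog_service,
    "/inventory": inventory_service,
}

def gateway(path, request):
    if path == "/product-page":
        # Aggregation: one client call fans out to several services.
        return {**catalog_service(request), **inventory_service(request)}
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": 404}
    # Routing: forward the request to the matching backend service.
    return handler(request)

print(gateway("/product-page", {"product": "mug"}))
```

Cross-cutting concerns such as authorization or caching would live in `gateway` itself, once, rather than being repeated in every backend service.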
API Gateways can handle generic technical concerns like authorization: instead of writing the same functionality in every microservice, authorization can be handled in a centralized way at the API Gateway, and requests are then passed on to the internal microservices. The pattern provides a reverse proxy to redirect or route requests to your internal microservice endpoints: the API gateway provides a single endpoint or URL for the client applications and internally maps each request to an internal microservice.

The API Gateway also manages routing to the internal microservices and can aggregate several microservice requests into one response to the client. In summary, the API Gateway is placed between the client apps and the internal microservices, routing client requests to backend services. It also provides generic technical concerns such as authentication, SSL termination, and caching. SSL termination refers to the process of decrypting encrypted traffic before passing it along to a web server.

We will continue to evolve our architecture, but look at the current design and consider how we can improve it. There are several client applications connected to a single API Gateway here. We should be careful about this situation, because a single API gateway introduces the risk of a single point of failure.

Virtual Private Cloud (VPC)
What is a virtual network?
Azure, Google Cloud Platform, and Amazon Web Services all provide a virtual network. It is like a virtual routing switch hosted in the cloud: it is what all the services connect to and use to communicate with each other. In Azure, the virtual network is referred to as a VNet.
For AWS and GCP, the network is referred to as a Virtual Private Cloud, or VPC. In each case, they serve a similar function: they contain one or more subnets and allow communication between resources and the subnets.

Basics of Cloud Networking
Each cloud provider has the concept of regions. A region is a grouping of one or more data centers. Spreading workloads across multiple regions provides high availability by duplicating services across those regions, and also lets us place resources closer to customers. The way networking utilizes regions is similar between Azure and AWS, and different with GCP.

AWS VPC
A VPC is created in a region and uses availability zones, which are distinct locations isolated from failures in other availability zones; subnets exist inside these availability zones. In an AWS VPC there are two types of subnets: a public subnet, which has access to the internet, and a private subnet, which does not. By default, all resources or instances connected to a VPC can communicate within the VPC.

VPC Peering and Gateways
Now that we understand the networks and subnets for each provider, let's look at how we can connect them so that we can communicate between the different VPCs and our services. After all, cloud services are not that useful if we can't communicate with them. Transitive peering is not supported: if network A is peered with network B, and B with C, networks A and C will not be able to communicate, at least until we add another peering between A and C.

With AWS, a transit gateway is used to connect multiple VPCs. The transit gateway connects to the VPCs within a region and allows traffic to flow between them. If multiple regions are involved, inter-region peering connects the transit gateways, providing connectivity between the networks. Connectivity to a remote network can take place over a VPN with the use of a virtual private gateway.
For a dedicated connection, an AWS Direct Connect gateway is used to provide a private, high-bandwidth, dedicated connection between an on-premises network and the VPC. A third-party provider is required in this scenario for connectivity between the data center and AWS. These providers are located close to the AWS data centers, providing a private, reliable connection between an on-premises network and AWS.

Load Balancing
Another important feature of networking is the ability to distribute connections between multiple instances of a service. This is referred to as load balancing. Not only does load balancing help availability, it also helps performance by spreading the workload across multiple instances of the same service.

AWS offers multiple load balancing solutions:
- The Network Load Balancer distributes connections at the transport (TCP) or TLS/SSL layer.
- The Application Load Balancer makes routing decisions at the application layer, directing traffic using path-based routing.
- DNS load balancing (Route 53): Route 53 uses DNS to route traffic based on rules you configure, including health checks for DNS endpoints, geography, and latency-based decisions.

Remember, if we deploy a load balancer but the resources behind it are all in a single data center, the solution is at risk should that data center become unavailable. A better approach is to use a combination of global, regional, or internal and external load balancers to design a highly available solution.

POS in Microservices Architecture
Introduction
A Point of Sale (POS) system is responsible for processing sales, managing inventory, and tracking customer data. However, traditional POS systems can be inflexible, difficult to scale, and difficult to maintain.
One solution to these issues is to use a microservices architecture, which breaks down the traditional monolithic POS system into smaller, independently deployable services that communicate with each other through APIs. This approach allows for greater flexibility, scalability, and ease of maintenance compared to the traditional monolithic approach. It also makes the POS system easier to update and evolve over time, since changes can be made to a specific service without affecting the rest of the system. Additionally, the system can easily be scaled up or down as needed to handle changes in traffic or usage.

Overview of Microservices
Microservices are a software architecture pattern that involves breaking down a large, monolithic application into a set of small, independently deployable services. Each service is responsible for a specific set of functionality and communicates with other services through APIs. The main benefits of a microservices architecture are a faster development and deployment process, increased scalability and flexibility, and more efficient troubleshooting and maintenance. Microservices are also easier to test, as each service can be tested independently.

What features are in a POS?

POS Features: Inventory Management Service
The Inventory Management Service is responsible for maintaining proper inventory levels and managing the flow of products in and out of the inventory. Its primary responsibilities include:
- Stock Management: keeping track of the current stock levels of products in the inventory, and updating the stock levels when products are sold or returned.
- Inventory Reordering: monitoring the stock levels of products and generating purchase orders when stock levels fall below a certain threshold.
- Product Movement: tracking the movement of products between different locations within the inventory, such as from the warehouse to the store.
- Reporting: generating reports on inventory levels, product movement, and reorders. These reports can be used to make informed decisions about inventory management.
- Integration with other services: the Inventory Management Service communicates with other services, such as the Order Service to update stock levels after an order is placed, or the Purchasing Service to update stock levels after a purchase. It also communicates with the Catalog Management Service to obtain product information such as name, description, and SKU (stock keeping unit). A SKU is a store's or catalog's product and service identification code, often in the form of a machine-readable bar code.

POS Features: Purchasing Service
The Purchasing Service is responsible for managing the purchasing process from suppliers. Its primary responsibilities include:
- Supplier Management: creating, updating, and deleting supplier information, and maintaining information on supplier lead times and payment terms.
- Purchase Invoice Management: generating and recording purchase invoices, the total cost of each purchase, and any applicable taxes or discounts.
- Payment Management: managing the process of paying suppliers, recording payments, and tracking outstanding balances.
- Reporting: generating reports on purchase invoices and payments, which can be used to track spending and monitor supplier performance.
- Integration with other services: the Purchasing Service communicates with other services, such as the Order Service to obtain purchase orders, the Financial Service to record and track payments, and the Logistics Service to track supplier shipments and deliveries.
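The Inventory Reordering responsibility described earlier (generate a purchase order when stock falls below a threshold) reduces to a simple rule. A minimal sketch follows; the SKUs, field names, and quantities are made up for illustration.

```python
def reorder_purchase_orders(stock_levels, thresholds, reorder_qty):
    """Generate purchase orders for products below their reorder threshold.

    stock_levels: {sku: units on hand}
    thresholds:   {sku: reorder point}
    reorder_qty:  {sku: units to order when below the threshold}
    """
    orders = []
    for sku, on_hand in stock_levels.items():
        if on_hand < thresholds.get(sku, 0):
            orders.append({"sku": sku, "quantity": reorder_qty.get(sku, 0)})
    return orders

stock = {"MUG-001": 3, "PEN-002": 40}
thresholds = {"MUG-001": 10, "PEN-002": 20}
reorder_qty = {"MUG-001": 50, "PEN-002": 100}
print(reorder_purchase_orders(stock, thresholds, reorder_qty))
# [{'sku': 'MUG-001', 'quantity': 50}]
```

In the architecture above, the resulting orders would be handed to the Order Service rather than printed.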
POS Features: Order Service
The Order Service is responsible for creating and managing purchase orders and ensuring that the right products are ordered in the right quantities at the right time. Its primary responsibilities include:
- Purchase Order Management: generating and managing purchase orders for products that need to be restocked. The service communicates with the Inventory Management Service to determine which products need to be refilled, and in what quantities, based on the current stock levels and sales data.
- Order Processing: receiving and processing customer orders, updating the inventory, and generating invoices.
- Reporting: generating reports on purchase orders and customer orders, which can be used to track sales and inventory levels.
- Integration with other services: the Order Service communicates with the Inventory Management Service to update stock levels, the Purchasing Service to generate purchase orders, the Financial Service to record revenue, the Customer Management Service to obtain customer information, the Billing Service to generate invoices for customer orders and get billing information, and the Shipping Service to provide the information required for shipping products to customers.

POS Features: Billing Service
The Billing Service generates invoices for customer orders and manages the billing process. Its primary responsibilities include:
- Invoice Generation: generating invoices for customer orders received from the Order Service. The service uses the customer's billing information to create the invoice, including the total cost of the order, applicable taxes, and any applicable discounts.
- Payment Processing: processing payments for the invoices generated. The service communicates with the Payment Processing Service to process payments and update the invoice status accordingly.
Credit Management: The Service manages credit for customers, including generating credit invoices, tracking credit balances, and applying credits to future invoices.
Reporting: The Service generates reports on invoices and payments, which can be used to track revenue and monitor the billing process.
Integration with other services: The Billing Service communicates with other services, such as the Order Service to receive customer orders, the Payment Processing Service to process payments and apply them to the correct invoices, the Financial Service to record revenue, and the Customer Management Service to obtain customer information.

POS Features
Point of Sale Console Service
The POS Console Service provides a user interface for employees to process transactions and manage customer orders. Its primary responsibilities include:
Transaction Processing: The Service allows employees to process transactions, including sales, returns, and exchanges. It communicates with the Payment Processing Service to complete the transaction and update the customer's account information.
Customer Order Management: The Service allows employees to manage customer orders, including creating new orders, updating existing orders, and canceling orders. It communicates with the Order Service to update the customer's order information.
Inventory Management: The Service allows employees to manage inventory levels, including checking stock levels and updating inventory information. It communicates with the Inventory Management Service to update inventory levels.
Customer Management: The Service allows employees to manage customer information, including creating new customers, updating existing customers, and viewing customer information. It communicates with the Customer Management Service to update customer information.
Integration with other services: The POS Console Service communicates with the Payment Processing Service, Order Service, Inventory Management Service, and Customer Management Service to update and retrieve the necessary information.

[Diagram: the POS Console Service connects to the Inventory Management, Purchasing, Order, and Billing Services.]

Build Cloud-Based Application Architecture
Introduction
Anyone who has built applications understands that applications designed specifically for the platform on which they will run perform better, are more resilient, and are easier to manage. Developing for public or private cloud platforms is no exception.

Steps to design Cloud-based Architecture
1- Design the application as a collection of services
This is service-based or service-oriented architecture. While many understand the concepts, developers still have a tendency to create tightly coupled applications that focus on the user interface, rather than expose the underlying functions as services they can leverage independently.
When developing an application architecture for the cloud, you deal with complex distributed systems that can take advantage of loosely coupled applications built from many services, which can also be decoupled from the data.

2- Decouple the data (break out processing and data into separate components)
You must consider performance. Database reads and writes across the open Internet can cause latency, so how close your data sits to the services and applications that need it can determine database communication performance.
Consider using caching systems. These provide additional database performance by locally storing commonly accessed data, thereby reducing the read requests that must go back to the physical database.

3- Consider communications between application components
Focus on designing applications that optimize communications between application components.
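One common way to optimize communications between components is to group individual messages into batches, so components make one round trip per batch instead of one per message. A minimal sketch of that idea; `send` here is a hypothetical stand-in for whatever transport the components actually use:

```python
def send_batched(messages, send, batch_size=10):
    """Send messages in groups of batch_size, reducing the number
    of round trips between application components."""
    for i in range(0, len(messages), batch_size):
        send(messages[i:i + batch_size])
```

With 25 messages and a batch size of 10, `send` is invoked 3 times rather than 25, at the cost of slightly higher per-message latency.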
In practice, combine communications into a single stream of data or a group of messages, rather than constantly communicating as if the application components reside on a single platform.
An API, or application programming interface, is a set of defined rules that enable different applications to communicate with each other.
APIs: Postman is an API platform for building and using APIs. Postman simplifies each step of the API lifecycle and streamlines collaboration so you can create better APIs, faster.

4- Model and design for performance and scaling
Designing for performance means first building a model that represents how the application behaves under an increasing load. If 1,000 or more users log on at the same time, how will the application handle the increased traffic on the network, the increased load on the application servers, and the load placed on the back-end databases?
Monitor overall application performance using application-aware performance monitoring tools.

5- Make security systemic within the application
Cloud-based applications should leverage identity and access management (IAM). Enterprises that develop mature IAM capabilities can reduce their security costs and, more importantly, become significantly more agile at configuring security for cloud-based applications.
Your core objective is to design security into the application and take advantage of the native features of both the cloud and the IAM system you use.

Distributed Systems and Application Integration
Learning Objectives
Understand what distributed systems and application integration are, how they work, and why we need this architecture.
Introduction
When we start deploying multiple applications, they will certainly need to communicate with one another.
There are two patterns of application communication:
1- Synchronous communications (app to app)
2- Asynchronous / event-based (application to queue to application)

[Diagram: in the synchronous pattern, the Buying Service calls the Shipping Service directly; in the asynchronous pattern, the Buying Service sends to a queue, and the Shipping Service reads from the queue.]

Synchronous communication between applications can be problematic if there are sudden spikes of traffic. What if you suddenly need to encode 1,000 videos when you usually encode 10? In that case, it is better to decouple your applications:
Using SQS: queue model. Amazon SQS fully managed message queuing makes it easy to decouple and scale microservices, distributed systems, and serverless applications.
Using SNS: pub/sub model. Amazon Simple Notification Service (SNS) makes it easy for you to build an application using the pub/sub messaging model. You can send messages from your applications to customers or other applications in a scalable and cost-efficient manner.
These services can scale independently from our application.

AWS Simple Queue Service (SQS)
What is a queue? Producers send messages into the queue, and consumers poll messages from it.
[Diagram: several producers send messages to an SQS queue, and several consumers poll messages from it.]

AWS SQS – Standard Queue
Fully managed (by AWS)
Scales from 1 message per second to 10,000 per second
Default retention (validity period) of messages: 4 days, maximum of 14 days
No limit to how many messages can be in the queue
Low latency (
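The queue model above can be sketched with Python's standard-library `queue` standing in for SQS. This is an illustration of the decoupling idea, not the SQS API itself: the producer enqueues work at its own pace, and the consumer polls and processes it independently, so a traffic spike piles up in the queue instead of overwhelming the consumer.

```python
import queue
import threading

jobs = queue.Queue()   # stands in for the SQS queue
results = []

def consumer():
    """Poll messages from the queue until a sentinel (None) arrives."""
    while True:
        msg = jobs.get()
        if msg is None:
            break
        results.append(f"encoded {msg}")

worker = threading.Thread(target=consumer)
worker.start()

# Producer: a sudden burst of work is absorbed by the queue,
# not pushed directly onto the consumer.
for video in ["video-1", "video-2", "video-3"]:
    jobs.put(video)

jobs.put(None)   # signal the consumer to stop
worker.join()
```

With a real queue service, the producer and consumer would run as separate applications, and you could add more consumers to drain the queue faster, which is exactly the independent scaling the slide describes.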
