
personal-notes.pdf



Uploaded by SpotlessQuail



INTRODUCTION

Amazon Web Services (AWS) is a cloud provider that offers servers and services that you can use on demand and scale easily.

A Region is a cluster of data centers. We call each group of logical data centers an Availability Zone. Each AWS Region consists of multiple, isolated, and physically separate AZs within a geographic area (usually 3; the minimum is 2 and the maximum is 6). If you don't specify or configure a default region, then us-east-1 is chosen by default.

Choosing an AWS Region:
- Compliance with data governance and legal requirements.
- Proximity to customers (reduced latency).
- Available services within a region (new services and features aren't available in every region).
- Pricing (varies from region to region).

An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs give customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.

Points of Presence (Edge Locations) are sites that CloudFront uses to cache copies of your content for faster delivery to users at any location. There are many more edge locations than regions.

AWS has Global Services:
- Identity and Access Management (IAM).
- Route 53.
- CloudFront (CDN).
- Web Application Firewall (WAF).

Most AWS services are Region-scoped:
- Amazon EC2 (Infrastructure as a Service).
- Elastic Beanstalk (Platform as a Service).
- Lambda (Function as a Service).
- Rekognition (Software as a Service).

IDENTITY AND ACCESS MANAGEMENT (IAM)

IAM is a global service that provides fine-grained access control across all of AWS (universal). With IAM, you can specify who can access which services and resources, and under which conditions. With IAM policies, you manage permissions for your workforce and systems to ensure least-privilege permissions. New users have NO permissions when first created.
New users are assigned an Access Key ID & Secret Access Key when first created.

IAM offers the following features:
- Centralized control of your AWS account.
- Shared access to your AWS account.
- Granular permissions.
- Identity federation (including Active Directory, Facebook, LinkedIn, etc.).
- Multi-factor authentication.
- Temporary access for users/devices and services where necessary.
- Lets you set up your own password rotation policy.
- Integrates with many different AWS services.
- Supports Payment Card Industry Data Security Standard (PCI DSS) compliance.

Users — End users such as people, employees of an organization, etc. Users don't have to belong to a group, and users can belong to multiple groups.

    aws configure
    aws iam list-users

Groups — A collection of users. Each user in the group inherits the permissions of the group. Groups only contain users, not other groups.

Roles — You create roles and then assign them to AWS resources. Common roles include:
- EC2 instance roles.
- Lambda function roles.
- Roles for CloudFormation.

Users or groups can be assigned JSON documents called policies. These policies define the permissions of the user/group/role. In AWS you apply the least-privilege principle: you don't give more permissions than a user needs.

A password policy is a set of rules that define complexity requirements and mandatory rotation periods for your IAM users' passwords.

Multi-Factor Authentication (MFA) is a combination of a password you know and a security device you own. The main benefit of MFA is that if a password is stolen or hacked, the account will not be compromised.

MFA device options in AWS:
- Virtual MFA device:
  - Google Authenticator (phone only).
  - Authy (multi-device, with support for multiple tokens on a single device).
- Universal 2nd Factor (U2F) security key:
  - YubiKey by Yubico (supports multiple root and IAM users on a single security key).
- Hardware key fob MFA device:
  - Gemalto.
- Hardware key fob MFA device for AWS GovCloud (US):
  - SurePassID.

Tips to remember when creating IAM roles:
- Roles are more secure than storing your access key and secret access key on individual EC2 instances.
- Roles are easier to manage.
- Roles can be assigned to an EC2 instance after it is created, using both the console and the command line.
- Roles are universal; you can use them in any region.

A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs at a detailed level.

IAM Credentials Report (account-level) is a report that lists all your account's users and the status of their various credentials.

IAM Access Advisor (user-level) shows the service permissions granted to a user and when those services were last accessed. You can use this information to revise your policies.

IAM Guidelines & Best Practices:
- Don't use the root account except for AWS account setup.
- One physical user = one AWS user.
- Assign users to groups and assign permissions to groups.
- Create a strong password policy.
- Use and enforce the use of MFA.
- Create and use roles for giving permissions to AWS services.
- Use access keys for programmatic access (CLI/SDK).
- Audit permissions of your account with the IAM credentials report.
- Never share IAM users & access keys.

AWS Organizations

AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.

AWS Organizations policy types:
- Service control policies — Offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines.
- Tag policies — Help you standardize tags on all tagged resources across your organization. You can use tag policies to define tag keys (including how they should be capitalized) and their allowed values. You can also specify that the tagging rules in the tag policy be enforced on certain resource types.

You can use organizational units (OUs) to group accounts together to administer as a single unit. This greatly simplifies the management of your accounts. For example, you can attach a policy-based control to an OU, and all accounts within the OU automatically inherit the policy. You can create multiple OUs within a single organization, and you can create OUs within other OUs. Each OU can contain multiple accounts, and you can move accounts from one OU to another. However, OU names must be unique within a parent OU or root.

AWS Organizations provides consolidated billing so that you can track the combined costs of all the member accounts in your organization. The management account is billed for all charges of the member accounts. However, unless the organization is changed to support all features (not just the consolidated billing features) and member accounts are explicitly restricted by policies, each member account is otherwise independent from the other member accounts. For example, the owner of a member account can sign up for AWS services, access resources, and use AWS Premium Support unless the management account restricts those actions. Each account owner continues to use their own IAM user name and password, with account permissions assigned independently of other accounts in the organization.

Advantages of consolidated billing:
- One bill per AWS account.
- Very easy to track charges and allocate costs.
- Volume pricing discounts.

Best practices with AWS Organizations:
- Always enable multi-factor authentication on the root account.
- Always use a strong and complex password on the root account.
- The paying account should be used for billing purposes only. Do not deploy resources into the paying account.
- Enable/disable AWS services using Service Control Policies (SCPs), either on organizational units or on individual accounts.

Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you ensure your accounts stay within your organization's access control guidelines. SCPs are available only in an organization that has all features enabled; they aren't available if your organization has enabled only the consolidated billing features.

SCPs alone are not sufficient to grant permissions to the accounts in your organization. No permissions are granted by an SCP. An SCP defines a guardrail, or sets limits, on the actions that the account's administrator can delegate to the IAM users and roles in the affected accounts. The administrator must still attach identity-based or resource-based policies to IAM users or roles, or to the resources in your accounts, to actually grant permissions. The effective permissions are the logical intersection between what is allowed by the SCP and what is allowed by the IAM and resource-based policies.

SCP overview:
- Whitelist or blacklist IAM actions.
- Applied at the OU or account level.
- Does not apply to the master (management) account.
- SCPs apply to all the users and roles of the account, including the root user.
- SCPs do not affect service-linked roles — service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
- An SCP must have an explicit Allow (it does not allow anything by default).
- Use cases:
  - Restrict access to certain services (for example, you can't use ElasticMapReduce).
  - Enforce PCI compliance by explicitly disabling services.
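As an illustrative sketch of the deny use case above, a hypothetical SCP that blocks Amazon EMR in every account it is attached to might look like this (the `Sid` name is made up for the example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEMR",
      "Effect": "Deny",
      "Action": "elasticmapreduce:*",
      "Resource": "*"
    }
  ]
}
```

Remember that an SCP like this only restricts: accounts still need an explicit Allow from an attached SCP plus identity-based or resource-based policies before any action is actually permitted.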
Moving Accounts in AWS Organizations:
- To migrate accounts from one organization to another:
  1. Remove the member account from the old organization.
  2. Send an invite from the new organization.
  3. Accept the invite to the new organization from the member account.
- If you want the master account of the old organization to also join the new organization:
  1. Remove the member accounts from the organization using the procedure above.
  2. Delete the old organization.
  3. Repeat the process above to invite the old master account to the new organization.

Identity-based policies and resource-based policies

A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. When you create a permissions policy to restrict access to a resource, you can choose an identity-based policy or a resource-based policy.

Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions). For example, you can attach a policy to the IAM user named John, stating that he is allowed to perform the Amazon EC2 RunInstances action. The policy could further state that John is allowed to get items from an Amazon DynamoDB table named MyCompany. You can also allow John to manage his own IAM security credentials. Identity-based policies can be managed or inline.

Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, VPC endpoints, and AWS Key Management Service encryption keys. With resource-based policies, you can specify who has access to the resource and what actions they can perform on it. Resource-based policies are inline only, not managed.

Resource-based policies differ from resource-level permissions. You can attach resource-based policies directly to a resource.
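The John example above could be sketched as an identity-based policy roughly like the following (the region and account ID in the table ARN are placeholders, not taken from these notes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "dynamodb:GetItem",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyCompany"
    }
  ]
}
```

The second statement is also an example of a resource-level permission: it uses an ARN to scope the action to one specific table rather than `"Resource": "*"`.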
Resource-level permissions refer to the ability to use ARNs to specify individual resources in a policy. Resource-based policies are supported only by some AWS services.

To better understand these concepts, consider the following example. The administrator of the 123456789012 account attached identity-based policies to the JohnSmith, CarlosSalazar, and MaryMajor users. Some of the actions in these policies can be performed on specific resources. For example, the user JohnSmith can perform some actions on Resource X. This is a resource-level permission in an identity-based policy. The administrator also added resource-based policies to Resource X, Resource Y, and Resource Z. Resource-based policies allow you to specify who can access that resource. For example, the resource-based policy on Resource X allows the JohnSmith and MaryMajor users list and read access to the resource.

The 123456789012 account example allows the following users to perform the listed actions:
- JohnSmith — John can perform list and read actions on Resource X. He is granted this permission by the identity-based policy on his user and the resource-based policy on Resource X.
- CarlosSalazar — Carlos can perform list, read, and write actions on Resource Y, but is denied access to Resource Z. The identity-based policy on Carlos allows him to perform list and read actions on Resource Y. The Resource Y resource-based policy also allows him write permissions. However, although his identity-based policy allows him access to Resource Z, the Resource Z resource-based policy denies that access. An explicit Deny overrides an Allow, so his access to Resource Z is denied.
- MaryMajor — Mary can perform list, read, and write operations on Resource X, Resource Y, and Resource Z. Her identity-based policy allows her more actions on more resources than the resource-based policies, and none of them deny access.
- ZhangWei — Zhang has full access to Resource Z.
Zhang has no identity-based policies, but the Resource Z resource-based policy allows him full access to the resource. Zhang can also perform list and read actions on Resource Y.

Identity-based policies and resource-based policies are both permissions policies and are evaluated together. For a request to which only permissions policies apply, AWS first checks all policies for a Deny. If one exists, then the request is denied. Then AWS checks each Allow. If at least one policy statement allows the action in the request, the request is allowed. It doesn't matter whether the Allow is in the identity-based policy or the resource-based policy.

This logic applies only when the request is made within a single AWS account. For requests made from one account to another, the requester in Account A must have an identity-based policy that allows them to make a request to the resource in Account B. Also, the resource-based policy in Account B must allow the requester in Account A to access the resource. There must be policies in both accounts that allow the operation; otherwise, the request fails.

A user who has specific permissions might request a resource that also has a permissions policy attached to it. In that case, AWS evaluates both sets of permissions when determining whether to grant access to the resource.

How IAM roles differ from resource-based policies

For some AWS services, you can grant cross-account access to your resources. To do this, you attach a policy directly to the resource that you want to share, instead of using a role as a proxy. The resource that you want to share must support resource-based policies. Unlike an identity-based policy, a resource-based policy specifies who (which principal) can access that resource.

IAM roles and resource-based policies delegate access across accounts only within a single partition. For example, assume that you have an account in US West (N. California) in the standard aws partition.
You also have an account in China (Beijing) in the aws-cn partition. You can't use an Amazon S3 resource-based policy in your account in China (Beijing) to allow access for users in your standard aws account.

Cross-account access with a resource-based policy has some advantages over cross-account access with a role. With a resource that is accessed through a resource-based policy, the principal still works in the trusted account and does not have to give up their permissions to receive the role permissions. In other words, the principal continues to have access to resources in the trusted account at the same time as they have access to the resource in the trusting account. This is useful for tasks such as copying information to or from the shared resource in the other account.

The principals that you can specify in a resource-based policy include accounts, IAM users, federated users, IAM roles, assumed-role sessions, and AWS services.

The following list includes some of the AWS services that support resource-based policies:
- Amazon S3 buckets — The policy is attached to the bucket, but the policy controls access to both the bucket and the objects in it.
- Amazon Simple Notification Service (Amazon SNS) topics.
- Amazon Simple Queue Service (Amazon SQS) queues.

About delegating AWS permissions in a resource-based policy

If a resource grants permissions to principals in your account, you can then delegate those permissions to specific IAM identities. Identities are users, groups of users, or roles in your account. You delegate permissions by attaching a policy to the identity. You can grant up to the maximum permissions that are allowed by the resource-owning account.

Assume that a resource-based policy allows all principals in your account full administrative access to a resource. Then you can delegate full access, read-only access, or any other partial access to principals in your AWS account.
Alternatively, if the resource-based policy allows only list permissions, then you can delegate only list access. If you try to delegate more permissions than your account has, your principals will still have only list access.

IAM roles and resource-based policies delegate access across accounts only within a single partition. For example, you can't add cross-account access between an account in the standard aws partition and an account in the aws-cn partition.

For example, assume that you manage AccountA and AccountB. In AccountA, you have the Amazon S3 bucket named BucketA. You attach a resource-based policy to BucketA that allows all principals in AccountB full access to objects in your bucket. They can create, read, or delete any objects in that bucket. In AccountB, you attach a policy to the IAM user named User2. That policy allows the user read-only access to the objects in BucketA. That means that User2 can view the objects but not create, edit, or delete them.

1. AccountA gives AccountB full access to BucketA by naming AccountB as a principal in the resource-based policy. As a result, AccountB is authorized to perform any action on BucketA, and the AccountB administrator can delegate access to its users in AccountB.
2. The AccountB root user has all of the permissions that are granted to the account. Therefore, the root user has full access to BucketA.
3. The AccountB administrator does not give access to User1. By default, users do not have any permissions except those that are explicitly granted. Therefore, User1 does not have access to BucketA.
4. The AccountB administrator grants User2 read-only access to BucketA. User2 can view the objects in the bucket. The maximum level of access that AccountB can delegate is the access level that is granted to the account. In this case, the resource-based policy granted full access to AccountB, but User2 is granted only read-only access.
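The BucketA resource-based policy in this example could be sketched roughly as follows (the account ID standing in for AccountB is a placeholder, and the `Sid` is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantAccountBFullObjectAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::BucketA",
        "arn:aws:s3:::BucketA/*"
      ]
    }
  ]
}
```

Naming the account root as principal is what lets the AccountB administrator then delegate any subset of this access (such as User2's read-only access) with ordinary identity-based policies.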
IAM evaluates a principal's permissions at the time the principal makes a request. Therefore, if you use wildcards (*) to give users full access to your resources, principals can access any resources that your AWS account has access to. This is true even for resources you add or gain access to after creating the user's policy. In the preceding example, if AccountB had attached a policy to User2 that allowed full access to all resources in all accounts, User2 would automatically have access to any resources that AccountB has access to. This includes the BucketA access and access to any other resources granted by resource-based policies in AccountA.

Give access only to entities you trust, and give the minimum level of access necessary. Whenever the trusted entity is another AWS account, that account can in turn delegate access to any of its IAM users. The trusted AWS account can delegate access only to the extent that it has been granted access; it cannot delegate more access than the account itself has been granted.

ELASTIC COMPUTE CLOUD (EC2)

Amazon Elastic Compute Cloud (EC2) provides scalable computing capacity in the AWS cloud. Using EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

The AWS Nitro System is the underlying platform for the next generation of EC2 instances. It enables AWS to innovate faster, further reduce cost for customers, and deliver added benefits like increased security and new instance types.

Amazon EC2 consists of the following capabilities:
- Renting virtual machines (EC2).
- Storing data on virtual drives (EBS).
- Distributing load across machines (ELB).
- Scaling the services using an auto-scaling group (ASG).
EC2 sizing & configuration options:
- Operating system (OS) — Linux, Windows, or macOS.
- How much compute power & cores (CPU).
- How much random-access memory (RAM).
- How much storage space:
  - Network-attached (EBS & EFS).
  - Hardware (EC2 Instance Store).
- Network card — speed of the card, public IP address.
- Firewall rules — security group.
- Bootstrap script (configured at first launch) — EC2 user data.

Bootstrapping means launching commands when a machine starts, using an EC2 user data script (which runs as the root user). That script runs only once, at the instance's first start. Bootstrap scripts run when an EC2 instance first boots and can be a powerful way of automating software installs and updates.

    #!/bin/bash
    yum update -y
    yum install httpd -y    # Install the Apache service.
    service httpd start
    chkconfig httpd on      # Start the service on system reboot.
    cd /var/www/html
    echo "Hello world" > index.html
    aws s3 mb s3://bucket_name
    aws s3 cp index.html s3://bucket_name

EC2 instance types:
- General Purpose (e.g., t2.micro) — Great for a diversity of workloads such as web servers or code repositories.
- Compute Optimized — Great for compute-intensive tasks that require high-performance processors, such as:
  - Batch processing workloads.
  - Media transcoding.
  - High-performance web servers.
  - High-performance computing (HPC).
  - Scientific modeling & machine learning.
  - Dedicated gaming servers.
- Memory Optimized — Fast performance for workloads that process large datasets in memory. Use cases include:
  - High-performance relational/non-relational databases.
  - Distributed web-scale cache stores.
  - In-memory databases optimized for business intelligence.
  - Applications performing real-time processing of big unstructured data.
- Accelerated Computing.
- Storage Optimized — Great for storage-intensive tasks that require high, sequential read and write access to large datasets on local storage. Use cases include:
  - High-frequency OLTP systems.
  - Relational & NoSQL databases.
  - Caches for in-memory databases (e.g., Redis).
  - Data warehousing applications.
  - Distributed file systems.

EC2 Instance Metadata allows EC2 instances to "learn about themselves" without using an IAM role for that purpose. The URL is http://169.254.169.254/latest/meta-data. You can retrieve the IAM role name from the metadata, but you CANNOT retrieve the IAM policy.
- Metadata — Information about the EC2 instance.
- User data — Launch script of the EC2 instance.

    sudo su
    # Display the bootstrap script.
    curl http://169.254.169.254/latest/user-data
    # List the metadata categories for this EC2 instance.
    curl http://169.254.169.254/latest/meta-data/
    # Get your private IPv4 address.
    curl http://169.254.169.254/latest/meta-data/local-ipv4

EC2 pricing models (purchasing options):
- On Demand — Allows you to pay a fixed rate by the hour (or by the second) with no commitment. On-Demand pricing is useful for:
  - Users that want the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment.
  - Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted.
  - Applications being developed or tested on Amazon EC2 for the first time.
  - Pay for what you use:
    - Linux — billing per second after the first minute.
    - All other operating systems (e.g., Windows) — billing per hour.
- Reserved — Provides you with a capacity reservation and offers a significant discount on the hourly charge for an instance. Contract terms are 1-year or 3-year terms. Consists of 3 types (Standard Reserved Instances, Convertible Reserved Instances, and Scheduled Reserved Instances). Reserved pricing is useful for:
  - Applications with steady-state or predictable usage.
  - Applications that require reserved capacity.
  - Users able to make upfront payments to reduce their total computing costs even further. Purchasing options include no upfront, partial upfront, or all upfront discounts.
  - Standard Reserved Instances — Offer up to 75% off On-Demand instances. The more you pay up front and the longer the contract, the greater the discount.
  - Convertible Reserved Instances — Offer up to 54% off On-Demand, with the capability to change the attributes of the Reserved Instance as long as the exchange results in the creation of Reserved Instances of equal or greater value.
  - Scheduled Reserved Instances — Available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month.
- Spot — Enables you to bid whatever price you want for instance capacity, providing even greater savings if your applications have flexible start and end times. If the Spot instance is terminated by Amazon EC2, you will not be charged for a partial hour of usage; if you terminate the instance yourself, you will be charged for any hour in which the instance ran. Spot pricing is useful for:
  - Applications that have flexible start and end times.
  - Applications that are only feasible at very low compute prices.
  - Users with urgent computing needs for large amounts of additional capacity.
- Dedicated Hosts — A physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses. Dedicated Host pricing is useful for:
  - Regulatory requirements that may not support multi-tenant virtualization.
  - Licensing which does not support multi-tenancy or cloud deployments.
  - Can be purchased On-Demand (hourly).
  - Can be purchased as a reservation for up to 70% off the On-Demand price.

Dedicated Instance vs. Dedicated Host

Back in March of 2011, AWS announced the release of Dedicated Instances, which allow organizations to launch EC2 instances on dedicated infrastructure. This led to a lot of questions about AWS Dedicated Instances vs. Dedicated Hosts.
Typically, when an EC2 instance is launched in a VPC, the virtualized infrastructure is built from a pool of shared resources (e.g., CPU units) that is in use by all customers within a given Availability Zone. When an instance is turned off or terminated, those resources are released back into the shared pool of available resources. This violates compliance regulations such as the Health Insurance Portability and Accountability Act (HIPAA), which requires completely dedicated infrastructure for any instances that process Protected Health Information (PHI).

If Dedicated Instances already allow for compliance and increased performance, then what is the purpose of Dedicated Hosts, which were released more recently, in November of 2015?

Let's start with the technical difference between Dedicated Instances and Dedicated Hosts. The AWS docs are not sufficiently clear on what the real differences are. The best summary found in the docs is: "An important difference between a Dedicated Host and a Dedicated Instance is that a Dedicated Host gives you additional visibility and control over how instances are placed on a physical server, and you can consistently deploy your instances to the same physical server over time."

Simply put, there are no apparent technical differences between Dedicated Instances and Dedicated Hosts at the physical host level. Both services give you the option to launch instances onto dedicated hardware with resources that will not be consumed by other customers. The real difference is in the visibility into the physical host that Dedicated Hosts give you. While Dedicated Instances are extremely valuable from a compliance perspective, Dedicated Hosts also give you the visibility into the physical host that is required for a Bring Your Own License (BYOL) model, i.e., if you want to use your own Windows Server, SQL Server, SUSE, or RHEL licenses that are provided on a per-CPU-core basis.
In addition to licensing visibility, Dedicated Hosts give you the same level of compliance as Dedicated Instances and add one additional benefit: increased network performance. When all instances are on the same physical host, network latency is minimized (only within that physical host, of course). Dedicated Instances can all potentially launch on the same physical host, but there is no way to know for sure. With Dedicated Hosts, you get the visibility into physical hosts from the AWS console that you need.

Dedicated Instances pricing model:
- You pay a flat fee of $2/hour for each region in which at least 1 Dedicated Instance is running (this fee is the same regardless of whether you have 1 Dedicated Instance or 1,000).
- You then pay hourly for each individual Dedicated Instance at a premium of around 10% over the cost of a shared, on-demand instance.
- You can purchase reserved Dedicated Instances to reduce cost further.

Dedicated Hosts pricing model:
- You pay hourly for a single physical host based on instance type, which can then hold a certain number of instances (pricing is based on the instance size).
- Each Dedicated Host that is allocated costs the same regardless of the number of instances launched on the host; a C4 host (the latest generation of Compute-optimized instances), for example, costs $1.75 per hour.
- Each Dedicated Host must run the same instance type and instance size (no mixing and matching within a single Dedicated Host, not even different instance sizes of the same family).
- For example, if you are using the C3 instance family, you must pay hourly for an entire physical C3 host that can launch up to 16 c3.large instances, 8 c3.xlarge instances, etc.
- If you need to launch 17 c3.large instances, you need to pay for 2 entire Dedicated Hosts, and you will have 15 open slots on the 2nd host.
- If you fill an entire on-demand Dedicated Host with instances, you will be paying around a 10% premium vs. on-demand instances.
- You can purchase reserved Dedicated Hosts to reduce cost.

In summary, the differences between Dedicated Instances and Dedicated Hosts are:
- Use Dedicated Hosts for more visibility into the physical host for BYOL. You should also try Dedicated Hosts if your app requires minimal latency between instances.
- Use Dedicated Instances if you are only concerned with compliance with regulations.
- In terms of pricing, if you are able to maximize the number of instances on a Dedicated Host, you will pay less vs. Dedicated Instances. The percentage savings depends on your scale due to the $2/hour flat fee. Unless BYOL or intranet network performance is a concern, Dedicated Instances are the way to go due to the flexibility of launching instances.

Spot Fleets

Spot Fleets allow you to automatically request Spot Instances at the lowest price. A Spot Fleet attempts to launch the number of Spot Instances and On-Demand Instances needed to meet the target capacity that you specified in the Spot Fleet request. The request for Spot Instances is fulfilled if there is available capacity and the maximum price you specified in the request exceeds the current Spot price. The Spot Fleet also attempts to maintain its target capacity if your Spot Instances are interrupted. You can also set a maximum amount per hour that you're willing to pay for your fleet, and the Spot Fleet launches instances until it reaches that maximum amount. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn't met the target capacity.

A Spot capacity pool is a set of unused EC2 instances with the same instance type (for example, m5.large), operating system, Availability Zone, and network platform. When you make a Spot Fleet request, you can include multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet.
The Spot Fleet selects the Spot capacity pools that are used to fulfill the request based on the launch specifications included in your Spot Fleet request and the configuration of the Spot Fleet request. The Spot Instances come from the selected pools.

m5.2xlarge (m — instance class, 5 — generation, 2xlarge — size within the instance class).

Status Checks detect problems that may impair an instance from running your application:

- System Status Checks — These checks monitor the AWS systems required to use an instance and ensure they are functioning properly.
- Instance Status Checks — These checks monitor your software and network configuration for an instance.

Placement Groups

Sometimes you want control over the EC2 instance placement strategy. That strategy can be defined using placement groups (they describe how you can place your EC2 instances). There are 3 types of placement groups:

- Cluster Placement Group — A grouping of instances within a single availability zone. Cluster placement groups are recommended for applications that need low network latency, high network throughput, or both. Only certain instances can be launched into a cluster placement group.
- Spread Placement Group — A group of instances that are each placed on distinct underlying hardware (think individual instances). Recommended for applications that have a small number of critical instances that should be kept separate from each other.
- Partition Placement Group — Amazon EC2 divides each group into logical segments called partitions (think multiple instances). EC2 ensures that each partition within a placement group has its own set of racks. Each rack has its own network and power source. No two partitions within a placement group share the same racks, allowing you to isolate the impact of hardware failure within your application. Use cases include HDFS, HBase, and Cassandra.
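EC2's attempt to spread instances evenly across the partitions of a partition placement group can be pictured as round-robin assignment. This is only an illustrative sketch; actual placement is handled by AWS, and an even spread is not guaranteed.

```python
def assign_to_partitions(instance_ids, num_partitions=7):
    """Round-robin instances across partitions (a partition placement
    group supports up to 7 partitions per Availability Zone)."""
    partitions = [[] for _ in range(num_partitions)]
    for i, instance in enumerate(instance_ids):
        partitions[i % num_partitions].append(instance)
    return partitions

# 10 instances over 3 partitions: the counts stay as even as possible.
layout = assign_to_partitions([f"i-{n:04x}" for n in range(10)], num_partitions=3)
print([len(p) for p in layout])  # [4, 3, 3]
```

Because no two partitions share racks, the sub-lists above also map to distinct failure domains, which is why HDFS, HBase, and Cassandra fit this model well.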
Tips to remember

The following are tips to remember when working with placement groups:

- A cluster placement group can't span multiple availability zones, while spread and partition placement groups can.
- The name you specify for a placement group must be unique within your AWS account.
- Only certain types of instances can be launched in a placement group (Compute Optimized, GPU, Memory Optimized, Storage Optimized).
- AWS recommends homogenous (parts that are all the same) instances within cluster placement groups.
- You can't merge placement groups.
- You can move an existing instance into a placement group. Before you move the instance, the instance must be in the stopped state. You can move or remove an instance using the AWS CLI or an AWS SDK; you can't do it via the console yet.

Placement groups general rules and limitations

Before you use placement groups, be aware of the following rules:

- You can create a maximum of 500 placement groups per account in each Region.
- The name that you specify for a placement group must be unique within your AWS account for the Region.
- You can't merge placement groups.
- An instance can be launched in one placement group at a time; it cannot span multiple placement groups.
- On-Demand Capacity Reservations and zonal Reserved Instances provide a capacity reservation for EC2 instances in a specific Availability Zone. The capacity reservation can be used by instances in a placement group. However, it is not possible to explicitly reserve capacity for a placement group.
- You cannot launch Dedicated Hosts in placement groups.

Cluster placement group rules and limitations

The following rules apply to cluster placement groups:

- The following instance types are supported:
  - Current generation instances, except for burstable performance instances (for example, T2) and Mac1 instances.
  - The following previous generation instances: A1, C3, cc2.8xlarge, cr1.8xlarge, G2, hs1.8xlarge, I2, and R3.
- A cluster placement group can't span multiple Availability Zones.
- The maximum network throughput speed of traffic between two instances in a cluster placement group is limited by the slower of the two instances. For applications with high-throughput requirements, choose an instance type with network connectivity that meets your requirements.
- For instances that are enabled for enhanced networking, the following rules apply:
  - Instances within a cluster placement group can use up to 10 Gbps for single-flow traffic. Instances that are not within a cluster placement group can use up to 5 Gbps for single-flow traffic.
  - Traffic to and from Amazon S3 buckets within the same Region over the public IP address space or through a VPC endpoint can use all available instance aggregate bandwidth.
- You can launch multiple instance types into a cluster placement group. However, this reduces the likelihood that the required capacity will be available for your launch to succeed. It is recommended to use the same instance type for all instances in a cluster placement group.
- Network traffic to the internet and over an AWS Direct Connect connection to on-premises resources is limited to 5 Gbps.

Partition placement group rules and limitations

The following rules apply to partition placement groups:

- A partition placement group supports a maximum of seven partitions per Availability Zone. The number of instances that you can launch in a partition placement group is limited only by your account limits.
- When instances are launched into a partition placement group, Amazon EC2 tries to evenly distribute the instances across all partitions. Amazon EC2 doesn't guarantee an even distribution of instances across all partitions.
- A partition placement group with Dedicated Instances can have a maximum of two partitions.
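The cluster placement group throughput rules above can be condensed into a two-line mnemonic. The figures (10 Gbps for single-flow traffic inside a cluster placement group, 5 Gbps otherwise, and the slower-instance cap between peers) come from these notes; treat this as a memory aid, not an authoritative network model.

```python
def single_flow_cap_gbps(in_cluster_group):
    """Single-flow cap for enhanced-networking instances."""
    return 10 if in_cluster_group else 5

def pair_throughput_gbps(a_gbps, b_gbps):
    """Traffic between two instances is limited by the slower instance."""
    return min(a_gbps, b_gbps)

print(single_flow_cap_gbps(True))    # 10
print(pair_throughput_gbps(25, 10))  # 10
```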
Spread placement group rules and limitations

The following rules apply to spread placement groups:

- A spread placement group supports a maximum of seven running instances per Availability Zone. For example, in a Region with three Availability Zones, you can run a total of 21 instances in the group (seven per zone). If you try to start an eighth instance in the same Availability Zone and in the same spread placement group, the instance will not launch.
- If you need to have more than seven instances in an Availability Zone, then the recommendation is to use multiple spread placement groups. Using multiple spread placement groups does not provide guarantees about the spread of instances between groups, but it does ensure the spread for each group, thus limiting impact from certain classes of failures.
- Spread placement groups are not supported for Dedicated Instances.

EC2 Hibernate

When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents of the instance memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any attached EBS data volumes. When you start your instance:

- The EBS root volume is restored to its previous state.
- The RAM contents are reloaded.
- The processes that were previously running on the instance are resumed.
- Previously attached data volumes are reattached and the instance retains its instance ID.

You can hibernate an instance only if it's enabled for hibernation and it meets the hibernation prerequisites. If an instance or application takes a long time to bootstrap and build a memory footprint in order to become fully productive, you can use hibernation to pre-warm the instance. To pre-warm the instance, you:

- Launch it with hibernation enabled.
- Bring it to a desired state.
- Hibernate it so that it's ready to be resumed to the desired state whenever needed.
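The hibernation prerequisites mentioned in these notes can be collapsed into a quick eligibility check. This is an illustrative sketch, not an exhaustive AWS validation; the limits (RAM under 150 GB, encrypted EBS root, no bare metal, 60-day maximum) come from the tips in this section.

```python
def hibernation_eligible(ram_gb, root_is_ebs, root_encrypted,
                         bare_metal, days_hibernated=0):
    """True if an instance meets the hibernation prerequisites
    noted in this section (sketch only)."""
    return (ram_gb < 150
            and root_is_ebs
            and root_encrypted
            and not bare_metal
            and days_hibernated <= 60)

print(hibernation_eligible(64, True, True, False))   # True
print(hibernation_eligible(256, True, True, False))  # False (RAM too large)
```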
You're not charged for instance usage for a hibernated instance when it is in the stopped state. However, you are charged for instance usage while the instance is in the stopping state, while the contents of the RAM are transferred to the EBS root volume. This is different from when you stop an instance without hibernating it. You're not charged for data transfer. However, you are charged for storage of any EBS volumes, including storage for the RAM contents. If you no longer need an instance, you can terminate it at any time, including when it is in a stopped (hibernated) state.

Use cases:

- If you want to keep a long-running process running.
- Saving the RAM state.
- Services that take time to initialize.

Tips to remember when working with EC2 Hibernate:

- Instance RAM size must be less than 150 GB.
- Hibernation is not supported for bare metal instances.
- The root volume must be EBS (not instance store), encrypted, and large enough to hold the RAM contents.
- Available for on-demand and reserved instances.
- An instance cannot be hibernated for more than 60 days.

Scalability and High Availability

Scalability means that an application or system can handle greater loads by adapting. There are two kinds of scalability:

- Vertical scalability — Means increasing the size of the instance. It is very common for non-distributed systems such as a database. RDS and ElastiCache are services that can scale vertically. There is usually a limit to how much you can vertically scale, which is a hardware limit.
- Horizontal scalability (elasticity) — Means increasing the number of instances or systems for your application. Horizontal scaling implies distributed systems. This is very common for web applications (ASG, Load Balancers). It's easy to scale thanks to EC2.

High availability means running your application or system in at least two data centers or availability zones. The goal of high availability is to survive a data center loss.
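The two kinds of scalability can be contrasted with a toy capacity model (purely illustrative; "capacity units" are an invented measure, not an AWS metric):

```python
def vertical_scale(instance_capacity, factor):
    """Scale up: make the single instance bigger."""
    return instance_capacity * factor

def horizontal_scale(instance_capacity, count):
    """Scale out: run more identical instances behind a load balancer."""
    return instance_capacity * count

# Tripling one server's size or running three servers yields the same
# aggregate capacity, but only scaling out also adds redundancy.
print(vertical_scale(100, 3))    # 300
print(horizontal_scale(100, 3))  # 300
```

The redundancy point is why horizontal scaling pairs naturally with high availability: losing one of three instances degrades capacity, while losing the single scaled-up instance is an outage.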
Scaling out, or horizontal scaling, contrasts with scaling up, or vertical scaling. The idea of scaling cloud resources may be intuitive. As your cloud workload changes, it may be necessary to increase infrastructure to support increasing load, or it may make sense to decrease infrastructure when demand is low. The "up or out" part is perhaps less intuitive. Scaling out is adding more equivalently functional components in parallel to spread out a load. This would be going from two load-balanced web server instances to three instances. Scaling up, in contrast, is making a component larger or faster to handle a greater load. This would be moving your application from a virtual server (VM) with 2 CPUs to one with 3 CPUs. For completeness, scaling down refers to decreasing your system resources, regardless of whether you were using the up or out approach.

Resources such as CPU, network, and storage are common targets for scaling up. The goal is to increase the resources supporting your application to reach or maintain adequate performance. In a hardware-centric world, this might mean adding a larger hard drive to a computer for increased storage capacity. It might mean replacing the entire computer with a machine that has more CPU and a more performant network interface. If you are managing a non-cloud system, this scaling up process can take anywhere from weeks to months as you request, purchase, install, and finally deploy the new resources. In a cloud system, the process should take seconds or minutes. A cloud system might still target hardware, and that will be on the tens-of-minutes end of the time-to-scale range. But virtualized systems dominate cloud computing, and some scaling actions, like increasing storage volume capacity or spinning up a new container to scale up a microservice, can take seconds to deploy. What is being scaled will not be that different.
One may still shift applications to a larger VM, or it may be as simple as allocating more capacity on an attached storage volume. Regardless of whether you are dealing with virtual or hardware resources, the take-home point is that you are moving from one smaller resource and scaling up to one larger, more performant resource. Scaling up makes sense when you have an application that needs to sit on a single machine.

If you have an application that has a loosely coupled architecture, it becomes possible to easily scale out by replicating resources. Scaling out a microservices application can be as simple as spinning up a new container running a web server app and adding it to the load balancer pool. When scaling out, the idea is that it is possible to add identical services to a system to increase performance. Systems that support this model also tolerate the removal of resources when the load decreases. This allows greater fluidity in scaling resource size in response to changing conditions. The incremental nature of the scale-out model is of great benefit when considering cost management. Because components are identical, cost increments should be relatively predictable. Scaling out also provides greater responsiveness to changes in demand. Typically, services can be rapidly added or removed to best meet resource needs. This flexibility and speed effectively reduces spending by only using (and paying for) the resources needed at the time.

EC2 Capacity Reservations

On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This gives you the ability to create and manage Capacity Reservations independently from the billing discounts offered by Savings Plans or Regional Reserved Instances. By creating Capacity Reservations, you ensure that you always have access to EC2 capacity when you need it, for as long as you need it.
You can create Capacity Reservations at any time, without entering into a one-year or three-year term commitment, and the capacity is available immediately. Billing starts as soon as the capacity is provisioned and the Capacity Reservation enters the active state. When you no longer need it, cancel the Capacity Reservation to stop incurring charges.

When you create a Capacity Reservation, you specify:

- The Availability Zone in which to reserve the capacity.
- The number of instances for which to reserve capacity.
- The instance attributes, including the instance type, tenancy, and platform/OS.

Capacity Reservations can only be used by instances that match their attributes. By default, they are automatically used by running instances that match the attributes. If you don't have any running instances that match the attributes of the Capacity Reservation, it remains unused until you launch an instance with matching attributes. In addition, you can use Savings Plans and Regional Reserved Instances with your Capacity Reservations to benefit from billing discounts. AWS automatically applies your discount when the attributes of a Capacity Reservation match the attributes of a Savings Plan or Regional Reserved Instance.

Tips to remember when creating EC2 instances:

- Termination protection is turned off by default; you must turn it on.
- On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.
- EBS root volumes of your default AMIs can be encrypted. You can also use a third-party tool (such as BitLocker) to encrypt the root volume, or this can be done when creating AMIs in the AWS console or using the API.
- Additional volumes can be encrypted.

LOAD BALANCERS

A load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones.
This increases the availability of your application. You add one or more listeners to your load balancer. In an organization's attempt to meet application demands, load balancers assist in deciding which server can efficiently handle the requests. This creates a better user experience.

Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers. Amazon ECS services can use these types of load balancers. Application Load Balancers are used to route HTTP/HTTPS (or Layer 7) traffic. Network Load Balancers and Classic Load Balancers are used to route TCP (or Layer 4) traffic.

Application Load Balancer

An Application Load Balancer makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each container instance in your cluster. Application Load Balancers support dynamic host port mapping. For example, if your task's container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the host port is dynamically chosen from the ephemeral port range of the container instance (such as 32768 to 61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX container is registered with the Application Load Balancer as an instance ID and port combination, and traffic is distributed to the instance ID and port corresponding to that container. This dynamic mapping allows you to have multiple tasks from a single service on the same container instance.

Network Load Balancer

A Network Load Balancer makes routing decisions at the transport layer (TCP/SSL). It can handle millions of requests per second. After the load balancer receives a connection, it selects a target from the target group for the default rule using a flow hash routing algorithm.
It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. It forwards the request without modifying the headers. Network Load Balancers support dynamic host port mapping. For example, if your task's container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the host port is dynamically chosen from the ephemeral port range of the container instance (such as 32768 to 61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX container is registered with the Network Load Balancer as an instance ID and port combination, and traffic is distributed to the instance ID and port corresponding to that container. This dynamic mapping allows you to have multiple tasks from a single service on the same container instance.

Classic Load Balancer

A Classic Load Balancer makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). Classic Load Balancers currently require a fixed relationship between the load balancer port and the container instance port. For example, it is possible to map the load balancer port 80 to the container instance port 3030 and the load balancer port 4040 to the container instance port 4040. However, it is not possible to map the load balancer port 80 to port 3030 on one container instance and port 4040 on another container instance. This static mapping requires that your cluster has at least as many container instances as the desired count of a single service that uses a Classic Load Balancer.

Gateway Load Balancers

Gateway Load Balancers allow you to deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. A Gateway Load Balancer combines a transparent network gateway (that is, a single entry and exit point for all traffic) and distributes traffic while scaling your virtual appliances with the demand.
A Gateway Load Balancer operates at the third layer of the Open Systems Interconnection (OSI) model, the network layer. It listens for all IP packets across all ports and forwards traffic to the target group that's specified in the listener rule. It maintains stickiness of flows to a specific target appliance using 5-tuple (for TCP/UDP flows) or 3-tuple (for non-TCP/UDP flows). The Gateway Load Balancer and its registered virtual appliance instances exchange application traffic using the GENEVE protocol on port 6081. It supports a maximum transmission unit (MTU) size of 8500 bytes.

Gateway Load Balancers use Gateway Load Balancer endpoints to securely exchange traffic across VPC boundaries. A Gateway Load Balancer endpoint is a VPC endpoint that provides private connectivity between virtual appliances in the service provider VPC and application servers in the service consumer VPC. You deploy the Gateway Load Balancer in the same VPC as the virtual appliances. You register the virtual appliances with a target group for the Gateway Load Balancer.

SSL/TLS certificates for Classic Load Balancers

An SSL certificate allows traffic between your clients and your load balancer to be encrypted in transit (in-flight encryption). SSL is used to encrypt connections. TLS refers to Transport Layer Security, which is a newer version. Public SSL certificates are issued by Certificate Authorities (CA) such as Comodo, Symantec, GoDaddy, GlobalSign, DigiCert, Let's Encrypt, etc. SSL certificates have an expiration date (you set) and must be renewed.

If you use HTTPS (SSL or TLS) for your front-end listener, you must deploy an SSL/TLS certificate on your load balancer. The load balancer uses the certificate to terminate the connection and then decrypt requests from clients before sending them to the instances. The SSL and TLS protocols use an X.509 certificate (SSL/TLS server certificate) to authenticate both the client and the back-end application.
An X.509 certificate is a digital form of identification issued by a certificate authority (CA) and contains identification information, a validity period, a public key, a serial number, and the digital signature of the issuer. You can create a certificate using AWS Certificate Manager or a tool that supports the SSL and TLS protocols, such as OpenSSL. You will specify this certificate when you create or update an HTTPS listener for your load balancer. When you create a certificate for use with your load balancer, you must specify a domain name.

Amazon recommends that you use AWS Certificate Manager (ACM) to create or import certificates for your load balancer. ACM integrates with Elastic Load Balancing so that you can deploy the certificate on your load balancer. To deploy a certificate on your load balancer, the certificate must be in the same Region as the load balancer. To allow an IAM user to deploy the certificate on your load balancer using the AWS Management Console, you must allow access to the ACM ListCertificates API action. If you are not using ACM, you can use SSL/TLS tools, such as OpenSSL, to create a certificate signing request (CSR), get the CSR signed by a Certificate Authority (CA) to produce a certificate, and upload the certificate to AWS Identity and Access Management (IAM).

Server Name Indication (SNI) solves the problem of loading multiple SSL certificates onto one web server (to serve multiple websites). It's a newer protocol, and requires the client to indicate the hostname of the target server in the initial SSL handshake. The server will then find the correct certificate, or return the default one.

SECURITY GROUPS

A security group acts as a virtual firewall, controlling the traffic that is allowed to reach and leave the resources that it is associated with. For example, after you associate a security group with an EC2 instance, it controls the inbound and outbound traffic for the instance.
When you create a VPC, it comes with a default security group. You can create additional security groups for each VPC. You can associate a security group only with resources in the VPC for which it is created. For each security group, you add rules that control the traffic based on protocols and port numbers. There are separate sets of rules for inbound traffic and outbound traffic. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Security group basics

The following are the characteristics of security groups:

- When you create a security group, you must provide it with a name and a description. The following rules apply:
  - A security group name must be unique within the VPC.
  - Names and descriptions can be up to 255 characters in length.
  - Names and descriptions are limited to the following characters: a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*.
  - When the name contains trailing spaces, AWS trims the space at the end of the name. For example, if you enter "Test Security Group " for the name, AWS stores it as "Test Security Group".
  - A security group name cannot start with sg-.
- Security groups are stateful. For example, if you send a request from an instance, the response traffic for that request is allowed to reach the instance regardless of the inbound security group rules. Responses to allowed inbound traffic are allowed to leave the instance, regardless of the outbound rules.
- There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface.

The following are the characteristics of security group rules:

- You can specify allow rules, but not deny rules.
- When you first create a security group, it has no inbound rules. Therefore, no inbound traffic is allowed until you add inbound rules to the security group.
- When you first create a security group, it has an outbound rule that allows all outbound traffic from the resource. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic is allowed.
- When you associate multiple security groups with a resource, the rules from each security group are aggregated to form a single set of rules that are used to determine whether to allow access.
- When you add, update, or remove rules, your changes are automatically applied to all resources associated with the security group. The effect of some rule changes can depend on how the traffic is tracked.
- When you create a security group rule, AWS assigns a unique ID to the rule. You can use the ID of a rule when you use the API or CLI to modify or delete the rule.

Default security groups for your VPCs

Your default VPCs and any VPCs that you create come with a default security group. With some resources, if you don't associate a security group when you create the resource, AWS associates the default security group. For example, if you do not specify a security group when you launch an EC2 instance, AWS associates the default security group. You can change the rules for a default security group. You can't delete a default security group. If you try to delete the default security group, you get the following error: Client.CannotDelete.

The following are the default rules for a default security group:

Inbound:
- Source: the security group ID (its own resource ID) | Protocol: All | Port range: All | Allows inbound traffic from resources that are assigned to the same security group.

Outbound:
- Destination: 0.0.0.0/0 | Protocol: All | Port range: All | Allows all outbound IPv4 traffic.
- Destination: ::/0 | Protocol: All | Port range: All | Allows all outbound IPv6 traffic. This rule is added only if your VPC has an associated IPv6 CIDR block.
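Security group rules allow traffic when the protocol, port, and source all match. A minimal sketch of that matching logic, using Python's ipaddress module for the CIDR check; the rule shapes here are invented for illustration, and real evaluation also handles security-group and prefix-list sources:

```python
import ipaddress

def rule_matches(rule, protocol, port, source_ip):
    """True if an allow rule covers this (protocol, port, source) tuple."""
    low, high = rule["port_range"]
    return (rule["protocol"] == protocol
            and low <= port <= high
            and ipaddress.ip_address(source_ip)
                in ipaddress.ip_network(rule["cidr"]))

# Hypothetical rules: HTTPS open to the world, SSH restricted to one /24.
https_rule = {"protocol": "tcp", "port_range": (443, 443), "cidr": "0.0.0.0/0"}
ssh_rule = {"protocol": "tcp", "port_range": (22, 22), "cidr": "203.0.113.0/24"}

print(rule_matches(https_rule, "tcp", 443, "198.51.100.7"))  # True
print(rule_matches(ssh_rule, "tcp", 22, "192.0.2.1"))        # False (outside CIDR)
```

Because security groups are allow-only and stateful, a real evaluator would permit traffic if any attached rule matches, and would automatically allow the return traffic for a matched request.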
Security group rules

The rules of a security group control the inbound traffic that's allowed to reach the resources that are associated with the security group. The rules also control the outbound traffic that's allowed to leave them. You can add or remove rules for a security group (also referred to as authorizing or revoking inbound or outbound access). A rule applies either to inbound traffic (ingress) or outbound traffic (egress). You can grant access to a specific CIDR range, or to another security group in your VPC or in a peer VPC (requires a VPC peering connection).

For each rule, you specify the following:

- Protocol — The protocol to allow. The most common protocols are 6 (TCP), 17 (UDP), and 1 (ICMP).
- Port range — For TCP, UDP, or a custom protocol, the range of ports to allow. You can specify a single port number (for example, 22) or a range of port numbers (for example, 7000-8000).
- ICMP type and code — For ICMP, the ICMP type and code. For example, use type 8 for ICMP Echo Request or type 128 for ICMPv6 Echo Request.
- Source or destination — The source (inbound rules) or destination (outbound rules) for the traffic to allow. Specify one of the following:
  - A single IPv4 address. You must use the /32 prefix length. For example, 203.0.113.1/32.
  - A single IPv6 address. You must use the /128 prefix length. For example, 2001:db8:1234:1a00::123/128.
  - A range of IPv4 addresses, in CIDR block notation. For example, 203.0.113.0/24.
  - A range of IPv6 addresses, in CIDR block notation. For example, 2001:db8:1234:1a00::/64.
  - The ID of a prefix list. For example, pl-1234abc1234abc123.
  - The ID of a security group (referred to here as the specified security group). For example, the current security group, a security group from the same VPC, or a security group for a peered VPC. This allows traffic based on the private IP addresses of the resources associated with the specified security group.
This does not add rules from the specified security group to the current security group.
- (Optional) Description — You can add a description for the rule, which can help you identify it later. A description can be up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*.

If you configure routes to forward the traffic between two instances in different subnets through a middlebox appliance, you must ensure that the security groups for both instances allow traffic to flow between the instances. The security group for each instance must reference the private IP address of the other instance, or the CIDR range of the subnet that contains the other instance, as the source. If you reference the security group of the other instance as the source, this does not allow traffic to flow between the instances.

Example rules

The rules that you add to a security group often depend on the purpose of the security group. The following are example rules for a security group that's associated with web servers. Your web servers can receive HTTP and HTTPS traffic from all IPv4 and IPv6 addresses and send SQL or MySQL traffic to your database servers.

Inbound:
- Source: 0.0.0.0/0 | Protocol: TCP | Port range: 80 | Allows inbound HTTP access from all IPv4 addresses.
- Source: ::/0 | Protocol: TCP | Port range: 80 | Allows inbound HTTP access from all IPv6 addresses.
- Source: 0.0.0.0/0 | Protocol: TCP | Port range: 443 | Allows inbound HTTPS access from all IPv4 addresses.
- Source: ::/0 | Protocol: TCP | Port range: 443 | Allows inbound HTTPS access from all IPv6 addresses.
- Source: your network's public IPv4 address range | Protocol: TCP | Port range: 22 | Allows inbound SSH access from IPv4 addresses in your network.
- Source: your network's public IPv4 address range | Protocol: TCP | Port range: 3389 | Allows inbound RDP access from IPv4 addresses in your network.

Outbound:
- Destination: the ID of the security group for your Microsoft SQL Server database servers | Protocol: TCP | Port range: 1433 | Allows outbound Microsoft SQL Server access.
- Destination: the ID of the security group for your MySQL database servers | Protocol: TCP | Port range: 3306 | Allows outbound MySQL access.

A database server needs a different set of rules. For example, instead of inbound HTTP and HTTPS traffic, you can add a rule that allows inbound MySQL or Microsoft SQL Server access.

Stale security group rules

If your VPC has a VPC peering connection with another VPC, or if it uses a VPC shared by another account, a security group rule in your VPC can reference a security group in that peer VPC or shared VPC. This allows resources that are associated with the referenced security group and those that are associated with the referencing security group to communicate with each other. If the security group in the shared VPC is deleted, or if the VPC peering connection is deleted, the security group rule is marked as stale. You can delete stale security group rules as you would any other security group rule.

Create a security group

By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. A security group can be used only in the VPC for which it is created. To create a security group using the console:

- Open the Amazon VPC console.
- In the navigation pane, choose Security Groups.
- Choose Create security group.
- Enter a name and description for the security group. You cannot change the name and description of a security group after it is created.
- From VPC, choose the VPC.
- You can add security group rules now, or you can add them later.
- You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value.
- Choose Create security group.

Tag your security groups

Add tags to your resources to help organize and identify them, such as by purpose, owner, or environment. You can add tags to your security groups. Tag keys must be unique for each security group.
If you add a tag with a key that is already associated with the security group, it updates the value of that tag.

To tag a security group using the console:

- Open the Amazon VPC console.
- In the navigation pane, choose Security Groups.
- Select the check box for the security group.
- Choose Actions, Manage tags. The Manage tags page displays any tags that are assigned to the security group.
- To add a tag, choose Add tag and enter the tag key and value. To delete a tag, choose Remove next to the tag that you want to delete.
- Choose Save changes.

Centrally manage VPC security groups using AWS Firewall Manager

AWS Firewall Manager simplifies your VPC security group administration and maintenance tasks across multiple accounts and resources. With Firewall Manager, you can configure and audit your organization's security groups from a single central administrator account. Firewall Manager automatically applies the rules and protections across your accounts and resources, even as you add new resources. Firewall Manager is particularly useful when you want to protect your entire organization, or if you frequently add new resources that you want to protect from a central administrator account.

You can use Firewall Manager to centrally manage security groups in the following ways:

- Configure common baseline security groups across your organization — You can use a common security group policy to provide a centrally controlled association of security groups to accounts and resources across your organization. You specify where and how to apply the policy in your organization.
- Audit existing security groups in your organization — You can use an audit security group policy to check the existing rules that are in use in your organization's security groups. You can scope the policy to audit all accounts, specific accounts, or resources tagged within your organization. Firewall Manager automatically detects new accounts and resources and audits them.
You can create audit rules to set guardrails on which security group rules to allow or disallow within your organization, and to check for unused or redundant security groups.
- Get reports on non-compliant resources and remediate them — You can get reports and alerts for non-compliant resources for your baseline and audit policies. You can also set auto-remediation workflows to remediate any non-compliant resources that Firewall Manager detects.

Tips to remember when creating security groups:

- All inbound traffic is blocked by default.
- All outbound traffic is allowed.
- Changes to security groups take effect immediately.
- Security groups are stateful: if you create an inbound rule allowing traffic in, the response traffic is automatically allowed back out again.
- You can have any number of EC2 instances within a security group.
- You can have multiple security groups attached to an EC2 instance.
- You cannot block specific IP addresses using security groups; use Network Access Control Lists (NACLs) instead.
- You can specify allow rules, but not deny rules.

ELASTIC FILE SYSTEM (EFS)

Elastic File System (EFS) is a managed NFS (Network File System) that can be mounted on many EC2 instances across different availability zones. EFS is a file storage service for Amazon EC2 instances. EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily. With EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. Use EFS when you need distributed, highly resilient storage for Linux instances and Linux-based applications.

Tips to remember when creating EFS:

- It supports the Network File System version 4 (NFSv4) protocol.
- You only pay for the storage you use (no pre-provisioning required).
- It can scale up to petabytes.
- It can support thousands of concurrent NFS connections.
- Data is stored across multiple AZs within a region.
- Read-after-write consistency.

EFS storage classes

First-byte latency when reading from or writing to the IA storage class is higher than that for the Standard storage class. For file systems using Bursting Throughput, the allowed throughput is determined based on the amount of data stored in the Standard storage class only. For file systems using Provisioned Throughput mode, you're billed for the throughput provisioned above what you are provided based on the amount of data that is in the Standard storage class.

Amazon EFS offers a range of storage classes that are designed for different use cases. These include EFS Standard, EFS Standard–Infrequent Access (Standard-IA), EFS One Zone, and EFS One Zone–Infrequent Access (EFS One Zone-IA).

Amazon EFS Standard and Standard–IA storage classes

EFS Standard and Standard-IA storage classes are regional storage classes that are designed to provide continuous availability to data, even when one or more Availability Zones in an AWS Region are unavailable. They offer the highest levels of availability and durability by storing file system data and metadata redundantly across multiple geographically separated Availability Zones within a Region.

The EFS Standard storage class is used for frequently accessed files. It is the storage class to which customer data is initially written for Standard storage classes.

The Standard–IA storage class reduces storage costs for files that are not accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon EFS provides. AWS recommends Standard-IA storage if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. Examples include keeping files accessible to satisfy audit requirements, performing historical analysis, or performing backup and recovery.
Standard-IA storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.

Amazon EFS One Zone and EFS One Zone–IA storage classes

EFS One Zone and One Zone–IA storage classes are designed to provide continuous availability to data within a single Availability Zone. The EFS One Zone storage classes store file system data and metadata redundantly within a single Availability Zone in an AWS Region. Because they store data in a single AWS Availability Zone, data that is stored in these storage classes might be lost in the event of a disaster or other fault that affects all copies of the data within the Availability Zone, or in the event of Availability Zone destruction.

For added data protection, Amazon EFS automatically backs up file systems using One Zone storage classes with AWS Backup. You can restore file system backups to any operational Availability Zone within an AWS Region, or you can restore them to a different Region. Amazon EFS file system backups that are created and managed using AWS Backup are replicated to three Availability Zones and are designed for 11 9's durability.

The EFS One Zone storage class is used for frequently accessed files. It is the storage class to which customer data is initially written for One Zone storage classes.

The EFS One Zone–IA storage class reduces storage costs for files that are not accessed every day. AWS recommends EFS One Zone–IA storage if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. One Zone-IA storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.

Infrequent access performance

First-byte latency when reading from or writing to either of the IA storage classes is higher than that for the EFS Standard or EFS One Zone storage classes.
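The IA tradeoff — cheaper storage, but per-GB retrieval fees and higher latency — can be made concrete with a toy monthly cost model. The prices below are invented placeholders, not real EFS pricing, which varies by Region and storage class:

```python
# Purely illustrative $/GB-month figures, NOT actual EFS prices.
STD_PER_GB = 0.30        # hypothetical Standard storage rate
IA_PER_GB = 0.025        # hypothetical Infrequent Access storage rate
IA_ACCESS_PER_GB = 0.01  # hypothetical per-GB fee on each IA read/write

def monthly_cost_standard(gb: float) -> float:
    return gb * STD_PER_GB

def monthly_cost_ia(gb: float, gb_accessed: float) -> float:
    # IA trades cheap at-rest storage for a per-GB access fee
    return gb * IA_PER_GB + gb_accessed * IA_ACCESS_PER_GB

# A 100 GB dataset read only ~5 GB/month: IA wins despite the access fee.
rarely_read = monthly_cost_ia(100, gb_accessed=5)
always_read = monthly_cost_standard(100)
```

Under these made-up rates the rarely-read dataset costs a fraction of keeping everything in Standard, which is exactly the scenario (audit files, historical analysis, backups) the notes above recommend IA for.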
You can move files from an IA storage class to a frequently accessed storage class by copying them to another location on your file system. If you want your files to remain in the frequently accessed storage class, stop Lifecycle Management on the file system, and then copy your files.

| Storage class | Designed for | Durability (designed for) | Availability | Availability zones | Other considerations |
|---|---|---|---|---|---|
| EFS Standard | Frequently accessed data requiring the highest durability and availability. | 99.999999999% (11 9's) | 99.99% | >=3 | None |
| EFS Standard-Infrequent Access (IA) | Long lived, infrequently accessed data requiring the highest durability and availability. | 99.999999999% (11 9's) | 99.99% | >=3 | Per GB retrieval fees apply. |
| EFS One Zone | Frequently accessed data that doesn't require the highest levels of durability and availability. | 99.999999999% (11 9's)* | 99.90% | 1 | Not resilient to the loss of the Availability Zone. |
| EFS One Zone-IA | Long lived, infrequently accessed data that doesn't require the highest levels of durability and availability. | 99.999999999% (11 9's)* | 99.90% | 1 | Not resilient to the loss of the Availability Zone. Per GB retrieval fees apply. |

* Because EFS One Zone storage classes store data in a single AWS Availability Zone, data stored in these storage classes may be lost in the event of a disaster or other fault that affects all copies of the data within the Availability Zone, or in the event of Availability Zone destruction.

Storage class pricing

You are billed for the amount of data in each storage class. You are also billed for data access when files in IA storage are read and when files are transitioned to IA storage from EFS Standard or One Zone storage. The AWS bill displays the capacity for each storage class and the metered access against the file system's IA storage class (Standard-IA or One Zone-IA).

For file systems using Bursting Throughput, the allowed throughput is determined based on the amount of data stored in the EFS Standard and EFS One Zone storage classes only. For file systems using Provisioned Throughput mode, you're billed for the throughput provisioned above what you are provided based on the amount of data that is in the EFS Standard and EFS One Zone storage classes. You don't incur data access charges when using AWS Backup to back up lifecycle management-enabled EFS file systems.

Viewing storage class size

You can view how much data is stored in each storage class of your file system using the Amazon EFS console, the AWS CLI, or the EFS API. The Metered size tab on the File system details page displays the current metered size of the file system in binary multiples of bytes (kibibytes, mebibytes, gibibytes, and tebibytes). The metric is emitted every 15 minutes and lets you view your file system's metered size over time. Metered size displays the following information for the file system storage size:

- Total size — The size (in binary bytes) of data stored in the file system, including all storage classes.
- Size in Standard / One Zone — The size (in binary bytes) of data stored in the EFS Standard or EFS One Zone storage class.
- Size in Standard-IA / One Zone-IA — The size (in binary bytes) of data stored in the Standard-IA or One Zone-IA storage class, depending on whether your file system uses Standard or One Zone storage classes.

You can also view the Storage bytes metric on the Monitoring tab on the File system details page in the Amazon EFS console.

Using Amazon EFS storage classes

To use the EFS Standard and Standard–IA storage classes, create a file system that stores data redundantly across multiple Availability Zones in an AWS Region. To do this when creating a file system using the AWS Management Console, choose Availability and Durability, and then choose Regional.
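The metered sizes mentioned above use binary multiples (KiB = 2^10 bytes, MiB = 2^20, GiB = 2^30, TiB = 2^40). A small formatter makes the conversion explicit:

```python
def format_metered_size(num_bytes: int) -> str:
    """Render a byte count in binary multiples, as EFS metered size does
    (kibibytes, mebibytes, gibibytes, tebibytes)."""
    units = ["bytes", "KiB", "MiB", "GiB", "TiB"]
    size = float(num_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= 1024  # step up to the next binary multiple

# 3 * 2**30 bytes renders as "3.0 GiB", not "3.2 GB" (decimal).
example = format_metered_size(3 * 2**30)
```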
To use the Standard–IA storage class, you must enable the Amazon EFS lifecycle management feature. When enabled, lifecycle management automates moving files from Standard storage to IA storage. When you create a file system using the console, Lifecycle management is enabled by default with a setting of 30 days since last access.

To use the EFS One Zone and One Zone–IA storage classes, you must create a file system that stores data redundantly within a single Availability Zone in an AWS Region. To do this when creating a file system using the console, choose Availability and Durability, and then choose One Zone. To use the One Zone–IA storage class, you must enable the Amazon EFS lifecycle management feature. When enabled, lifecycle management automates moving files from Standard storage to IA storage. When you create a file system using the console, Lifecycle management is enabled by default with a setting of 30 days since last access.

AMAZON FSX FOR WINDOWS FILE SERVER

Amazon FSx for Windows File Server provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. Amazon FSx is built on Windows Server. Use it when you need centralized storage for Windows-based applications such as SharePoint, Microsoft SQL Server, WorkSpaces, Internet Information Services (IIS) Web Server, or any other native Microsoft application:

- Supports the SMB protocol and Windows NTFS.
- Supports Microsoft Active Directory integration, ACLs, and user quotas.
- Built on SSD; scales up to 10s of GB/s of throughput, millions of IOPS, and 100s of PB of data.
- Can be accessed from your on-premises infrastructure.
- Can be configured to be Multi-AZ (high availability).
- Data is backed up daily to S3.

Windows FSx — A managed Windows Server that runs Windows Server Message Block (SMB)-based file services. Designed for Windows and Windows applications.
It supports Active Directory (AD) users, access control lists, groups, and security policies, along with Distributed File System (DFS) namespaces and replication.

EFS — A managed Network Attached Storage (NAS) filer for EC2 instances based on NFSv4, one of the first network file sharing protocols native to Unix and Linux.

Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance computing, machine learning, media data processing workflows, and Electronic Design Automation (EDA). With Amazon FSx you can launch and run a Lustre file system that can process massive data sets at up to hundreds of gigabytes per second of throughput, millions of IOPS, and sub-millisecond latencies. Use it when you need high-speed, high-capacity distributed storage for applications that do High Performance Computing (HPC), financial modeling, etc. Remember that FSx for Lustre can store data directly on S3.

Lustre FSx — Designed specifically for fast processing of workloads such as machine learning, High Performance Computing (HPC), video processing, financial modeling, and Electronic Design Automation (EDA). Lets you launch and run a file system that provides sub-millisecond access to your data and allows you to read and write data at speeds of up to hundreds of gigabytes per second of throughput and millions of IOPS.

EFS — A managed Network Attached Storage (NAS) filer for EC2 instances based on NFSv4, one of the first network file sharing protocols native to Unix and Linux.

Storage comparison:

- S3 — Object storage.
- Glacier — Object archival.
- EFS — Network File System for Linux instances, POSIX file system.
- FSx for Windows — Network File System for Windows servers.
- FSx for Lustre — High Performance Computing Linux file system.
- EBS volumes — Network storage for one EC2 instance at a time.
- Instance Storage — Physical storage for your EC2 instance (high IOPS).
- Storage Gateway — File Gateway, Volume Gateway (cached and stored), Tape Gateway.
- Snowball/Snowmobile — To move large amounts of data to the cloud physically.
- Database — For specific workloads, usually with indexing and querying.

AWS Storage Types — S3, EFS and EBS

Ultimately all storage has "block storage" at the lowest level (even SSDs, which emulate disk blocks). The terminology used is not as important as understanding the distinctions in how the storage is interacted with. Rather than trying to memorize that this service has this title, or that service has that title, it is more useful to understand how they are used.

- S3 — Files on S3 can only be addressed as objects. You cannot interact with the contents of an object until you have extracted the object from S3 (GET). It cannot be edited in place; you must extract it, change it, and then put it back to replace the original (PUT). What this comes down to is that there is no user "locking" functionality as might be offered by a file system. This is why it is called "object storage".
- EFS — Basically NFS (Network File System), an old and still viable Unix technology. As the title implies, it is a "file system" and offers more capabilities than object storage. Key among these is file locking, which makes it suitable for shared use — this is what makes it a viable shared network file system. It is important to note that, like object stores, you are still largely restricted to handling the file as a complete object. You can lock it so that you can write back to it, but you are restricted in the extent to which you can do partial content updates based on blocks. This gets a bit grey, as there are ways to do partial updates, but they are limited.
- EBS — Closer to locally attached disk, whether that be IDE, SAS, or SCSI (or its close cousin iSCSI/Fibre Channel, which is in reality just SCSI over a pipe). With locally attached disk you have better responsiveness and addressing.
This allows you to use file systems that can address the disk at a block level. This includes partial reads, partial writes, read-ahead, delayed writes, file system maintenance where you swap blocks under the file, block-level cloning, thin provisioning, deduplication, etc. As noted above, block storage sits underneath both NFS and object stores, but as a consumer you are abstracted away and cannot make direct use of it.

AMAZON FSX FOR LUSTRE

FSx for Lustre makes it easy and cost-effective to launch and run the popular, high-performance Lustre file system. You use Lustre for workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.

The open-source Lustre file system is designed for applications that require fast storage, where you want your storage to keep up with your compute. Lustre was built to solve the problem of quickly and cheaply processing the world's ever-growing datasets. It's a widely used file system designed for the fastest computers in the world. It provides sub-millisecond latencies, up to hundreds of GBps of throughput, and up to millions of IOPS.

As a fully managed service, Amazon FSx makes it easier for you to use Lustre for workloads where storage speed matters. FSx for Lustre eliminates the traditional complexity of setting up and managing Lustre file systems, enabling you to spin up and run a battle-tested high-performance file system in minutes. It also provides multiple deployment options so you can optimize cost for your needs.

FSx for Lustre is POSIX-compliant, so you can use your current Linux-based applications without having to make any changes. FSx for Lustre provides a native file system interface and works as any file system does with your Linux operating system. It also provides read-after-write consistency and supports file locking.
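The file-locking semantics a POSIX file system supports can be demonstrated with Python's standard `fcntl.flock` interface. This sketch runs against any local Unix file system and merely illustrates the behavior: two independent opens of the same file act like two clients, and once one holds an exclusive lock, the other's non-blocking attempt is refused.

```python
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as f:
    holder = open(f.name, "r+b")      # "client" 1
    contender = open(f.name, "r+b")   # "client" 2 (separate open description)
    fcntl.flock(holder, fcntl.LOCK_EX)  # client 1 takes an exclusive lock
    try:
        # Non-blocking attempt by client 2 fails while the lock is held.
        fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
        conflict = False
    except BlockingIOError:
        conflict = True
    fcntl.flock(holder, fcntl.LOCK_UN)  # release; client 2 could now lock
    holder.close()
    contender.close()
```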
Multiple deployment options

Amazon FSx for Lustre offers a choice of scratch and persistent file systems to accommodate different data processing needs. Scratch file systems are ideal for temporary storage and shorter-term processing of data. Data is not replicated and does not persist if a file server fails. Persistent file systems are ideal for longer-term storage and throughput-focused workloads. In persistent file systems, data is replicated, and file servers are replaced if they fail.

Multiple storage options

Amazon FSx for Lustre offers a choice of solid state drive (SSD) and hard disk drive (HDD) storage types that are optimized for different data processing requirements:

- SSD storage options — For low-latency, IOPS-intensive workloads that typically have small, random file operations, choose one of the SSD storage options.
- HDD storage options — For throughput-intensive workloads that typically have large, sequential file operations, choose one of the HDD storage options. If you are provisioning a file system with the HDD storage option, you can optionally provision a read-only SSD cache that is sized to 20 percent of your HDD storage capacity. This provides sub-millisecond latencies and higher IOPS for frequently accessed files.

Both SSD-based and HDD-based file systems are provisioned with SSD-based metadata servers. As a result, all metadata operations, which represent the majority of file system operations, are delivered with sub-millisecond latencies.

FSx for Lustre S3 data repository integration

FSx for Lustre integrates with Amazon S3, making it easier for you to process cloud datasets using the Lustre high-performance file system. When linked to an Amazon S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files. Amazon FSx imports listings of all existing files in your S3 bucket at file system creation. Amazon FSx can also import listings of files added to the data repository after the file system is created.
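As a toy illustration of "presenting S3 objects as files", the sketch below maps S3 object keys onto POSIX-style paths under a mount point. The mount point name is made up, and the real integration imports metadata listings rather than doing this literal string mapping — this only shows the shape of the correspondence:

```python
from pathlib import PurePosixPath

def s3_keys_as_paths(keys: list, mount_point: str = "/mnt/fsx") -> list:
    """Map S3 object keys to POSIX-style file paths, loosely the way a
    linked FSx for Lustre file system exposes a bucket's objects as files."""
    return [str(PurePosixPath(mount_point) / key) for key in keys]

# Key prefixes ("raw/") naturally become directories in the file view.
paths = s3_keys_as_paths(["raw/day1.csv", "raw/day2.csv", "model.bin"])
```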
You can set the import preferences to match your workflow needs. The file system also makes it possible for you to write file system data back to S3. Data repository tasks simplify the transfer of data and metadata between your FSx for Lustre file system and its durable data repository on Amazon S3.

FSx for Lustre and on-premises data repositories

With Amazon FSx for Lustre, you can burst your data processing workloads from on-premises into the AWS Cloud by importing data using AWS Direct Connect or AWS VPN.

Accessing FSx for Lustre file systems

You can mix and match the compute instance types and Linux Amazon Machine Images (AMIs) that are connected to a single FSx for Lustre file system. Amazon FSx for Lustre file systems are accessible from compute workloads running on Amazon Elastic Compute Cloud (Amazon EC2) instances, on Amazon Elastic Container Service (Amazon ECS) Docker containers, and containers running on Amazon Elastic Kubernetes Service (Amazon EKS).

- Amazon EC2 — You access your file system from your Amazon EC2 compute instances using the open-source Lustre client. Amazon EC2 instances can access your file system from other Availability Zones within the same Amazon Virtual Private Cloud (Amazon VPC), provided your networking configuration provides for access across subnets within the VPC. After your Amazon FSx for Lustre file system is mounted, you can work with its files and directories just as you do using a local file system.
- Amazon EKS — You access Amazon FSx for Lustre from containers running on Amazon EKS using the open-source FSx for Lustre CSI driver. Your containers running on Amazon EKS can use high-performance persistent volumes (PVs) backed by Amazon FSx for Lustre.
- Amazon ECS — You access Amazon FSx for Lustre from Amazon ECS Docker containers on Amazon EC2 instances.
Amazon FSx for Lustre is compatible with the most popular Linux-based AMIs, including Amazon Linux 2 and Amazon Linux, Red Hat Enterprise Linux (RHEL), CentOS, Ubuntu, and SUSE Linux. The Lustre client is included with Amazon Linux 2 and Amazon Linux. For RHEL, CentOS, and Ubuntu, an AWS Lustre client repository provides clients that are compatible with these operating systems.

Using FSx for Lustre, you can burst your compute-intensive workloads from on-premises into the AWS Cloud by importing data over AWS Direct Connect or AWS Virtual Private Network. You can access your Amazon FSx file system from on-premises, copy data into your file system as needed, and run compute-intensive workloads on in-cloud instances.

Integrations with AWS services

Amazon FSx for Lustre integrates with SageMaker as an input data source. When using SageMaker with FSx for Lustre, your machine learning training jobs are accelerated by eliminating the initial download step from Amazon S3. Additionally, your total cost of ownership (TCO) is reduced by avoiding the repeated download of common objects for iterative jobs on the same dataset, as you save on S3 request costs.

FSx for Lustre integrates with AWS Batch using EC2 Launch Templates. AWS Batch enables you to run batch computing workloads on the AWS Cloud, including high performance computing (HPC), machine learning (ML), and other asynchronous workloads. AWS Batch automatically and dynamically sizes instances based on job resource requirements.

FSx for Lustre integrates with AWS ParallelCluster. AWS ParallelCluster is an AWS-supported open-source cluster management tool used to deploy and manage HPC clusters. It can automatically create FSx for Lustre file systems or use existing file systems during the cluster creation process.

Security and compliance

FSx for Lustre file systems support encryption at rest and in transit.
Amazon FSx automatically encrypts file system data at rest using keys managed in AWS Key Management Service (AWS KMS). Data in transit is also automatically encrypted on file systems in certain AWS Regions when accessed from supported Amazon EC2 instances. Amazon FSx has been assessed to comply with ISO, PCI-DSS, and SOC certifications, and is HIPAA eligible.

Pricing for Amazon FSx for Lustre

With Amazon FSx for Lustre, there are no upfront hardware or software costs. You pay only for the resources used, with no minimum commitments, setup costs, or additional fees.

WEB APPLICATION FIREWALL (WAF)

WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, an ALB, or API Gateway. WAF also lets you control access to your content. You can configure conditions such as which IP addresses are allowed to make a request or which query string parameters need to be passed for the request to be allowed. The ALB, CloudFront, or API Gateway will then either allow the content to be received or return an HTTP 403 status code.

At its most basic level, AWS WAF lets you choose among the following behaviors:

- Allow all requests except the ones you specify — This is useful when you want Amazon CloudFront, Amazon API Gateway, Application Load Balancer, or AWS AppSync to serve content for a public website, but you also want to block requests from attackers.
- Block all requests except the ones that you specify — This is useful when you want to serve content for a restricted website whose users are readily identifiable by properties in web requests, such as the IP addresses that they use to browse to the website.
- Count requests that match your criteria — You can use the count action to track your web traffic without modifying how you handle it. You can use this for general monitoring and also to test your new web request handling rules.
When you want to allow or block requests based on new properties in the web requests, you can first configure AWS WAF to count the requests that match those properties. This lets you confirm your new configuration settings before you implement new allow or block actions.
- Run CAPTCHA checks against requests that match your criteria — You can implement CAPTCHA controls against requests to help reduce bot traffic to your protected resources.

Scenarios involving blocking malicious IP addresses can be handled using either WAF or Network ACLs.

Using AWS WAF has several benefits:

- Additional protection against web attacks using criteria that you specify. You can define criteria using characteristics of web requests such as the following:
  - IP addresses that requests originate from.
  - Country that requests originate from.
  - Values in request headers.
  - Strings that appear in requests, either specific strings or strings that match regular expression (regex) patterns.
  - Length of requests.
  - Presence of SQL code that is likely to be malicious (known as SQL injection).
  - Presence of a script that is likely to be malicious (known as cross-site scripting).
- Rules that can allow, block, or count web requests that meet the specified criteria. Alternatively, rules can block or count web requests that not only meet the specified criteria, but also exceed a specified number of requests in any 5-minute period.
- Rules that you can reuse for multiple web applications.
- Managed rule groups from AWS and AWS Marketplace sellers.
- Real-time metrics and sampled web requests.
- Automated administration using the AWS WAF API.

AWS Shield

You can use AWS WAF web access control lists (web ACLs) to help minimize the effects of a Distributed Denial of Service (DDoS) attack. For additional protection against DDoS attacks, AWS also provides AWS Shield Standard and AWS Shield Advanced.
AWS Shield Standard is automatically included at no extra cost beyond what you already pay for AWS WAF and your other AWS services. AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators. AWS Shield Advanced incurs additional charges.

AWS Firewall Manager

AWS Firewall Manager simplifies your administration and maintenance tasks across multiple accounts and resources for a variety of protections, including AWS WAF, AWS Shield Advanced, Amazon VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall. With Firewall Manager, you set up your protections just once and the service automatically applies them across your accounts and resources, even as you add new accounts and resources.

You use AWS WAF to control how an Amazon CloudFront distribution, an Amazon API Gateway REST API, an Application Load Balancer, or an AWS AppSync GraphQL API responds to HTTP(S) web requests.

- Web ACLs — You use a web access control list (ACL) to protect a set of AWS resources. You create a web ACL and define its protection strategy by adding rules. Rules define criteria for inspecting web requests and specify how to handle requests that match the criteria. You set a default action for the web ACL that indicates whether to block or allow through those requests that pass the rules inspections.
- Rules — Each rule contains a statement that defines the inspection criteria, and an action to take if a web request meets the criteria. When a web request meets the criteria, that's a match. You can configure rules to block matching requests, allow them through, count them, or run CAPTCHA controls against them.
- Rule groups — You can use rules individually or in reusable rule groups. AWS Managed Rules and AWS Marketplace sellers provide managed rule groups for your use.
You can also define your own rule groups.

After you create your web ACL, you can associate it with one or more AWS resources. The resource types that you can protect using AWS WAF web ACLs are an Amazon CloudFront distribution, an Amazon API Gateway REST API, an Application Load Balancer, and an AWS AppSync GraphQL API.

AWS WAF is available in the Regions listed at AWS service endpoints:

- For an Amazon API Gateway REST API, an Application Load Balancer, or an AWS AppSync GraphQL API, you can use any of the Regions in the list.
- For a CloudFront distribution, AWS WAF is available globally, but you must use the Region US East (N. Virginia) for all of your work. You must create your web ACL using the Region US East (N. Virginia). You must also use this Region to create any other resources that you use in your web ACL, like rule groups, IP sets, and regex pattern sets. Some interfaces offer a region choice of "Global (CloudFront)". Choosing this is identical to choosing Region US East (N. Virginia), or "us-east-1".

You can associate a web ACL with a CloudFront distribution when you create or update the distribution itself. You can only associate a web ACL with an Application Load Balancer within AWS Regions. For example, you cannot associate a web ACL with an Application Load Balancer that is on AWS Outposts.

You can associate a single web ACL with one or more AWS resources, according to the following restrictions:

- You can associate each AWS resource with only one web ACL. The relationship between web ACL and AWS resources is one-to-many.
- You can associate a web ACL with one or more CloudFront distributions. You cannot associate a web ACL that you have associated with a CloudFront distribution with any other AWS resource type.

AWS WAF web ACL capacity units (WCU)

AWS WAF uses web ACL capacity units (WCU) to calculate and control the operating resources that are required to run your rules, rule groups, and web ACLs.
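The capacity model amounts to a simple budget: each rule consumes some WCUs, and the total for a web ACL must fit within its capacity limit (the default maximum is 1,500 WCUs). The per-rule costs below are invented for illustration — real costs depend on the rule's statements:

```python
# Hypothetical per-rule WCU costs, NOT real AWS WAF figures.
rules = {
    "block-bad-ips": 1,           # a simple IP set match is cheap
    "rate-limit-login": 2,
    "sqli-body-inspect": 30,      # complex inspections cost more
    "managed-core-rule-set": 700, # managed rule groups have fixed capacity
}

WEB_ACL_WCU_LIMIT = 1500  # default maximum capacity for a web ACL

total_wcu = sum(rules.values())
within_limit = total_wcu <= WEB_ACL_WCU_LIMIT
```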
AWS WAF enforces WCU limits when you configure your rule groups and web ACLs. WCUs don't affect how AWS WAF inspects web traffic. AWS WAF calculates capacity differently for each rule type, to reflect each rule's relative cost. Simple rules that cost little to run use fewer WCUs than more complex rules that require more processing power.
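Putting the web ACL model described above together — rules evaluated against each request, terminating allow/block actions, a non-terminating count action, and a default action for requests no rule catches — a minimal sketch (the rule names and match criteria are invented):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]  # inspection criteria over a request
    action: str                      # "allow", "block", or "count"

def evaluate(web_acl_rules: list, request: dict, default_action: str = "allow") -> str:
    """Return the terminating action for a request, web-ACL style:
    allow/block rules terminate evaluation, count rules record a match
    but let evaluation continue, and the default action applies otherwise."""
    for rule in web_acl_rules:
        if rule.matches(request):
            if rule.action in ("allow", "block"):
                return rule.action
            # "count" is non-terminating: note the match and keep going
    return default_action

acl = [
    Rule("block-bad-ip", lambda r: r["ip"] == "203.0.113.9", "block"),
    Rule("count-big-bodies", lambda r: len(r.get("body", "")) > 8192, "count"),
]

blocked = evaluate(acl, {"ip": "203.0.113.9"})   # matches the block rule
allowed = evaluate(acl, {"ip": "198.51.100.7"})  # falls through to default
```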
